Updates from: 04/23/2022 01:05:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
Previously updated : 01/25/2022 Last updated : 04/21/2022 -+
For more information, see the article [Deploy Conditional Access App Control for
## Sign-in frequency
-Sign-in frequency defines the time period before a user is asked to sign in again when attempting to access a resource.
+Sign-in frequency defines the time period before a user is asked to sign in again when attempting to access a resource. Administrators can select a period of time (hours or days) or choose to require reauthentication every time.
The sign-in frequency setting works with apps that have implemented OAUTH2 or OIDC protocols according to the standards. Most Microsoft native apps for Windows, Mac, and mobile, including the following web applications, follow the setting.
For more information, see the article [Configure authentication session manageme
## Disable resilience defaults (Preview)
-During an outage, Azure AD will extend access to existing sessions while enforcing Conditional Access policies. If a policy cannot be evaluated, access is determined by resilience settings.
+During an outage, Azure AD will extend access to existing sessions while enforcing Conditional Access policies. If a policy can't be evaluated, access is determined by resilience settings.
If resilience defaults are disabled, access is denied once existing sessions expire. For more information, see the article [Conditional Access: Resilience defaults](resilience-defaults.md).
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Previously updated : 10/23/2020 Last updated : 04/21/2022 -+
Sign-in frequency defines the time period before a user is asked to sign in agai
The Azure Active Directory (Azure AD) default configuration for user sign-in frequency is a rolling window of 90 days. Asking users for credentials often seems like a sensible thing to do, but it can backfire: users that are trained to enter their credentials without thinking can unintentionally supply them to a malicious credential prompt.
-It might sound alarming to not ask for a user to sign back in, in reality any violation of IT policies will revoke the session. Some examples include (but are not limited to) a password change, an incompliant device, or account disable. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken). The Azure AD default configuration comes down to "don't ask users to provide their credentials if security posture of their sessions has not changed".
+It might sound alarming to not ask for a user to sign back in, but in reality any violation of IT policies will revoke the session. Some examples include (but aren't limited to) a password change, a noncompliant device, or a disabled account. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken). The Azure AD default configuration comes down to "don't ask users to provide their credentials if the security posture of their sessions hasn't changed".
The sign-in frequency setting works with apps that have implemented OAUTH2 or OIDC protocols according to the standards. Most Microsoft native apps for Windows, Mac, and Mobile including the following web applications comply with the setting.
The sign-in frequency setting works with apps that have implemented OAUTH2 or OI
- Dynamics CRM Online - Azure portal
-The sign-in frequency setting works with SAML applications as well, as long as they do not drop their own cookies and are redirected back to Azure AD for authentication on regular basis.
+The sign-in frequency setting works with SAML applications as well, as long as they don't drop their own cookies and are redirected back to Azure AD for authentication on a regular basis.
### User sign-in frequency and multi-factor authentication
Example 2:
- At 00:45, the user returns from their break and unlocks the device.
- At 01:45, the user is prompted to sign in again based on the sign-in frequency requirement in the Conditional Access policy configured by their administrator, since the last sign-in happened at 00:45.
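The interval arithmetic in the examples above can be sketched as a small helper (the function name is hypothetical, for illustration only, and not part of any Azure AD API):

```javascript
// Next reauthentication prompt = last sign-in time + configured sign-in
// frequency. Times are minutes past midnight for simplicity; this helper
// is illustrative only, not an Azure AD API.
function nextPromptMinutes(lastSignInMinutes, frequencyHours) {
  return lastSignInMinutes + frequencyHours * 60;
}

// Example 2: last sign-in at 00:45 (45 minutes past midnight) with a
// 1-hour frequency => next prompt at 01:45 (105 minutes past midnight).
console.log(nextPromptMinutes(45, 1)); // 105
```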
+### Require reauthentication every time (preview)
+
+There are scenarios where customers may want to require fresh authentication every time a user performs specific actions. Sign-in frequency has a new option, **Every time**, in addition to hours or days.
+
+The public preview supports the following scenarios:
+
+- Require user reauthentication during [Intune device enrollment](/mem/intune/fundamentals/deployment-guide-enrollment), regardless of their current MFA status.
+- Require user reauthentication for risky users with the [require password change](concept-conditional-access-grant.md#require-password-change) grant control.
+- Require user reauthentication for risky sign-ins with the [require multi-factor authentication](concept-conditional-access-grant.md#require-multi-factor-authentication) grant control.
+
+When administrators select **Every time**, users are required to fully reauthenticate whenever the session is evaluated.
+
+> [!NOTE]
+> An early preview version included the option to prompt for Secondary authentication methods only at reauthentication. This option is no longer supported and should not be used.
+ ## Persistence of browsing sessions

A persistent browser session allows users to remain signed in after closing and reopening their browser window.
-The Azure AD default for browser session persistence allows users on personal devices to choose whether to persist the session by showing a "Stay signed in?" prompt after successful authentication. If browser persistence is configured in AD FS using the guidance in the article [AD FS Single Sign-On Settings](/windows-server/identity/ad-fs/operations/ad-fs-single-sign-on-settings#enable-psso-for-office-365-users-to-access-sharepoint-online
-), we will comply with that policy and persist the Azure AD session as well. You can also configure whether users in your tenant see the "Stay signed in?" prompt by changing the appropriate setting in the company branding pane in Azure portal using the guidance in the article [Customize your Azure AD sign-in page](../fundamentals/customize-branding.md).
+The Azure AD default for browser session persistence allows users on personal devices to choose whether to persist the session by showing a "Stay signed in?" prompt after successful authentication. If browser persistence is configured in AD FS using the guidance in the article [AD FS Single Sign-On Settings](/windows-server/identity/ad-fs/operations/ad-fs-single-sign-on-settings#enable-psso-for-office-365-users-to-access-sharepoint-online), we'll comply with that policy and persist the Azure AD session as well. You can also configure whether users in your tenant see the "Stay signed in?" prompt by changing the appropriate setting in the company branding pane in the Azure portal using the guidance in the article [Customize your Azure AD sign-in page](../fundamentals/customize-branding.md).
## Configuring authentication session controls
Conditional Access is an Azure AD Premium capability and requires a premium lice
> > Before enabling Sign-in Frequency, make sure other reauthentication settings are disabled in your tenant. If "Remember MFA on trusted devices" is enabled, be sure to disable it before using Sign-in frequency, as using these two settings together may lead to prompting users unexpectedly. To learn more about reauthentication prompts and session lifetime, see the article, [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
+## Policy deployment
+
+To make sure that your policy works as expected, the recommended best practice is to test it before rolling it out into production. Ideally, use a test tenant to verify whether your new policy works as intended. For more information, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
+ ### Policy 1: Sign-in frequency control
-1. [Sign in](https://portal.azure.com) to the Azure portal.
-1. Search for **Azure AD Conditional Access**.
-1. Select **Policies**.
-1. Select **+ New policy**.
-1. Select **Create new policy**.
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Choose all required conditions for the customer's environment, including the target cloud apps.

   > [!NOTE]
   > It's recommended to set an equal authentication prompt frequency for key Microsoft Office apps such as Exchange Online and SharePoint Online for the best user experience.
-1. Go to **Access Controls** > **Session** and click **Sign-in frequency**
-1. Enter the required value of days and hours in the first text box
-1. Select a value of **Hours** or **Days** from dropdown
-1. Save your policy
+1. Under **Access controls** > **Session**:
+ 1. Select **Sign-in frequency**.
+ 1. Enter the required value of days or hours in the first text box.
+ 1. Select **Hours** or **Days** from the dropdown.
+1. Save your policy.
![Conditional Access policy configured for sign-in frequency](media/howto-conditional-access-session-lifetime/conditional-access-policy-session-sign-in-frequency.png)
-On Azure AD registered Windows devices sign in to the device is considered a prompt. For example, if you have configured the sign-in frequency to 24 hours for Office apps, users on Azure AD registered Windows devices will satisfy the sign-in frequency policy by signing in to the device and will be not prompted again when opening Office apps.
+On Azure AD registered Windows devices, signing in to the device is considered a prompt. For example, if you've configured the sign-in frequency to 24 hours for Office apps, users on Azure AD registered Windows devices will satisfy the sign-in frequency policy by signing in to the device and won't be prompted again when opening Office apps.
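If you automate policy creation instead of using the portal, the same session control appears in the `sessionControls` section of a Microsoft Graph `conditionalAccessPolicy` request body. The following is a minimal sketch: the property names follow the Graph `signInFrequencySessionControl` resource, but treat them as assumptions and verify against the current Graph reference before use.

```javascript
// Sketch of the sessionControls fragment for a sign-in frequency control,
// as it might appear in a Microsoft Graph conditionalAccessPolicy request
// body. Property names (signInFrequency, isEnabled, type, value) are
// assumptions based on the Graph signInFrequencySessionControl resource.
function signInFrequencyControl(value, type) {
  return {
    signInFrequency: {
      isEnabled: true,
      type: type,   // "hours" or "days"
      value: value
    }
  };
}

// A 24-hour sign-in frequency, matching the Office apps example above.
console.log(JSON.stringify(signInFrequencyControl(24, "hours")));
```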
### Policy 2: Persistent browser session
-1. [Sign in](https://portal.azure.com) to the Azure portal.
-1. Search for **Azure AD Conditional Access**.
-1. Select **Policies**.
-1. Select **+ New policy**.
-1. Select **Create new policy**.
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
1. Choose all required conditions.

   > [!NOTE]
   > This control requires choosing "All cloud apps" as a condition. Browser session persistence is controlled by the authentication session token. All tabs in a browser session share a single session token, and therefore they all must share persistence state.
-1. Go to **Access Controls** > **Session** and click **Persistent browser session**
-1. Select a value from dropdown
-1. Save you policy
+1. Under **Access controls** > **Session**:
+ 1. Select **Persistent browser session**.
+ 1. Select a value from the dropdown.
+1. Save your policy.
![Conditional Access policy configured for persistent browser](media/howto-conditional-access-session-lifetime/conditional-access-policy-session-persistent-browser.png)

> [!NOTE]
> The Persistent Browser Session configuration in Azure AD Conditional Access will overwrite the "Stay signed in?" setting in the company branding pane in the Azure portal for the same user if you have configured both policies.
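The persistent browser session control can be expressed the same way in a Microsoft Graph `conditionalAccessPolicy` body. A minimal sketch; the property names (`persistentBrowser`, `mode`) are assumptions based on the Graph `persistentBrowserSessionControl` resource, so verify them against the current reference.

```javascript
// Sketch of the sessionControls fragment for the persistent browser session
// control, as it might appear in a Microsoft Graph conditionalAccessPolicy
// request body. Property names are assumptions based on the Graph
// persistentBrowserSessionControl resource; verify before relying on them.
function persistentBrowserControl(mode) {
  return {
    persistentBrowser: {
      isEnabled: true,
      mode: mode    // "always" persists the session; "never" does not
    }
  };
}

console.log(JSON.stringify(persistentBrowserControl("never")));
```

Remember the note above: this control only takes effect when the policy targets all cloud apps, because all tabs share one session token.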
-## Validation
+### Policy 3: Require reauthentication every time for risky users
+
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users and groups**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md).
+ 1. Select **Done**.
+1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Conditions** > **User risk**, set **Configure** to **Yes**. Under **Configure user risk levels needed for policy to be enforced** select **High**, then select **Done**.
+1. Under **Access controls** > **Grant**, select **Grant access**, **Require password change**, and select **Select**.
+1. Under **Session controls** > **Sign-in frequency**, select **Every time (preview)**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to enable your policy.
+
+After confirming the settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
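For the preview **Every time** option, the session control body would plausibly carry an interval flag rather than an hours/days value. The sketch below is written under that assumption (the `frequencyInterval` property name is taken from the preview Graph schema and should be confirmed against current documentation):

```javascript
// Sketch: sign-in frequency requiring reauthentication every time the
// session is evaluated. "frequencyInterval" is an assumption based on the
// preview Graph schema; no "type"/"value" pair is set because the
// requirement is not time-based.
const everyTimeControl = {
  signInFrequency: {
    isEnabled: true,
    frequencyInterval: "everyTime"
  }
};

console.log(JSON.stringify(everyTimeControl));
```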
+
+### Validation
Use the What If tool to simulate a sign-in from the user to the target application and other conditions based on how you configured your policy. The authentication session management controls show up in the result of the tool.

![Conditional Access What If tool results](media/howto-conditional-access-session-lifetime/conditional-access-what-if-tool-result.png)
-## Policy deployment
+## Prompt tolerance
-To make sure that your policy works as expected, the recommended best practice is to test it before rolling it out into production. Ideally, use a test tenant to verify whether your new policy works as intended. For more information, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
+We factor in five minutes of clock skew, so that we don't prompt users more often than once every five minutes. If the user has done MFA in the last 5 minutes and they hit another Conditional Access policy that requires reauthentication, we won't prompt the user. Over-prompting users for reauthentication can impact their productivity and increase the risk of users approving MFA requests they didn't initiate. We highly recommend using "Sign-in frequency – every time" only for specific business needs.
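The tolerance rule above reduces to a simple threshold check, sketched here with a hypothetical helper (illustrative only, not an Azure AD API):

```javascript
// Sketch of the five-minute prompt tolerance described above: skip the
// reauthentication prompt when the user completed MFA within the last
// five minutes. Hypothetical helper for illustration only.
const TOLERANCE_MINUTES = 5;

function shouldPromptAgain(minutesSinceLastMfa) {
  return minutesSinceLastMfa >= TOLERANCE_MINUTES;
}

console.log(shouldPromptAgain(3));  // false: within tolerance, no new prompt
console.log(shouldPromptAgain(10)); // true: tolerance elapsed, prompt again
```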
## Known issues

- If you configure sign-in frequency for mobile devices, authentication after each sign-in frequency interval could be slow (it can take 30 seconds on average). Also, it could happen across various apps at the same time.
To make sure that your policy works as expected, the recommended best practice i
## Next steps
-* If you are ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
+* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
There are a few pre-requisites for publisher verification, some of which will ha
- An MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. This MPN account must be the [Partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for your organization.
+- The Azure AD tenant where the app is registered must be associated with the Partner Global account. If it's not the primary tenant associated with the PGA, follow the steps to [set up the MPN partner global account as a multi-tenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
+
- An app registered in an Azure AD tenant, with a [Publisher Domain](howto-configure-publisher-domain.md) configured.
- The domain of the email address used during MPN account verification must either match the publisher domain configured on the app or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) added to the Azure AD tenant.
active-directory Tutorial V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-auth-code.md
Next, implement a small [Express](https://expressjs.com/) web server to serve yo
```

2. Next, create a file named *server.js* and add the following code:
- ```JavaScript
- const express = require('express');
- const morgan = require('morgan');
- const path = require('path');
- const argv = require('yargs')
- .usage('Usage: $0 -p [PORT]')
- .alias('p', 'port')
- .describe('port', '(Optional) Port Number - default is 3000')
- .strict()
- .argv;
-
- const DEFAULT_PORT = 3000;
-
- //initialize express.
- const app = express();
-
- // Initialize variables.
- let port = DEFAULT_PORT; // -p {PORT} || 3000;
- if (argv.p) {
- port = argv.p;
- }
-
- // Configure morgan module to log all requests.
- app.use(morgan('dev'));
-
- // Set the front-end folder to serve public assets.
- app.use("/lib", express.static(path.join(__dirname, "../../lib/msal-browser/lib")));
-
- // Setup app folders
- app.use(express.static('app'));
-
- // Set up a route for index.html.
- app.get('*', function (req, res) {
- res.sendFile(path.join(__dirname + '/index.html'));
- });
-
- // Start the server.
- app.listen(port);
- console.log(`Listening on port ${port}...`);
- ```
-
-You now have a small webserver to serve your SPA. After completing the rest of the tutorial, the file and folder structure of your project should look similar to the following:
-
-```
-msal-spa-tutorial/
-├── app
-│   ├── authConfig.js
-│   ├── authPopup.js
-│   ├── authRedirect.js
-│   ├── graphConfig.js
-│   ├── graph.js
-│   ├── index.html
-│   └── ui.js
-└── server.js
-```
+ :::code language="js" source="~/ms-identity-javascript-v2/server.js":::
## Create the SPA UI
msal-spa-tutorial/
In the *index.html* file, add the following code:
- ```html
- <!DOCTYPE html>
- <html lang="en">
- <head>
- <meta charset="UTF-8">
- <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no">
- <title>Tutorial | MSAL.js JavaScript SPA</title>
-
- <!-- IE support: add promises polyfill before msal.js -->
- <script type="text/javascript" src="//cdn.jsdelivr.net/npm/bluebird@3.7.2/js/browser/bluebird.min.js"></script>
- <script type="text/javascript" src="https://alcdn.msauth.net/browser/2.0.0-beta.4/js/msal-browser.js" integrity="sha384-7sxY2tN3GMVE5jXH2RL9AdbO6s46vUh9lUid4yNCHJMUzDoj+0N4ve6rLOmR88yN" crossorigin="anonymous"></script>
-
- <!-- adding Bootstrap 4 for UI components -->
- <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
- <link rel="SHORTCUT ICON" href="https://c.s-microsoft.com/favicon.ico?v2" type="image/x-icon">
- </head>
- <body>
- <nav class="navbar navbar-expand-lg navbar-dark bg-primary">
- <a class="navbar-brand" href="/">Microsoft identity platform</a>
- <div class="btn-group ml-auto dropleft">
- <button type="button" id="SignIn" class="btn btn-secondary" onclick="signIn()">
- Sign In
- </button>
- </div>
- </nav>
- <br>
- <h5 class="card-header text-center">JavaScript SPA calling Microsoft Graph API with MSAL.js</h5>
- <br>
- <div class="row" style="margin:auto" >
- <div id="card-div" class="col-md-3" style="display:none">
- <div class="card text-center">
- <div class="card-body">
- <h5 class="card-title" id="WelcomeMessage">Please sign-in to see your profile and read your mails</h5>
- <div id="profile-div"></div>
- <br>
- <br>
- <button class="btn btn-primary" id="seeProfile" onclick="seeProfile()">See Profile</button>
- <br>
- <br>
- <button class="btn btn-primary" id="readMail" onclick="readMail()">Read Mail</button>
- </div>
- </div>
- </div>
- <br>
- <br>
- <div class="col-md-4">
- <div class="list-group" id="list-tab" role="tablist">
- </div>
- </div>
- <div class="col-md-5">
- <div class="tab-content" id="nav-tabContent">
- </div>
- </div>
- </div>
- <br>
- <br>
-
- <!-- importing bootstrap.js and supporting js libraries -->
- <script src="https://code.jquery.com/jquery-3.4.1.slim.min.js" integrity="sha384-J6qa4849blE2+poT4WnyKhv5vZF5SrPo0iEjwBvKU7imGFAV0wwj1yYfoRSJoZ+n" crossorigin="anonymous"></script>
- <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script>
- <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js" integrity="sha384-wfSDF2E50Y2D1uUdj0O3uMBJnjuUD4Ih7YwaYd1iqfktj0Uod8GCExl3Og8ifwB6" crossorigin="anonymous"></script>
-
- <!-- importing app scripts (load order is important) -->
- <script type="text/javascript" src="./authConfig.js"></script>
- <script type="text/javascript" src="./graphConfig.js"></script>
- <script type="text/javascript" src="./ui.js"></script>
-
- <!-- <script type="text/javascript" src="./authRedirect.js"></script> -->
- <!-- uncomment the above line and comment the line below if you would like to use the redirect flow -->
- <script type="text/javascript" src="./authPopup.js"></script>
- <script type="text/javascript" src="./graph.js"></script>
- </body>
- </html>
- ```
+ :::code language="html" source="~/ms-identity-javascript-v2/app/index.html":::
2. Next, also in the *app* folder, create a file named *ui.js* and add the following code. This file will access and update DOM elements.
- ```JavaScript
- // Select DOM elements to work with
- const welcomeDiv = document.getElementById("WelcomeMessage");
- const signInButton = document.getElementById("SignIn");
- const cardDiv = document.getElementById("card-div");
- const mailButton = document.getElementById("readMail");
- const profileButton = document.getElementById("seeProfile");
- const profileDiv = document.getElementById("profile-div");
-
- function showWelcomeMessage(account) {
- // Reconfiguring DOM elements
- cardDiv.style.display = 'initial';
- welcomeDiv.innerHTML = `Welcome ${account.username}`;
- signInButton.setAttribute("onclick", "signOut();");
- signInButton.setAttribute('class', "btn btn-success")
- signInButton.innerHTML = "Sign Out";
- }
-
- function updateUI(data, endpoint) {
- console.log('Graph API responded at: ' + new Date().toString());
-
- if (endpoint === graphConfig.graphMeEndpoint) {
- const title = document.createElement('p');
- title.innerHTML = "<strong>Title: </strong>" + data.jobTitle;
- const email = document.createElement('p');
- email.innerHTML = "<strong>Mail: </strong>" + data.mail;
- const phone = document.createElement('p');
- phone.innerHTML = "<strong>Phone: </strong>" + data.businessPhones[0];
- const address = document.createElement('p');
- address.innerHTML = "<strong>Location: </strong>" + data.officeLocation;
- profileDiv.appendChild(title);
- profileDiv.appendChild(email);
- profileDiv.appendChild(phone);
- profileDiv.appendChild(address);
-
- } else if (endpoint === graphConfig.graphMailEndpoint) {
- if (data.value.length < 1) {
- alert("Your mailbox is empty!")
- } else {
- const tabList = document.getElementById("list-tab");
- tabList.innerHTML = ''; // clear tabList at each readMail call
- const tabContent = document.getElementById("nav-tabContent");
-
- data.value.map((d, i) => {
- // Keeping it simple
- if (i < 10) {
- const listItem = document.createElement("a");
- listItem.setAttribute("class", "list-group-item list-group-item-action")
- listItem.setAttribute("id", "list" + i + "list")
- listItem.setAttribute("data-toggle", "list")
- listItem.setAttribute("href", "#list" + i)
- listItem.setAttribute("role", "tab")
- listItem.setAttribute("aria-controls", i)
- listItem.innerHTML = d.subject;
- tabList.appendChild(listItem)
-
- const contentItem = document.createElement("div");
- contentItem.setAttribute("class", "tab-pane fade")
- contentItem.setAttribute("id", "list" + i)
- contentItem.setAttribute("role", "tabpanel")
- contentItem.setAttribute("aria-labelledby", "list" + i + "list")
- contentItem.innerHTML = "<strong> from: " + d.from.emailAddress.address + "</strong><br><br>" + d.bodyPreview + "...";
- tabContent.appendChild(contentItem);
- }
- });
- }
- }
- }
- ```
+ :::code language="js" source="~/ms-identity-javascript-v2/app/ui.js":::
## Register your application
If you'd like to use a different port, enter `http://localhost:<port>`, where `<
Create a file named *authConfig.js* in the *app* folder to contain your configuration parameters for authentication, and then add the following code:
-```javascript
-const msalConfig = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
- redirectUri: "Enter_the_Redirect_Uri_Here",
- },
- cache: {
- cacheLocation: "sessionStorage", // This configures where your cache will be stored
- storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
- }
-};
-
-// Add scopes here for ID token to be used at Microsoft identity platform endpoints.
-const loginRequest = {
- scopes: ["openid", "profile", "User.Read"]
-};
-
-// Add scopes here for access token to be used at Microsoft Graph API endpoints.
-const tokenRequest = {
- scopes: ["User.Read", "Mail.Read"]
-};
-```
-
-Modify the values in the `msalConfig` section as described here:
-
-- `Enter_the_Application_Id_Here`: The **Application (client) ID** of the application you registered.
-- `Enter_the_Cloud_Instance_Id_Here`: The Azure cloud instance in which your application is registered.
- - For the main (or *global*) Azure cloud, enter `https://login.microsoftonline.com`.
- - For **national** clouds (for example, China), you can find appropriate values in [National clouds](authentication-national-cloud.md).
-- `Enter_the_Tenant_info_here` should be one of the following:
- - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
- - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
- - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`.
- - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
-- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000`.
-
-The `authority` value in your *authConfig.js* should be similar to the following if you're using the global Azure cloud:
-
-```javascript
-authority: "https://login.microsoftonline.com/common",
-```
Still in the *app* folder, create a file named *graphConfig.js*. Add the following code to provide your application the configuration parameters for calling the Microsoft Graph API:
-```javascript
-// Add the endpoints here for Microsoft Graph API services you'd like to use.
-const graphConfig = {
- graphMeEndpoint: "Enter_the_Graph_Endpoint_Here/v1.0/me",
- graphMailEndpoint: "Enter_the_Graph_Endpoint_Here/v1.0/me/messages"
-};
-```
Modify the values in the `graphConfig` section as described here:
graphMailEndpoint: "https://graph.microsoft.com/v1.0/me/messages"
In the *app* folder, create a file named *authPopup.js* and add the following authentication and token acquisition code for the login pop-up:
-```JavaScript
-// Create the main myMSALObj instance
-// configuration parameters are located at authConfig.js
-const myMSALObj = new msal.PublicClientApplication(msalConfig);
-
-let username = "";
-
-function loadPage() {
- /**
- * See here for more info on account retrieval:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
- */
- const currentAccounts = myMSALObj.getAllAccounts();
- if (currentAccounts === null) {
- return;
- } else if (currentAccounts.length > 1) {
- // Add choose account code here
- console.warn("Multiple accounts detected.");
- } else if (currentAccounts.length === 1) {
- username = currentAccounts[0].username;
- showWelcomeMessage(currentAccounts[0]);
- }
-}
-
-function handleResponse(resp) {
- if (resp !== null) {
- username = resp.account.username;
- showWelcomeMessage(resp.account);
- } else {
- loadPage();
- }
-}
-
-function signIn() {
- myMSALObj.loginPopup(loginRequest).then(handleResponse).catch(error => {
- console.error(error);
- });
-}
-
-function signOut() {
- const logoutRequest = {
- account: myMSALObj.getAccountByUsername(username)
- };
-
- myMSALObj.logout(logoutRequest);
-}
-
-function getTokenPopup(request) {
- /**
- * See here for more info on account retrieval:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
- */
- request.account = myMSALObj.getAccountByUsername(username);
- return myMSALObj.acquireTokenSilent(request).catch(error => {
- console.warn("silent token acquisition fails. acquiring token using redirect");
- if (error instanceof msal.InteractionRequiredAuthError) {
- // fallback to interaction when silent call fails
- return myMSALObj.acquireTokenPopup(request).then(tokenResponse => {
- console.log(tokenResponse);
-
- return tokenResponse;
- }).catch(error => {
- console.error(error);
- });
- } else {
- console.warn(error);
- }
- });
-}
-
-function seeProfile() {
- getTokenPopup(loginRequest).then(response => {
- callMSGraph(graphConfig.graphMeEndpoint, response.accessToken, updateUI);
- profileButton.classList.add('d-none');
- mailButton.classList.remove('d-none');
- }).catch(error => {
- console.error(error);
- });
-}
-
-function readMail() {
- getTokenPopup(tokenRequest).then(response => {
- callMSGraph(graphConfig.graphMailEndpoint, response.accessToken, updateUI);
- }).catch(error => {
- console.error(error);
- });
-}
-
-loadPage();
-```
### Redirect Create a file named *authRedirect.js* in the *app* folder and add the following authentication and token acquisition code for login redirect:
-```javascript
-// Create the main myMSALObj instance
-// configuration parameters are located at authConfig.js
-const myMSALObj = new msal.PublicClientApplication(msalConfig);
-
-let accessToken;
-let username = "";
-
-// Redirect: once login is successful and redirects with tokens, call Graph API
-myMSALObj.handleRedirectPromise().then(handleResponse).catch(err => {
- console.error(err);
-});
-
-function handleResponse(resp) {
- if (resp !== null) {
- username = resp.account.username;
- showWelcomeMessage(resp.account);
- } else {
- /**
- * See here for more info on account retrieval:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
- */
- const currentAccounts = myMSALObj.getAllAccounts();
- if (currentAccounts === null) {
- return;
- } else if (currentAccounts.length > 1) {
- // Add choose account code here
- console.warn("Multiple accounts detected.");
- } else if (currentAccounts.length === 1) {
- username = currentAccounts[0].username;
- showWelcomeMessage(currentAccounts[0]);
- }
- }
-}
-
-function signIn() {
- myMSALObj.loginRedirect(loginRequest);
-}
-
-function signOut() {
- const logoutRequest = {
- account: myMSALObj.getAccountByUsername(username)
- };
-
- myMSALObj.logout(logoutRequest);
-}
-
-function getTokenRedirect(request) {
- /**
- * See here for more info on account retrieval:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
- */
- request.account = myMSALObj.getAccountByUsername(username);
- return myMSALObj.acquireTokenSilent(request).catch(error => {
- console.warn("silent token acquisition fails. acquiring token using redirect");
- if (error instanceof msal.InteractionRequiredAuthError) {
- // fallback to interaction when silent call fails
- return myMSALObj.acquireTokenRedirect(request);
- } else {
- console.warn(error);
- }
- });
-}
-
-function seeProfile() {
- getTokenRedirect(loginRequest).then(response => {
- callMSGraph(graphConfig.graphMeEndpoint, response.accessToken, updateUI);
- profileButton.classList.add('d-none');
- mailButton.classList.remove('d-none');
- }).catch(error => {
- console.error(error);
- });
-}
-
-function readMail() {
- getTokenRedirect(tokenRequest).then(response => {
- callMSGraph(graphConfig.graphMailEndpoint, response.accessToken, updateUI);
- }).catch(error => {
- console.error(error);
- });
-}
-```
### How the code works
The `acquireTokenSilent` method handles token acquisition and renewal without any user interaction.
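The silent-first pattern used by both the popup and redirect versions can be sketched generically. This is an illustration only; `getTokenWithFallback` and its two callbacks are invented stand-ins for MSAL's `acquireTokenSilent` and `acquireTokenPopup`/`acquireTokenRedirect` calls:

```javascript
// Generic sketch of the silent-first token pattern: try to acquire a
// token without user interaction, and fall back to an interactive
// prompt only when the silent attempt fails. The two callbacks are
// stand-ins for the MSAL calls shown in the samples above.
async function getTokenWithFallback(acquireSilently, acquireInteractively) {
  try {
    return await acquireSilently();
  } catch (error) {
    // MSAL signals this case with InteractionRequiredAuthError.
    return await acquireInteractively();
  }
}

// Demo with stubs: the silent path throws, so the interactive path runs.
getTokenWithFallback(
  async () => { throw new Error("interaction_required"); },
  async () => "token-from-popup"
).then(token => console.log(token)); // prints "token-from-popup"
```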
Create a file named *graph.js* in the *app* folder and add the following code for making REST calls to the Microsoft Graph API:
-```javascript
-// Helper function to call Microsoft Graph API endpoint
-// using authorization bearer token scheme
-function callMSGraph(endpoint, token, callback) {
- const headers = new Headers();
- const bearer = `Bearer ${token}`;
-
- headers.append("Authorization", bearer);
-
- const options = {
- method: "GET",
- headers: headers
- };
-
- console.log('request made to Graph API at: ' + new Date().toString());
-
- fetch(endpoint, options)
- .then(response => response.json())
- .then(response => callback(response, endpoint))
- .catch(error => console.log(error));
-}
-```
In the sample application created in this tutorial, the `callMSGraph()` method is used to make an HTTP `GET` request against a protected resource that requires a token and then return the response content to the caller. The method adds the acquired token to the *HTTP Authorization header*. Here, the protected resource is the Microsoft Graph API *me* endpoint, which displays the signed-in user's profile information.
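The header-building part of that request can be shown in isolation. This is a minimal sketch, not part of the sample; `buildGraphRequest` is a hypothetical helper, the token value is a placeholder, and a plain object stands in for the browser `Headers` API so the snippet runs anywhere:

```javascript
// Minimal sketch of the bearer-token GET options that callMSGraph()
// passes to fetch(). A plain object stands in for the browser Headers
// API; "<access-token>" is a placeholder, not a real token.
function buildGraphRequest(token) {
  return {
    method: "GET",
    headers: { Authorization: `Bearer ${token}` }
  };
}

const options = buildGraphRequest("<access-token>");
console.log(options.headers.Authorization); // prints "Bearer <access-token>"
```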
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Security defaults allow registration and use of Azure AD Multi-Factor Authentication
- *** App passwords are only available in per-user MFA with legacy authentication scenarios only if enabled by administrators.

> [!WARNING]
-> Do not disable methods for your organization if you are using Security Defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
+> Do not disable methods for your organization if you are using security defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
### Backup administrator accounts
active-directory Concept Understand Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/concept-understand-roles.md
Previously updated : 11/11/2021 Last updated : 04/22/2022
This article explains what Azure AD roles are and how they can be used.
## How Azure AD roles are different from other Microsoft 365 roles
-There are many different services in Microsoft 365, such as Azure AD and Intune. Some of these services have their own role-based access control systems;, specifically:
+There are many different services in Microsoft 365, such as Azure AD and Intune. Some of these services have their own role-based access control systems, specifically:
-- Azure AD-- Exchange-- Intune-- Defender for Cloud-- Compliance Center
+- Azure Active Directory (Azure AD)
+- Microsoft Exchange
+- Microsoft Intune
+- Microsoft Defender for Cloud Apps
- Commerce
+- Microsoft 365 Defender portal
+- Compliance portal
+- Cost Management + Billing
Other services such as Teams, SharePoint, and Managed Desktop don't have separate role-based access control systems. They use Azure AD roles for their administrative access. Azure has its own role-based access control system for Azure resources such as virtual machines, and this system is not the same as Azure AD roles. ![Azure RBAC versus Azure AD roles](./media/concept-understand-roles/azure-roles-azure-ad-roles.png)
-When we say separate role-based access control system. it means there is a different data store where role definitions and role assignments are stored. Similarly, there is a different policy decision point where access checks happen. For more information , see [Roles for Microsoft 365 services in Azure AD](m365-workload-docs.md) and [Classic subscription administrator roles, Azure roles, and Azure AD roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
+When we say a separate role-based access control system, it means there's a different data store where role definitions and role assignments are stored. Similarly, there's a different policy decision point where access checks happen. For more information, see [Roles for Microsoft 365 services in Azure AD](m365-workload-docs.md) and [Classic subscription administrator roles, Azure roles, and Azure AD roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
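As an illustration only, the idea of per-service stores and policy decision points can be modeled in a few lines. Every name below is invented for the sketch; none of this is a real Azure AD or Exchange API:

```javascript
// Toy model: each RBAC system owns its role definitions and role
// assignments, and access checks are evaluated only against that
// system's own store. All names are invented for illustration.
function makeRbacSystem(roleDefinitions, roleAssignments) {
  return {
    // Policy decision point: consults this system's store only.
    checkAccess(principal, permission) {
      return roleAssignments.some(a =>
        a.principal === principal &&
        (roleDefinitions[a.role] || []).includes(permission));
    }
  };
}

// Two independent systems with separate stores.
const directoryRbac = makeRbacSystem(
  { "User Administrator": ["user.readwrite"] },
  [{ principal: "alice", role: "User Administrator" }]);

const mailRbac = makeRbacSystem(
  { "Mailbox Admin": ["mailbox.manage"] },
  []); // alice has no assignment here

console.log(directoryRbac.checkAccess("alice", "user.readwrite")); // true
console.log(mailRbac.checkAccess("alice", "mailbox.manage"));      // false
```

A grant in one system says nothing about the other, which is exactly why the cross-service roles described below have to be honored by each service individually.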
## Why some Azure AD roles are for other services
Azure AD built-in roles differ in where they can be used, which fall into the following categories:
- **Azure AD-specific roles**: These roles grant permissions to manage resources within Azure AD only. For example, User Administrator, Application Administrator, Groups Administrator all grant permissions to manage resources that live in Azure AD. - **Service-specific roles**: For major Microsoft 365 services (non-Azure AD), we have built service-specific roles that grant permissions to manage all features within the service. For example, Exchange Administrator, Intune Administrator, SharePoint Administrator, and Teams Administrator roles can manage features with their respective services. Exchange Administrator can manage mailboxes, Intune Administrator can manage device policies, SharePoint Administrator can manage site collections, Teams Administrator can manage call qualities and so on.-- **Cross-service roles**: There are some roles that span services. We have two global roles - Global Administrator and Global Reader. All Microsoft 365 services honor these two roles. Also, there are some security-related roles like Security Administrator and Security Reader that grant access across multiple security services within Microsoft 365. For example, using Security Administrator roles in Azure AD, you can manage Microsoft 365 Defender portal, Microsoft Defender Advanced Threat Protection, and Microsoft Defender for Cloud Apps. Similarly, in the Compliance Administrator role you can manage Compliance-related settings in Microsoft 365 Compliance Center, Exchange, and so on.
+- **Cross-service roles**: There are some roles that span services. We have two global roles - Global Administrator and Global Reader. All Microsoft 365 services honor these two roles. Also, there are some security-related roles like Security Administrator and Security Reader that grant access across multiple security services within Microsoft 365. For example, using Security Administrator roles in Azure AD, you can manage Microsoft 365 Defender portal, Microsoft Defender Advanced Threat Protection, and Microsoft Defender for Cloud Apps. Similarly, in the Compliance Administrator role you can manage Compliance-related settings in Compliance portal, Exchange, and so on.
![The three categories of Azure AD built-in roles](./media/concept-understand-roles/role-overlap-diagram.png)
Category | Role
- | - Azure AD-specific roles | Application Administrator<br>Application Developer<br>Authentication Administrator<br>B2C IEF Keyset Administrator<br>B2C IEF Policy Administrator<br>Cloud Application Administrator<br>Cloud Device Administrator<br>Conditional Access Administrator<br>Device Administrators<br>Directory Readers<br>Directory Synchronization Accounts<br>Directory Writers<br>External ID User Flow Administrator<br>External ID User Flow Attribute Administrator<br>External Identity Provider Administrator<br>Groups Administrator<br>Guest Inviter<br>Helpdesk Administrator<br>Hybrid Identity Administrator<br>License Administrator<br>Partner Tier1 Support<br>Partner Tier2 Support<br>Password Administrator<br>Privileged Authentication Administrator<br>Privileged Role Administrator<br>Reports Reader<br>User Administrator Cross-service roles | Global Administrator<br>Compliance Administrator<br>Compliance Data Administrator<br>Global Reader<br>Security Administrator<br>Security Operator<br>Security Reader<br>Service Support Administrator
-Service-specific roles | Azure DevOps Administrator<br>Azure Information Protection Administrator<br>Billing Administrator<br>CRM Service Administrator<br>Customer LockBox Access Approver<br>Desktop Analytics Administrator<br>Exchange Service Administrator<br>Insights Administrator<br>Insights Business Leader<br>Intune Service Administrator<br>Kaizala Administrator<br>Lync Service Administrator<br>Message Center Privacy Reader<br>Message Center Reader<br>Modern Commerce User<br>Network Administrator<br>Office Apps Administrator<br>Power BI Service Administrator<br>Power Platform Administrator<br>Printer Administrator<br>Printer Technician<br>Search Administrator<br>Search Editor<br>SharePoint Service Administrator<br>Teams Communications Administrator<br>Teams Communications Support Engineer<br>Teams Communications Support Specialist<br>Teams Devices Administrator<br>Teams Administrator
+Service-specific roles | Azure DevOps Administrator<br>Azure Information Protection Administrator<br>Billing Administrator<br>CRM Service Administrator<br>Customer Lockbox Access Approver<br>Desktop Analytics Administrator<br>Exchange Service Administrator<br>Insights Administrator<br>Insights Business Leader<br>Intune Service Administrator<br>Kaizala Administrator<br>Lync Service Administrator<br>Message Center Privacy Reader<br>Message Center Reader<br>Modern Commerce User<br>Network Administrator<br>Office Apps Administrator<br>Power BI Service Administrator<br>Power Platform Administrator<br>Printer Administrator<br>Printer Technician<br>Search Administrator<br>Search Editor<br>SharePoint Service Administrator<br>Teams Communications Administrator<br>Teams Communications Support Engineer<br>Teams Communications Support Specialist<br>Teams Devices Administrator<br>Teams Administrator
## Next steps
active-directory List Role Assignments Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/list-role-assignments-users.md
Follow these steps to list Azure AD roles assigned to a user using PowerShell.
Install-module -name Microsoft.Graph ```
-3. In a PowerShell window, Use [Connect-MgGraph](/graph/powershell/get-started) to sign into and use Microsoft Graph PowerShell cmdlets.
+3. In a PowerShell window, use [Connect-MgGraph](/powershell/microsoftgraph/get-started) to sign in to and use Microsoft Graph PowerShell cmdlets.
```powershell Connect-MgGraph
active-directory Adem Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adem-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with ADEM'
+description: Learn how to configure single sign-on between Azure Active Directory and ADEM.
++++++++ Last updated : 04/13/2022++++
+# Tutorial: Azure AD SSO integration with ADEM
+
+In this tutorial, you'll learn how to integrate ADEM with Azure Active Directory (Azure AD). When you integrate ADEM with Azure AD, you can:
+
+* Control in Azure AD who has access to ADEM.
+* Enable your users to be automatically signed-in to ADEM with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ADEM single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* ADEM supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add ADEM from the gallery
+
+To configure the integration of ADEM into Azure AD, you need to add ADEM from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ADEM** in the search box.
+1. Select **ADEM** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for ADEM
+
+Configure and test Azure AD SSO with ADEM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ADEM.
+
+To configure and test Azure AD SSO with ADEM, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ADEM SSO](#configure-adem-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ADEM test user](#create-adem-test-user)** - to have a counterpart of B.Simon in ADEM that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **ADEM** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://cloud.patch.eu/adem/sso`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://cloud.patch.eu/adem/sso/module.php/saml/sp/saml2-acs.php/default-sp`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://cloud.patch.eu/adem/sso`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ADEM.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ADEM**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure ADEM SSO
+
+To configure single sign-on on **ADEM** side, you need to send the **App Federation Metadata Url** to [ADEM support team](mailto:info@deproefritplanner.nl). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create ADEM test user
+
+In this section, you create a user called Britta Simon in ADEM. Work with [ADEM support team](mailto:info@deproefritplanner.nl) to add the users in the ADEM platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to ADEM Sign-on URL where you can initiate the login flow.
+
+* Go to ADEM Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the ADEM tile in the My Apps, this will redirect to ADEM Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure ADEM you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Beatrust Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/beatrust-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Beatrust'
+description: Learn how to configure single sign-on between Azure Active Directory and Beatrust.
++++++++ Last updated : 04/13/2022++++
+# Tutorial: Azure AD SSO integration with Beatrust
+
+In this tutorial, you'll learn how to integrate Beatrust with Azure Active Directory (Azure AD). When you integrate Beatrust with Azure AD, you can:
+
+* Control in Azure AD who has access to Beatrust.
+* Enable your users to be automatically signed-in to Beatrust with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Beatrust single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Beatrust supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Beatrust from the gallery
+
+To configure the integration of Beatrust into Azure AD, you need to add Beatrust from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Beatrust** in the search box.
+1. Select **Beatrust** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Beatrust
+
+Configure and test Azure AD SSO with Beatrust using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Beatrust.
+
+To configure and test Azure AD SSO with Beatrust, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Beatrust SSO](#configure-beatrust-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Beatrust test user](#create-beatrust-test-user)** - to have a counterpart of B.Simon in Beatrust that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Beatrust** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://beatrust.com`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://auth.beatrust.com/__/auth/handler`
+
+    c. In the **Sign-on URL** text box, type a URL using the following pattern:
+    `https://beatrust.com/<org_key>`
+
+ > [!NOTE]
+ > The Sign-on URL value is not real. Update the value with the actual Sign-on URL. Contact [Beatrust Client support team](mailto:support@beatrust.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Beatrust** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Beatrust.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Beatrust**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Beatrust SSO
+
+To configure single sign-on on **Beatrust** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Beatrust support team](mailto:support@beatrust.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Beatrust test user
+
+In this section, you create a user called Britta Simon in Beatrust. Work with [Beatrust support team](mailto:support@beatrust.com) to add the users in the Beatrust platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Beatrust Sign-on URL where you can initiate the login flow.
+
+* Go to Beatrust Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Beatrust tile in the My Apps, this will redirect to Beatrust Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Beatrust you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Lcvista Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lcvista-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with LCVista | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with LCVista'
description: Learn how to configure single sign-on between Azure Active Directory and LCVista.
Previously updated : 02/25/2019 Last updated : 04/21/2022
-# Tutorial: Azure Active Directory integration with LCVista
+# Tutorial: Azure AD SSO integration with LCVista
-In this tutorial, you learn how to integrate LCVista with Azure Active Directory (Azure AD).
-Integrating LCVista with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate LCVista with Azure Active Directory (Azure AD). When you integrate LCVista with Azure AD, you can:
-* You can control in Azure AD who has access to LCVista.
-* You can enable your users to be automatically signed-in to LCVista (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to LCVista.
+* Enable your users to be automatically signed-in to LCVista with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with LCVista, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* LCVista single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* LCVista single sign-on enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* LCVista supports **SP** initiated SSO
+* LCVista supports **SP** initiated SSO.
-## Adding LCVista from the gallery
+## Add LCVista from the gallery
To configure the integration of LCVista into Azure AD, you need to add LCVista from the gallery to your list of managed SaaS apps.
-**To add LCVista from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **LCVista**, select **LCVista** from result panel then click **Add** button to add the application.
-
- ![LCVista in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with LCVista based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in LCVista needs to be established.
-
-To configure and test Azure AD single sign-on with LCVista, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure LCVista Single Sign-On](#configure-lcvista-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create LCVista test user](#create-lcvista-test-user)** - to have a counterpart of Britta Simon in LCVista that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **LCVista** in the search box.
+1. Select **LCVista** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-### Configure Azure AD single sign-on
+## Configure and test Azure AD SSO for LCVista
-In this section, you enable Azure AD single sign-on in the Azure portal.
+Configure and test Azure AD SSO with LCVista using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in LCVista.
-To configure Azure AD single sign-on with LCVista, perform the following steps:
+To configure and test Azure AD SSO with LCVista, perform the following steps:
-1. In the [Azure portal](https://portal.azure.com/), on the **LCVista** application integration page, select **Single sign-on**.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure LCVista SSO](#configure-lcvista-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create LCVista test user](#create-lcvista-test-user)** - to have a counterpart of B.Simon in LCVista that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Configure single sign-on link](common/select-sso.png)
+## Configure Azure AD SSO
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Single sign-on select mode](common/select-saml-option.png)
+1. In the Azure portal, on the **LCVista** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![LCVista Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.lcvista.com/rainier/login`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<subdomain>.lcvista.com`
+
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<subdomain>.lcvista.com/rainier/login`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [LCVista Client support team](https://lcvista.com/contact) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [LCVista Client support team](https://lcvista.com/contact) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
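The placeholder pattern above can be sketched as a small helper. Note that `contoso` below is a hypothetical subdomain used only for illustration; the actual values come from the LCVista support team.

```python
# Illustrative only: builds the two Basic SAML Configuration values from a
# tenant subdomain. "contoso" is a placeholder, not a real LCVista tenant.
def lcvista_saml_urls(subdomain: str) -> dict:
    base = f"https://{subdomain}.lcvista.com"
    return {
        "identifier": base,                      # Identifier (Entity ID)
        "sign_on_url": f"{base}/rainier/login",  # Sign on URL
    }

urls = lcvista_saml_urls("contoso")
print(urls["identifier"])   # https://contoso.lcvista.com
print(urls["sign_on_url"])  # https://contoso.lcvista.com/rainier/login
```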
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
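If you want to inspect the downloaded Federation Metadata XML before passing it to the application side, the entity ID and single sign-on endpoint can be read with standard XML tooling. The fragment below is a trimmed, illustrative sample with a made-up tenant ID, not real tenant metadata.

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative Federation Metadata XML; real metadata downloaded
# from the Azure portal contains more elements and a signing certificate.
METADATA = """\
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="https://sts.windows.net/11111111-2222-3333-4444-555555555555/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/11111111-2222-3333-4444-555555555555/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>
"""

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
root = ET.fromstring(METADATA)
entity_id = root.attrib["entityID"]                    # the Azure AD Identifier
sso = root.find(".//md:SingleSignOnService", NS)       # the Login URL endpoint
print(entity_id)
print(sso.attrib["Location"])
```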
To configure Azure AD single sign-on with LCVista, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
- b. Azure Ad Identifier
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to LCVista.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **LCVista**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure LCVista Single Sign-On
+## Configure LCVista SSO
-1. Sign on to your LCVista application as an administrator.
+1. Log in to your LCVista application as an administrator.
2. In the **SAML Config** section, check **Enable SAML login** and enter the details as shown in the image below.
- ![Configure Single Sign-On](./media/lcvista-tutorial/tutorial_lcvista_config.png)
+ ![Configure Single Sign-On](./media/lcvista-tutorial/configuration.png)
   a. In the **Entity ID** textbox, paste the **Azure AD Identifier** value, which you have copied from the Azure portal.
To configure Azure AD single sign-on with LCVista, perform the following steps:
e. Click **Save** to save the settings.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to LCVista.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **LCVista**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **LCVista**.
-
- ![The LCVista link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
-
### Create LCVista test user

In this section, you create a user called Britta Simon in LCVista. Work with [LCVista Client support team](https://lcvista.com/contact) to add the users in the LCVista platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the LCVista tile in the Access Panel, you should be automatically signed in to the LCVista for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the LCVista Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the LCVista Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the LCVista tile in My Apps, you will be redirected to the LCVista Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure LCVista, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Manabipocket Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/manabipocket-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Manabi Pocket | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Manabi Pocket'
description: Learn how to configure single sign-on between Azure Active Directory and Manabi Pocket.
Previously updated : 04/02/2019 Last updated : 04/21/2022
-# Tutorial: Azure Active Directory integration with Manabi Pocket
+# Tutorial: Azure AD SSO integration with Manabi Pocket
-In this tutorial, you learn how to integrate Manabi Pocket with Azure Active Directory (Azure AD).
-Integrating Manabi Pocket with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Manabi Pocket with Azure Active Directory (Azure AD). When you integrate Manabi Pocket with Azure AD, you can:
-* You can control in Azure AD who has access to Manabi Pocket.
-* You can enable your users to be automatically signed-in to Manabi Pocket (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Manabi Pocket.
+* Enable your users to be automatically signed-in to Manabi Pocket with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Manabi Pocket, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Manabi Pocket single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Manabi Pocket single sign-on enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Manabi Pocket supports **SP** initiated SSO
+* Manabi Pocket supports **SP** initiated SSO.
-## Adding Manabi Pocket from the gallery
+## Add Manabi Pocket from the gallery
To configure the integration of Manabi Pocket into Azure AD, you need to add Manabi Pocket from the gallery to your list of managed SaaS apps.
-**To add Manabi Pocket from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Manabi Pocket**, select **Manabi Pocket** from result panel then click **Add** button to add the application.
-
- ![Manabi Pocket in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Manabi Pocket based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Manabi Pocket needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Manabi Pocket** in the search box.
+1. Select **Manabi Pocket** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Manabi Pocket, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Manabi Pocket
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Manabi Pocket Single Sign-On](#configure-manabi-pocket-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Manabi Pocket test user](#create-manabi-pocket-test-user)** - to have a counterpart of Britta Simon in Manabi Pocket that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Manabi Pocket using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Manabi Pocket.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Manabi Pocket, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Manabi Pocket SSO](#configure-manabi-pocket-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Manabi Pocket test user](#create-manabi-pocket-test-user)** - to have a counterpart of B.Simon in Manabi Pocket that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Manabi Pocket, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Manabi Pocket** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **Manabi Pocket** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Manabi Pocket Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL:
- `https://ed-cl.com/`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<SERVER-NAME>.ed-cl.com/<TENANT-ID>/idp/provider`
+
+ b. In the **Sign on URL** text box, type the URL:
+ `https://ed-cl.com/`
> [!NOTE]
> The Identifier value is not real. Update this value with the actual Identifier. Contact [Manabi Pocket Client support team](mailto:info-ed-cl@ntt.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
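As a rough sanity check before saving, a value supplied for the Identifier can be compared against the documented pattern `https://<SERVER-NAME>.ed-cl.com/<TENANT-ID>/idp/provider` with a regular expression. The server and tenant names below are hypothetical.

```python
import re

# Matches the documented Manabi Pocket Identifier pattern:
# https://<SERVER-NAME>.ed-cl.com/<TENANT-ID>/idp/provider
PATTERN = re.compile(r"^https://[\w-]+\.ed-cl\.com/[\w-]+/idp/provider$")

def is_valid_identifier(value: str) -> bool:
    return bool(PATTERN.match(value))

# "school1" and "tenant42" are placeholders, not real values.
print(is_valid_identifier("https://school1.ed-cl.com/tenant42/idp/provider"))  # True
print(is_valid_identifier("https://ed-cl.com/"))                               # False
```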
To configure Azure AD single sign-on with Manabi Pocket, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Manabi Pocket Single Sign-On
-
-To configure single sign-on on **Manabi Pocket** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Manabi Pocket support team](mailto:info-ed-cl@ntt.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Manabi Pocket.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Manabi Pocket**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Manabi Pocket**.
-
- ![The Manabi Pocket link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Manabi Pocket.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Manabi Pocket**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure Manabi Pocket SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Manabi Pocket** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [Manabi Pocket support team](mailto:info-ed-cl@ntt.com). They configure this setting to have the SAML SSO connection set properly on both sides.
### Create Manabi Pocket test user

In this section, you create a user called Britta Simon in Manabi Pocket. Work with [Manabi Pocket support team](mailto:info-ed-cl@ntt.com) to add the users in the Manabi Pocket platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Manabi Pocket tile in the Access Panel, you should be automatically signed in to the Manabi Pocket for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the Manabi Pocket Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the Manabi Pocket Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Manabi Pocket tile in My Apps, you will be redirected to the Manabi Pocket Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Manabi Pocket, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Nomadesk Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nomadesk-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Nomadesk | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Nomadesk'
description: Learn how to configure single sign-on between Azure Active Directory and Nomadesk.
Previously updated : 03/05/2019 Last updated : 04/21/2022
-# Tutorial: Azure Active Directory integration with Nomadesk
+# Tutorial: Azure AD SSO integration with Nomadesk
-In this tutorial, you learn how to integrate Nomadesk with Azure Active Directory (Azure AD).
-Integrating Nomadesk with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Nomadesk with Azure Active Directory (Azure AD). When you integrate Nomadesk with Azure AD, you can:
-* You can control in Azure AD who has access to Nomadesk.
-* You can enable your users to be automatically signed-in to Nomadesk (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Nomadesk.
+* Enable your users to be automatically signed-in to Nomadesk with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Nomadesk, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Nomadesk single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Nomadesk single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Nomadesk supports **SP** initiated SSO
+* Nomadesk supports **SP** initiated SSO.
-* Nomadesk supports **Just In Time** user provisioning
+* Nomadesk supports **Just In Time** user provisioning.
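Just In Time provisioning means the service provider creates the user account on first SAML sign-in, from attributes carried in the assertion. The sketch below illustrates the general idea with a trimmed sample assertion; it is not Nomadesk's actual implementation, and the claim URI and helper names are assumptions.

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative SAML assertion fragment (not a real token).
ASSERTION = """\
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <AttributeStatement>
    <Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress">
      <AttributeValue>B.Simon@contoso.com</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>
"""

NS = {"a": "urn:oasis:names:tc:SAML:2.0:assertion"}

def provision_user(assertion_xml: str, users: dict) -> dict:
    """Create the user on first sign-in if it doesn't already exist."""
    root = ET.fromstring(assertion_xml)
    for attr in root.findall(".//a:Attribute", NS):
        if attr.attrib["Name"].endswith("emailaddress"):
            email = attr.find("a:AttributeValue", NS).text
            users.setdefault(email, {"email": email})  # no-op on later sign-ins
    return users

users = provision_user(ASSERTION, {})
print(users)
```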
-## Adding Nomadesk from the gallery
+## Add Nomadesk from the gallery
To configure the integration of Nomadesk into Azure AD, you need to add Nomadesk from the gallery to your list of managed SaaS apps.
-**To add Nomadesk from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Nomadesk**, select **Nomadesk** from result panel then click **Add** button to add the application.
-
- ![Nomadesk in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Nomadesk** in the search box.
+1. Select **Nomadesk** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Nomadesk based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Nomadesk needs to be established.
+## Configure and test Azure AD SSO for Nomadesk
-To configure and test Azure AD single sign-on with Nomadesk, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Nomadesk using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Nomadesk.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Nomadesk Single Sign-On](#configure-nomadesk-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Nomadesk test user](#create-nomadesk-test-user)** - to have a counterpart of Britta Simon in Nomadesk that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Nomadesk, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Nomadesk SSO](#configure-nomadesk-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Nomadesk test user](#create-nomadesk-test-user)** - to have a counterpart of B.Simon in Nomadesk that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Nomadesk, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Nomadesk** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Nomadesk** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Nomadesk Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://secure.nomadesk.com/saml/<instancename>`
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://mynomadesk.com/logon/saml/<TENANTID>`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://secure.nomadesk.com/saml/<instancename>`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Nomadesk Client support team](mailto:support@nomadesk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Nomadesk Client support team](mailto:support@nomadesk.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
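The Identifier and Sign on URL patterns above can be sketched in code. This is a minimal illustration only — `instance_name` and `tenant_id` are hypothetical placeholders; the real values must come from the Nomadesk support team, as the note says.

```python
# Sketch: assembling the Nomadesk Basic SAML Configuration values from
# placeholder inputs. Both inputs are hypothetical examples, not real
# values -- obtain the actual ones from Nomadesk support.
def nomadesk_saml_urls(instance_name: str, tenant_id: str) -> dict:
    """Return the Identifier (Entity ID) and Sign on URL patterns."""
    return {
        "identifier": f"https://secure.nomadesk.com/saml/{instance_name}",
        "sign_on_url": f"https://mynomadesk.com/logon/saml/{tenant_id}",
    }

urls = nomadesk_saml_urls("contoso", "abc123")
print(urls["identifier"])   # https://secure.nomadesk.com/saml/contoso
print(urls["sign_on_url"])  # https://mynomadesk.com/logon/saml/abc123
```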
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
To configure Azure AD single sign-on with Nomadesk, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Nomadesk Single Sign-On
-
-To configure single sign-on on **Nomadesk** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Nomadesk support team](mailto:support@nomadesk.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Nomadesk.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Nomadesk**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Nomadesk**.
-
- ![The Nomadesk link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Nomadesk.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Nomadesk**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure Nomadesk SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **Nomadesk** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Nomadesk support team](mailto:support@nomadesk.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Nomadesk test user
In this section, a user called Britta Simon is created in Nomadesk. Nomadesk supports just-in-time user provisioning, which is enabled by default.
>[!NOTE]
>If you need to create a user manually, you need to contact the [Nomadesk support team](mailto:support@nomadesk.com).
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
-When you click the Nomadesk tile in the Access Panel, you should be automatically signed in to the Nomadesk for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to Nomadesk Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Nomadesk Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Nomadesk tile in the My Apps, this will redirect to Nomadesk Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Nomadesk, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Peoplecart Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peoplecart-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Peoplecart | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Peoplecart'
description: Learn how to configure single sign-on between Azure Active Directory and Peoplecart.
Previously updated : 03/14/2019 Last updated : 04/21/2022
-# Tutorial: Azure Active Directory integration with Peoplecart
+# Tutorial: Azure AD SSO integration with Peoplecart
-In this tutorial, you learn how to integrate Peoplecart with Azure Active Directory (Azure AD).
-Integrating Peoplecart with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Peoplecart with Azure Active Directory (Azure AD). When you integrate Peoplecart with Azure AD, you can:
-* You can control in Azure AD who has access to Peoplecart.
-* You can enable your users to be automatically signed-in to Peoplecart (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Peoplecart.
+* Enable your users to be automatically signed-in to Peoplecart with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Peoplecart, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Peoplecart single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Peoplecart single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Peoplecart supports **SP** initiated SSO
+* Peoplecart supports **SP** initiated SSO.
-## Adding Peoplecart from the gallery
+## Add Peoplecart from the gallery
To configure the integration of Peoplecart into Azure AD, you need to add Peoplecart from the gallery to your list of managed SaaS apps.
-**To add Peoplecart from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Peoplecart**, select **Peoplecart** from result panel then click **Add** button to add the application.
-
- ![Peoplecart in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Peoplecart** in the search box.
+1. Select **Peoplecart** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Peoplecart based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Peoplecart needs to be established.
+## Configure and test Azure AD SSO for Peoplecart
-To configure and test Azure AD single sign-on with Peoplecart, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Peoplecart using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Peoplecart.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Peoplecart Single Sign-On](#configure-peoplecart-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Peoplecart test user](#create-peoplecart-test-user)** - to have a counterpart of Britta Simon in Peoplecart that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Peoplecart, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Peoplecart SSO](#configure-peoplecart-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Peoplecart test user](#create-peoplecart-test-user)** - to have a counterpart of B.Simon in Peoplecart that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Peoplecart, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Peoplecart** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Peoplecart** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Peoplecart Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<tenantname>.peoplecart.com`
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://<tenantname>.peoplecart.com/SignIn.aspx`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<tenantname>.peoplecart.com`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Peoplecart Client support team](https://peoplecart.com/ContactUs.aspx) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Peoplecart Client support team](https://peoplecart.com/ContactUs.aspx) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
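Before sending the downloaded Federation Metadata XML to the application's support team, you can inspect it programmatically. The sketch below uses Python's standard-library XML parser on an illustrative, made-up metadata fragment (not real Azure AD output) to pull out the entity ID and signing certificate — the two pieces the service provider typically needs.

```python
# Sketch: reading the entity ID and signing certificate out of a SAML
# Federation Metadata XML document. The inline sample is illustrative
# only; real metadata is downloaded from the Azure portal.
import xml.etree.ElementTree as ET

SAML_MD = "urn:oasis:names:tc:SAML:2.0:metadata"
XMLDSIG = "http://www.w3.org/2000/09/xmldsig#"

sample = f"""<EntityDescriptor xmlns="{SAML_MD}"
    entityID="https://sts.windows.net/11111111-2222-3333-4444-555555555555/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="{XMLDSIG}">
        <X509Data><X509Certificate>MIIBbase64sample==</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

root = ET.fromstring(sample)
entity_id = root.get("entityID")            # the Azure AD Identifier
cert = root.find(f".//{{{XMLDSIG}}}X509Certificate").text
print(entity_id)
print(cert[:10])  # MIIBbase64
```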
To configure Azure AD single sign-on with Peoplecart, perform the following step
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Peoplecart Single Sign-On
-
-To configure single sign-on on **Peoplecart** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Peoplecart support team](https://peoplecart.com/ContactUs.aspx). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Peoplecart.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Peoplecart**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Peoplecart**.
-
- ![The Peoplecart link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Peoplecart.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Peoplecart**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure Peoplecart SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **Peoplecart** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Peoplecart support team](https://peoplecart.com/ContactUs.aspx). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Peoplecart test user

In this section, you create a user called Britta Simon in Peoplecart. Work with the [Peoplecart support team](https://peoplecart.com/ContactUs.aspx) to add the users in the Peoplecart platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
-When you click the Peoplecart tile in the Access Panel, you should be automatically signed in to the Peoplecart for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to Peoplecart Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Peoplecart Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Peoplecart tile in the My Apps, this will redirect to Peoplecart Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Peoplecart, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Pluto Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pluto-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Pluto'
+description: Learn how to configure single sign-on between Azure Active Directory and Pluto.
+ Last updated : 04/13/2022
+# Tutorial: Azure AD SSO integration with Pluto
+
+In this tutorial, you'll learn how to integrate Pluto with Azure Active Directory (Azure AD). When you integrate Pluto with Azure AD, you can:
+
+* Control in Azure AD who has access to Pluto.
+* Enable your users to be automatically signed-in to Pluto with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Pluto single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Pluto supports **SP** initiated SSO.
+* Pluto supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Pluto from the gallery
+
+To configure the integration of Pluto into Azure AD, you need to add Pluto from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Pluto** in the search box.
+1. Select **Pluto** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Pluto
+
+Configure and test Azure AD SSO with Pluto using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Pluto.
+
+To configure and test Azure AD SSO with Pluto, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Pluto SSO](#configure-pluto-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Pluto test user](#create-pluto-test-user)** - to have a counterpart of B.Simon in Pluto that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Pluto** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://api.pluto.bio`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://api.pluto.bio/auth/social/complete/saml/`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://pluto.bio/login/<organization-shortname>`
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Sign-on URL. Contact [Pluto Client support team](mailto:support@pluto.bio) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Pluto application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/edit-attribute.png)
+
+1. In addition to the above, the Pluto application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirement.
+
+ | Name | Source Attribute |
+ |-| |
+ | email | user.mail |
+ | first_name | user.givenname |
+ | last_name | user.surname |
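The claim mapping in the table above can be expressed as a small lookup, with a check that an issued attribute set covers every claim Pluto expects. This is an illustrative sketch only — the `issued` values are made up, and `missing_claims` is a hypothetical helper, not part of any Pluto or Azure AD API.

```python
# Sketch: the Pluto claim mapping from the table above, as a dict, plus
# a check for required claims missing from an issued attribute set.
# The issued attribute values below are fabricated for illustration.
REQUIRED_CLAIMS = {
    "email": "user.mail",
    "first_name": "user.givenname",
    "last_name": "user.surname",
}

def missing_claims(issued: dict) -> set:
    """Return the required claim names absent from an issued set."""
    return set(REQUIRED_CLAIMS) - set(issued)

issued = {"email": "B.Simon@contoso.com", "first_name": "B."}
print(missing_claims(issued))  # {'last_name'}
```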
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Pluto** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Pluto.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Pluto**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Pluto SSO
+
+To configure single sign-on on the **Pluto** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Pluto support team](mailto:support@pluto.bio). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Pluto test user
+
+In this section, a user called Britta Simon is created in Pluto. Pluto supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Pluto, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Pluto Sign-on URL, where you can initiate the login flow.
+
+* Go to Pluto Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Pluto tile in My Apps, you'll be redirected to the Pluto Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Pluto, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Recurly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/recurly-tutorial.md
Previously updated : 02/08/2022 Last updated : 04/21/2022
Follow these steps to configure single sign-on for your **Recurly** site.
a. In **PROVIDER NAME**, select **Azure**.
- b. In the **SAML ISSUER ID** textbox, paste the **Identifier URL** value which you have copied from the Azure portal.
+ b. In the **SAML ISSUER ID** textbox, paste the **Application(Client ID)** value from the Azure portal.
c. In the **LOGIN URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
active-directory Smart360 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smart360-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Smart360'
+description: Learn how to configure single sign-on between Azure Active Directory and Smart360.
++++++++ Last updated : 04/13/2022++++
+# Tutorial: Azure AD SSO integration with Smart360
+
+In this tutorial, you'll learn how to integrate Smart360 with Azure Active Directory (Azure AD). When you integrate Smart360 with Azure AD, you can:
+
+* Control in Azure AD who has access to Smart360.
+* Enable your users to be automatically signed-in to Smart360 with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Smart360 single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Smart360 supports **SP** initiated SSO.
+* Smart360 supports **Just In Time** user provisioning.
+
+## Add Smart360 from the gallery
+
+To configure the integration of Smart360 into Azure AD, you need to add Smart360 from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Smart360** in the search box.
+1. Select **Smart360** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Smart360
+
+Configure and test Azure AD SSO with Smart360 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Smart360.
+
+To configure and test Azure AD SSO with Smart360, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Smart360 SSO](#configure-smart360-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Smart360 test user](#create-smart360-test-user)** - to have a counterpart of B.Simon in Smart360 that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Smart360** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `urn:sso:<CustomerName>:smart360:primary`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CustomerName>.smart360.biz/smart360/saml/SSO`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CustomerName>.smart360.biz`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Smart360 Client support team](mailto:support@smart360.biz) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Your Smart360 application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the **User Attributes** dialog.
+
+ ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
+
+1. In addition to the above, the Smart360 application expects a few more attributes to be passed back in the SAML response. In the **User Claims** section on the **User Attributes** dialog, perform the following steps to add the SAML token attribute shown in the following table:
+
+ | Name | Source Attribute |
+ | ---- | ---------------- |
+ | role | user.assignedroles |
+
+ > [!NOTE]
+ > To learn how to configure roles in Azure AD, see [App roles UI](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui).
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
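For context, the `role` claim configured above is carried in the SAML response as an `Attribute` element. The following is an illustrative sketch (not part of the tutorial, and the role names are made up) that builds such an element with Python's standard library, assuming the standard SAML 2.0 assertion namespace:

```python
import xml.etree.ElementTree as ET

ASSERTION_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def role_attribute(roles):
    # Build a <saml:Attribute Name="role"> element with one
    # <saml:AttributeValue> child per assigned role.
    attr = ET.Element(f"{{{ASSERTION_NS}}}Attribute", Name="role")
    for role in roles:
        value = ET.SubElement(attr, f"{{{ASSERTION_NS}}}AttributeValue")
        value.text = role
    return attr

# Hypothetical role names, purely for illustration.
attr = role_attribute(["Admin", "Viewer"])
print(ET.tostring(attr, encoding="unicode"))
```

When the user has one or more app roles assigned, each role value appears as its own `AttributeValue`, which is the shape the mapping in the table above produces.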
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Smart360.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Smart360**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Smart360 SSO
+
+To configure single sign-on on the **Smart360** side, you need to send the **App Federation Metadata Url** to the [Smart360 support team](mailto:support@smart360.biz). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Smart360 test user
+
+In this section, a user called Britta Simon is created in Smart360. Smart360 supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Smart360, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Smart360 Sign-on URL, where you can initiate the login flow.
+
+* Go to Smart360 Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Smart360 tile in My Apps, you'll be redirected to the Smart360 Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Smart360, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Talentlms Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/talentlms-tutorial.md
Previously updated : 09/14/2021 Last updated : 04/21/2022 # Tutorial: Azure AD SSO integration with TalentLMS
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `http://<tenant-name>.talentlms.com`
+ `<tenant-name>.talentlms.com`
b. In the **Sign on URL** text box, type a URL using the following pattern: `https://<tenant-name>.TalentLMSapp.com`
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure TalentLMS you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure TalentLMS, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
aks Cis Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-ubuntu.md
+
+ Title: Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Internet Security (CIS) benchmark
+description: Learn how AKS applies the CIS benchmark
++ Last updated : 04/20/2022++
+# Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Internet Security (CIS) benchmark
+
+As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security OS configuration applied to the Ubuntu image used by AKS. This security configuration is based on the Azure Linux security baseline, which aligns with the CIS benchmark. For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md). For more information on the CIS benchmark, see [Center for Internet Security (CIS) Benchmarks][cis-benchmarks]. For more information on the Azure security baselines for Linux, see [Linux security baseline][linux-security-baseline].
+
+## Ubuntu LTS 18.04
+
+AKS clusters are deployed on host virtual machines, which run an operating system with built-in secure configurations. This operating system is used for containers running on AKS. This host operating system is based on an **Ubuntu 18.04 LTS** image with security configurations applied.
+
+As a part of the security-optimized operating system:
+
+* AKS provides a security-optimized host OS by default, but no option to select an alternate operating system.
+* The security-optimized host OS is built and maintained specifically for AKS and is **not** supported outside of the AKS platform.
+* Some unnecessary kernel module drivers have been disabled in the OS to reduce the attack surface area.
+
+> [!NOTE]
+> Unrelated to the CIS benchmarks, Azure applies daily patches, including security patches, to AKS virtual machine hosts.
+
+The goal of the secure configuration built into the host OS is to reduce the surface area of attack and optimize for the deployment of containers in a secure manner.
+
+The following are the results from the [CIS Ubuntu 18.04 LTS Benchmark v2.1.0][cis-benchmark-ubuntu] recommendations.
+
+Recommendations can have one of the following reasons:
+
+* *Potential Operational Impact* - The recommendation wasn't applied because it would have a negative effect on the service.
+* *Covered Elsewhere* - The recommendation is covered by another control in Azure cloud compute.
+
+The following are CIS rules implemented:
+
+| CIS paragraph number | Recommendation description|Status| Reason |
+|---|---|---|---|
+| 1 | Initial Setup |||
+| 1.1 | Filesystem Configuration |||
+| 1.1.1 | Disable unused filesystems |||
+| 1.1.1.1 | Ensure mounting of cramfs filesystems is disabled | Pass ||
+| 1.1.1.2 | Ensure mounting of freevxfs filesystems is disabled | Pass ||
+| 1.1.1.3 | Ensure mounting of jffs2 filesystems is disabled | Pass ||
+| 1.1.1.4 | Ensure mounting of hfs filesystems is disabled | Pass ||
+| 1.1.1.5 | Ensure mounting of hfsplus filesystems is disabled | Pass ||
+| 1.1.1.6 | Ensure mounting of udf filesystems is disabled | Fail | Potential Operational Impact |
+| 1.1.2 | Ensure /tmp is configured | Fail | |
+| 1.1.3 | Ensure nodev option set on /tmp partition | Fail | |
+| 1.1.4 | Ensure nosuid option set on /tmp partition | Pass ||
+| 1.1.5 | Ensure noexec option set on /tmp partition | Pass ||
+| 1.1.6 | Ensure /dev/shm is configured | Pass ||
+| 1.1.7 | Ensure nodev option set on /dev/shm partition | Pass ||
+| 1.1.8 | Ensure nosuid option set on /dev/shm partition | Pass ||
+| 1.1.9 | Ensure noexec option set on /dev/shm partition | Fail | Potential Operational Impact |
+| 1.1.12 | Ensure /var/tmp partition includes the nodev option | Pass ||
+| 1.1.13 | Ensure /var/tmp partition includes the nosuid option | Pass ||
+| 1.1.14 | Ensure /var/tmp partition includes the noexec option | Pass ||
+| 1.1.18 | Ensure /home partition includes the nodev option | Pass ||
+| 1.1.19 | Ensure nodev option set on removable media partitions | Not Applicable ||
+| 1.1.20 | Ensure nosuid option set on removable media partitions | Not Applicable ||
+| 1.1.21 | Ensure noexec option set on removable media partitions | Not Applicable ||
+| 1.1.22 | Ensure sticky bit is set on all world-writable directories | Fail | Potential Operational Impact |
+| 1.1.23 | Disable Automounting | Pass ||
+| 1.1.24 | Disable USB Storage | Pass ||
+| 1.2 | Configure Software Updates |||
+| 1.2.1 | Ensure package manager repositories are configured | Pass | Covered Elsewhere |
+| 1.2.2 | Ensure GPG keys are configured | Not Applicable ||
+| 1.3 | Filesystem Integrity Checking |||
+| 1.3.1 | Ensure AIDE is installed | Fail | Covered Elsewhere |
+| 1.3.2 | Ensure filesystem integrity is regularly checked | Fail | Covered Elsewhere |
+| 1.4 | Secure Boot Settings |||
+| 1.4.1 | Ensure permissions on bootloader config are not overridden | Fail | |
+| 1.4.2 | Ensure bootloader password is set | Fail | Not Applicable|
+| 1.4.3 | Ensure permissions on bootloader config are configured | Fail | |
+| 1.4.4 | Ensure authentication required for single user mode | Fail | Not Applicable |
+| 1.5 | Additional Process Hardening |||
+| 1.5.1 | Ensure XD/NX support is enabled | Not Applicable ||
+| 1.5.2 | Ensure address space layout randomization (ASLR) is enabled | Pass ||
+| 1.5.3 | Ensure prelink is disabled | Pass ||
+| 1.5.4 | Ensure core dumps are restricted | Pass ||
+| 1.6 | Mandatory Access Control |||
+| 1.6.1 | Configure AppArmor |||
+| 1.6.1.1 | Ensure AppArmor is installed | Pass ||
+| 1.6.1.2 | Ensure AppArmor is enabled in the bootloader configuration | Fail | Potential Operational Impact |
+| 1.6.1.3 | Ensure all AppArmor Profiles are in enforce or complain mode | Pass ||
+| 1.7 | Command Line Warning Banners |||
+| 1.7.1 | Ensure message of the day is configured properly | Pass ||
+| 1.7.2 | Ensure permissions on /etc/issue.net are configured | Pass ||
+| 1.7.3 | Ensure permissions on /etc/issue are configured | Pass ||
+| 1.7.4 | Ensure permissions on /etc/motd are configured | Pass ||
+| 1.7.5 | Ensure remote login warning banner is configured properly | Pass ||
+| 1.7.6 | Ensure local login warning banner is configured properly | Pass ||
+| 1.8 | GNOME Display Manager |||
+| 1.8.2 | Ensure GDM login banner is configured | Pass ||
+| 1.8.3 | Ensure disable-user-list is enabled | Pass ||
+| 1.8.4 | Ensure XDCMP is not enabled | Pass ||
+| 1.9 | Ensure updates, patches, and additional security software are installed | Pass ||
+| 2 | Services |||
+| 2.1 | Special Purpose Services |||
+| 2.1.1 | Time Synchronization |||
+| 2.1.1.1 | Ensure time synchronization is in use | Pass ||
+| 2.1.1.2 | Ensure systemd-timesyncd is configured | Not Applicable | AKS uses ntpd for timesync |
+| 2.1.1.3 | Ensure chrony is configured | Fail | Covered Elsewhere |
+| 2.1.1.4 | Ensure ntp is configured | Pass ||
+| 2.1.2 | Ensure X Window System is not installed | Pass ||
+| 2.1.3 | Ensure Avahi Server is not installed | Pass ||
+| 2.1.4 | Ensure CUPS is not installed | Pass ||
+| 2.1.5 | Ensure DHCP Server is not installed | Pass ||
+| 2.1.6 | Ensure LDAP server is not installed | Pass ||
+| 2.1.7 | Ensure NFS is not installed | Pass ||
+| 2.1.8 | Ensure DNS Server is not installed | Pass ||
+| 2.1.9 | Ensure FTP Server is not installed | Pass ||
+| 2.1.10 | Ensure HTTP server is not installed | Pass ||
+| 2.1.11 | Ensure IMAP and POP3 server are not installed | Pass ||
+| 2.1.12 | Ensure Samba is not installed | Pass ||
+| 2.1.13 | Ensure HTTP Proxy Server is not installed | Pass ||
+| 2.1.14 | Ensure SNMP Server is not installed | Pass ||
+| 2.1.15 | Ensure mail transfer agent is configured for local-only mode | Pass ||
+| 2.1.16 | Ensure rsync service is not installed | Fail | |
+| 2.1.17 | Ensure NIS Server is not installed | Pass ||
+| 2.2 | Service Clients |||
+| 2.2.1 | Ensure NIS Client is not installed | Pass ||
+| 2.2.2 | Ensure rsh client is not installed | Pass ||
+| 2.2.3 | Ensure talk client is not installed | Pass ||
+| 2.2.4 | Ensure telnet client is not installed | Fail | |
+| 2.2.5 | Ensure LDAP client is not installed | Pass ||
+| 2.2.6 | Ensure RPC is not installed | Fail | Potential Operational Impact |
+| 2.3 | Ensure nonessential services are removed or masked | Pass | |
+| 3 | Network Configuration |||
+| 3.1 | Disable unused network protocols and devices |||
+| 3.1.2 | Ensure wireless interfaces are disabled | Pass ||
+| 3.2 | Network Parameters (Host Only) |||
+| 3.2.1 | Ensure packet redirect sending is disabled | Pass ||
+| 3.2.2 | Ensure IP forwarding is disabled | Fail | Not Applicable |
+| 3.3 | Network Parameters (Host and Router) |||
+| 3.3.1 | Ensure source routed packets are not accepted | Pass ||
+| 3.3.2 | Ensure ICMP redirects are not accepted | Pass ||
+| 3.3.3 | Ensure secure ICMP redirects are not accepted | Pass ||
+| 3.3.4 | Ensure suspicious packets are logged | Pass ||
+| 3.3.5 | Ensure broadcast ICMP requests are ignored | Pass ||
+| 3.3.6 | Ensure bogus ICMP responses are ignored | Pass ||
+| 3.3.7 | Ensure Reverse Path Filtering is enabled | Pass ||
+| 3.3.8 | Ensure TCP SYN Cookies is enabled | Pass ||
+| 3.3.9 | Ensure IPv6 router advertisements are not accepted | Pass ||
+| 3.4 | Uncommon Network Protocols |||
+| 3.5 | Firewall Configuration |||
+| 3.5.1 | Configure UncomplicatedFirewall |||
+| 3.5.1.1 | Ensure ufw is installed | Pass ||
+| 3.5.1.2 | Ensure iptables-persistent is not installed with ufw | Pass ||
+| 3.5.1.3 | Ensure ufw service is enabled | Fail | Covered Elsewhere |
+| 3.5.1.4 | Ensure ufw loopback traffic is configured | Fail | Covered Elsewhere |
+| 3.5.1.5 | Ensure ufw outbound connections are configured | Not Applicable | Covered Elsewhere |
+| 3.5.1.6 | Ensure ufw firewall rules exist for all open ports | Not Applicable | Covered Elsewhere |
+| 3.5.1.7 | Ensure ufw default deny firewall policy | Fail | Covered Elsewhere |
+| 3.5.2 | Configure nftables |||
+| 3.5.2.1 | Ensure nftables is installed | Fail | Covered Elsewhere |
+| 3.5.2.2 | Ensure ufw is uninstalled or disabled with nftables | Fail | Covered Elsewhere |
+| 3.5.2.3 | Ensure iptables are flushed with nftables | Not Applicable | Covered Elsewhere |
+| 3.5.2.4 | Ensure a nftables table exists | Fail | Covered Elsewhere |
+| 3.5.2.5 | Ensure nftables base chains exist | Fail | Covered Elsewhere |
+| 3.5.2.6 | Ensure nftables loopback traffic is configured | Fail | Covered Elsewhere |
+| 3.5.2.7 | Ensure nftables outbound and established connections are configured | Not Applicable | Covered Elsewhere |
+| 3.5.2.8 | Ensure nftables default deny firewall policy | Fail | Covered Elsewhere |
+| 3.5.2.9 | Ensure nftables service is enabled | Fail | Covered Elsewhere |
+| 3.5.2.10 | Ensure nftables rules are permanent | Fail | Covered Elsewhere |
+| 3.5.3| Configure iptables |||
+| 3.5.3.1 | Configure iptables software |||
+| 3.5.3.1.1 | Ensure iptables packages are installed | Fail | Covered Elsewhere |
+| 3.5.3.1.2 | Ensure nftables is not installed with iptables | Pass ||
+| 3.5.3.1.3 | Ensure ufw is uninstalled or disabled with iptables | Fail | Covered Elsewhere |
+| 3.5.3.2 | Configure IPv4 iptables |||
+| 3.5.3.2.1 | Ensure iptables default deny firewall policy | Fail | Covered Elsewhere |
+| 3.5.3.2.2 | Ensure iptables loopback traffic is configured | Fail | Not Applicable |
+| 3.5.3.2.3 | Ensure iptables outbound and established connections are configured | Not Applicable ||
+| 3.5.3.2.4 | Ensure iptables firewall rules exist for all open ports | Fail | Potential Operational Impact |
+| 3.5.3.3 | Configure IPv6 ip6tables |||
+| 3.5.3.3.1 | Ensure ip6tables default deny firewall policy | Fail | Covered Elsewhere |
+| 3.5.3.3.2 | Ensure ip6tables loopback traffic is configured | Fail | Covered Elsewhere |
+| 3.5.3.3.3 | Ensure ip6tables outbound and established connections are configured | Not Applicable | Covered Elsewhere |
+| 3.5.3.3.4 | Ensure ip6tables firewall rules exist for all open ports | Fail | Covered Elsewhere |
+| 4 | Logging and Auditing |||
+| 4.1 | Configure System Accounting (auditd) |||
+| 4.1.1.2 | Ensure auditing is enabled |||
+| 4.1.2 | Configure Data Retention |||
+| 4.2 | Configure Logging |||
+| 4.2.1 | Configure rsyslog |||
+| 4.2.1.1 | Ensure rsyslog is installed | Pass ||
+| 4.2.1.2 | Ensure rsyslog Service is enabled | Pass ||
+| 4.2.1.3 | Ensure logging is configured | Pass ||
+| 4.2.1.4 | Ensure rsyslog default file permissions configured | Pass ||
+| 4.2.1.5 | Ensure rsyslog is configured to send logs to a remote log host | Fail | Covered Elsewhere |
+| 4.2.1.6 | Ensure remote rsyslog messages are only accepted on designated log hosts. | Not Applicable | |
+| 4.2.2 | Configure journald |||
+| 4.2.2.1 | Ensure journald is configured to send logs to rsyslog | Pass ||
+| 4.2.2.2 | Ensure journald is configured to compress large log files | Fail | |
+| 4.2.2.3 | Ensure journald is configured to write logfiles to persistent disk | Pass | |
+| 4.2.3 | Ensure permissions on all logfiles are configured | Fail | |
+| 4.3 | Ensure logrotate is configured | Pass ||
+| 4.4 | Ensure logrotate assigns appropriate permissions | Fail | |
+| 5 | Access, Authentication, and Authorization |||
+| 5.1 | Configure time-based job schedulers |||
+| 5.1.1 | Ensure cron daemon is enabled and running | Pass ||
+| 5.1.2 | Ensure permissions on /etc/crontab are configured | Pass ||
+| 5.1.3 | Ensure permissions on /etc/cron.hourly are configured | Pass ||
+| 5.1.4 | Ensure permissions on /etc/cron.daily are configured | Pass ||
+| 5.1.5 | Ensure permissions on /etc/cron.weekly are configured | Pass ||
+| 5.1.6 | Ensure permissions on /etc/cron.monthly are configured | Pass ||
+| 5.1.7 | Ensure permissions on /etc/cron.d are configured | Pass ||
+| 5.1.8 | Ensure cron is restricted to authorized users | Fail | |
+| 5.1.9 | Ensure at is restricted to authorized users | Fail | |
+| 5.2 | Configure sudo |||
+| 5.2.1 | Ensure sudo is installed | Pass ||
+| 5.2.2 | Ensure sudo commands use pty | Fail | Potential Operational Impact |
+| 5.2.3 | Ensure sudo log file exists | Fail | |
+| 5.3 | Configure SSH Server |||
+| 5.3.1 | Ensure permissions on /etc/ssh/sshd_config are configured | Pass ||
+| 5.3.2 | Ensure permissions on SSH private host key files are configured | Pass ||
+| 5.3.3 | Ensure permissions on SSH public host key files are configured | Pass ||
+| 5.3.4 | Ensure SSH access is limited | Pass ||
+| 5.3.5 | Ensure SSH LogLevel is appropriate | Pass ||
+| 5.3.7 | Ensure SSH MaxAuthTries is set to 4 or less | Pass ||
+| 5.3.8 | Ensure SSH IgnoreRhosts is enabled | Pass ||
+| 5.3.9 | Ensure SSH HostbasedAuthentication is disabled | Pass ||
+| 5.3.10 | Ensure SSH root login is disabled | Pass ||
+| 5.3.11 | Ensure SSH PermitEmptyPasswords is disabled | Pass ||
+| 5.3.12 | Ensure SSH PermitUserEnvironment is disabled | Pass ||
+| 5.3.13 | Ensure only strong Ciphers are used | Pass ||
+| 5.3.14 | Ensure only strong MAC algorithms are used | Pass ||
+| 5.3.15 | Ensure only strong Key Exchange algorithms are used | Pass ||
+| 5.3.16 | Ensure SSH Idle Timeout Interval is configured | Fail | |
+| 5.3.17 | Ensure SSH LoginGraceTime is set to one minute or less | Pass ||
+| 5.3.18 | Ensure SSH warning banner is configured | Pass ||
+| 5.3.19 | Ensure SSH PAM is enabled | Pass ||
+| 5.3.21 | Ensure SSH MaxStartups is configured | Fail | |
+| 5.3.22 | Ensure SSH MaxSessions is limited | Pass ||
+| 5.4 | Configure PAM |||
+| 5.4.1 | Ensure password creation requirements are configured | Pass ||
+| 5.4.2 | Ensure lockout for failed password attempts is configured | Fail | |
+| 5.4.3 | Ensure password reuse is limited | Fail | |
+| 5.4.4 | Ensure password hashing algorithm is SHA-512 | Pass ||
+| 5.5 | User Accounts and Environment |||
+| 5.5.1 | Set Shadow Password Suite Parameters |||
+| 5.5.1.1 | Ensure minimum days between password changes is configured | Pass ||
+| 5.5.1.2 | Ensure password expiration is 365 days or less | Pass ||
+| 5.5.1.3 | Ensure password expiration warning days is 7 or more | Pass ||
+| 5.5.1.4 | Ensure inactive password lock is 30 days or less | Pass ||
+| 5.5.1.5 | Ensure all users last password change date is in the past | Fail | |
+| 5.5.2 | Ensure system accounts are secured | Pass ||
+| 5.5.3 | Ensure default group for the root account is GID 0 | Pass ||
+| 5.5.4 | Ensure default user umask is 027 or more restrictive | Pass ||
+| 5.5.5 | Ensure default user shell timeout is 900 seconds or less | Fail | |
+| 5.6 | Ensure root login is restricted to system console | Not Applicable | |
+| 5.7 | Ensure access to the su command is restricted | Fail | Potential Operational Impact |
+| 6 | System Maintenance |||
+| 6.1 | System File Permissions |||
+| 6.1.2 | Ensure permissions on /etc/passwd are configured | Pass ||
+| 6.1.3 | Ensure permissions on /etc/passwd- are configured | Pass ||
+| 6.1.4 | Ensure permissions on /etc/group are configured | Pass ||
+| 6.1.5 | Ensure permissions on /etc/group- are configured | Pass ||
+| 6.1.6 | Ensure permissions on /etc/shadow are configured | Pass ||
+| 6.1.7 | Ensure permissions on /etc/shadow- are configured | Pass ||
+| 6.1.8 | Ensure permissions on /etc/gshadow are configured | Pass ||
+| 6.1.9 | Ensure permissions on /etc/gshadow- are configured | Pass ||
+| 6.1.10 | Ensure no world writable files exist | Fail | Potential Operational Impact |
+| 6.1.11 | Ensure no unowned files or directories exist | Fail | Potential Operational Impact |
+| 6.1.12 | Ensure no ungrouped files or directories exist | Fail | Potential Operational Impact |
+| 6.1.13 | Audit SUID executables | Not Applicable | |
+| 6.1.14 | Audit SGID executables | Not Applicable | |
+| 6.2 | User and Group Settings |||
+| 6.2.1 | Ensure accounts in /etc/passwd use shadowed passwords | Pass ||
+| 6.2.2 | Ensure password fields are not empty | Pass ||
+| 6.2.3 | Ensure all groups in /etc/passwd exist in /etc/group | Pass ||
+| 6.2.4 | Ensure all users' home directories exist | Pass ||
+| 6.2.5 | Ensure users own their home directories | Pass ||
+| 6.2.6 | Ensure users' home directories permissions are 750 or more restrictive | Pass ||
+| 6.2.7 | Ensure users' dot files are not group or world writable | Pass ||
+| 6.2.8 | Ensure no users have .netrc files | Pass ||
+| 6.2.9 | Ensure no users have .forward files | Pass ||
+| 6.2.10 | Ensure no users have .rhosts files | Pass ||
+| 6.2.11 | Ensure root is the only UID 0 account | Pass ||
+| 6.2.12 | Ensure root PATH Integrity | Pass ||
+| 6.2.13 | Ensure no duplicate UIDs exist | Pass ||
+| 6.2.14 | Ensure no duplicate GIDs exist | Pass ||
+| 6.2.15 | Ensure no duplicate user names exist | Pass ||
+| 6.2.16 | Ensure no duplicate group names exist | Pass ||
+| 6.2.17 | Ensure shadow group is empty | Pass ||
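A long results table like the one above is easier to audit programmatically. The following is a small, hypothetical helper (not part of the article) that tallies the Pass/Fail/Not Applicable statuses from rows in this table's format; a three-row sample stands in for the full benchmark:

```python
from collections import Counter

# Three sample rows in the same format as the table above.
SAMPLE_ROWS = """\
| 1.1.1.1 | Ensure mounting of cramfs filesystems is disabled | Pass ||
| 1.1.1.6 | Ensure mounting of udf filesystems is disabled | Fail | Potential Operational Impact |
| 1.1.19 | Ensure nodev option set on removable media partitions | Not Applicable ||
"""

def tally(table: str) -> Counter:
    counts = Counter()
    for line in table.splitlines():
        # Drop the outer pipes, split into cells, and read the Status column.
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) >= 3 and cells[2] in ("Pass", "Fail", "Not Applicable"):
            counts[cells[2]] += 1
    return counts

counts = tally(SAMPLE_ROWS)
print(counts)
```

Run against the full table, the same function gives a quick compliance summary per status.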
+
+## Next steps
+
+For more information about AKS security, see the following articles:
+
+* [Azure Kubernetes Service (AKS)](./intro-kubernetes.md)
+* [AKS security considerations](./concepts-security.md)
+* [AKS best practices](./best-practices.md)
++
+[azure-update-management]: ../automation/update-management/overview.md
+[azure-file-integrity-monotoring]: ../security-center/security-center-file-integrity-monitoring.md
+[azure-time-sync]: ../virtual-machines/linux/time-sync.md
+[auzre-log-analytics-agent-overview]: ../azure-monitor/platform/log-analytics-agent.md
+[cis-benchmarks]: /compliance/regulatory/offering-CIS-Benchmark
+[cis-benchmark-aks]: https://www.cisecurity.org/benchmark/kubernetes/
+[cis-benchmark-ubuntu]: https://www.cisecurity.org/benchmark/ubuntu/
+[linux-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-linux.md
aks Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/devops-pipeline.md
Within your selected organization, create a _project_. If you don't have any pro
1. Select the name of your container registry.
-1. You can leave the image name and the service port set to the defaults.
+1. You can leave the image name set to the default.
+
+1. Set the service port to 8080.
1. Set the **Enable Review App for Pull Requests** checkbox for [review app](/azure/devops/pipelines/process/environments-kubernetes) related configuration to be included in the pipeline YAML auto-generated in subsequent steps.
You're now ready to create a release, which means to start the process of runnin
1. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
When the add-on is enabled, it creates a DNS Zone in your subscription. For more
> [!CAUTION]
> The HTTP application routing add-on is designed to let you quickly create an ingress controller and access your applications. This add-on is not currently designed for use in a production environment and is not recommended for production use. For production-ready ingress deployments that include multiple replicas and TLS support, see [Create an HTTPS ingress controller](./ingress-tls.md).
+
+## Limitations
+
+* HTTP application routing doesn't currently work with AKS versions 1.22.6+
+
## HTTP routing solution overview

The add-on deploys two components: a [Kubernetes Ingress controller][ingress] and an [External-DNS][external-dns] controller.
aks Security Hardened Vm Host Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-hardened-vm-host-image.md
- Title: Security hardening in AKS virtual machine hosts
-description: Learn about the security hardening in AKS VM host OS
-- Previously updated : 03/29/2021---
-# Security hardening for AKS agent node host OS
-
-As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI DSS, and HIPAA standards. This article covers the security hardening applied to AKS virtual machine (VM) hosts. For more information about AKS security, see [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](./concepts-security.md).
-
-> [!Note]
-> This document is scoped to Linux agents in AKS only.
-
-AKS clusters are deployed on host VMs, which run a security-optimized OS used for containers running on AKS. This host OS is based on an **Ubuntu 18.04.5 LTS** image with more [security hardening](#security-hardening-features) and optimizations applied.
-
-The goal of the security hardened host OS is to reduce the surface area of attack and optimize for the deployment of containers in a secure manner.
-
-> [!Important]
-> The security hardened OS is **not** CIS benchmarked. While it overlaps with CIS benchmarks, the goal is not to be CIS-compliant. The goal for host OS hardening is to converge on a level of security consistent with Microsoft's own internal host security standards.
-
-## Security hardening features
-
-* AKS provides a security-optimized host OS by default, but no option to select an alternate operating system.
-
-* Azure applies daily patches (including security patches) to AKS virtual machine hosts.
- * Some of these patches require a reboot, while others will not.
- * You're responsible for scheduling AKS VM host reboots as needed.
- * For guidance on how to automate AKS patching, see [patching AKS nodes](./node-updates-kured.md).
-
-## What is configured
-
-| CIS | Audit description|
-|||
-| 1.1.1.1 |Ensure mounting of cramfs filesystems is disabled|
-| 1.1.1.2 |Ensure mounting of freevxfs filesystems is disabled|
-| 1.1.1.3 |Ensure mounting of jffs2 filesystems is disabled|
-| 1.1.1.4 |Ensure mounting of HFS filesystems is disabled|
-| 1.1.1.5 |Ensure mounting of HFS Plus filesystems is disabled|
-|1.4.3 |Ensure authentication required for single user mode |
-|1.7.1.2 |Ensure local login warning banner is configured properly |
-|1.7.1.3 |Ensure remote login warning banner is configured properly |
-|1.7.1.5 |Ensure permissions on /etc/issue are configured |
-|1.7.1.6 |Ensure permissions on /etc/issue.net are configured |
-|2.1.5 |Ensure that --streaming-connection-idle-timeout is not set to 0 |
-|3.1.2 |Ensure packet redirect sending is disabled |
-|3.2.1 |Ensure source routed packets are not accepted |
-|3.2.2 |Ensure ICMP redirects are not accepted |
-|3.2.3 |Ensure secure ICMP redirects are not accepted |
-|3.2.4 |Ensure suspicious packets are logged |
-|3.3.1 |Ensure IPv6 router advertisements are not accepted |
-|3.5.1 |Ensure DCCP is disabled |
-|3.5.2 |Ensure SCTP is disabled |
-|3.5.3 |Ensure RDS is disabled |
-|3.5.4 |Ensure TIPC is disabled |
-|4.2.1.2 |Ensure logging is configured |
-|5.1.2 |Ensure permissions on /etc/crontab are configured |
-|5.2.4 |Ensure SSH X11 forwarding is disabled |
-|5.2.5 |Ensure SSH MaxAuthTries is set to 4 or less |
-|5.2.8 |Ensure SSH root login is disabled |
-|5.2.10 |Ensure SSH PermitUserEnvironment is disabled |
-|5.2.11 |Ensure only approved MAC algorithms are used |
-|5.2.12 |Ensure SSH Idle Timeout Interval is configured |
-|5.2.13 |Ensure SSH LoginGraceTime is set to one minute or less |
-|5.2.15 |Ensure SSH warning banner is configured |
-|5.3.1 |Ensure password creation requirements are configured |
-|5.4.1.1 |Ensure password expiration is 90 days or less |
-|5.4.1.4 |Ensure inactive password lock is 30 days or less |
-|5.4.4 |Ensure default user umask is 027 or more restrictive |
-|5.6 |Ensure access to the su command is restricted|
-
-## Additional notes
-
-* To further reduce the attack surface area, some unnecessary kernel module drivers have been disabled in the OS.
-
-* The security hardened OS is built and maintained specifically for AKS and is **not** supported outside of the AKS platform.
-
-## Next steps
-
-For more information about AKS security, see the following articles:
-
-* [Azure Kubernetes Service (AKS)](./intro-kubernetes.md)
-* [AKS security considerations](./concepts-security.md)
-* [AKS best practices](./best-practices.md)
api-management Api Management Error Handling Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-error-handling-policies.md
The following errors are predefined for error conditions that can occur during p
| ip-filter | Caller IP is in blocked list | CallerIpBlocked | Caller IP address is blocked. Access denied. |
| check-header | Required header not presented or value is missing | HeaderNotFound | Header {header-name} was not found in the request. Access denied. |
| check-header | Required header not presented or value is missing | HeaderValueNotAllowed | Header {header-name} value of {header-value} is not allowed. Access denied. |
-| validate-jwt | Jwt token is missing in request | TokenNotFound | JWT not found in the request. Access denied. |
+| validate-jwt | Jwt token is missing in request | TokenNotPresent | JWT not present. |
| validate-jwt | Signature validation failed | TokenSignatureInvalid | <message from jwt library\>. Access denied. |
| validate-jwt | Invalid audience | TokenAudienceNotAllowed | <message from jwt library\>. Access denied. |
| validate-jwt | Invalid issuer | TokenIssuerNotAllowed | <message from jwt library\>. Access denied. |
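These values surface in policy expressions through `context.LastError`, which exposes the failing policy's `Source`, `Reason`, and `Message`. As an illustrative sketch (not part of this article; the status code and response body shape are arbitrary choices), an `on-error` section could echo them back to the caller:

```xml
<on-error>
    <!-- context.LastError describes the policy error that routed execution here -->
    <set-status code="401" reason="Unauthorized" />
    <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
    </set-header>
    <set-body>@{
        return new JObject(
            new JProperty("source", context.LastError.Source),
            new JProperty("reason", context.LastError.Reason),
            new JProperty("message", context.LastError.Message)
        ).ToString();
    }</set-body>
</on-error>
```

In production you would typically log these details rather than return them, to avoid leaking validation internals to callers.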
For more information working with policies, see:
- [Policies in API Management](api-management-howto-policies.md)
- [Transform APIs](transform-api.md)
- [Policy Reference](./api-management-policies.md) for a full list of policy statements and their settings
-- [Policy samples](./policy-reference.md)
+- [Policy samples](./policy-reference.md)
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
Now that you've enabled access for users in an Azure AD tenant, you can:
* Control product visibility using Azure AD groups. Follow these steps to grant:
-* `Directory.Read.All` application permission for Microsoft Graph API and Azure Active Directory Graph API.
-* `User.Read` delegated permission for Microsoft Graph API.
+* `Directory.Read.All` **application** permission for Microsoft Graph API.
+* `User.Read` **delegated** permission for Microsoft Graph API.
1. Update the first 3 lines of the following Azure CLI script to match your environment and run it.
Follow these steps to grant:
   az login
   az account set --subscription $subId
   #Assign the following permissions: Microsoft Graph Delegated Permission: User.Read, Microsoft Graph Application Permission: Directory.ReadAll, Azure Active Directory Graph Application Permission: Directory.ReadAll (legacy)
- az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/$($tenantId)/applications/$($appObjectID)" --body "{'requiredResourceAccess':[{'resourceAccess': [{'id': 'e1fe6dd8-ba31-4d61-89e7-88639da4683d','type': 'Scope'},{'id': '7ab1d382-f21e-4acd-a863-ba3e13f7da61','type': 'Role'}],'resourceAppId': '00000003-0000-0000-c000-000000000000'},{'resourceAccess': [{'id': '5778995a-e1bf-45b8-affa-663a9f3f4d04','type': 'Role'}], 'resourceAppId': '00000002-0000-0000-c000-000000000000'}]}"
+ az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/$($tenantId)/applications/$($appObjectID)" --body "{'requiredResourceAccess':[{'resourceAccess': [{'id': 'e1fe6dd8-ba31-4d61-89e7-88639da4683d','type': 'Scope'},{'id': '7ab1d382-f21e-4acd-a863-ba3e13f7da61','type': 'Role'}],'resourceAppId': '00000003-0000-0000-c000-000000000000'}]}"
   ```
2. Log out and log back in to the Azure portal.
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md
Using key vault certificates is recommended because it helps improve API Managem
1. Assign a [key vault access policy](../key-vault/general/assign-access-policy-portal.md) to the managed identity with permissions to get and list certificates from the vault. To add the policy:
    1. In the portal, navigate to your key vault.
    1. Select **Settings > Access policies > + Add Access Policy**.
- 1. Select **Certificate permissions**, then select **Get** and **List**.
+ 1. Select **Secret permissions**, then select **Get** and **List**.
    1. In **Select principal**, select the resource name of your managed identity. If you're using a system-assigned identity, the principal is the name of your API Management instance.
1. Create or import a certificate to the key vault. See [Quickstart: Set and retrieve a certificate from Azure Key Vault using the Azure portal](../key-vault/certificates/quick-create-portal.md).
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
description: Learn how to create system-assigned and user-assigned identities in
documentationcenter: '' - -- Previously updated : 03/09/2021+ Last updated : 04/05/2022 # Use managed identities in Azure API Management
-This article shows you how to create a managed identity for an Azure API Management instance and how to access other resources. A managed identity generated by Azure Active Directory (Azure AD) allows your API Management instance to easily and securely access other Azure AD-protected resources, such as Azure Key Vault. Azure manages this identity, so you don't have to provision or rotate any secrets. For more information about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
+This article shows you how to create a managed identity for an Azure API Management instance and how to use it to access other resources. A managed identity generated by Azure Active Directory (Azure AD) allows your API Management instance to easily and securely access other Azure AD-protected resources, such as Azure Key Vault. Azure manages this identity, so you don't have to provision or rotate any secrets. For more information about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
You can grant two types of identities to an API Management instance:
You can grant two types of identities to an API Management instance:
To set up a managed identity in the Azure portal, you'll first create an API Management instance and then enable the feature.

1. Create an API Management instance in the portal as you normally would. Browse to it in the portal.
-2. Select **Managed identities**.
+2. In the left menu, under **Security**, select **Managed identities**.
3. On the **System assigned** tab, switch **Status** to **On**. Select **Save**.
- :::image type="content" source="./media/api-management-msi/enable-system-msi.png" alt-text="Selections for enabling a system-assigned managed identity" border="true":::
+ :::image type="content" source="./media/api-management-howto-use-managed-service-identity/enable-system-identity.png" alt-text="Selections for enabling a system-assigned managed identity" border="true":::
### Azure PowerShell
The following steps walk you through creating an API Management instance and ass
1. If needed, install Azure PowerShell by using the instructions in the [Azure PowerShell guide](/powershell/azure/install-az-ps). Then run `Connect-AzAccount` to create a connection with Azure.
-2. Use the following code to create the instance. For more examples of how to use Azure PowerShell with an API Management instance, see [API Management PowerShell samples](powershell-samples.md).
+2. Use the following code to create the instance with a system-assigned managed identity. For more examples of how to use Azure PowerShell with an API Management instance, see [API Management PowerShell samples](powershell-samples.md).
   ```azurepowershell-interactive
   # Create a resource group.
The following steps walk you through creating an API Management instance and ass
   New-AzApiManagement -ResourceGroupName $resourceGroupName -Name consumptionskuservice -Location $location -Sku Consumption -Organization contoso -AdminEmail contoso@contoso.com -SystemAssignedIdentity
   ```
-3. Update an existing instance to create the identity:
+You can also update an existing instance to create the identity:
- ```azurepowershell-interactive
- # Get an API Management instance
- $apimService = Get-AzApiManagement -ResourceGroupName $resourceGroupName -Name $apiManagementName
+```azurepowershell-interactive
+# Get an API Management instance
+$apimService = Get-AzApiManagement -ResourceGroupName $resourceGroupName -Name $apiManagementName
- # Update an API Management instance
- Set-AzApiManagement -InputObject $apimService -SystemAssignedIdentity
- ```
+# Update an API Management instance
+Set-AzApiManagement -InputObject $apimService -SystemAssignedIdentity
+```
### Azure Resource Manager template
-You can create an API Management instance with an identity by including the following property in the resource definition:
+You can create an API Management instance with a system-assigned identity by including the following property in the resource definition:
```json
"identity" : {
For example, a complete Azure Resource Manager template might look like the foll
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "0.9.0.0",
    "resources": [{
- "apiVersion": "2020-01-01",
+ "apiVersion": "2021-08-01",
        "name": "contoso",
        "type": "Microsoft.ApiManagement/service",
        "location": "[resourceGroup().location]",
When the instance is created, it has the following additional properties:
}
```
-The `tenantId` property identifies what Azure AD tenant the identity belongs to. The `principalId` property is a unique identifier for the instance's new identity. Within Azure AD, the service principal has the same name that you gave to your API Management instance.
+The `tenantId` property identifies which Azure AD tenant the identity belongs to. The `principalId` property is a unique identifier for the instance's new identity. Within Azure AD, the service principal has the same name that you gave to your API Management instance.
> [!NOTE]
> An API Management instance can have both system-assigned and user-assigned identities at the same time. In this case, the `type` property would be `SystemAssigned,UserAssigned`.
-## Supported scenarios using System Assigned Identity
+## Configure Key Vault access using a managed identity
+
+The following configurations are needed for API Management to access secrets and certificates from Key Vault.
+
+### Configure Key Vault access policy
+
+To configure an access policy using the portal:
+
+1. In the Azure portal, navigate to your key vault.
+1. Select **Settings > Access policies > + Add Access Policy**.
+1. Select **Secret permissions**, then select **Get** and **List**.
+1. In **Select principal**, select the resource name of your managed identity. If you're using a system-assigned identity, the principal is the name of your API Management instance.
+1. Select **Add**.
++
+## Supported scenarios using system-assigned identity
### <a name="use-ssl-tls-certificate-from-azure-key-vault"></a>Obtain a custom TLS/SSL certificate for the API Management instance from Azure Key Vault You can use the system-assigned identity of an API Management instance to retrieve custom TLS/SSL certificates stored in Azure Key Vault. You can then assign these certificates to custom domains in the API Management instance. Keep these considerations in mind:
The following example shows an Azure Resource Manager template that contains the
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "publisherEmail": {
The following example shows an Azure Resource Manager template that contains the
        "proxyCustomHostname1": {
            "type": "string",
            "metadata": {
- "description": "Proxy Custom hostname."
+ "description": "Gateway custom hostname."
            }
        },
        "keyVaultIdToCertificate": {
The following example shows an Azure Resource Manager template that contains the
        "apimServiceIdentityResourceId": "[concat(resourceId('Microsoft.ApiManagement/service', variables('apiManagementServiceName')),'/providers/Microsoft.ManagedIdentity/Identities/default')]"
    },
    "resources": [{
- "apiVersion": "2020-01-01",
+ "apiVersion": "2021-08-01",
        "name": "[variables('apiManagementServiceName')]",
        "type": "Microsoft.ApiManagement/service",
        "location": "[resourceGroup().location]",
The following example shows an Azure Resource Manager template that contains the
            "tenantId": "[reference(variables('apimServiceIdentityResourceId'), '2015-08-31-PREVIEW').tenantId]",
            "objectId": "[reference(variables('apimServiceIdentityResourceId'), '2015-08-31-PREVIEW').principalId]",
            "permissions": {
- "secrets": ["get"]
+ "secrets": ["get", "list"]
            }
        }]
    }
The following example shows an Azure Resource Manager template that contains the
}
```
-### Authenticate to the back end by using an API Management identity
+### Store and manage named values from Azure Key Vault
+
+You can use a system-assigned managed identity to access Azure Key Vault to store and manage secrets for use in API Management policies. For more information, see [Use named values in Azure API Management policies](api-management-howto-properties.md).
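When defined in an ARM template, a Key Vault-backed named value can be sketched roughly as the following resource fragment (the service name `contoso`, the named value `my-secret`, and the vault URL are hypothetical; the apiVersion matches the one used elsewhere in this article):

```json
{
    "type": "Microsoft.ApiManagement/service/namedValues",
    "apiVersion": "2021-08-01",
    "name": "contoso/my-secret",
    "properties": {
        "displayName": "my-secret",
        "secret": true,
        "keyVault": {
            "secretIdentifier": "https://contosovault.vault.azure.net/secrets/my-secret"
        }
    }
}
```

Because no object version is given in `secretIdentifier`, API Management tracks the latest version of the secret from the vault.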
+
+### Authenticate to a backend by using an API Management identity
-You can use the system-assigned identity to authenticate to the back end through the [authentication-managed-identity](api-management-authentication-policies.md#ManagedIdentity) policy.
+You can use the system-assigned identity to authenticate to a backend service through the [authentication-managed-identity](api-management-authentication-policies.md#ManagedIdentity) policy.
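As a minimal sketch of that policy in an API's inbound section (the `resource` value is the target service's app ID URI; Azure Resource Manager is shown here only as one example audience):

```xml
<inbound>
    <base />
    <!-- Acquire a token for the backend's audience using the system-assigned identity -->
    <authentication-managed-identity resource="https://management.azure.com/" />
</inbound>
```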
-### <a name="apim-as-trusted-service"></a>Connect to Azure resources behind IP Firewall using System Assigned Managed Identity
+### <a name="apim-as-trusted-service"></a>Connect to Azure resources behind IP firewall using system-assigned managed identity
-API Management is a trusted microsoft service to the following resources. This allows the service to connect to the following resources behind a firewall. After explicitly assigning the appropriate Azure role to the [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) for that resource instance, the scope of access for the instance corresponds to the Azure role assigned to the managed identity.
+API Management is a trusted Microsoft service to the following resources. This allows the service to connect to the following resources behind a firewall. After you explicitly assign the appropriate Azure role to the [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) for that resource instance, the scope of access for the instance corresponds to the Azure role assigned to the managed identity.
|Azure Service | Link|
|||
+|Azure Key Vault | [Trusted-access-to-azure-key-vault](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services)|
|Azure Storage | [Trusted-access-to-azure-storage](../storage/common/storage-network-security.md?tabs=azure-portal#trusted-access-based-on-system-assigned-managed-identity)|
|Azure Service Bus | [Trusted-access-to-azure-service-bus](../service-bus-messaging/service-bus-ip-filtering.md#trusted-microsoft-services)|
|Azure Event Hub | [Trusted-access-to-azure-event-hub](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)|
-
## Create a user-assigned managed identity

> [!NOTE]
API Management is a trusted microsoft service to the following resources. This a
### Azure portal
-To set up a managed identity in the portal, you'll first create an API Management instance and then enable the feature.
+To set up a managed identity in the portal, you'll first create an API Management instance and [create a user-assigned identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Then, enable the feature.
1. Create an API Management instance in the portal as you normally would. Browse to it in the portal.
-2. Select **Managed identities**.
+2. In the left menu, under **Security**, select **Managed identities**.
3. On the **User assigned** tab, select **Add**.
4. Search for the identity that you created earlier and select it. Select **Add**.
- :::image type="content" source="./media/api-management-msi/enable-user-assigned-msi.png" alt-text="Selections for enabling a user-assigned managed identity" border="true":::
+ :::image type="content" source="./media/api-management-howto-use-managed-service-identity/enable-user-assigned-identity.png" alt-text="Selections for enabling a user-assigned managed identity" border="true":::
### Azure PowerShell
The following steps walk you through creating an API Management instance and ass
1. If needed, install the Azure PowerShell by using the instructions in the [Azure PowerShell guide](/powershell/azure/install-az-ps). Then run `Connect-AzAccount` to create a connection with Azure.
-2. Use the following code to create the instance. For more examples of how to use Azure PowerShell with an API Management instance, see [API Management PowerShell samples](powershell-samples.md).
+1. Use the following code to create the instance. For more examples of how to use Azure PowerShell with an API Management instance, see [API Management PowerShell samples](powershell-samples.md).
   ```azurepowershell-interactive
   # Create a resource group.
The following steps walk you through creating an API Management instance and ass
   New-AzApiManagement -ResourceGroupName $resourceGroupName -Location $location -Name $apiManagementName -Organization contoso -AdminEmail admin@contoso.com -Sku Consumption -UserAssignedIdentity $userIdentities
   ```
-3. Update an existing service to assign an identity to the service:
+You can also update an existing service to assign an identity to the service:
- ```azurepowershell-interactive
- # Get an API Management instance
- $apimService = Get-AzApiManagement -ResourceGroupName $resourceGroupName -Name $apiManagementName
+```azurepowershell-interactive
+# Get an API Management instance
+$apimService = Get-AzApiManagement -ResourceGroupName $resourceGroupName -Name $apiManagementName
- # Create a user-assigned identity. This requires installation of the "Az.ManagedServiceIdentity" module.
- $userAssignedIdentity = New-AzUserAssignedIdentity -Name $userAssignedIdentityName -ResourceGroupName $resourceGroupName
+# Create a user-assigned identity. This requires installation of the "Az.ManagedServiceIdentity" module.
+$userAssignedIdentity = New-AzUserAssignedIdentity -Name $userAssignedIdentityName -ResourceGroupName $resourceGroupName
- # Update an API Management instance
- $userIdentities = @($userAssignedIdentity.Id)
- Set-AzApiManagement -InputObject $apimService -UserAssignedIdentity $userIdentities
- ```
+# Update an API Management instance
+$userIdentities = @($userAssignedIdentity.Id)
+Set-AzApiManagement -InputObject $apimService -UserAssignedIdentity $userIdentities
+```
### Azure Resource Manager template
For example, a complete Azure Resource Manager template might look like the foll
    "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
    "contentVersion": "0.9.0.0",
    "resources": [{
- "apiVersion": "2020-12-01",
+ "apiVersion": "2021-08-01",
        "name": "contoso",
        "type": "Microsoft.ApiManagement/service",
        "location": "[resourceGroup().location]",
The `principalId` property is a unique identifier for the identity that's used f
> [!NOTE]
> An API Management instance can have both system-assigned and user-assigned identities at the same time. In this case, the `type` property would be `SystemAssigned,UserAssigned`.
-## Supported scenarios using User Assigned Managed Identity
+## Supported scenarios using user-assigned managed identity
### <a name="use-ssl-tls-certificate-from-azure-key-vault-ua"></a>Obtain a custom TLS/SSL certificate for the API Management instance from Azure Key Vault
-You can use any user-assigned identity to establish trust between an API Management instance and KeyVault. This trust can then be used to retrieve custom TLS/SSL certificates stored in Azure Key Vault. You can then assign these certificates to custom domains in the API Management instance.
+You can use a user-assigned identity to establish trust between an API Management instance and Azure Key Vault. This trust can then be used to retrieve custom TLS/SSL certificates stored in Azure Key Vault. You can then assign these certificates to custom domains in the API Management instance.
+
+> [!IMPORTANT]
+> If [Key Vault firewall](../key-vault/general/network-security.md) is enabled on your key vault, you can't use a user-assigned identity for access from API Management. You can use the system-assigned identity instead. In Key Vault firewall, the **Allow Trusted Microsoft Services to bypass this firewall** option must also be enabled.
Keep these considerations in mind:
Keep these considerations in mind:
> [!Important]
> If you don't provide the object version of the certificate, API Management will automatically obtain the newer version of the certificate within four hours after it's updated in Key Vault.
-For the complete template, see [API Management with KeyVault based SSL using User Assigned Identity](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.apimanagement/api-management-key-vault-create/azuredeploy.json).
+For the complete template, see [API Management with Key Vault based SSL using User Assigned Identity](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.apimanagement/api-management-key-vault-create/azuredeploy.json).
In this template, you will deploy:
-* Azure API Management
-* Azure Managed User Assigned Identity
-* Azure KeyVault for storing the SSL/TLS certificate
+* Azure API Management instance
+* Azure user-assigned managed identity
+* Azure Key Vault for storing the SSL/TLS certificate
To run the deployment automatically, click the following button:

[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-key-vault-create%2Fazuredeploy.json)
-### Authenticate to the back end by using a user-assigned identity
+### Store and manage named values from Azure Key Vault
+
+You can use a user-assigned managed identity to access Azure Key Vault to store and manage secrets for use in API Management policies. For more information, see [Use named values in Azure API Management policies](api-management-howto-properties.md).
+
+> [!NOTE]
+> If [Key Vault firewall](../key-vault/general/network-security.md) is enabled on your key vault, you can't use a user-assigned identity for access from API Management. You can use the system-assigned identity instead. In Key Vault firewall, the **Allow Trusted Microsoft Services to bypass this firewall** option must also be enabled.
+
+### Authenticate to a backend by using a user-assigned identity
-You can use the user-assigned identity to authenticate to the back end through the [authentication-managed-identity](api-management-authentication-policies.md#ManagedIdentity) policy.
+You can use the user-assigned identity to authenticate to a backend service through the [authentication-managed-identity](api-management-authentication-policies.md#ManagedIdentity) policy.
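The policy shape is the same as in the system-assigned case, with a `client-id` attribute selecting which user-assigned identity to use. A minimal sketch (the client ID value and Key Vault audience are illustrative):

```xml
<inbound>
    <base />
    <!-- client-id is the client ID of the user-assigned identity (hypothetical value) -->
    <authentication-managed-identity resource="https://vault.azure.net" client-id="00001111-aaaa-2222-bbbb-3333cccc4444" />
</inbound>
```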
## <a name="remove"></a>Remove an identity
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
The following steps describe how to assign the App Configuration Data Reader rol
1. Select **Add** > **Add role assignment**.
- ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
1. On the **Role** tab, select the **App Configuration Data Reader** role.
- ![Add role assignment page with Role tab selected.](./media/add-role-assignment-role.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot showing Add role assignment page with Role tab selected.":::
1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-development.md
description: Learn how to develop code for Azure Cache for Redis.
Previously updated : 03/23/2022 Last updated : 04/15/2022
While you can connect from outside of Azure, it isn't recommended *especially wh
The public IP address assigned to your cache can change as a result of a scale operation or backend improvement. We recommend relying on the hostname, in the form `<cachename>.redis.cache.windows.net`, instead of an explicit public IP address.
+## Choose an appropriate Redis version
+
+The default version of Redis that is used when creating a cache can change over time. Azure Cache for Redis might adopt a new version when a new version of open-source Redis is released. If you need a specific version of Redis for your application, we recommend choosing the Redis version explicitly when you create the cache.
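Following that recommendation, the Redis version can be pinned at creation time with the Azure CLI. A minimal sketch (resource group, cache name, and location are placeholder values, and the `--redis-version` parameter assumes a recent Azure CLI release):

```azurecli-interactive
# Create a cache pinned to Redis 6 rather than relying on the service default.
az redis create \
    --resource-group myResourceGroup \
    --name myCacheName \
    --location westus2 \
    --sku Standard \
    --vm-size c1 \
    --redis-version 6
```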
+ ## Use TLS encryption Azure Cache for Redis requires TLS encrypted communications by default. TLS versions 1.0, 1.1 and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible.
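The TLS 1.2 recommendation can also be enforced on the client side. A minimal sketch using Python's standard `ssl` module (the cache hostname in the comment is a placeholder; how the context is passed to a Redis client depends on the client library):

```python
import ssl

# Build a client-side TLS context that refuses TLS 1.0/1.1,
# matching the guidance to use TLS 1.2 if at all possible.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A Redis client would then wrap its socket to
# <cachename>.redis.cache.windows.net:6380 with this context.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```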
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
As with any cloud system, unplanned outages can occur that result in a virtual machine (VM) instance, an Availability Zone, or a complete Azure region going down. We recommend customers have a plan in place to handle zone or regional outages.
-This article presents the information for customers to create a _business continuity and disaster recovery plan_ for their Azure Cache for Redis, or Azure Cache for Redis Enterprise implementation.
+This article presents the information for customers to create a *business continuity and disaster recovery plan* for their Azure Cache for Redis, or Azure Cache for Redis Enterprise implementation.
Various high availability options are available in the Standard, Premium, and Enterprise tiers:
A zone redundant cache provides automatic failover. When the current primary nod
A cache in either Enterprise tier runs on a Redis Enterprise *cluster*. It always requires an odd number of server nodes to form a quorum. By default, it has three nodes, each hosted on a dedicated VM. -- An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*. -- An Enterprise Flash cache has three same-sized data nodes.
+- An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*.
+- An Enterprise Flash cache has three same-sized data nodes.
The Enterprise cluster divides Azure Cache for Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never collocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
Consider choosing a geo-redundant storage account to ensure high availability of
Applicable tiers: **Premium**
-[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two or more Azure Cache for Redis instances, typically spanning two Azure regions. Geo-replication is designed mainly for disaster recovery. Two Premium tier cache instances are connected through geo-replication in away that provides reads and writes to your primary cache, and that data is replicated to the secondary cache.
+[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two or more Azure Cache for Redis instances, typically spanning two Azure regions. Geo-replication is designed mainly for disaster recovery. Two Premium tier cache instances are connected through geo-replication in a way that provides reads and writes to your primary cache, and that data is replicated to the secondary cache.
For more information on how to set it up, see [Configure geo-replication for Premium Azure Cache for Redis instances](./cache-how-to-geo-replication.md). If the region hosting the primary cache goes down, you'll need to start the failover by first unlinking the secondary cache and then updating your application to point to the secondary cache for reads and writes.
Learn more about how to configure Azure Cache for Redis high-availability option
- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers) - [Add replicas to Azure Cache for Redis](cache-how-to-multi-replicas.md) - [Enable zone redundancy for Azure Cache for Redis](cache-how-to-zone-redundancy.md)-- [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md)
+- [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md)
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 02/25/2022 Last updated : 04/22/2022 # What's New in Azure Cache for Redis
+## April 2022
+
+### New metrics for connection creation rate
+
+These two new metrics can help identify whether Azure Cache for Redis clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and **Redis Server Load**.
+
+- Connections Created Per Second
+- Connections Closed Per Second
+
+For more information, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
+
+### Default cache change
+
+On May 15, 2022, all new Azure Cache for Redis instances will use Redis 6 by default. You can still create a Redis 4 instance by explicitly selecting the version when you create an Azure Cache for Redis instance.
+
+This change doesn't affect any existing instances. The change is only applicable to new instances created after May 15, 2022.
+
+The default version of Redis that is used when creating a cache can change over time. Azure Cache for Redis might adopt a new version when a new version of open-source Redis is released. If you need a specific version of Redis for your application, we recommend choosing the Redis version explicitly when you create the cache.
+ ## February 2022 ### TLS Certificate Change As of May 2022, Azure Cache for Redis rolls over to TLS certificates issued by DigiCert Global G2 CA Root. The current Baltimore CyberTrust Root expires in May 2025, requiring this change.
-We expect that most Azure Cache for Redis customers won't be affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), which is known as *certificate pinning*.
+We expect that most Azure Cache for Redis customers won't be affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), known as *certificate pinning*.
For more information, read this blog that contains instructions on [how to check whether your client application is affected](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-cache-for-redis-tls-upcoming-migration-to-digicert-global/ba-p/3171086). We recommend taking the actions recommended in the blog to avoid cache connectivity loss.
For more information on the effect to Azure Cache for Redis, see [Azure TLS Cert
## Next steps
-If you have more questions, contact us through [support](https://azure.microsoft.com/support/options/).
+If you have more questions, contact us through [support](https://azure.microsoft.com/support/options/).
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
The process for installing the extension varies depending on the extension versi
# [Functions 2.x+](#tab/functionsv2/in-process)
-Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [NuGet package], version 3.x.
+Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB), version 3.x.
# [Extension 4.x+ (preview)](#tab/extensionv4/in-process)
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
}, "location": { "type": "string",
- "defaultValue": "westus2",
- "allowedValues": [
- "westus2",
- "eastus2",
- "eastus2euap"
- ],
"metadata": { "description": "Specifies the location in which to create the Data Collection Endpoint." }
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
}, "location": { "type": "string",
- "defaultValue": "westus2",
- "allowedValues": [
- "westus2",
- "eastus2",
- "eastus2euap"
- ],
"metadata": { "description": "Specifies the location in which to create the Data Collection Rule." }
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
"logFiles ": [ { "streams": [
- "Custom-MyLogFileFormat "
+ "Custom-MyLogFileFormat"
], "filePatterns": [ "C:\\JavaLogs\\*.log"
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
"Custom-MyLogFileFormat" ], "filePatterns": [
- "/var/*.log"
+ "//var//*.log"
], "format": "text", "settings": {
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
}, "location": { "type": "string",
- "defaultValue": "westus2",
- "allowedValues": [
- "westus2",
- "eastus2",
- "eastus2euap"
- ],
"metadata": { "description": "Specifies the location in which to create the Data Collection Rule." }
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
Title: Best practices for autoscale description: Autoscale patterns in Azure for Web Apps, Virtual Machine Scale sets, and Cloud Services Previously updated : 07/07/2017 Last updated : 04/22/2022 # Best practices for Autoscale
If you manually update the instance count to a value above or below the maximum,
### Always use a scale-out and scale-in rule combination that performs an increase and decrease If you use only one part of the combination, autoscale will only take action in a single direction (scale out, or in) until it reaches the maximum, or minimum instance counts, as defined in the profile. This is not optimal; ideally you want your resource to scale out at times of high usage to ensure availability. Similarly, at times of low usage you want your resource to scale in, so you can realize cost savings.
+When you use a scale-in and scale-out rule, ideally use the same metric to control both. Otherwise, it's possible that the scale-in and scale-out conditions could be met at the same time resulting in some level of flapping. For example, the following rule combination is *not* recommended because there is no scale-in rule for memory usage:
+
+* If CPU > 90%, scale-out by 1
+* If Memory > 90%, scale-out by 1
+* If CPU < 45%, scale-in by 1
+
+In this example, you can have a situation in which the memory usage is over 90% but the CPU usage is under 45%. This can lead to flapping for as long as both conditions are met.
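The flapping condition above can be sketched with a deliberately simplified model of the rule engine (thresholds taken from the rules above; the real autoscale engine evaluates rules per time window with cool-down periods):

```python
def evaluate(cpu: float, memory: float, instances: int) -> int:
    """Apply the three example rules and return the new instance count."""
    if cpu > 90:
        instances += 1  # scale-out on CPU
    if memory > 90:
        instances += 1  # scale-out on memory (no matching scale-in rule)
    if cpu < 45:
        instances -= 1  # scale-in on CPU only
    return instances

# Memory is high while CPU is low: a scale-out and a scale-in condition
# hold at the same time, so the count oscillates between evaluations.
print(evaluate(cpu=30, memory=95, instances=4))  # 4: +1 then -1, a flap
print(evaluate(cpu=95, memory=10, instances=4))  # 5: clean scale-out
```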
+ ### Choose the appropriate statistic for your diagnostics metric For diagnostics metrics, you can choose among *Average*, *Minimum*, *Maximum* and *Total* as a metric to scale by. The most common statistic is *Average*.
azure-monitor Autoscale Common Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-metrics.md
Title: Autoscale common metrics description: Learn which metrics are commonly used for autoscaling your Cloud Services, Virtual Machines and Web Apps. Previously updated : 12/6/2016 Last updated : 04/22/2022
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
Title: Get started with autoscale in Azure
-description: "Learn how to scale your resource Web App, Cloud Service, Virtual Machine or Virtual Machine scale set in Azure."
+description: "Learn how to scale your resource web app, cloud service, virtual machine, or virtual machine scale set in Azure."
Previously updated : 07/07/2017 Last updated : 04/05/2022 # Get started with Autoscale in Azure
Azure Monitor autoscale applies only to [Virtual Machine scale sets](https://azu
You can discover all the resources for which Autoscale is applicable in Azure Monitor. Use the following steps for a step-by-step walkthrough: 1. Open the [Azure portal.][1]
-1. Click the Azure Monitor icon in the left pane.
- ![Open Azure Monitor][2]
+1. Click the Azure Monitor icon at the top of the page.
+ [![Screenshot on how to open Azure Monitor.](./media/autoscale-get-started/click-on-monitor-1.png)](./media/autoscale-get-started/click-on-monitor-1.png#lightbox)
1. Click **Autoscale** to view all the resources for which Autoscale is applicable, along with their current Autoscale status.
- ![Discover Autoscale in Azure Monitor][3]
-
+ [![Screenshot of Autoscale in Azure Monitor.](./media/autoscale-get-started/click-on-autoscale-2.png)](./media/autoscale-get-started/click-on-autoscale-2.png#lightbox)
+
+
You can use the filter pane at the top to scope down the list to select resources in a specific resource group, specific resource types, or a specific resource.
+[![Screenshot of View resource status.](./media/autoscale-get-started/view-all-resources-3.png)](./media/autoscale-get-started/view-all-resources-3.png#lightbox)
+ For each resource, you will find the current instance count and the Autoscale status. The Autoscale status can be: - **Not configured**: You have not enabled Autoscale yet for this resource. - **Enabled**: You have enabled Autoscale for this resource. - **Disabled**: You have disabled Autoscale for this resource. +
+Additionally, you can reach the scaling page by clicking on **All Resources** on the home page and filtering to the resource you're interested in scaling.
+
+[![Screenshot of all resources.](./media/autoscale-get-started/choose-all-resources.png)](./media/autoscale-get-started/choose-all-resources.png#lightbox)
++
+Once you've selected the resource that you're interested in, select the **Scaling** tab to configure autoscaling rules.
+
+[![Screenshot of scaling button.](./media/autoscale-get-started/scaling-page.png)](./media/autoscale-get-started/scaling-page.png#lightbox)
+ ## Create your first Autoscale setting Let's now go through a simple step-by-step walkthrough to create your first Autoscale setting.
-1. Open the **Autoscale** blade in Azure Monitor and select a resource that you want to scale. (The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.][4])
-1. Note that the current instance count is 1. Click **Enable autoscale**.
- ![Scale setting for new web app][5]
-1. Provide a name for the scale setting, and then click **Add a rule**. Notice the scale rule options that open as a context pane on the right side. By default, this sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70 percent. Leave it at its default values and click **Add**.
- ![Create scale setting for a web app][6]
+1. Open the **Autoscale** blade in Azure Monitor and select a resource that you want to scale. (The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.][5])
+1. Note that the current instance count is 1. Click **Custom autoscale**.
+ [![Scale setting for new web app.](./media/autoscale-get-started/manual-scale-04.png)](./media/autoscale-get-started/manual-scale-04.png#lightbox)
+1. Provide a name for the scale setting, and then click **Add a rule**. This opens as a context pane on the right side. By default, this sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70 percent. Leave it at its default values and click **Add**.
+ [![Create scale setting for a web app.](./media/autoscale-get-started/custom-scale-add-rule-05.png)](./media/autoscale-get-started/custom-scale-add-rule-05.png#lightbox)
1. You've now created your first scale rule. Note that the UX recommends best practices and states that "It is recommended to have at least one scale in rule." To do so: a. Click **Add a rule**.
Let's now go through a simple step-by-step walkthrough to create your first Auto
d. Set **Operation** to **Decrease count by**. You should now have a scale setting that scales out/scales in based on CPU usage.
- ![Scale based on CPU][8]
+ [![Scale based on CPU](./media/autoscale-get-started/custom-scale-results-06.png)](./media/autoscale-get-started/custom-scale-results-06.png#lightbox)
1. Click **Save**. Congratulations! You've now successfully created your first scale setting to autoscale your web app based on CPU usage. > [!NOTE]
-> The same steps are applicable to get started with a virtual machine scale set or cloud service role.
+> The same steps are applicable to get started with a Virtual Machine Scale Set or cloud service role.
## Other considerations ### Scale based on a schedule
In addition to scale based on CPU, you can set your scale differently for specif
1. Select **Repeat specific days** for the schedule. 1. Select the days and the start/end time for when the scale condition should be applied.
-![Scale condition based on schedule][9]
+[![Scale condition based on schedule](./media/autoscale-get-started/scale-same-based-on-condition-07.png)](./media/autoscale-get-started/scale-same-based-on-condition-07.png#lightbox)
### Scale differently on specific dates In addition to scale based on CPU, you can set your scale differently for specific dates.
In addition to scale based on CPU, you can set your scale differently for specif
1. Select **Specify start/end dates** for the schedule. 1. Select the start/end dates and the start/end time for when the scale condition should be applied.
-![Scale condition based on dates][10]
+[![Scale condition based on dates](./media/autoscale-get-started/scale-different-based-on-time-08.png)](./media/autoscale-get-started/scale-different-based-on-time-08.png#lightbox)
### View the scale history of your resource Whenever your resource is scaled up or down, an event is logged in the activity log. You can view the scale history of your resource for the past 24 hours by switching to the **Run history** tab.
-![Run history][11]
+![Run history][12]
If you want to view the complete scale history (for up to 90 days), select **Click here to see more details**. The activity log opens, with Autoscale pre-selected for your resource and category. ### View the scale definition of your resource Autoscale is an Azure Resource Manager resource. You can view the scale definition in JSON by switching to the **JSON** tab.
-![Scale definition][12]
+[![Scale definition](./media/autoscale-get-started/view-scale-definition-09.png)](./media/autoscale-get-started/view-scale-definition-09.png#lightbox)
You can make changes in JSON directly, if required. These changes will be reflected after you save them.
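For reference, a trimmed sketch of the kind of definition the **JSON** tab shows for the CPU-based setting created earlier (the resource IDs, profile name, and capacity values are illustrative placeholders, not output from a real resource):

```json
{
  "properties": {
    "enabled": true,
    "targetResourceUri": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/serverfarms/<plan>",
    "profiles": [
      {
        "name": "Auto created scale condition",
        "capacity": { "minimum": "1", "maximum": "2", "default": "1" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "CpuPercentage",
              "metricResourceUri": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/serverfarms/<plan>",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 70
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
```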
-### Disable Autoscale and manually scale your instances
-There might be times when you want to disable your current scale setting and manually scale your resource.
-
-Click the **Disable autoscale** button at the top.
-![Disable Autoscale][13]
-
-> [!NOTE]
-> This option disables your configuration. However, you can get back to it after you enable Autoscale again.
-
-You can now set the number of instances that you want to scale to manually.
-
-![Set manual scale][14]
-
-You can always return to Autoscale by clicking **Enable autoscale** and then **Save**.
- ### Cool-down period effects Autoscale uses a cool-down period to prevent "flapping", which is the rapid, repetitive up and down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). Other valuable information on flapping and understanding how to monitor the autoscale engine can be found in [Autoscale Best Practices](autoscale-best-practices.md#choose-the-thresholds-carefully-for-all-metric-types) and [Troubleshooting autoscale](autoscale-troubleshoot.md) respectively.
To learn more about moving resources between regions and disaster recovery in Az
<!--Reference--> [1]:https://portal.azure.com
-[2]: ./media/autoscale-get-started/azure-monitor-launch.png
-[3]: ./media/autoscale-get-started/discover-autoscale-azure-monitor.png
-[4]: ../../app-service/quickstart-dotnetcore.md
-[5]: ./media/autoscale-get-started/scale-setting-new-web-app.png
-[6]: ./media/autoscale-get-started/create-scale-setting-web-app.png
-[7]: ./media/autoscale-get-started/scale-in-recommendation.png
-[8]: ./media/autoscale-get-started/scale-based-on-cpu.png
-[9]: ./media/autoscale-get-started/scale-condition-schedule.png
-[10]: ./media/autoscale-get-started/scale-condition-dates.png
-[11]: ./media/autoscale-get-started/scale-history.png
-[12]: ./media/autoscale-get-started/scale-definition-json.png
-[13]: ./media/autoscale-get-started/disable-autoscale.png
-[14]: ./media/autoscale-get-started/set-manualscale.png
+[2]: ./media/autoscale-get-started/click-on-monitor-1.png
+[3]: ./media/autoscale-get-started/click-on-autoscale-2.png
+[4]: ./media/autoscale-get-started/view-all-resources-3.png
+[5]: ../../app-service/quickstart-dotnetcore.md
+[6]: ./media/autoscale-get-started/manual-scale-04.png
+[7]: ./media/autoscale-get-started/custom-scale-add-rule-05.png
+[8]: ./media/autoscale-get-started/scale-in-recommendation.png
+[9]: ./media/autoscale-get-started/custom-scale-results-06.png
+[10]: ./media/autoscale-get-started/scale-same-based-on-condition-07.png
+[11]: ./media/autoscale-get-started/scale-different-based-on-time-08.png
+[12]: ./media/autoscale-get-started/scale-history.png
+[13]: ./media/autoscale-get-started/view-scale-definition-09.png
+[14]: ./media/autoscale-get-started/disable-autoscale.png
+[15]: ./media/autoscale-get-started/set-manualscale.png
+[16]: ./media/autoscale-get-started/choose-all-resources.png
+[17]: ./media/autoscale-get-started/scaling-page.png
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
Title: Autoscale in Microsoft Azure
description: "Autoscale in Microsoft Azure" Previously updated : 09/24/2018 Last updated : 04/22/2022
To learn more about autoscale, use the Autoscale Walkthroughs listed previously
* [Best practices for Azure Monitor autoscale](autoscale-best-practices.md) * [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md) * [Autoscale REST API](/rest/api/monitor/autoscalesettings)
-* [Troubleshooting Virtual Machine Scale Sets Autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
+* [Troubleshooting Virtual Machine Scale Sets Autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
azure-monitor Azure Monitor Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md
The following schemas are relevant to action groups, which are part of the notif
## See Also - See [Monitoring Azure Azure Monitor](monitor-azure-monitor.md) for a description of what Azure Monitor monitors in itself. -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
For the following Kubernetes environments:
- Azure Stack or on-premises - Azure Red Hat OpenShift and Red Hat OpenShift version 4.x
-run the command `kubectl apply -f <configmap_yaml_file.yaml`.
+run the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-For an Azure Red Hat OpenShift v3.x cluster, run the command, `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging` to open the file in your default editor to modify and then save it.
+For example, run the command `kubectl apply -f container-azm-ms-agentconfig.yaml` after modifying and saving the file.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" updated`.
+The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, so they don't all restart at the same time. When the restarts are finished, a message similar to the following is displayed, indicating that the configmap resource was created: `configmap "container-azm-ms-agentconfig" created`.
## Verify configuration
azure-netapp-files Azacsnap Cmd Ref Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-backup.md
In this example the *log file* name is `azacsnap-backup-bootVol.log` (see [Log f
The log file name is constructed from the following "(command name)-(the `-c` option)-(the config filename)". For example, if running the command `azacsnap -c backup --configfile h80.json --retention 5 --prefix one-off` then the log file will be called `azacsnap-backup-h80.log`. Or if using the `-c test` option with the same configuration file (e.g. `azacsnap -c test --configfile h80.json`) then the log file will be called `azacsnap-test-h80.log`.
+> [!NOTE]
+> Log files can be maintained automatically by following [this guide](azacsnap-tips.md#manage-azacsnap-log-files).
+ ## Next steps - [Get snapshot details](azacsnap-cmd-ref-details.md)
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-installation.md
Create RBAC Service Principal
1. Create a service principal using Azure CLI per the following example: ```azurecli-interactive
- az ad sp create-for-rbac --role Contributor --scopes /subscriptions/{subscription-id} --sdk-auth
+ az ad sp create-for-rbac --name "AzAcSnap" --role Contributor --scopes /subscriptions/{subscription-id} --sdk-auth
``` 1. This should generate an output like the following example:
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-tips.md
Further explanation of cron and the format of the crontab file here: <https://en
> Users are responsible for monitoring the cron jobs to ensure snapshots are being generated successfully.
+## Manage AzAcSnap log files
+
+AzAcSnap writes output of its operation to log files to assist with debugging and to validate correct operation. These log files will continue to grow unless actively managed. Fortunately, UNIX-based systems have a tool called logrotate to manage and archive log files.
+
+The following is an example logrotate configuration. It keeps a maximum of 31 logs (approximately one month), and when log files grow larger than 10k it rotates and compresses them.
+
+```output
+# azacsnap logrotate configuration file
+compress
+
+~/bin/azacsnap*.log {
+ rotate 31
+ size 10k
+}
+```
+
+After creating the logrotate.conf file, run logrotate on a regular basis to archive AzAcSnap log files. This can be done using cron. The following line of the azacsnap user's crontab runs logrotate on a daily schedule using the configuration file described above.
+
+```output
+@daily /usr/sbin/logrotate -s ~/logrotate.state ~/logrotate.conf >> ~/logrotate.log
+```
+
+> [!NOTE]
+> In the example above the logrotate.conf file is in the user's home (~) directory.
+
+After several days the azacsnap log files should look similar to the following directory listing.
+
+```bash
+ls -ltra ~/bin/logs
+```
+
+```output
+-rw-r--r-- 1 azacsnap users 127431 Mar 14 23:56 azacsnap-backup-azacsnap.log.6.gz
+-rw-r--r-- 1 azacsnap users 128379 Mar 15 23:56 azacsnap-backup-azacsnap.log.5.gz
+-rw-r--r-- 1 azacsnap users 129272 Mar 16 23:56 azacsnap-backup-azacsnap.log.4.gz
+-rw-r--r-- 1 azacsnap users 128010 Mar 17 23:56 azacsnap-backup-azacsnap.log.3.gz
+-rw-r--r-- 1 azacsnap users 128947 Mar 18 23:56 azacsnap-backup-azacsnap.log.2.gz
+-rw-r--r-- 1 azacsnap users 128971 Mar 19 23:56 azacsnap-backup-azacsnap.log.1.gz
+-rw-r--r-- 1 azacsnap users 167921 Mar 20 01:21 azacsnap-backup-azacsnap.log
+```
++ ## Monitor the snapshots The following conditions should be monitored to ensure a healthy system:
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 04/06/2022 Last updated : 04/21/2022 # SMB FAQs for Azure NetApp Files
Azure NetApp Files supports [`CHANGE_NOTIFY` response](/openspecs/windows_protoc
Azure NetApp Files also supports [`LOCK` response](/openspecs/windows_protocols/ms-smb2/e215700a-102c-450a-a598-7ec2a99cd82c). This response is for the client's request that comes in the form of a [`LOCK` request](/openspecs/windows_protocols/ms-smb2/6178b960-48b6-4999-b589-669f88e9017d).
+## What is the password rotation policy for the Active Directory machine account for SMB volumes?
+
+The Azure NetApp Files service has a policy that automatically updates the password on the Active Directory machine account that is created for SMB volumes. This policy has the following properties:
+
+* Schedule interval: 4 weeks
+* Schedule randomization period: 120 minutes
+* Schedule: Sunday `@0100`
+
+To see when the password was last updated on the Azure NetApp Files SMB machine account, check the `pwdLastSet` property on the computer account using the [Attribute Editor](create-volumes-dual-protocol.md#access-active-directory-attribute-editor) in the **Active Directory Users and Computers** utility:
+
+![Screenshot that shows the Active Directory Users and Computers utility](../media/azure-netapp-files/active-directory-users-computers-utility.png)
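The raw `pwdLastSet` value shown in Attribute Editor is a Windows FILETIME (100-nanosecond intervals since January 1, 1601 UTC), so it needs converting to a readable timestamp. A small sketch of that conversion (the example raw value is hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Windows FILETIME epoch: 1601-01-01 UTC, counted in 100-ns ticks.
FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert a raw pwdLastSet/FILETIME value to a UTC datetime."""
    return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)

# Hypothetical raw value as it might appear in Attribute Editor:
print(filetime_to_datetime(132950448000000000))  # 2022-04-21 20:00:00+00:00
```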
+
+>[!NOTE]
+> Due to an interoperability issue with the [April 2022 Monthly Windows Update](https://support.microsoft.com/topic/april-12-2022-kb5012670-monthly-rollup-cae43d16-5b5d-43ea-9c52-9174177c6277), the policy that automatically updates the Active Directory machine account password for SMB volumes has been suspended until a fix is deployed.
+ ## Next steps - [FAQs about SMB performance for Azure NetApp Files](azure-netapp-files-smb-performance.md)
azure-signalr Signalr Howto Authorize Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-application.md
The following steps describe how to assign a `SignalR App Server` role to a serv
1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
-1. Select **Access Control (IAM)**.
+1. Select **Access control (IAM)**.
1. Select **Add > Add role assignment**. :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select **SignalR App Server**.
+1. On the **Role** tab, select **SignalR App Server**.
1. On the **Members** tab, select **User, group, or service principal**, and then select **Select members**.
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
The following steps describe how to assign a `SignalR App Server` role to a syst
1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
-1. Select **Access Control (IAM)**.
+1. Select **Access control (IAM)**.
1. Select **Add > Add role assignment**. :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select **SignalR App Server**.
+1. On the **Role** tab, select **SignalR App Server**.
1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+1. Select your Azure subscription.
+ 1. Select **System-assigned managed identity**, search for a virtual machine to which you'd like to assign the role, and then select it. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
azure-sql Active Geo Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-overview.md
Previously updated : 4/13/2022 Last updated : 4/14/2022 # Active geo-replication
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-sql-database-vcore.md
Previously updated : 04/21/2022 Last updated : 04/22/2022 # vCore purchasing model - Azure SQL Database
Compute tier options in the vCore model include the provisioned and [serverless]
Hardware configurations in the vCore model include Gen4, Gen5, M-series, Fsv2-series, and DC-series. Hardware configuration defines compute and memory limits and other characteristics that impact workload performance.
-Certain hardware configurations such as Gen5 may use more than one type of processor (CPU), as described in [Compute resources (CPU and memory)](#compute-resources-cpu-and-memory). While a given database or elastic pool tends to stay on the hardware with the same CPU type for a long time (commonly for multiple months), there are certain events that can cause a database or pool to be moved to hardware that uses a different CPU type. For example, a database or pool can be moved if it's scaled up or down to a different service objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of life.
+Certain hardware configurations such as Gen5 may use more than one type of processor (CPU), as described in [Compute resources (CPU and memory)](#compute-resources-cpu-and-memory). While a given database or elastic pool tends to stay on the hardware with the same CPU type for a long time (commonly for multiple months), there are certain events that can cause a database or pool to be moved to hardware that uses a different CPU type. For example, a database or pool can be moved if it is scaled up or down to a different service objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of life.
For some workloads, a move to a different CPU type can change performance. SQL Database configures hardware with the goal to provide predictable workload performance even if CPU type changes, keeping performance changes within a narrow band. However, across the wide spectrum of customer workloads running in SQL Database, and as new types of CPUs become available, it is possible to occasionally see more noticeable changes in performance if a database or pool moves to a different CPU type.
The following table compares compute resources in different hardware configurati
|Hardware configuration |CPU |Memory | |:|:|:|
-|Gen4 |- Intel&reg; E5-2673 v3 (Haswell) 2.4-GHz processors<br>- Provision up to 24 vCores (1 vCore = 1 physical core) |- 7 GB per vCore<br>- Provision up to 168 GB|
-|Gen5 |**Provisioned compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz, Intel&reg; SP-8160 (Skylake)\*, and Intel&reg; 8272CL (Cascade Lake) 2.5 GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)<br><br>**Serverless compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz and Intel&reg; SP-8160 (Skylake)* processors<br>- Auto-scale up to 40 vCores (1 vCore = 1 hyper-thread)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max|
-|Fsv2-series |- Intel&reg; 8168 (Skylake) processors<br>- Featuring a sustained all core turbo clock speed of 3.4 GHz and a maximum single core turbo clock speed of 3.7 GHz.<br>- Provision up to 72 vCores (1 vCore = 1 hyper-thread)|- 1.9 GB per vCore<br>- Provision up to 136 GB|
-|M-series |- Intel&reg; E7-8890 v3 2.5 GHz and Intel&reg; 8280M 2.7 GHz (Cascade Lake) processors<br>- Provision up to 128 vCores (1 vCore = 1 hyper-thread)|- 29 GB per vCore<br>- Provision up to 3.7 TB|
-|DC-series | - Intel&reg; XEON E-2288G processors<br>- Featuring Intel Software Guard Extension (Intel SGX))<br>- Provision up to 8 vCores (1 vCore = 1 physical core) | 4.5 GB per vCore |
+|Gen4 |- Intel&reg; E5-2673 v3 (Haswell) 2.4-GHz processors<br>- Provision up to 24 vCores (physical) |- 7 GB per vCore<br>- Provision up to 168 GB|
+|Gen5 |**Provisioned compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3 GHz, Intel&reg; SP-8160 (Skylake)\*, Intel&reg; 8272CL (Cascade Lake) 2.5 GHz\*, and Intel&reg; Xeon Platinum 8307C (Ice Lake)\* processors<br>- Provision up to 80 vCores (hyper-threaded)<br><br>**Serverless compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3 GHz, Intel&reg; SP-8160 (Skylake)\*, Intel&reg; 8272CL (Cascade Lake) 2.5 GHz\*, and Intel Xeon&reg; Platinum 8307C (Ice Lake)\* processors<br>- Auto-scale up to 40 vCores (hyper-threaded)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max|
+|Fsv2-series |- Intel&reg; 8168 (Skylake) processors<br>- Featuring a sustained all core turbo clock speed of 3.4 GHz and a maximum single core turbo clock speed of 3.7 GHz.<br>- Provision up to 72 vCores (hyper-threaded)|- 1.9 GB per vCore<br>- Provision up to 136 GB|
+|M-series |- Intel&reg; E7-8890 v3 2.5 GHz and Intel&reg; 8280M 2.7 GHz (Cascade Lake) processors<br>- Provision up to 128 vCores (hyper-threaded)|- 29 GB per vCore<br>- Provision up to 3.7 TB|
+|DC-series | - Intel&reg; XEON E-2288G processors<br>- Featuring Intel Software Guard Extension (Intel SGX))<br>- Provision up to 8 vCores (physical) | 4.5 GB per vCore |
-\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for databases using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, hardware generation for databases using Intel&reg; 8272CL (Cascade Lake) appears as Gen7 and hardware generation for databases using Intel Xeon&reg; Platinum 8307C (Ice Lake) appear as Gen8. For a given compute size and hardware configuration, resource limits are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
+\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for databases using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, hardware generation for databases using Intel&reg; 8272CL (Cascade Lake) appears as Gen7, and hardware generation for databases using Intel Xeon&reg; Platinum 8307C (Ice Lake) appear as Gen8. For a given compute size and hardware configuration, resource limits are the same regardless of CPU type (Broadwell, Skylake, Ice Lake, or Cascade Lake).
For more information see resource limits for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
Title: Configure external identity source for vCenter Server description: Learn how to configure Active Directory over LDAP or LDAPS for vCenter Server as an external identity source. Previously updated : 04/07/2022 Last updated : 04/22/2022
-# Configure external identity source for vCenter
+# Configure external identity source for vCenter Server
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-configure-networking.md
You can use the **Azure vNet connect** feature to use an existing vNet or create
>[!NOTE] >Address space in the vNet cannot overlap with the Azure VMware Solution private cloud CIDR.
+### Prerequisites
+
+Before you select an existing vNet, make sure it meets these requirements:
+
+1. The vNet must contain a gateway subnet.
+1. The vNet must be in the same region as the Azure VMware Solution private cloud.
+1. The vNet must be in the same resource group as the Azure VMware Solution private cloud.
+1. The vNet's address space must not overlap with the Azure VMware Solution private cloud CIDR.
### Select an existing vNet
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
You can use the [Azure Backup service](./backup-overview.md) to back up Azure fi
## Supported regions
-Azure file shares backup is available in all regions, **except** for Germany Central (Sovereign), Germany Northeast (Sovereign), China East, China East 2, China North, China North 2, and US Gov Iowa.
+Azure file shares backup is available in all regions, **except** for Germany Central (Sovereign), Germany Northeast (Sovereign), China East, China East 2, China North, China North 2, France South, and US Gov Iowa.
## Supported storage accounts
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy (in preview) description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 03/25/2022 Last updated : 04/12/2022
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure o
You must enable backup of Trusted Launch VM through enhanced policy only. Enhanced policy provides the following features: -- Supports _Multiple Backups Per Day_. To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com).
+- Supports *Multiple Backups Per Day* (in preview).
- Instant Restore tier is zonally redundant using Zone-redundant storage (ZRS) resiliency. See the [pricing details for Enhanced policy storage here](https://azure.microsoft.com/pricing/details/managed-disks/). :::image type="content" source="./media/backup-azure-vms-enhanced-policy/enhanced-backup-policy-settings.png" alt-text="Screenshot showing the enhanced backup policy options.":::
Follow these steps:
With backup schedule set to **Hourly**, the default selection for start time is **8 AM**, schedule is **Every 4 hours**, and duration is **24 Hours**. Hourly backup has a minimum RPO of 4 hours and a maximum of 24 hours. You can set the backup schedule to 4, 6, 8, 12, and 24 hours respectively.
- Note that Hourly backup frequency is in preview. To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com).
+ Note that Hourly backup frequency is in preview.
- **Instant Restore**: You can set the retention of recovery snapshot from _1_ to _30_ days. The default value is set to _7_. - **Retention range**: Options for retention range are auto-selected based on backup frequency you choose. The default retention for daily, weekly, monthly, and yearly backup points are set to 180 days, 12 weeks, 60 months, and 10 years respectively. You can customize these values as required.
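The schedule above (start time, frequency, 24-hour duration) fully determines the set of backup points. A short Python sketch that enumerates them — `backup_times` is a hypothetical helper, not part of any Azure SDK:

```python
from datetime import datetime, timedelta

def backup_times(start: datetime, every_hours: int, duration_hours: int = 24):
    """Enumerate backup points for an hourly schedule over one duration window."""
    # Supported hourly frequencies per the policy description above.
    assert every_hours in (4, 6, 8, 12, 24), "unsupported hourly frequency"
    return [start + timedelta(hours=h) for h in range(0, duration_hours, every_hours)]

# Defaults: start 8 AM, every 4 hours, 24-hour duration → 6 backup points.
times = backup_times(datetime(2022, 4, 12, 8, 0), every_hours=4)
print([t.strftime("%H:%M") for t in times])
# → ['08:00', '12:00', '16:00', '20:00', '00:00', '04:00']
```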
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 02/18/2022 Last updated : 04/12/2022
Monthly/yearly backup| Not supported when backing up with Azure VM extension. On
Automatic clock adjustment | Not supported.<br/><br/> Azure Backup doesn't automatically adjust for daylight saving time changes when backing up a VM.<br/><br/> Modify the policy manually as needed. [Security features for hybrid backup](./backup-azure-security-feature.md) |Disabling security features isn't supported. Back up the VM whose machine time is changed | Not supported.<br/><br/> If the machine time is changed to a future date-time after enabling backup for that VM, however even if the time change is reverted, successful backup isn't guaranteed.
-Multiple Backups Per Day | Supported, using _Enhanced policy_ (in preview). To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com). <br><br> For hourly backup, the minimum RPO is 4 hours and the maximum is 24 hours. You can set the backup schedule to 4, 6, 8, 12, and 24 hours respectively. Learn about how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
+Multiple Backups Per Day | Supported using *Enhanced policy* (in preview). <br><br> For hourly backup, the minimum RPO is 4 hours and the maximum is 24 hours. You can set the backup schedule to 4, 6, 8, 12, and 24 hours respectively. Learn about how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
## Operating system support (Windows)
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
description: Understand the available actions you can use with Chaos Studio incl
Previously updated : 03/03/2022 Last updated : 04/21/2022
Known issues on Linux:
| Capability Name | Reboot-1.0 | | Target type | Microsoft-AzureClusteredCacheForRedis | | Description | Causes a forced reboot operation to occur on the target to simulate a brief outage. |
-| Prerequisites | None. |
+| Prerequisites | The target Azure Cache for Redis resource must be a Redis Cluster, which requires that the cache must be a Premium Tier cache. Standard and Basic Tiers are not supported. |
| Urn | urn:csci:microsoft:azureClusteredCacheForRedis:reboot/1.0 | | Fault type | Discrete | | Parameters (key, value) | |
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
The chaos agent is an application that runs in your virtual machine or virtual m
```azurecli-interactive az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}' ```
-3. If setting up a virtual machine scale set, verify that the instances have been upgraded to the latest model.
+3. If setting up a virtual machine scale set, verify that the instances have been upgraded to the latest model. If needed, upgrade all instances in the model.
+
+ ```azurecli-interactive
+ az vmss update-instances -g $RESOURCE_GROUP -n $VMSS_NAME --instance-ids *
+ ```
## Create an experiment
chaos-studio Chaos Studio Tutorial Agent Based Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md
sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.
![Reviewing agent-based target enablement](images/tutorial-agent-based-targets-enable-review.png) 7. After a few minutes, a notification will appear indicating that the resource(s) selected were successfully enabled. The Azure portal will add the user-assigned identity to the virtual machine, enable the agent target and capabilities, and install the chaos agent as a virtual machine extension. ![Notification showing target successfully enabled](images/tutorial-agent-based-targets-enable-confirm.png)
+8. If enabling a virtual machine scale set, upgrade its instances to the latest model: go to the virtual machine scale set resource blade, click **Instances**, and if the instances aren't on the latest model, select them all and click **Upgrade**.
You have now successfully onboarded your Linux virtual machine to Chaos Studio. In the **Targets** view you can also manage the capabilities enabled on this resource. Clicking the **Manage actions** link next to a resource will display the capabilities enabled for that resource.
You are now ready to run your experiment. To see the impact, we recommend openin
## Next steps Now that you have run an agent-based experiment, you are ready to: - [Create an experiment that uses service-direct faults](chaos-studio-tutorial-service-direct-portal.md)-- [Manage your experiment](chaos-studio-run-experiment.md)
+- [Manage your experiment](chaos-studio-run-experiment.md)
cognitive-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/limits.md
These represent the limits when Prebuilt API is used to *Generate response* or c
> [!IMPORTANT] > Support for unstructured file/content and is available only in question answering.
+## Alterations limits
+[Alterations](https://docs.microsoft.com/rest/api/cognitiveservices/qnamaker/alterations/replace) do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '_', '@', '#'
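A minimal client-side sketch for pre-validating an alteration string against the disallowed characters listed above, before sending it to the Alterations API (`is_valid_alteration` is an illustrative helper, not part of any SDK):

```python
# Disallowed characters for alterations, per the list above.
DISALLOWED = set(",?:;\"'(){}[]-+./!*_@#")

def is_valid_alteration(text: str) -> bool:
    """Return True if the alteration contains none of the disallowed characters."""
    return not any(ch in DISALLOWED for ch in text)

print(is_valid_alteration("botframework"))   # → True
print(is_valid_alteration("bot-framework"))  # → False ('-' is disallowed)
```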
+ ## Next steps Learn when and how to change [service pricing tiers](How-To/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku).
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/conversation-transcription.md
Last updated 01/23/2022 -+ # What is conversation transcription?
cognitive-services Custom Commands Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands-encryption-of-data-at-rest.md
Last updated 07/05/2020 + # Custom Commands encryption of data at rest
cognitive-services Custom Commands References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands-references.md
Last updated 06/18/2020 + # Custom Commands concepts and definitions
cognitive-services Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands.md
Last updated 03/11/2020 + # What is Custom Commands?
cognitive-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/direct-line-speech.md
Last updated 03/11/2020 + # What is Direct Line Speech?
cognitive-services How To Async Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-async-conversation-transcription.md
Last updated 11/04/2019 ms.devlang: csharp, java-+ zone_pivot_groups: programming-languages-set-twenty-one
cognitive-services How To Custom Commands Debug Build Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-debug-build-time.md
Last updated 06/18/2020 + # Debug errors when authoring a Custom Commands application
cognitive-services How To Custom Commands Debug Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-debug-runtime.md
Last updated 06/18/2020 + # Troubleshoot a Custom Commands application at runtime
cognitive-services How To Custom Commands Deploy Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-deploy-cicd.md
Last updated 06/18/2020 + # Continuous Deployment with Azure DevOps
cognitive-services How To Custom Commands Developer Flow Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-developer-flow-test.md
Last updated 06/18/2020 + # Test your Custom Commands Application
cognitive-services How To Custom Commands Integrate Remote Skills https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-integrate-remote-skills.md
Last updated 09/30/2020 + # Export Custom Commands application as a remote skill
cognitive-services How To Custom Commands Send Activity To Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-send-activity-to-client.md
Last updated 06/18/2020 ms.devlang: csharp+ # Send Custom Commands activity to client application
cognitive-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md
Last updated 06/18/2020 ms.devlang: csharp-+ # Integrate with a client application using Speech SDK
cognitive-services How To Custom Commands Setup Web Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-web-endpoints.md
Last updated 06/18/2020 ms.devlang: csharp+ # Set up web endpoints
cognitive-services How To Custom Commands Update Command From Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-client.md
Last updated 10/20/2020 + # Update a command from a client app
cognitive-services How To Custom Commands Update Command From Web Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-web-endpoint.md
Last updated 10/20/2020 + # Update a command from a web endpoint
cognitive-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
Last updated 12/15/2020 + # Develop Custom Commands applications
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
Last updated 01/24/2022
zone_pivot_groups: acs-js-csharp ms.devlang: csharp, javascript-+ # Quickstart: Real-time Conversation Transcription
cognitive-services How To Windows Voice Assistants Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-windows-voice-assistants-get-started.md
Last updated 04/15/2020 + # Get started with voice assistants on Windows
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/multi-device-conversation.md
Last updated 02/19/2022 + # What is Multi-device Conversation?
cognitive-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
Last updated 02/19/2022 -+ # Quickstart: Create a voice assistant with Custom Commands
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/multi-device-conversation.md
Last updated 06/25/2020
zone_pivot_groups: programming-languages-set-nine ms.devlang: cpp, csharp-+ # Quickstart: Multi-device Conversation
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/voice-assistants.md
Last updated 06/25/2020 ms.devlang: csharp, golang, java-+ zone_pivot_groups: programming-languages-voice-assistants
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Previously updated : 01/24/2022 Last updated : 04/22/2022
In the following tables, the parameters without the **Adjustable** row aren't ad
| Quota | Free (F0)<sup>3</sup> | Standard (S0) | |--|--|--|
-| **Max number of transactions per second (TPS) per Speech service resource** | | |
-| Real-time API. Prebuilt neural voices and custom neural voices. | 20 per 60 seconds | 200<sup>4</sup> |
-| Adjustable | No<sup>4</sup> | Yes<sup>4</sup> |
+| **Max number of transactions per certain time period per Speech service resource** | | |
+| Real-time API. Prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds | 200 transactions per second (TPS) |
+| Adjustable | No<sup>4</sup> | Yes<sup>5</sup> |
| **HTTP-specific quotas** | | | | Max audio length produced per request | 10 min | 10 min | | Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
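The Free (F0) limit above is a rolling window (20 transactions per 60 seconds), not a per-second rate. A client-side sliding-window throttle sketches how to stay under it — this is purely illustrative and not the service's actual enforcement mechanism:

```python
from collections import deque

class SlidingWindowLimiter:
    """Client-side throttle: allow at most `limit` requests per `window_s` seconds."""
    def __init__(self, limit: int, window_s: float):
        self.limit, self.window_s = limit, window_s
        self.sent = deque()  # timestamps of requests still inside the window

    def allow(self, now: float) -> bool:
        # Drop timestamps that have aged out of the rolling window.
        while self.sent and now - self.sent[0] >= self.window_s:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False

# Free (F0) tier: 20 transactions per rolling 60 seconds.
f0 = SlidingWindowLimiter(limit=20, window_s=60)
print(sum(f0.allow(t) for t in range(30)))  # → 20 (requests 21-30 are rejected)
```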
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
After you've published your custom lexicon, you can reference it from your SSML.
When you use this custom lexicon, "BTW" is read as "By the way." "Benigni" is read with the provided IPA "bɛˈniːnji."
-It's easy to make mistakes in the custom lexicon, so Microsoft provides a [validation tool for the custom lexicon](https://github.com/jiajzhan/Custom-Lexicon-Validation). It provides detailed error messages that help you find errors. Before you send SSML with the custom lexicon to the Speech service, check your custom lexicon with this tool.
+It's easy to make mistakes in the custom lexicon, so Microsoft provides a [validation tool for the custom lexicon](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomLexiconValidation). It provides detailed error messages that help you find errors. Before you send SSML with the custom lexicon to the Speech service, check your custom lexicon with this tool.
**Limitations**
The Mathematical Markup Language (MathML) is an XML-compliant markup language th
This SSML snippet demonstrates how the MathML elements are used to output synthesized speech. The text-to-speech output for this example is "a squared plus b squared equals c squared". ```xml
-<math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>a</mi><mn>2</mn></msup><mo>+</mo><msup><mi>b</mi><mn>2</mn></msup><mo>=</mo><msup><mi>c</mi><mn>2</mn></msup></math>
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US"><voice name="en-US-JennyNeural"><math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>a</mi><mn>2</mn></msup><mo>+</mo><msup><mi>b</mi><mn>2</mn></msup><mo>=</mo><msup><mi>c</mi><mn>2</mn></msup></math></voice></speak>
```
-The `xmlns` attribute in `<math xmlns="http://www.w3.org/1998/Math/MathML">` is optional.
-All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3.0](https://www.w3.org/TR/MathML3/) specifications are supported, except the MathML 3.0 [Elementary Math](https://www.w3.org/TR/MathML3/chapter3.html#presm.elementary) elements. The `semantics`, `annotation`, and `annotation-xml` elements don't output speech, so they are ignored.
+The `xmlns` attribute in `<math xmlns="http://www.w3.org/1998/Math/MathML">` is optional.
+
+All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3.0](https://www.w3.org/TR/MathML3/) specifications are supported, except the MathML 3.0 [Elementary Math](https://www.w3.org/TR/MathML3/chapter3.html#presm.elementary) elements. The `semantics`, `annotation`, and `annotation-xml` elements don't output speech, so they are ignored.
> [!NOTE] > If an element is not recognized, it will be ignored, and the child elements within it will still be processed.
cognitive-services Tutorial Tenant Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-tenant-model.md
- Title: Create a tenant model (preview) - Speech Service-
-description: Automatically generate a secure, compliant tenant model (Custom Speech with Microsoft 365 data) that uses your Microsoft 365 data to deliver optimal speech recognition for organization-specific terms.
------ Previously updated : 06/25/2020----
-# Tutorial: Create a tenant model (preview)
-
-Tenant Model (Custom Speech with Microsoft 365 data) is an opt-in service for Microsoft 365 enterprise customers that automatically generates a custom speech recognition model from your organization's Microsoft 365 data. The model is optimized for technical terms, jargon, and people's names, all in a secure and compliant way.
-
-> [!IMPORTANT]
-> If your organization enrolls by using the Tenant Model service, Speech Service may access your organization's language model. The model is generated from Microsoft 365 public group emails and documents, which can be seen by anyone in your organization. Your organization's admin can turn on or turn off the use of the organization-wide language model from the admin portal.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Enroll in the Tenant Model by using the Microsoft 365 admin center
-> * Get a Speech subscription key
-> * Create a tenant model
-> * Deploy a tenant model
-> * Use your tenant model with the Speech SDK
-
-## Enroll in the Tenant Model service
-
-Before you can deploy your tenant model, you need to be enrolled in the Tenant Model service. Enrollment is completed in the Microsoft 365 admin center and can be done only by your admin.
-
-1. Sign in to the [Microsoft 365 admin center](https://admin.microsoft.com).
-
-1. In the left pane, select **Settings**, then select **Settings** from the nested menu, then select **Azure Speech Services** from the main window.
-
- ![The "Services & add-ins" pane](media/tenant-language-model/tenant-language-model-enrollment.png)
-
-1. Select the **Allow the organization-wide language model** check box, and then select **Save changes**.
-
- ![The Azure Speech Services pane](media/tenant-language-model/tenant-language-model-enrollment-2.png)
-
-To turn off the tenant model instance:
-1. Repeat the preceding steps 1 and 2.
-1. Clear the **Allow the organization-wide language model** check box, and then select **Save changes**.
-
-## Get a Speech subscription key
-
-To use your tenant model with the Speech SDK, you need a Speech resource and its associated subscription key.
-
-1. Sign in to the [Azure portal](https://aka.ms/azureportal).
-1. Select **Create a resource**.
-1. In the **Search** box, type **Speech**.
-1. In the results list, select **Speech**, and then select **Create**.
-1. Follow the onscreen instructions to create your resource. Make sure that:
- * **Location** is set to either **eastus** or **westus**.
- * **Pricing tier** is set to **S0**.
-1. Select **Create**.
-
- After a few minutes, your resource is created. The subscription key is available in the **Overview** section for your resource.
-
-## Create a language model
-
-After your admin has enabled Tenant Model for your organization, you can create a language model that's based on your Microsoft 365 data.
-
-1. Sign in to [Speech Studio](https://speech.microsoft.com/).
-1. At the top right, select **Settings** (gear icon), and then select **Tenant Model settings**.
-
- ![The "Tenant Model settings" link](media/tenant-language-model/tenant-language-settings.png)
-
- Speech Studio displays a message that lets you know whether you're qualified to create a tenant model.
-
- > [!NOTE]
- > Enterprise customers in North America are eligible to create a tenant model (English). If you're a Customer Lockbox, Customer Key, or Office 365 Government customer, this feature isn't available. To determine whether you're a Customer Lockbox or Customer Key customer, see:
- > * [Customer Lockbox](/microsoft-365/compliance/customer-lockbox-requests)
- > * [Customer Key](/microsoft-365/compliance/customer-key-overview)
- > * [Office 365 Government](https://www.microsoft.com/microsoft-365/government)
-
-1. Select **Opt in**.
-
- When your tenant model is ready, you'll receive a confirmation email message with further instructions.
-
-## Deploy your tenant model
-
-When your tenant model instance is ready, deploy it by doing the following:
-
-1. In your confirmation email message, select the **View model** button. Or sign in to [Speech Studio](https://speech.microsoft.com/).
-1. At the top right, select **Settings** (gear icon), and then select **Tenant Model settings**.
-
- ![The "Tenant Model settings" link](media/tenant-language-model/tenant-language-settings.png)
-
-1. Select **Deploy**.
-
- When your model has been deployed, the status changes to *Deployed*.
-
-## Use your tenant model with the Speech SDK
-
-Now that you've deployed your model, you can use it with the Speech SDK. In this section, you use sample code to call Speech Service by using Azure Active Directory (Azure AD) authentication.
-
-Let's look at the code you'll use to call the Speech SDK in C#. In this example, you perform speech recognition by using your tenant model. This guide presumes that your platform is already set up. If you need setup help, see [Quickstart: Recognize speech, C# (.NET Core)](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnetcore).
-
-Copy this code into your project:
-
-```csharp
-namespace PrincetonSROnly.FrontEnd.Samples
-{
- using System;
- using System.Collections.Generic;
- using System.IO;
- using System.Net.Http;
- using System.Text;
- using System.Text.RegularExpressions;
- using System.Threading.Tasks;
- using Microsoft.CognitiveServices.Speech;
- using Microsoft.CognitiveServices.Speech.Audio;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- using Newtonsoft.Json.Linq;
-
- // ServiceApplicationId is a fixed value. No need to change it.
-
- public class TenantLMSample
- {
- private const string EndpointUriArgName = "EndpointUri";
- private const string SubscriptionKeyArgName = "SubscriptionKey";
- private const string UsernameArgName = "Username";
- private const string PasswordArgName = "Password";
- private const string ClientApplicationId = "f87bc118-1576-4097-93c9-dbf8f45ef0dd";
- private const string ServiceApplicationId = "18301695-f99d-4cae-9618-6901d4bdc7be";
-
- public static async Task ContinuousRecognitionWithTenantLMAsync(Uri endpointUri, string subscriptionKey, string audioDirPath, string username, string password)
- {
- var config = SpeechConfig.FromEndpoint(endpointUri, subscriptionKey);
-
- // Passing client specific information for obtaining LM
- if (string.IsNullOrEmpty(username) || string.IsNullOrEmpty(password))
- {
- config.AuthorizationToken = await AcquireAuthTokenWithInteractiveLoginAsync().ConfigureAwait(false);
- }
- else
- {
- config.AuthorizationToken = await AcquireAuthTokenWithUsernamePasswordAsync(username, password).ConfigureAwait(false);
- }
-
- var stopRecognition = new TaskCompletionSource<int>();
-
- // Creates a speech recognizer using file as audio input.
- // Replace with your own audio file name.
- using (var audioInput = AudioConfig.FromWavFileInput(audioDirPath))
- {
- using (var recognizer = new SpeechRecognizer(config, audioInput))
- {
- // Subscribes to events
- recognizer.Recognizing += (s, e) =>
- {
- Console.WriteLine($"RECOGNIZING: Text={e.Result.Text}");
- };
-
- recognizer.Recognized += (s, e) =>
- {
- if (e.Result.Reason == ResultReason.RecognizedSpeech)
- {
- Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
- }
- else if (e.Result.Reason == ResultReason.NoMatch)
- {
- Console.WriteLine($"NOMATCH: Speech could not be recognized.");
- }
- };
-
- recognizer.Canceled += (s, e) =>
- {
- Console.WriteLine($"CANCELED: Reason={e.Reason}");
- if (e.Reason == CancellationReason.Error)
- {
-                            Exception exp = new Exception(string.Format("Error Code: {0}\nError Details: {1}\nIs your subscription information updated?", e.ErrorCode, e.ErrorDetails));
- throw exp;
- }
-
- stopRecognition.TrySetResult(0);
- };
-
- recognizer.SessionStarted += (s, e) =>
- {
- Console.WriteLine("\n Session started event.");
- };
-
- recognizer.SessionStopped += (s, e) =>
- {
- Console.WriteLine("\n Session stopped event.");
- Console.WriteLine("\nStop recognition.");
- stopRecognition.TrySetResult(0);
- };
-
- // Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
- await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
-
- // Waits for completion.
- // Use Task.WaitAny to keep the task rooted.
- Task.WaitAny(new[] { stopRecognition.Task });
-
- // Stops recognition.
- await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
- }
- }
- }
-
- public static void Main(string[] args)
- {
- var arguments = new Dictionary<string, string>();
- string inputArgNamePattern = "--";
- Regex regex = new Regex(inputArgNamePattern);
- if (args.Length > 0)
- {
- foreach (var arg in args)
- {
- var userArgs = arg.Split("=");
- arguments[regex.Replace(userArgs[0], string.Empty)] = userArgs[1];
- }
- }
-
- var endpointString = arguments.GetValueOrDefault(EndpointUriArgName, $"wss://westus.online.princeton.customspeech.ai/msgraphcustomspeech/conversation/v1");
- var endpointUri = new Uri(endpointString);
-
- if (!arguments.ContainsKey(SubscriptionKeyArgName))
- {
-                Exception exp = new Exception("Subscription Key missing! Please pass in a Cognitive services subscription key using --SubscriptionKey=\"your_subscription_key\". " +
-                    "Find more information on creating a Cognitive services resource and accessing your Subscription key here: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows");
- throw exp;
- }
-
- var subscriptionKey = arguments[SubscriptionKeyArgName];
- var username = arguments.GetValueOrDefault(UsernameArgName, null);
- var password = arguments.GetValueOrDefault(PasswordArgName, null);
-
- var audioDirPath = Path.Combine(Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location), "../../../AudioSamples/DictationBatman.wav");
- if (!File.Exists(audioDirPath))
- {
- Exception exp = new Exception(string.Format("Audio File does not exist at path: {0}", audioDirPath));
- throw exp;
- }
-
- ContinuousRecognitionWithTenantLMAsync(endpointUri, subscriptionKey, audioDirPath, username, password).GetAwaiter().GetResult();
- }
-
- private static async Task<string> AcquireAuthTokenWithUsernamePasswordAsync(string username, string password)
- {
- var tokenEndpoint = "https://login.microsoftonline.com/common/oauth2/token";
- var postBody = $"resource={ServiceApplicationId}&client_id={ClientApplicationId}&grant_type=password&username={username}&password={password}";
- var stringContent = new StringContent(postBody, Encoding.UTF8, "application/x-www-form-urlencoded");
- using (HttpClient httpClient = new HttpClient())
- {
- var response = await httpClient.PostAsync(tokenEndpoint, stringContent).ConfigureAwait(false);
-
- if (response.IsSuccessStatusCode)
- {
- var result = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
-
- JObject jobject = JObject.Parse(result);
- return jobject["access_token"].Value<string>();
- }
- else
- {
- throw new Exception($"Requesting token from {tokenEndpoint} failed with status code {response.StatusCode}: {await response.Content.ReadAsStringAsync().ConfigureAwait(false)}");
- }
- }
- }
-
- private static async Task<string> AcquireAuthTokenWithInteractiveLoginAsync()
- {
- var authContext = new AuthenticationContext("https://login.windows.net/microsoft.onmicrosoft.com");
- var deviceCodeResult = await authContext.AcquireDeviceCodeAsync(ServiceApplicationId, ClientApplicationId).ConfigureAwait(false);
-
- Console.WriteLine(deviceCodeResult.Message);
-
- var authResult = await authContext.AcquireTokenByDeviceCodeAsync(deviceCodeResult).ConfigureAwait(false);
-
- return authResult.AccessToken;
- }
- }
-}
-```
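The two token helpers at the bottom of the sample only build and post a standard Azure AD token request. As a minimal sketch of the same password-grant request body outside the SDK (the application IDs are the fixed values from the C# sample above; the credentials are illustrative placeholders):

```python
from urllib.parse import urlencode

# Fixed application IDs from the C# sample above.
SERVICE_APPLICATION_ID = "18301695-f99d-4cae-9618-6901d4bdc7be"
CLIENT_APPLICATION_ID = "f87bc118-1576-4097-93c9-dbf8f45ef0dd"
TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/token"

def build_password_grant_body(username: str, password: str) -> str:
    """Build the form-encoded body that AcquireAuthTokenWithUsernamePasswordAsync posts."""
    return urlencode({
        "resource": SERVICE_APPLICATION_ID,
        "client_id": CLIENT_APPLICATION_ID,
        "grant_type": "password",
        "username": username,
        "password": password,
    })

body = build_password_grant_body("user@contoso.com", "example-password")
print("grant_type=password" in body)  # → True
```

Note that the C# sample assembles this body by string interpolation, which doesn't URL-encode the username and password; `urlencode` here does, which is the safer choice for real credentials.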
-
-Next, rebuild and run the project from the command line. Before you run the command, update a few parameters by doing the following:
-
-1. Replace `<Username>` and `<Password>` with the values for a valid tenant user.
-1. Replace `<Subscription-Key>` with the subscription key for your Speech resource. This value is available in the **Overview** section for your Speech resource in the [Azure portal](https://aka.ms/azureportal).
-1. Replace `<Endpoint-Uri>` with the following endpoint. Make sure that you replace `{your region}` with the region where your Speech resource was created. These regions are supported: `westus`, `westus2`, and `eastus`. Your region information is available in the **Overview** section of your Speech resource in the [Azure portal](https://aka.ms/azureportal).
- ```
-    "wss://{your region}.online.princeton.customspeech.ai/msgraphcustomspeech/conversation/v1"
- ```
-1. Run the following command:
-
- ```bash
- dotnet TenantLMSample.dll --Username=<Username> --Password=<Password> --SubscriptionKey=<Subscription-Key> --EndpointUri=<Endpoint-Uri>
- ```
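The region substitution in step 3 can be sanity-checked with a small helper; a sketch assuming the three supported regions listed above:

```python
SUPPORTED_REGIONS = {"westus", "westus2", "eastus"}

def tenant_lm_endpoint(region: str) -> str:
    """Build the tenant-model WebSocket endpoint for a supported region."""
    if region not in SUPPORTED_REGIONS:
        raise ValueError(f"Unsupported region for tenant models: {region}")
    return (f"wss://{region}.online.princeton.customspeech.ai"
            "/msgraphcustomspeech/conversation/v1")

print(tenant_lm_endpoint("eastus"))
# → wss://eastus.online.princeton.customspeech.ai/msgraphcustomspeech/conversation/v1
```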
-
-In this tutorial, you've learned how to use Microsoft 365 data to create a custom speech recognition model, deploy it, and use it with the Speech SDK.
-
-## Next steps
-
-* [Speech Studio](https://speech.microsoft.com/)
-* [Speech SDK](speech-sdk.md)
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
Last updated 01/24/2022 ms.devlang: csharp-+ # Tutorial: Voice-enable your bot
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/voice-assistants.md
Last updated 03/11/2020 + # What is a voice assistant?
cognitive-services Windows Voice Assistants Automatic Enablement Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-automatic-enablement-guidelines.md
Last updated 04/15/2020 + # Privacy guidelines for voice assistants on Windows
cognitive-services Windows Voice Assistants Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-best-practices.md
Last updated 05/1/2020 + # Design assistant experiences for Windows 10
cognitive-services Windows Voice Assistants Implementation Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-implementation-guide.md
Last updated 04/15/2020 -+ # Implementing Voice Assistants on Windows
cognitive-services Windows Voice Assistants Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/windows-voice-assistants-overview.md
Last updated 02/19/2022 + # What are Voice Assistants on Windows?
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Previously updated : 03/15/2022 Last updated : 04/21/2022
Use the table below to find which model versions are supported by each feature.
| Custom NER | `2021-11-01-preview` | | `2021-11-01-preview` |
| Personally Identifiable Information (PII) detection | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2020-07-01`, `2021-01-15` | `2021-01-15` | |
| Question answering | `2021-10-01` | `2021-10-01` |
-| Text Analytics for health | `2021-05-15` | `2021-05-15` | |
+| Text Analytics for health | `2021-05-15`, `2022-03-01` | `2022-03-01` | |
| Key phrase extraction | `2019-10-01`, `2020-07-01`, `2021-06-01` | `2021-06-01` | |
| Text summarization | `2021-08-01` | `2021-08-01` | |
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/fail-over.md
Use the url from the `resultUrl` key in the body to view the exported assets fro
### Get export results
-Submit a **GET** request using the `{RESULT-URL}` you recieved from the previous step to view the results of the export job.
+Submit a **GET** request using the `{RESULT-URL}` you received from the previous step to view the results of the export job.
#### Headers
Use the following header to authenticate your request.
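As an illustration of that GET call, here's a sketch that constructs (but doesn't send) the request; it assumes the standard Cognitive Services `Ocp-Apim-Subscription-Key` header and uses placeholder values:

```python
import urllib.request

def build_export_results_request(result_url: str, key: str) -> urllib.request.Request:
    """Construct the GET request for the export job results (not sent here)."""
    return urllib.request.Request(
        result_url,
        headers={"Ocp-Apim-Subscription-Key": key},
        method="GET",
    )

req = build_export_results_request(
    "https://example.cognitiveservices.azure.com/result",
    "<YOUR-PRIMARY-RESOURCE-KEY>")
print(req.get_method())  # → GET
```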
## Deploy your model
-This is te step where you make your trained model available form consumption via the [runtime API](https://aka.ms/clu-apis).
+This is the step where you make your trained model available for consumption via the [runtime API](https://aka.ms/clu-apis).
> [!TIP] > Use the same deployment name as your primary project for easier maintenance and minimal changes to your system to handle redirecting your traffic.
Repeat the same steps for your replicated project using `{YOUR-SECONDARY-ENDPOIN
In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-* [Authoring REST API reference ](https://aka.ms/ct-authoring-swagger)
+* [Authoring REST API reference ](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob)
* [Runtime prediction REST API reference ](https://aka.ms/ct-runtime-swagger)
cognitive-services Deploy Query Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-query-model.md
If you would like to swap the models between two deployments, simply select the
To delete a deployment, select the deployment you want to delete and click on **Delete deployment**. > [!TIP]
-> If you're using the REST API, see the [quickstart](../quickstart.md?pivots=rest-api#deploy-your-model) and REST API [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2021-11-01-preview/operations/Deployments_TriggerDeploymentJob) for examples and more information.
+> If you're using the REST API, see the [quickstart](../quickstart.md?pivots=rest-api#deploy-your-model) and REST API [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) for examples and more information.
> [!NOTE] > You can only have ten deployment names.
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/fail-over.md
Repeat the same steps for your replicated project using `{YOUR-SECONDARY-ENDPOIN
In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-* [Authoring REST API reference ](https://aka.ms/ct-authoring-swagger)
+* [Authoring REST API reference ](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob)
* [Runtime prediction REST API reference ](https://aka.ms/ct-runtime-swagger)-
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/faq.md
The training process can take some time. As a rough estimate, the expected train
## How do I build my custom model programmatically?
-You can use the [REST APIs](https://aka.ms/ct-authoring-swagger) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
+You can use the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
When you're ready to start [using your model to make predictions](#how-do-i-use-my-trained-model-to-make-predictions), you can use the REST API, or the client library.
After deploying your model, you [call the prediction API](how-to/call-api.md), u
## Data privacy and security
-Custom text classification is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, custom text classification users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://aka.ms/ct-authoring-swagger).
+Custom text classification is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, custom text classification users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
Your data is only stored in your Azure Storage account. Custom text classification only has access to read from it during training. ## How to clone my project?
-To clone your project you need to use the export API to export the project assets and then import them into a new project. See [REST APIs](https://aka.ms/ct-authoring-swagger) reference for both operations.
+To clone your project you need to use the export API to export the project assets and then import them into a new project. See [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
## Next steps
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/tutorials/cognitive-search.md
In this tutorial, you will learn how to:
## Deploy your model
-To deploy your model, go to your project in [Language Studio](https://aka.ms/custom-classification). You can also use the [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/language-authoring-apis-2021-11-01-preview/operations/Deployments_TriggerDeploymentJob).
+To deploy your model, go to your project in [Language Studio](https://aka.ms/custom-classification). You can also use the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
[!INCLUDE [Deploy a model using Language Studio](../includes/deploy-model-language-studio.md)]
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/fail-over.md
Use the url from the `resultUrl` key in the body to view the exported assets fro
### Get export results
-Submit a **GET** request using the `{RESULT-URL}` you recieved from the previous step to view the results of the export job.
+Submit a **GET** request using the `{RESULT-URL}` you received from the previous step to view the results of the export job.
#### Headers
Repeat the same steps for your replicated project using your secondary endpoint
In this article, you have learned how to use the export and import APIs to replicate your project to a secondary Language resource in other region. Next, explore the API reference docs to see what else you can do with authoring APIs.
-* [Authoring REST API reference ](https://aka.ms/ct-authoring-swagger)
+* [Authoring REST API reference ](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob)
* [Runtime prediction REST API reference ](https://aka.ms/ct-runtime-swagger)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/faq.md
The training process can take a long time. As a rough estimate, the expected tra
[!INCLUDE [SDK limitations](includes/sdk-limitations.md)]
-You can use the [REST APIs](https://aka.ms/ct-authoring-swagger) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
+You can use the [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
When you're ready to start [using your model to make predictions](#how-do-i-use-my-trained-model-for-predictions), you can use the REST API, or the client library.
After deploying your model, you [call the prediction API](how-to/call-api.md), u
## Data privacy and security
-Custom NER is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, Custom NER users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://aka.ms/ct-authoring-swagger).
+Custom NER is a data processor for General Data Protection Regulation (GDPR) purposes. In compliance with GDPR policies, Custom NER users have full control to view, export, or delete any user content either through the [Language Studio](https://aka.ms/languageStudio) or programmatically by using [REST APIs](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob).
Your data is only stored in your Azure Storage account. Custom NER only has access to read from it during training. ## How to clone my project?
-To clone your project you need to use the export API to export the project assets, and then import them into a new project. See the [REST API](https://aka.ms/ct-authoring-swagger) reference for both operations.
+To clone your project you need to use the export API to export the project assets, and then import them into a new project. See the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2022-03-01-preview/operations/Projects_TriggerImportProjectJob) reference for both operations.
## Next steps
communication-services Get Started Ui Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/ui-library/get-started-ui-kit.md
Title: Quickstart - Get started with UI Library Design Kit
+ Title: Quickstart - Get started with the UI Library Design Kit
-description: In this quickstart, you will learn how to leverage UI Library Design Kit for Azure Communication Services to quickly design communication experiences using Figma.
+description: In this quickstart, you'll learn how to use the UI Library Design Kit for Azure Communication Services to quickly design communication experiences using Figma.
Last updated 03/24/2022
-# Get started with UI Library Design Kit (Figma)
+# Get started with the UI Library Design Kit (Figma)
This article describes how to get started with the UI Library Design Kit (Figma).
-Start by getting the [UI Library Design Kit](https://www.figma.com/community/file/1095841357293210472) from Figma.
+## Design faster
-## Design faster
+The UI Library Design Kit is a resource to help you design user interfaces built on Azure Communication Services. By using the APIs in Azure Communication Services, you can deploy applications across any device, on any platform.
-A resource to help design user interfaces built on Azure Communication Services, the UI Library Design Kit includes components, composites, and UX guidance purpose-built to help bring your video calling and chat experiences to life faster.
+The UI Library Design Kit includes components, composites, and user experience guidance that's purpose-built to help bring your video calling and chat experiences to life faster.
-## UI Library components and composites
+The UI Library Design Kit can help you to:
-The same components and composites offered in the UI Library are available in Figma so you can quickly begin designing and prototyping your calling and chat experiences.
+- Build an iterative design flow.
+- Gain visibility for rapid collaboration.
+- Design faster with real content.
+- Maintain design consistency.
+- Test and iterate quickly.
-## Built on Fluent
+## UI Library components and composites
-The UI Library Design Kit's components are based on Microsoft's Fluent UI; so, they're built with usability, accessibility, and localization in mind.
+The same components and composites offered in the UI Library Design Kit are available in Figma. For this reason, you can quickly begin designing and prototyping your calling and chat experiences.
-## Next steps
+## Built on Fluent
+
+The UI Library Design Kit's components are based on Fluent UI, the cross-platform design system that's used by Microsoft. As a result, the components are built with usability, accessibility, and localization in mind.
+
+## Next step
>[!div class="nextstepaction"]
->[Get the ACS UI Kit (Figma)](https://www.figma.com/community/file/1095841357293210472)
+>[Get the Azure Communication Services UI Kit (Figma)](https://www.figma.com/community/file/1095841357293210472)
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
In this scenario, the function app will read the temperature of the aquarium, th
:::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select **DocumentDB Account Contributor**.
+1. On the **Role** tab, select **DocumentDB Account Contributor**.
1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-indexing.md
In the preceding example, omitting the ```"university":1``` clause returns an er
`cannot create unique index over {student_id : 1.0} with shard key pattern { university : 1.0 }`
-#### Note
-
-Support for unique index on existing collections with data is available in preview. You can sign up for the feature "Azure Cosmos DB API for MongoDB New Unique Indexes in existing collection" through the [Preview Features blade in the portal](./../access-previews.md).
- #### Limitations
-On accounts that have continuous backup or synapse link enabled, unique indexes will need to be created while the collection is empty.
+Unique indexes need to be created while the collection is empty.
+
+Support for unique index on existing collections with data is available in preview for accounts that do not use Synapse Link or Continuous backup. You can sign up for the feature "Azure Cosmos DB API for MongoDB New Unique Indexes in existing collection" through the [Preview Features blade in the portal](./../access-previews.md).
#### Unique partial indexes
cosmos-db Troubleshoot Dot Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk.md
If your app is deployed on [Azure Virtual Machines without a public IP address](
* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md). When the service endpoint is enabled, the requests are no longer sent from a public IP to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change might result in firewall drops if only public IPs are allowed. If you use a firewall, when you enable the service endpoint, add a subnet to the firewall by using [Virtual Network ACLs](/previous-versions/azure/virtual-network/virtual-networks-acl).
-* Assign a [public IP to your Azure VM](../../load-balancer/troubleshoot-outbound-connection.md#assignilpip).
+* Assign a [public IP to your Azure VM](../../load-balancer/troubleshoot-outbound-connection.md#configure-an-individual-public-ip-on-vm).
### <a name="high-network-latency"></a>High network latency High network latency can be identified by using the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring) in the V2 SDK or [diagnostics](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics#Microsoft_Azure_Cosmos_ResponseMessage_Diagnostics) in V3 SDK.
cost-management-billing Troubleshoot Declined Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-declined-card.md
Previously updated : 12/01/2021 Last updated : 04/22/2022 # Troubleshoot a declined card at Azure sign-up
-You may experience an issue or error in which a credit card is declined at Azure sign-up in the Microsoft Azure portal.
+You may experience an issue or error in which a card is declined at Azure sign-up in the Microsoft Azure portal.
To resolve your issue, select the topic below which most closely resembles your error.
-## The credit card provider is not accepted for your country/region
+## The card provider is not accepted for your country/region
When you choose a card, Azure displays the card options that are valid in the country/region that you select. Contact your bank or card issuer to verify that your credit card is enabled for international transactions. For more information about supported countries/regions and currencies, see the [Azure Purchase FAQ](https://azure.microsoft.com/pricing/faq/).
->[!Note]
->American Express credit cards are not currently supported as a payment instrument in India. We have no time frame as to when it may be an accepted form of payment.
+> [!Note]
+> - American Express credit cards are not currently supported as a payment instrument in India. We have no time frame as to when it may be an accepted form of payment.
+> - Debit cards are not currently accepted in Hong Kong and Brazil.
## You're using a virtual or prepaid card
If you represent a business, you can use invoice payment methods such as checks,
For more information about how to pay by invoice, see [Submit a request to pay Azure subscription by invoice](pay-by-invoice.md).
-## Your credit card information is outdated
+## Your card information is outdated
For information about how to manage your card information, including changing or removing a card, see [Add, update, or remove a credit card for Azure](change-credit-card.md).
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md
Previously updated : 04/18/2022 Last updated : 04/22/2022 # Copy data from SharePoint Online List by using Azure Data Factory or Azure Synapse Analytics
The following sections provide details about properties you can use to define en
## Linked service properties
-The following properties are supported for an SharePoint Online List linked service:
+The following properties are supported for a SharePoint Online List linked service:
| **Property** | **Description** | **Required** | | - | | |
data-factory Connector Square https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-square.md
Previously updated : 01/27/2022 Last updated : 04/20/2022 # Copy data from Square using Azure Data Factory or Synapse Analytics (Preview)
Square supports two types of access tokens: **personal** and **OAuth**.
- Personal access tokens are used to get unlimited Connect API access to resources in your own Square account. - OAuth access tokens are used to get authenticated and scoped Connect API access to any Square account. Use them when your app accesses resources in other Square accounts on behalf of account owners. OAuth access tokens can also be used to access resources in your own Square account.
+ >[!Important]
+ > To perform **Test connection** in the linked service, `MERCHANT_PROFILE_READ` is required to get a scoped OAuth access token. For permissions to access other tables, see [Square OAuth Permissions Reference](https://developer.squareup.com/docs/oauth-api/square-permissions).
++ Authentication via a personal access token only needs `accessToken`, while authentication via OAuth requires both `accessToken` and `refreshToken`. Learn how to retrieve an access token [here](https://developer.squareup.com/docs/build-basics/access-tokens). **Example:**
data-share Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/concepts-roles-permissions.md
To create a role assignment for the data share resource's managed identity manua
1. Navigate to the Azure data store.
-1. Select **Access Control (IAM)**.
+1. Select **Access control (IAM)**.
1. Select **Add > Add role assignment**. :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select one of the roles listed in the role assignment table in the previous section.
+1. On the **Role** tab, select one of the roles listed in the role assignment table in the previous section.
1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+1. Select your Azure subscription.
+ 1. Select **System-assigned managed identity**, search for your Azure Data Share resource, and then select it. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
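The portal steps above can also be scripted. The following is a minimal Azure CLI sketch, not the only way to do this; `$SHARE_ID`, `myRG`, and `mystorage` are placeholders for your own Data Share resource ID, resource group, and storage account:

```shell
# Look up the system-assigned managed identity of the Data Share account.
# $SHARE_ID, myRG, and mystorage below are placeholders for your resources.
PRINCIPAL_ID=$(az resource show --ids "$SHARE_ID" \
  --query identity.principalId -o tsv)

# Scope the assignment to the target storage account.
STORAGE_ID=$(az storage account show -g myRG -n mystorage --query id -o tsv)

# Grant a role from the role assignment table in the previous section,
# for example Storage Blob Data Reader.
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope "$STORAGE_ID"
```

Running this requires an Azure subscription and the Azure CLI signed in with permission to create role assignments.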
Alternatively, user can have owner of the storage account add the data share res
1. Navigate to the Azure data store.
-1. Select **Access Control (IAM)**.
+1. Select **Access control (IAM)**.
1. Select **Add > Add role assignment**. :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select one of the roles listed in the role assignment table in the previous section. For example, for a storage account, select Storage Blob Data Reader.
+1. On the **Role** tab, select one of the roles listed in the role assignment table in the previous section. For example, for a storage account, select Storage Blob Data Reader.
1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+1. Select your Azure subscription.
+ 1. Select **System-assigned managed identity**, search for your Azure Data Share resource, and then select it. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
Simulations help you:
## Azure DDoS simulation testing policy You may only simulate attacks using our approved testing partners:-- [Red Button](https://www.red-button.net/): work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment. - [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud): a self-service traffic generator where your customers can generate traffic against DDoS Protection-enabled public endpoints for simulations.
+- [Red Button](https://www.red-button.net/): work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment.
Our testing partners' simulation environments are built within Azure. You can only simulate against Azure-hosted public IP addresses that belong to an Azure subscription of your own, which will be validated by Azure Active Directory (Azure AD) before testing. Additionally, these target public IP addresses must be protected under Azure DDoS Protection.
digital-twins Concepts High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-high-availability-disaster-recovery.md
# Mandatory fields. Title: High availability and disaster recovery
+ Title: Azure Digital Twins high availability and disaster recovery
-description: Learn about Azure high availability and disaster recovery features as they pertain to Azure Digital Twins, which will help you build highly available Azure IoT solutions with disaster recovery capabilities.
+description: Learn about high availability and disaster recovery features for Azure Digital Twins.
Previously updated : 03/01/2022 Last updated : 04/22/2022 + # Optional fields. Don't forget to remove # if you need a field. #
# Azure Digital Twins high availability and disaster recovery
-This article discusses the High Availability (HA) and Disaster Recovery (DR) features offered specifically by the Azure Digital Twins service. The article covers intra-region HA, cross region DR, monitoring service health, and best practices on HA/DR.
+This article discusses the High Availability (HA) and Disaster Recovery (DR) features for the Azure Digital Twins service, including intra-region HA and cross region DR. This article also explains how you can monitor your service health.
-A key area of consideration for resilient IoT solutions is business continuity and disaster recovery. Designing for HA and DR can help you define and achieve appropriate uptime goals for your solution.
+Considering business continuity and disaster recovery can help you create resilient IoT solutions, and designing for HA and DR can help you define and achieve appropriate uptime goals for your Azure Digital Twins solution.
-Azure Digital Twins supports these feature options:
+Azure Digital Twins supports these features:
* *Intra-region HA* ΓÇô Built-in redundancy to deliver on uptime of the service * *Cross region DR* ΓÇô Failover to a geo-paired Azure region if there's an unexpected data center failure
-You can also see the [Best practices](#best-practices) section for general Azure guidance about designing for HA/DR.
- ## Intra-region HA
-Azure Digital Twins provides *intra-region HA* by implementing redundancies within the service. This functionality is reflected in the [service SLA](https://azure.microsoft.com/support/legal/sla/digital-twins) for uptime. No additional work is required by the developers of an Azure Digital Twins solution to take advantage of these HA features. Although Azure Digital Twins offers a reasonably high uptime guarantee, transient failures can still be expected, as with any distributed computing platform. Appropriate retry policies should be built in to the components interacting with a cloud application to deal with transient failures.
+Azure Digital Twins provides *intra-region HA* by implementing redundancies within the service. This functionality is reflected in the [service SLA](https://azure.microsoft.com/support/legal/sla/digital-twins) for uptime. No additional work is required by the developers of an Azure Digital Twins solution to take advantage of these HA features.
+
+Although Azure Digital Twins offers a high uptime guarantee, transient failures are possible on any distributed computing platform. Appropriate retry policies should be built into the components interacting with your cloud application to handle these transient failures.
## Cross region DR
-There could be some rare situations when a data center experiences extended outages because of power failures or other events in the region. Such events are rare, and during such failures, the intra region HA capability described above may not help. Azure Digital Twins addresses this scenario with Microsoft-initiated failover.
+It's possible, although unlikely, for a data center to experience extended outages because of power failures or other events in the region. During a rare failure event like this, the intra region HA capability described above may not be sufficient. Azure Digital Twins addresses this scenario with Microsoft-initiated failover.
+
+*Microsoft-initiated failover* is exercised in rare situations to failover all the Azure Digital Twins instances from an affected region to the corresponding [geo-paired region](../availability-zones/cross-region-replication-azure.md). This process is a default option and will happen without any intervention from you. Microsoft reserves the right to make a determination of when this option will be exercised, and this mechanism doesn't involve user consent before the user's instance is failed over.
-*Microsoft-initiated failover* is exercised in rare situations to failover all the Azure Digital Twins instances from an affected region to the corresponding [geo-paired region](../availability-zones/cross-region-replication-azure.md). This process is a default option (with no way for users to opt out), and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve user consent before the user's instance is failed over.
->[!NOTE]
-> Some Azure services provide an additional option called *customer-initiated failover*, which enables customers to initiate a failover just for their instance, such as to run a DR drill. This mechanism is currently not supported by Azure Digital Twins.
If it's important for you to keep all data within certain geographical areas, check the location of the [geo-paired region](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) for the region where you're creating your instance, to ensure that it meets your data residency requirements. >[!NOTE]
-> Some Azure services provide an option for users to configure a different region for failover, as a way to meet data residency requirements. This capability is currently not supported by Azure Digital Twins.
+> Some Azure services provide an additional option called *customer-initiated failover*, which enables customers to initiate a failover just for their instance, such as to run a DR drill. This mechanism is currently not supported by Azure Digital Twins.
+>
+> Other Azure services provide an option for users to configure a different region for failover, as a way to meet data residency requirements. This capability is also not supported by Azure Digital Twins.
## Monitor service health
To view Service Health events...
:::image type="content" source="media/concepts-high-availability-disaster-recovery/issue-updates.png" alt-text="Screenshot of the Azure portal showing the 'Health History' page with the 'Issue updates' tab highlighted. The tab displays the status of entries." lightbox="media/concepts-high-availability-disaster-recovery/issue-updates.png"::: - The information displayed in this tool isn't specific to one Azure Digital Twins instance. After using Service Health to understand what's going on with the Azure Digital Twins service in a certain region or subscription, you can take monitoring a step further by using [Azure Resource Health](how-to-monitor-resource-health.md) to drill down into specific instances and see whether they're affected.
-## Best practices
-
-For best practices on HA/DR, see the following Azure guidance on this topic:
-* The article [Design reliable Azure applications](/azure/architecture/framework/resiliency/app-design) describes a general framework to help you think about business continuity and disaster recovery.
-* The [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery) paper provides architecture guidance on strategies for Azure applications to achieve High Availability (HA) and Disaster Recovery (DR).
- ## Next steps
-Read more about getting started with Azure Digital Twins solutions:
-
-* [What is Azure Digital Twins?](overview.md)
-* [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md)
+Read about general best practices for HA/DR in these Azure articles:
+* The article [Design reliable Azure applications](/azure/architecture/framework/resiliency/app-design) describes a general framework to help you think about business continuity and disaster recovery.
+* The [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery) paper provides architecture guidance on strategies for Azure applications to achieve High Availability (HA) and Disaster Recovery (DR).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Azure national clouds are isolated from each other and from global commercial Az
| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | Supported | AT&T NetBond, British Telecom, Equinix, Level 3 Communications, Verizon | | **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | Supported | Equinix, Internet2, Megaport, Verizon | | **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | Supported | Equinix, CenturyLink Cloud Connect, Verizon |
-| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/locations/arizona/phoenix-arizona-chandler/) | US Gov Arizona | Supported | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
+| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/data-center-locations/arizona/phoenix-data-center/) | US Gov Arizona | Supported | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | Supported | CenturyLink Cloud Connect, Megaport | | **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | Supported | AT&T, Equinix, Level 3 Communications, Verizon | | **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | Supported | Equinix, Megaport |
hdinsight Apache Domain Joined Run Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-run-kafka.md
To create two topics, `salesevents` and `marketingspend`:
1. Download the [Apache Kafka domain-joined producer consumer examples](https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/tree/master/DomainJoined-Producer-Consumer). 1. Follow Steps 2 and 3 under **Build and deploy the example** in [Tutorial: Use the Apache Kafka Producer and Consumer APIs](../kafk#build-and-deploy-the-example)
+ > [!NOTE]
 + > For this tutorial, use the kafka-producer-consumer.jar under the "DomainJoined-Producer-Consumer" project (not the one under the "Producer-Consumer" project, which is for non-domain-joined scenarios).
1. Run the following commands:
To produce and consume topics in ESP Kafka by using the console:
3. Produce messages to topic `salesevents`: ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --topic salesevents --broker-list $KAFKABROKERS --security-protocol SASL_PLAINTEXT
+ /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --topic salesevents --broker-list $KAFKABROKERS --producer-property security.protocol=SASL_PLAINTEXT
``` 4. Consume messages from topic `salesevents`: ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --topic salesevents --from-beginning --bootstrap-server $KAFKABROKERS --security-protocol SASL_PLAINTEXT
+ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --topic salesevents --from-beginning --bootstrap-server $KAFKABROKERS --consumer-property security.protocol=SASL_PLAINTEXT
+ ```
+
+## Produce and consume topics for long running session in ESP Kafka
+
+The Kerberos ticket cache has an expiration limit. For a long running session, use a keytab instead of manually renewing the ticket cache.
+To use a keytab in a long running session without `kinit`:
+1. Create a new keytab for your domain user:
+ ```bash
+ ktutil
+ addent -password -p <user@domain> -k 1 -e RC4-HMAC
+ wkt /tmp/<user>.keytab
+ q
+
+ ```
+2. Create `/home/sshuser/kafka_client_jaas.conf` and it should have the following lines:
+ ```
+ KafkaClient {
+ com.sun.security.auth.module.Krb5LoginModule required
+ useKeyTab=true
+ storeKey=true
+ keyTab="/tmp/<user>.keytab"
+ useTicketCache=false
+ serviceName="kafka"
+ principal="<user@domain>";
+ };
+ ```
+3. Set `java.security.auth.login.config` to `/home/sshuser/kafka_client_jaas.conf`, and then produce or consume the topic by using the console or the API:
+ ```
+ export KAFKABROKERS=<brokerlist>:9092
+
+ # console tool
+ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/sshuser/kafka_client_jaas.conf"
+ /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --topic salesevents --broker-list $KAFKABROKERS --producer-property security.protocol=SASL_PLAINTEXT
+ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --topic salesevents --from-beginning --bootstrap-server $KAFKABROKERS --consumer-property security.protocol=SASL_PLAINTEXT
+
+ # API
+ java -jar -Djava.security.auth.login.config=/home/sshuser/kafka_client_jaas.conf kafka-producer-consumer.jar producer salesevents $KAFKABROKERS
+ java -jar -Djava.security.auth.login.config=/home/sshuser/kafka_client_jaas.conf kafka-producer-consumer.jar consumer salesevents $KAFKABROKERS
``` ## Clean up resources
hdinsight Apache Hadoop Connect Excel Hive Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-excel-hive-odbc-driver.md
description: Learn how to set up and use the Microsoft Hive ODBC driver for Exce
Previously updated : 04/22/2020 Last updated : 04/22/2022 # Connect Excel to Apache Hadoop in Azure HDInsight with the Microsoft Hive ODBC driver
In this article, you learned how to use the Microsoft Hive ODBC driver to retrie
* [Visualize Apache Hive data with Microsoft Power BI in Azure HDInsight](apache-hadoop-connect-hive-power-bi.md). * [Visualize Interactive Query Hive data with Power BI in Azure HDInsight](../interactive-query/apache-hadoop-connect-hive-power-bi-directquery.md). * [Connect Excel to Apache Hadoop by using Power Query](apache-hadoop-connect-excel-power-query.md).
-* [Connect to Azure HDInsight and run Apache Hive queries using Data Lake Tools for Visual Studio](apache-hadoop-visual-studio-tools-get-started.md).
+* [Connect to Azure HDInsight and run Apache Hive queries using Data Lake Tools for Visual Studio](apache-hadoop-visual-studio-tools-get-started.md).
hdinsight Troubleshoot Wasbs Storage Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-wasbs-storage-exception.md
Title: The account being accessed does not support http error in Azure HDInsight
description: This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. Previously updated : 02/06/2020 Last updated : 04/22/2022 # The account being accessed does not support http error in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Hdinsight Hadoop Create Linux Clusters Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-adf.md
description: Tutorial - Learn how to create on-demand Apache Hadoop clusters in
Previously updated : 04/24/2020 Last updated : 04/22/2022 #Customer intent: As a data worker, I need to create a Hadoop cluster and run Hive jobs on demand
hdinsight Hdinsight Hadoop Port Settings For Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md
description: This article provides a list of ports used by Apache Hadoop service
Previously updated : 04/28/2020 Last updated : 04/22/2022 # Ports used by Apache Hadoop services on HDInsight
Examples:
Examples:
-* Livy: `curl -u admin -G "http://10.0.0.11:8998/"`. In this example, `10.0.0.11` is the IP address of the headnode that hosts the Livy service.
+* Livy: `curl -u admin -G "http://10.0.0.11:8998/"`. In this example, `10.0.0.11` is the IP address of the headnode that hosts the Livy service.
hdinsight Hdinsight Storage Sharedaccesssignature Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-storage-sharedaccesssignature-permissions.md
description: Learn how to use Shared Access Signatures to restrict HDInsight acc
Previously updated : 04/28/2020 Last updated : 04/22/2022 # Use Azure Blob storage Shared Access Signatures to restrict access to data in HDInsight
hdinsight Hdinsight Version Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-version-release.md
Title: HDInsight 4.0 overview - Azure
description: Compare HDInsight 3.6 to HDInsight 4.0 features, limitations, and upgrade recommendations. Previously updated : 08/21/2020 Last updated : 04/22/2022 # Azure HDInsight 4.0 overview
hdinsight Apache Kafka Mirroring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-mirroring.md
description: Learn how to use Apache Kafka's mirroring feature to maintain a rep
Previously updated : 11/29/2019 Last updated : 04/22/2022 # Use MirrorMaker to replicate Apache Kafka topics with Kafka on HDInsight
hdinsight Apache Spark Troubleshoot Event Log Requestbodytoolarge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-event-log-requestbodytoolarge.md
Last updated 07/29/2019
-# "NativeAzureFileSystem...RequestBodyTooLarge" appear in Apache Spark streaming app log in HDInsight
+# RequestBodyTooLarge appears in Apache Spark Streaming application log in HDInsight
This article describes troubleshooting steps and possible resolutions for issues when using Apache Spark components in Azure HDInsight clusters. ## Issue
-The error: `NativeAzureFileSystem ... RequestBodyTooLarge` appears in the driver log for an Apache Spark streaming app.
+You receive one of the following errors in an Apache Spark Streaming application log:
+
+`NativeAzureFileSystem ... RequestBodyTooLarge`
+
+Or
+
+```
+java.io.IOException: Operation failed: "The request body is too large and exceeds the maximum permissible limit.", 413, PUT, https://<storage account>.dfs.core.windows.net/<container>/hdp/spark2-events/application_1620341592106_0004_1.inprogress?action=flush&retainUncommittedData=false&position=9238349177&close=false&timeout=90, RequestBodyTooLarge, "The request body is too large and exceeds the maximum permissible limit. RequestId:0259adb6-101f-0041-0660-43f672000000 Time:2021-05-07T16:48:00.2660760Z"
+ at org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.flushWrittenBytesToServiceInternal(AbfsOutputStream.java:362)
+ at org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.flushWrittenBytesToService(AbfsOutputStream.java:337)
+ at org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.flushInternal(AbfsOutputStream.java:272)
+ at org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.hflush(AbfsOutputStream.java:230)
+ at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:134)
+ at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
+ at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
+ at scala.Option.foreach(Option.scala:257)
+ at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
+```
++ ## Cause
-Your Spark event log file is probably hitting the file length limit for WASB.
+Files created through the ABFS driver are block blobs in Azure Storage. Your Spark event log file is probably hitting the file length limit for WASB. See [the maximum of 50,000 blocks that a block blob can hold](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs).
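As a quick sanity check on that limit, the maximum file size is simply the 50,000-block cap multiplied by the block size (4 MB by default, up to a 100 MB maximum, as discussed in this article):

```shell
# Maximum WASB file size = max blocks per block blob * block size.
# Sizes are computed in binary units (MiB -> GiB/TiB).
awk 'BEGIN {
  blocks = 50000
  printf "default (4 MB blocks):   %.1f GB\n", blocks * 4 / 1024            # 195.3 GB
  printf "maximum (100 MB blocks): %.2f TB\n", blocks * 100 / 1024 / 1024   # 4.77 TB
}'
```

The small difference from the 4.75 TB figure quoted elsewhere in this article is rounding.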
In Spark 2.3, each Spark app generates one Spark event log file. The Spark event log file for a Spark streaming app continues to grow while the app is running. Today a file on WASB has a 50000 block limit, and the default block size is 4 MB. So in default configuration the max file size is 195 GB. However, Azure Storage has increased the max block size to 100 MB, which effectively brought the single file limit to 4.75 TB. For more information, see [Scalability and performance targets for Blob storage](../../storage/blobs/scalability-targets.md). ## Resolution
-There are three solutions available for this error:
+There are four solutions available for this error:
* Increase the block size to up to 100 MB. In Ambari UI, modify HDFS configuration property `fs.azure.write.request.size` (or create it in `Custom core-site` section). Set the property to a larger value, for example: 33554432. Save the updated configuration and restart affected components.
There are three solutions available for this error:
``` 1. Restart all affected services via Ambari UI.-
+* Add `--conf spark.hadoop.fs.azure.enable.flush=false` to spark-submit to disable auto-flush.
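For reference, the first option above (raising `fs.azure.write.request.size`) corresponds to a `core-site.xml` entry like the following sketch. The value 33554432 (32 MB) is the example from the list; in practice you'd normally make this change through the Ambari UI so it propagates to all nodes:

```xml
<!-- Example core-site.xml entry; 33554432 bytes = 32 MB block size -->
<property>
  <name>fs.azure.write.request.size</name>
  <value>33554432</value>
</property>
```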
## Next steps
hdinsight Apache Spark Zeppelin Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-zeppelin-notebook.md
description: Step-by-step instructions on how to use Zeppelin notebooks with Apa
Previously updated : 04/23/2020 Last updated : 04/22/2022 # Use Apache Zeppelin notebooks with Apache Spark cluster on Azure HDInsight
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
In this step, browse to your FHIR service in the Azure portal, and select the **
## Assign permissions to the FHIR service to access the storage account
-1. Select **Access Control (IAM)**.
+1. Select **Access control (IAM)**.
1. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task. :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role.
+1. On the **Role** tab, select the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role.
[![Screen shot showing user interface of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox) 1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+1. Select your Azure subscription.
+ 1. Select **System-assigned managed identity**, and then select the FHIR service. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
Change the status to **On** to enable managed identity in FHIR service.
### Provide access of the ACR to FHIR service
-1. Select **Access Control (IAM)**.
+1. Select **Access control (IAM)**.
1. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task. :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
+1. On the **Role** tab, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
[![Screen shot showing user interface of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox) 1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+1. Select your Azure subscription.
+ 1. Select **System-assigned managed identity**, and then select the FHIR service. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
iot-hub Iot Hub Message Enrichments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-message-enrichments-overview.md
Previously updated : 03/16/2022 Last updated : 04/21/2022 #Customer intent: As a developer, I want to be able to add information to messages sent from a device to my IoT hub, based on the destination endpoint.
The **key** is a string. A key can only contain alphanumeric characters or these
The **value** can be any of the following examples:
-* Any static string. Dynamic values such as conditions, logic, operations, and functions are not allowed. For example, if you develop a SaaS application that is used by several customers, you can assign an identifier to each customer and make that identifier available in the application. When the application runs, IoT Hub will stamp the device telemetry messages with the customer's identifier, making it possible to process the messages differently for each customer.
+* Any static string. Dynamic values such as conditions, logic, operations, and functions aren't allowed. For example, if you develop a SaaS application that is used by several customers, you can assign an identifier to each customer and make that identifier available in the application. When the application runs, IoT Hub will stamp the device telemetry messages with the customer's identifier, making it possible to process the messages differently for each customer.
* The name of the IoT hub sending the message. This value is *$iothubname*.
The messages can come from any data source supported by [IoT Hub message routing
You can add enrichments to messages that are going to the built-in endpoint of an IoT hub, or to messages that are being routed to custom endpoints such as Azure Blob storage, a Service Bus queue, or a Service Bus topic.
-You can also add enrichments to messages that are being published to Event Grid by first creating an event grid subscription with the device telemetry message type. Based on this subscription, we will create a default route in Azure IoT Hub for the telemetry. This single route can handle all of your Event Grid subscriptions. You can then configure enrichments for the endpoint by using the **Enrich messages** tab of the IoT Hub **Message routing** section. For information about reacting to events by using Event Grid, see [Iot Hub and Event Grid](iot-hub-event-grid.md).
+You can also add enrichments to messages that are being published to Event Grid by first creating an Event Grid subscription with the device telemetry message type. Based on this subscription, we will create a default route in Azure IoT Hub for the telemetry. This single route can handle all of your Event Grid subscriptions. You can then configure enrichments for the endpoint by using the **Enrich messages** tab of the IoT Hub **Message routing** section. For information about reacting to events by using Event Grid, see [IoT Hub and Event Grid](iot-hub-event-grid.md).
Enrichments are applied per endpoint. If you specify five enrichments to be stamped for a specific endpoint, all messages going to that endpoint are stamped with the same five enrichments. Enrichments can be configured using the following methods:

| **Method** | **Command** |
-| -- | --|
-| Portal | [Azure portal](https://portal.azure.com) See the [message enrichments tutorial](tutorial-message-enrichments.md) |
+| -- | --|
+| Portal | [Azure portal](https://portal.azure.com) See the [message enrichments tutorial](tutorial-message-enrichments.md) |
| Azure CLI | [az iot hub message-enrichment](/cli/azure/iot/hub/message-enrichment) |
| Azure PowerShell | [Add-AzIotHubMessageEnrichment](/powershell/module/az.iothub/add-aziothubmessageenrichment) |
To try out message enrichments, see the [message enrichments tutorial](tutorial-
* You can add up to 10 enrichments per IoT hub for those hubs in the standard or basic tier. For IoT hubs in the free tier, you can add up to 2 enrichments.
-* In some cases, if you're applying an enrichment with a value set to a tag or property in the device twin, the value will be stamped with the specified device twin path. For example, if an enrichment value is set to $twin.tags.field, the messages will be stamped with the string "$twin.tags.field", rather than the value of that field from the twin. This behavior happens in the following cases:
+* In some cases, if you're enriching a message with a value set to a tag or property in the device twin, the value will be stamped with the specified device twin path. For example, if an enrichment value is set to $twin.tags.field, the messages will be stamped with the string "$twin.tags.field", rather than the value of that field from the twin. This behavior happens in the following cases:
- * Your IoT hub is in the basic tier. Basic tier IoT hubs do not support device twins.
+ * Your IoT hub is in the basic tier. Basic tier IoT hubs don't support device twins.
- * Your IoT hub is in the standard tier, but the device sending the message has no device twin.
+ * Your IoT hub is in the standard tier, but the device twin path used for the value of the enrichment doesn't exist. For example, if the enrichment value is set to $twin.tags.location, and the device twin does not have a location property under tags, the message is stamped with the string "$twin.tags.location".
- * Your IoT hub is in the standard tier, but the device twin path used for the value of the enrichment does not exist. For example, if the enrichment value is set to $twin.tags.location, and the device twin does not have a location property under tags, the message is stamped with the string "$twin.tags.location".
-
- * Your IoT hub is in the standard tier, but the device twin path used for the value of the enrichment resolves to an object, rather than a simple property. For example, if the enrichment value is set to $twin.tags.location, and the location property under tags is an object that contains child properties like `{"building": 43, "room": 503}`, the message is stamped with the string "$twin.tags.location".
+ * Your IoT hub is in the standard tier, but the device twin path used for the value of the enrichment resolves to an object, rather than a simple property. For example, if the enrichment value is set to $twin.tags.location, and the location property under tags is an object that contains child properties like `{"building": 43, "room": 503}`, the message is stamped with the string "$twin.tags.location".
* Updates to a device twin can take up to five minutes to be reflected in the corresponding enrichment value.
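The twin-path stamping rules above can be sketched in a few lines. This is a hypothetical resolver, not IoT Hub's actual implementation; the device twin is modeled as a plain dictionary:

```python
# Not IoT Hub's implementation -- a minimal sketch of the stamping rules
# described above, with the device twin represented as a plain dict.
def resolve_enrichment(value, twin):
    """Return the string that would be stamped on the message."""
    if not value.startswith("$twin."):
        return value                      # static string: stamped as-is
    node = twin
    if node is None:                      # basic tier / device has no twin
        return value
    for part in value[len("$twin."):].split("."):
        if not isinstance(node, dict) or part not in node:
            return value                  # path doesn't exist: literal path stamped
        node = node[part]
    if isinstance(node, dict):
        return value                      # resolves to an object: literal path stamped
    return str(node)

twin = {"tags": {"location": {"building": 43, "room": 503}}}
print(resolve_enrichment("$twin.tags.location", twin))       # $twin.tags.location
print(resolve_enrichment("$twin.tags.location.room", twin))  # 503
print(resolve_enrichment("$twin.tags.field", twin))          # $twin.tags.field
```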
To try out message enrichments, see the [message enrichments tutorial](tutorial-
* Message enrichments don't apply to digital twin change events.
-* Modules do not inherit twin tags from their corresponding devices. Enrichments for messages originating from device modules (for example from IoT Edge modules) must use the twin tags that are set on the module twin.
+* Modules don't inherit twin tags from their corresponding devices. Enrichments for messages originating from device modules (for example from IoT Edge modules) must use the twin tags that are set on the module twin.
## Pricing
-Message enrichments are available for no additional charge. Currently, you are charged when you send a message to an IoT hub. You are only charged once for that message, even if the message goes to multiple endpoints.
+Message enrichments are available for no extra charge. Currently, you're charged when you send a message to an IoT hub. You're only charged once for that message, even if the message goes to multiple endpoints.
## Next steps
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md
When no longer needed, you can use the [az group delete](/cli/azure/group) comma
```azurecli-interactive
az group delete --name ContosoResourceGroup
```
+> [!WARNING]
+> Deleting the resource group puts the Managed HSM into a soft-deleted state. The Managed HSM will continue to be billed until it is purged. See [Managed HSM soft-delete and purge protection](recovery.md).
## Next steps
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
After successfully downloading the security domain, your HSM will be in an activ
[!INCLUDE [Create a key vault](../../../includes/powershell-rg-delete.md)]
+> [!WARNING]
+> Deleting the resource group puts the Managed HSM into a soft-deleted state. The Managed HSM will continue to be billed until it is purged. See [Managed HSM soft-delete and purge protection](recovery.md).
## Next steps

In this quickstart, you created and activated a Managed HSM. To learn more about Managed HSM and how to integrate it with your applications, continue on to these articles:
load-balancer Outbound Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/outbound-rules.md
The _parameters_ provide additional fine grained control over the outbound NAT a
Each additional IP address provided by a frontend provides an additional 64,000 ephemeral ports for the load balancer to use as SNAT ports.
-Use multiple IP addresses to plan for large-scale scenarios. Use outbound rules to mitigate [SNAT exhaustion](troubleshoot-outbound-connection.md#snatexhaust).
+Use multiple IP addresses to plan for large-scale scenarios. Use outbound rules to mitigate [SNAT exhaustion](troubleshoot-outbound-connection.md#configure-load-balancer-outbound-rules-to-maximize-snat-ports-per-vm).
You can also use a [public IP prefix](./load-balancer-outbound-connections.md#outboundrules) directly with an outbound rule.
load-balancer Troubleshoot Outbound Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md
Title: Troubleshoot outbound connections in Azure Load Balancer
-description: Resolutions for common problems with outbound connectivity through the Azure Load Balancer.
+ Title: Troubleshoot SNAT exhaustion and connection timeouts
+
+description: Resolutions for common problems with outbound connectivity with Azure Load Balancer.
-+ Previously updated : 05/7/2020- Last updated : 04/21/2022+
-# <a name="obconnecttsg"></a> Troubleshooting outbound connections failures
+# Troubleshoot SNAT exhaustion and connection timeouts
-This article is intended to provide resolutions for common problems that can occur with outbound connections from an Azure Load Balancer. Most problems with outbound connectivity that customers experience are due to source network address translation (SNAT) port exhaustion and connection timeouts leading to dropped packets. This article provides steps for mitigating each of these issues.
+This article is intended to provide guidance for common problems that can occur with outbound connections from an Azure Load Balancer. Most problems with outbound connectivity that customers experience are due to source network address translation (SNAT) port exhaustion and connection timeouts leading to dropped packets.
-## Avoid SNAT
+To learn more about SNAT ports, see [Source Network Address Translation for outbound connections](load-balancer-outbound-connections.md).
-The best way to avoid SNAT port exhaustion is to eliminate the need for SNAT in the first place, if possible. In some cases this might not be possible. For example, when connecting to public endpoints. However, in some cases this is possible and can be achieved by connecting privately to resources. If connecting to Azure services like Storage, SQL, Cosmos DB, or any other of the [Azure services listed here](../private-link/availability.md), leveraging Azure Private Link eliminates the need for SNAT. As a result, you will not risk a potential connectivity issue due to SNAT port exhaustion.
+## Understand your SNAT port usage
-Private Link Service is also supported by Snowflake, MongoDB, Confluent, Elastic and other such services.
+Follow [Standard load balancer diagnostics with metrics, alerts, and resource health](load-balancer-standard-diagnostics.md) to monitor your existing load balancer's SNAT port usage and allocation. Monitor to confirm or determine the risk of SNAT exhaustion. If you're having trouble understanding your outbound connection behavior, use IP stack statistics (netstat) or collect packet captures. You can perform these packet captures in the guest OS of your instance or use [Network Watcher for packet capture](../network-watcher/network-watcher-packet-capture-manage-portal.md). For most scenarios, Azure recommends using a NAT gateway for outbound connectivity to reduce the risk of SNAT exhaustion. A NAT gateway is highly recommended if your service is initiating repeated TCP or UDP outbound connections to the same destination.
-## <a name="snatexhaust"></a> Managing SNAT (PAT) port exhaustion
-[Ephemeral ports](load-balancer-outbound-connections.md) used for [PAT](load-balancer-outbound-connections.md) are an exhaustible resource, as described in [Standalone VM without a Public IP address](load-balancer-outbound-connections.md) and [Load-balanced VM without a Public IP address](load-balancer-outbound-connections.md). You can monitor your usage of ephemeral ports and compare with your current allocation to determine the risk of or to confirm SNAT exhaustion using [this](./load-balancer-standard-diagnostics.md#how-do-i-check-my-snat-port-usage-and-allocation) guide.
+## Optimize your Azure deployments for outbound connectivity
-If you know that you're initiating many outbound TCP or UDP connections to the same destination IP address and port, and you observe failing outbound connections or are advised by support that you're exhausting SNAT ports (preallocated [ephemeral ports](load-balancer-outbound-connections.md#preallocatedports) used by [PAT](load-balancer-outbound-connections.md)), you have several general mitigation options. Review these options and decide what is available and best for your scenario. It's possible that one or more can help manage this scenario.
+It's important to optimize your Azure deployments for outbound connectivity. Optimization can prevent or alleviate issues with outbound connectivity.
-If you are having trouble understanding the outbound connection behavior, you can use IP stack statistics (netstat). Or it can be helpful to observe connection behaviors by using packet captures. You can perform these packet captures in the guest OS of your instance or use [Network Watcher for packet capture](../network-watcher/network-watcher-packet-capture-manage-portal.md).
+### Use a NAT gateway for outbound connectivity to the Internet
-## <a name ="manualsnat"></a>Manually allocate SNAT ports to maximize SNAT ports per VM
-As defined in [preallocated ports](load-balancer-outbound-connections.md#preallocatedports), the load balancer will automatically allocate ports based on the number of VMs in the backend. By default, this is done conservatively to ensure scalability. If you know the maximum number of VMs you will have in the backend, you can manually allocate SNAT ports in each outbound rule. For example, if you know you will have a maximum of 10 VMs you can allocate 6,400 SNAT ports per VM rather than the default 1,024.
+Virtual network NAT gateway is a highly resilient and scalable Azure service that provides outbound connectivity to the internet from your virtual network. A NAT gateway's unique method of consuming SNAT ports helps resolve common SNAT exhaustion and connection issues. For more information about Azure Virtual Network NAT, see [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
-## <a name="connectionreuse"></a>Modify the application to reuse connections
-You can reduce demand for ephemeral ports that are used for SNAT by reusing connections in your application. Connection reuse is especially relevant for protocols like HTTP/1.1, where connection reuse is the default. And other protocols that use HTTP as their transport (for example, REST) can benefit in turn.
+* **How does a NAT gateway reduce the risk of SNAT port exhaustion?**
-Reuse is always better than individual, atomic TCP connections for each request. Reuse results in more performant, very efficient TCP transactions.
+ Azure Load Balancer allocates fixed amounts of SNAT ports to each virtual machine instance in a backend pool. This method of allocation can lead to SNAT exhaustion, especially if uneven traffic patterns result in a specific virtual machine sending a higher volume of outgoing connections. Unlike load balancer, a NAT gateway dynamically allocates SNAT ports across all VM instances within a subnet.
-## <a name="connection pooling"></a>Modify the application to use connection pooling
-You can employ a connection pooling scheme in your application, where requests are internally distributed across a fixed set of connections (each reusing where possible). This scheme constrains the number of ephemeral ports in use and creates a more predictable environment. This scheme can also increase the throughput of requests by allowing multiple simultaneous operations when a single connection is blocking on the reply of an operation.
+ A NAT gateway makes its SNAT ports accessible to every instance in a subnet. This dynamic allocation allows VM instances to use the number of SNAT ports each needs from the available pool of ports for new connections. The dynamic allocation reduces the risk of SNAT exhaustion.
-Connection pooling might already exist within the framework that you're using to develop your application or the configuration settings for your application. You can combine connection pooling with connection reuse. Your multiple requests then consume a fixed, predictable number of ports to the same destination IP address and port. The requests also benefit from efficient use of TCP transactions reducing latency and resource utilization. UDP transactions can also benefit, because managing the number of UDP flows can in turn avoid exhaust conditions and manage the SNAT port utilization.
+ :::image type="content" source="./media/troubleshoot-outbound-connection/load-balancer-vs-nat.png" alt-text="Diagram of Azure Load Balancer vs. Azure Virtual Network NAT.":::
-## <a name="retry logic"></a>Modify the application to use less aggressive retry logic
-When [preallocated ephemeral ports](load-balancer-outbound-connections.md#preallocatedports) used for [PAT](load-balancer-outbound-connections.md) are exhausted or application failures occur, aggressive or brute force retries without decay and backoff logic cause exhaustion to occur or persist. You can reduce demand for ephemeral ports by using a less aggressive retry logic.
+* **Port selection and reuse behavior.**
+
+ A NAT gateway selects ports at random from the available pool of ports. If there aren't available ports, SNAT ports will be reused as long as there's no existing connection to the same destination public IP and port. This port selection and reuse behavior of a NAT gateway makes it less likely to experience connection timeouts.
-Ephemeral ports have a 4-minute idle timeout (not adjustable). If the retries are too aggressive, the exhaustion has no opportunity to clear up on its own. Therefore, considering how--and how often--your application retries transactions is a critical part of the design.
+ To learn more about how SNAT and port usage works for NAT gateway, see [SNAT fundamentals](../virtual-network/nat-gateway/nat-gateway-resource.md#fundamentals). There are a few conditions in which you won't be able to use NAT gateway for outbound connections. For more information on NAT gateway limitations, see [Virtual Network NAT limitations](../virtual-network/nat-gateway/nat-gateway-resource.md#limitations).
-## <a name="assignilpip"></a>Assign a Public IP to each VM
-Assigning a Public IP address changes your scenario to [Public IP to a VM](load-balancer-outbound-connections.md). All ephemeral ports of the public IP that are used for each VM are available to the VM. (As opposed to scenarios where ephemeral ports of a public IP are shared with all the VMs associated with the respective backend pool.) There are trade-offs to consider, such as the additional cost of public IP addresses and the potential impact of filtering a large number of individual IP addresses.
+ If you're unable to use a NAT gateway for outbound connectivity, refer to the other migration options described in this article.
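To make the allocation difference concrete, here's a toy model (illustrative numbers only, not Azure's actual allocator): with the load balancer's fixed per-VM shares, one busy VM can exhaust its share while its neighbors' ports sit idle; with a NAT gateway's shared pool, the busy VM draws from the whole pool.

```python
# Toy model only -- not Azure's actual allocator. One VM suddenly needs
# 5,000 outbound flows while its nine neighbors are idle.
def fixed_share_exhausted(ports_per_vm, busy_vm_demand):
    """Fixed allocation: the busy VM can use only its own share."""
    return busy_vm_demand > ports_per_vm

def shared_pool_exhausted(total_pool, busy_vm_demand, idle_vm_demand=0):
    """Dynamic allocation: the busy VM draws from the whole pool."""
    return busy_vm_demand + idle_vm_demand > total_pool

ports_per_vm = 1_024      # default fixed share for a small backend pool
total_pool = 64_000       # one public IP's worth of ports, shared dynamically
busy_demand = 5_000

print(fixed_share_exhausted(ports_per_vm, busy_demand))  # True: SNAT exhaustion
print(shared_pool_exhausted(total_pool, busy_demand))    # False: pool absorbs it
```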
->[!NOTE]
->This option is not available for web worker roles.
+### Configure load balancer outbound rules to maximize SNAT ports per VM
-## <a name="multifesnat"></a>Use multiple frontends
-When using public Standard Load Balancer, you assign [multiple frontend IP addresses for outbound connections](load-balancer-outbound-connections.md) and [multiply the number of SNAT ports available](load-balancer-outbound-connections.md#preallocatedports). Create a frontend IP configuration, rule, and backend pool to trigger the programming of SNAT to the public IP of the frontend. The rule does not need to function and a health probe does not need to succeed. If you do use multiple frontends for inbound as well (rather than just for outbound), you should use custom health probes well to ensure reliability.
+If you're using a public standard load balancer and experience SNAT exhaustion or connection failures, ensure you're using outbound rules with manual port allocation. Otherwise, you're likely relying on load balancer's default outbound access. Default outbound access automatically allocates a conservative number of ports, which is based on the number of instances in your backend pool. Default outbound access isn't a recommended method for enabling outbound connections. When your backend pool scales, your connections may be impacted if ports need to be reallocated.
->[!NOTE]
->In most cases, exhaustion of SNAT ports is a sign of bad design. Make sure you understand why you are exhausting ports before using more frontends to add SNAT ports. You may be masking a problem which can lead to failure later.
+To learn more about default outbound access and default port allocation, see [Source Network Address Translation for outbound connections](load-balancer-outbound-connections.md).
-## <a name="scaleout"></a>Scale out
-[Preallocated ports](load-balancer-outbound-connections.md#preallocatedports) are assigned based on the backend pool size and grouped into tiers to minimize disruption when some of the ports have to be reallocated to accommodate the next larger backend pool size tier. You may have an option to increase the SNAT port utilization for a given frontend by scaling your backend pool to the maximum size for a given tier. Keeping in mind the default port allocation is required for the application to scale out efficiently without risk SNAT exhaustion.
+To increase the number of available SNAT ports per VM, configure outbound rules with manual port allocation on your load balancer. For example, if you know you'll have a maximum of 10 VMs in your backend pool, you can allocate up to 6,400 SNAT ports per VM rather than the default 1,024. If you need more SNAT ports, you can add multiple frontend IP addresses for outbound connections to multiply the number of SNAT ports available. Make sure you understand why you're exhausting SNAT ports before adding more frontend IP addresses.
-For example, two virtual machines in the backend pool would have 1024 SNAT ports available per IP configuration, allowing a total of 2048 SNAT ports for the deployment. If the deployment were to be increased to 50 virtual machines, even though the number of preallocated ports remains constant per virtual machine, a total of 51,200 (50 x 1024) SNAT ports can be used by the deployment. If you wish to scale out your deployment, check the number of [preallocated ports](load-balancer-outbound-connections.md#preallocatedports) per tier to make sure you shape your scale-out to the maximum for the respective tier. In the preceding example, if you had chosen to scale out to 51 instead of 50 instances, you would progress to the next tier and end up with fewer SNAT ports per VM as well as in total.
+For detailed guidance, see [Design your applications to use connections efficiently](#design-your-applications-to-use-connections-efficiently) later in this article. To add more IP addresses for outbound connections, create a frontend IP configuration for each new IP. When outbound rules are configured, you're able to select multiple frontend IP configurations for a backend pool. It's recommended to use different IP addresses for inbound and outbound connectivity. Different IP addresses isolate traffic for improved monitoring and troubleshooting.
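The manual allocation arithmetic above can be sketched as follows — a minimal calculation, assuming 64,000 usable SNAT ports per frontend IP and an equal split across the maximum expected backend pool size:

```python
# Sketch of the arithmetic behind manual outbound-rule port allocation.
# Assumes 64,000 usable SNAT ports per frontend IP, split evenly across
# the maximum number of VMs you expect in the backend pool.
PORTS_PER_FRONTEND_IP = 64_000

def snat_ports_per_vm(frontend_ips, max_backend_vms):
    if max_backend_vms <= 0:
        raise ValueError("backend pool must contain at least one VM")
    return (frontend_ips * PORTS_PER_FRONTEND_IP) // max_backend_vms

print(snat_ports_per_vm(1, 10))  # 6400 -- vs. the default 1,024
print(snat_ports_per_vm(2, 10))  # 12800 -- a second frontend IP doubles it
```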
-If you scale out to the next larger backend pool size tier, there is potential for some of your outbound connections to time out if allocated ports have to be reallocated. If you are only using some of your SNAT ports, scaling out across the next larger backend pool size is inconsequential. Half the existing ports will be reallocated each time you move to the next backend pool tier. If you don't want this to take place, you need to shape your deployment to the tier size. Or make sure your application can detect and retry as necessary. TCP keepalives can assist in detect when SNAT ports no longer function due to being reallocated.
+### Configure an individual public IP on VM
-## <a name="idletimeout"></a>Use keepalives to reset the outbound idle timeout
-Outbound connections have a 4-minute idle timeout. This timeout is adjustable via [Outbound rules](outbound-rules.md). You can also use transport (for example, TCP keepalives) or application-layer keepalives to refresh an idle flow and reset this idle timeout if necessary.
+For smaller scale deployments, you can consider assigning a public IP to a VM. If a public IP is assigned to a VM, all ports provided by the public IP are available to the VM. Unlike with a load balancer or a NAT gateway, the ports are only accessible to the single VM associated with the IP address.
-When using TCP keepalives, it is sufficient to enable them on one side of the connection. For example, it is sufficient to enable them on the server side only to reset the idle timer of the flow and it is not necessary for both sides to initiate TCP keepalives. Similar concepts exist for application layer, including database client-server configurations. Check the server side for what options exist for application-specific keepalives.
+We highly recommend using a NAT gateway instead, as assigning individual public IP addresses isn't a scalable solution.
-## Next Steps
-We are always looking to improve the experience of our customers. If you are experiencing issues with outbound connectivity that are not listed or resolved by this article, submit feedback through GitHub via the bottom of this page and we will address your feedback as soon as possible.
+> [!NOTE]
+> If you need to connect your Azure virtual network to Azure PaaS services like Storage, SQL, Cosmos DB, or any other of the Azure services [listed here](../private-link/availability.md), you can leverage Azure Private Link to avoid SNAT entirely. Azure Private Link sends traffic from your virtual network to Azure services over the Azure backbone network instead of over the internet.
+>
+>Private Link is the recommended option over service endpoints for private access to Azure hosted services. For more information on the difference between Private Link and service endpoints, see [Compare Private Endpoints and Service Endpoints](../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+
+## Design your applications to use connections efficiently
+
+When you design your applications, ensure they use connections efficiently. Connection efficiency can reduce or eliminate SNAT port exhaustion in your deployed applications.
+
+### Modify the application to reuse connections
+
+Rather than generating individual, atomic TCP connections for each request, we recommend configuring your application to reuse connections. Connection reuse results in more performant TCP transactions and is especially relevant for protocols like HTTP/1.1, where connection reuse is the default. This reuse applies to other protocols that use HTTP as their transport such as REST.
+
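As an illustration of why reuse matters, here's a minimal standard-library sketch (against a throwaway local server, not an Azure endpoint) that sends three requests over a single kept-alive HTTP/1.1 connection — one local port, and therefore one SNAT port, instead of three:

```python
# Three HTTP/1.1 requests over one reused TCP connection, using only the
# standard library and a throwaway local server.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"       # enables keep-alive on the server side
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):       # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection, several requests: one ephemeral (SNAT-eligible) port used.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for _ in range(3):
    conn.request("GET", "/")
    response = conn.getresponse()
    response.read()                     # drain the body before reusing
    statuses.append(response.status)
conn.close()
server.shutdown()
print(statuses)  # [200, 200, 200]
```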
+### Modify the application to use connection pooling
+
+Employ a connection pooling scheme in your application, where requests are internally distributed across a fixed set of connections and reused when possible. This scheme constrains the number of SNAT ports in use and creates a more predictable environment.
+
+This scheme can increase the throughput of requests by allowing multiple simultaneous operations when a single connection is blocking on the reply of an operation.
+
+Connection pooling might already exist within the framework that you're using to develop your application or the configuration settings for your application. You can combine connection pooling with connection reuse. Your multiple requests then consume a fixed, predictable number of ports to the same destination IP address and port.
+
+The requests benefit from efficient use of TCP transactions, reducing latency and resource utilization. UDP transactions can also benefit. The management of the number of UDP flows can avoid exhaust conditions and manage the SNAT port utilization.
+
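A pooling scheme like the one described can be sketched with a thread-safe queue — a toy pool, not a production HTTP client:

```python
# Toy connection pool: a fixed set of connections is handed out and
# returned, so concurrent requests never exceed the pool size (and
# therefore never exceed a fixed number of SNAT ports).
import queue

class ConnectionPool:
    def __init__(self, size, factory):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(factory())
    def acquire(self, timeout=None):
        return self._free.get(timeout=timeout)   # blocks until one is free
    def release(self, conn):
        self._free.put(conn)

created = []                 # count how many "connections" are ever made
def make_conn():
    created.append(object())
    return created[-1]

pool = ConnectionPool(4, make_conn)
for _ in range(100):         # 100 requests, one at a time
    conn = pool.acquire()
    # ... send a request on conn ...
    pool.release(conn)
print(len(created))  # 4 -- port usage is capped by the pool size
```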
+### Modify the application to use less aggressive retry logic
+
+When SNAT ports are exhausted or application failures occur, aggressive or brute force retries without decay and back-off logic cause exhaustion to occur or persist. You can reduce demand for SNAT ports by using a less aggressive retry logic.
+
+Depending on the configured idle timeout, if retries are too aggressive, connections may not have enough time to close and release SNAT ports for reuse.
+
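A common less-aggressive pattern is exponential back-off with jitter; a brief sketch (parameter values are illustrative only):

```python
# Exponential back-off with "full jitter": each retry waits a random
# amount up to base * 2^attempt, capped, instead of retrying in a tight
# loop that holds SNAT ports open.
import random

def backoff_delays(max_retries, base=1.0, cap=60.0):
    """Seconds to wait before each retry."""
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))
        yield random.uniform(0, ceiling)

delays = list(backoff_delays(5))
print(len(delays))                          # 5
print(all(0 <= d <= 60.0 for d in delays))  # True
```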
+### Use keepalives to reset the outbound idle timeout
+
+Load balancer outbound rules have a 4-minute idle timeout by default that is adjustable up to 100 minutes. You can use TCP keepalives to refresh an idle flow and reset this idle timeout if necessary. When using TCP keepalives, it's sufficient to enable them on one side of the connection.
+
+For example, it's sufficient to enable them on the server side only to reset the idle timer of the flow and it's not necessary for both sides to initiate TCP keepalives. Similar concepts exist for application layer, including database client-server configurations. Check the server side for what options exist for application-specific keepalives.
+
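For example, the server side of a connection might enable TCP keepalives like this — the timing values are illustrative, and the probe-timing knobs are platform-specific (Linux names shown), so choose values that fire well within your configured idle timeout:

```python
# Enabling TCP keepalives on one side of a connection (one side is enough
# to reset the idle timer). Timing values are illustrative.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Probe timing options are platform-specific, hence the hasattr guards.
if hasattr(socket, "TCP_KEEPIDLE"):    # idle seconds before the first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
if hasattr(socket, "TCP_KEEPINTVL"):   # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
if hasattr(socket, "TCP_KEEPCNT"):     # failed probes before dropping the flow
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
sock.close()
```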
+## Next steps
+
+For more information about SNAT port exhaustion, outbound connectivity options, and default outbound access see:
+
+* [Use Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md)
+
+* [Default outbound access in Azure](../virtual-network/ip-services/default-outbound-access.md)
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
An Azure PowerShell script is available that does the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool.
+* The script cannot migrate a Virtual Machine Scale Set from a Basic Load Balancer's backend to a Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and following [Update or delete a load balancer used by virtual machine scale sets](https://docs.microsoft.com/azure/load-balancer/update-load-balancer-with-vm-scale-set) to complete the migration.
+
### Change allocation method of the public IP address to static

The following are the recommended steps to change the allocation method.
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
# Upgrade Azure Internal Load Balancer - No Outbound Connection Required [Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKUs, see the [comparison table](./skus.md#skus).
-This article introduces a PowerShell script which creates a Standard Load Balancer with the same configuration as the Basic Load Balancer along with migrating traffic from Basic Load Balancer to Standard Load Balancer.
+This article introduces a PowerShell script that creates a Standard Load Balancer with the same configuration as the Basic Load Balancer along with migrating traffic from Basic Load Balancer to Standard Load Balancer.
## Upgrade overview
-An Azure PowerShell script is available that does the following:
- * Creates a Standard Internal SKU Load Balancer in the location that you specify. Note that [outbound connection](./load-balancer-outbound-connections.md) will not be provided by the Standard Internal Load Balancer. * Seamlessly copies the configurations of the Basic SKU Load Balancer to the newly created Standard Load Balancer. * Seamlessly moves the private IPs from the Basic Load Balancer to the newly created Standard Load Balancer.
An Azure PowerShell script is available that does the following:
* The Basic Load Balancer needs to be in the same resource group as the backend VMs and NICs. * If the Standard load balancer is created in a different region, you won't be able to associate the VMs existing in the old region to the newly created Standard Load Balancer. To work around this limitation, make sure to create a new VM in the new region. * If your Load Balancer does not have any frontend IP configuration or backend pool, you are likely to hit an error running the script. Make sure they are not empty.
+* The script can't migrate a Virtual Machine Scale Set from the Basic Load Balancer's backend to the Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and following [Update or delete a load balancer used by virtual machine scale sets](https://docs.microsoft.com/azure/load-balancer/update-load-balancer-with-vm-scale-set) to complete the migration.
## Change IP allocation method to Static for frontend IP Configuration (Ignore this step if it's already static)
To run the script:
1. Examine the required parameters:
- * **rgName: [String]: Required** – This is the resource group for your existing Basic Load Balancer and new Standard Load Balancer. To find this string value, navigate to Azure portal, select your Basic Load Balancer source, and click the **Overview** for the load balancer. The Resource Group is located on that page.
- * **oldLBName: [String]: Required** – This is the name of your existing Basic Balancer you want to upgrade.
- * **newlocation: [String]: Required** – This is the location in which the Standard Load Balancer will be created. It is recommended to inherit the same location of the chosen Basic Load Balancer to the Standard Load Balancer for better association with other existing resources.
- * **newLBName: [String]: Required** – This is the name for the Standard Load Balancer to be created.
+ * **rgName: [String]: Required** – This parameter is the resource group for your existing Basic Load Balancer and new Standard Load Balancer. To find this string value, navigate to the Azure portal, select your Basic Load Balancer source, and select **Overview** for the load balancer. The resource group is located on that page.
+ * **oldLBName: [String]: Required** – This parameter is the name of the existing Basic Load Balancer you want to upgrade.
+ * **newlocation: [String]: Required** – This parameter is the location in which the Standard Load Balancer will be created. We recommend using the same location as the chosen Basic Load Balancer for the Standard Load Balancer, for better association with other existing resources.
+ * **newLBName: [String]: Required** – This parameter is the name for the Standard Load Balancer to be created.
1. Run the script using the appropriate parameters. It may take five to seven minutes to finish. **Example**
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
An Azure PowerShell script is available that performs the following procedures:
* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool.
+* The script can't migrate a Virtual Machine Scale Set from the Basic Load Balancer's backend to the Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and following [Update or delete a load balancer used by virtual machine scale sets](https://docs.microsoft.com/azure/load-balancer/update-load-balancer-with-vm-scale-set) to complete the migration.
+ ## Download the script Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureLBUpgrade/2.0).
marketplace Azure Private Plan Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-private-plan-troubleshooting.md
Previously updated : 12/10/2021 Last updated : 04/22/2022 # Troubleshooting Private Plans in the commercial marketplace
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Troubleshooting Checklist -- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID. (video guide)](/azure/media-services/latest/setup-azure-subscription-how-to?tabs=portal)
+- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs, use the [Azure subscription ID](/azure/azure-portal/get-subscription-tenant-id).
- ISV to ensure that the Customer is not buying through a CSP. Private Plans are not available on a CSP-managed subscription. - Customer to ensure customer is logging in with an email ID that is registered under the same tenant ID (use the same user ID they used in step #1 above) - ISV to ask the customer to find the Private Plan in Azure Marketplace: [Private plans in Azure Marketplace](/marketplace/private-plans)
private-link Create Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-powershell.md
Previously updated : 11/02/2020 Last updated : 04/22/2022 #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using Azure PowerShell.
purview Concept Best Practices Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-scanning.md
Microsoft Purview supports automated scanning of on-premises, multicloud, and so
Running a *scan* invokes the process to ingest metadata from the registered data sources. The metadata curated at the end of the scan and curation process includes technical metadata. This metadata can include data asset names such as table names or file names, file size, columns, and data lineage. Schema details are also captured for structured data sources. A relational database management system is an example of this type of source.
-The curation process applies automated classification labels on the schema attributes based on the scan rule set configured. Sensitivity labels are applied if your Microsoft Purview account is connected to the Microsoft 365 Security and Compliance Center.
+The curation process applies automated classification labels on the schema attributes based on the scan rule set configured. Sensitivity labels are applied if your Microsoft Purview account is connected to the Microsoft Purview compliance portal.
## Why do you need best practices to manage data sources?
purview Concept Best Practices Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-sensitivity-labels.md
Title: Best practices for applying sensitivity labels in the Microsoft Purview data map
+ Title: Best practices for applying sensitivity labels in the Microsoft Purview Data Map
description: This article provides best practices for applying sensitivity labels in Microsoft Purview.
# Labeling best practices
-The Microsoft Purview data map supports labeling structured and unstructured data stored across various data sources. Labeling data within the data map allows users to easily find data that matches predefined autolabeling rules that were configured in the Microsoft Purview compliance portal. The data map extends the use of sensitivity labels from Microsoft Purview Information Protection to assets stored in infrastructure cloud locations and structured data sources.
+The Microsoft Purview Data Map supports labeling structured and unstructured data stored across various data sources. Labeling data within the data map allows users to easily find data that matches predefined autolabeling rules that were configured in the Microsoft Purview compliance portal. The data map extends the use of sensitivity labels from Microsoft Purview Information Protection to assets stored in infrastructure cloud locations and structured data sources.
## Protect personal data with custom sensitivity labels for Microsoft Purview
Storing and processing personal data is subject to special protection. Labeling
With the data map, you can extend your organization's investment in sensitivity labels from Microsoft Purview Information Protection to assets that are stored in files and database columns within Azure, multicloud, and on-premises locations. These locations are defined in [supported data sources](./create-sensitivity-label.md#supported-data-sources). When you apply sensitivity labels to your content, you can keep your data secure by stating how sensitive certain data is in your organization. The data map also abstracts the data itself, so you can use labels to track the type of data, without exposing sensitive data on another platform.
-## Microsoft Purview data map labeling best practices and considerations
+## Microsoft Purview Data Map labeling best practices and considerations
The following sections walk you through the process of implementing labeling for your assets. ### Get started -- To enable sensitivity labeling in the data map, follow the steps in [automatically apply sensitivity labels to your data in the Microsoft Purview data map](./how-to-automatically-label-your-content.md).-- To find information on required licensing and helpful answers to other questions, see [Sensitivity labels in the Microsoft Purview data map FAQ](./sensitivity-labels-frequently-asked-questions.yml).
+- To enable sensitivity labeling in the data map, follow the steps in [automatically apply sensitivity labels to your data in the Microsoft Purview Data Map](./how-to-automatically-label-your-content.md).
+- To find information on required licensing and helpful answers to other questions, see [Sensitivity labels in the Microsoft Purview Data Map FAQ](./sensitivity-labels-frequently-asked-questions.yml).
### Label considerations
The following sections walk you through the process of implementing labeling for
### Label recommendations -- When you configure sensitivity labels for the Microsoft Purview data map, you might define autolabeling rules for files, database columns, or both within the label properties. Microsoft Purview labels files within the Microsoft Purview data map. When the autolabeling rule is configured, Microsoft Purview automatically applies the label or recommends that the label is applied.
+- When you configure sensitivity labels for the Microsoft Purview Data Map, you might define autolabeling rules for files, database columns, or both within the label properties. Microsoft Purview labels files within the Microsoft Purview Data Map. When the autolabeling rule is configured, Microsoft Purview automatically applies the label or recommends that the label is applied.
> [!WARNING] > If you haven't configured autolabeling for files and emails on your sensitivity labels, users might be affected within your Office and Microsoft 365 environment. You can test autolabeling on database columns without affecting users. -- If you're defining new autolabeling rules for files when you configure labels for the Microsoft Purview data map, make sure that you have the condition for applying the label set appropriately.
+- If you're defining new autolabeling rules for files when you configure labels for the Microsoft Purview Data Map, make sure that you have the condition for applying the label set appropriately.
- You can set the detection criteria to **All of these** or **Any of these** in the upper right of the autolabeling for files and emails page of the label properties. - The default setting for detection criteria is **All of these**. This setting means that the asset must contain all the specified sensitive information types for the label to be applied. While the default setting might be valid in some instances, many customers want to use **Any of these**. Then if at least one asset is found, the label is applied. :::image type="content" source="media/concept-best-practices/label-detection-criteria.png" alt-text="Screenshot that shows detection criteria for a label."::: > [!NOTE]
- > Microsoft Purview Information Protection trainable classifiers aren't used by the Microsoft Purview data map.
+ > Microsoft Purview Information Protection trainable classifiers aren't used by the Microsoft Purview Data Map.
- Maintain consistency in labeling across your data estate. If you use autolabeling rules for files, use the same sensitive information types for autolabeling database columns. - [Define your sensitivity labels via Microsoft Purview Information Protection to identify your personal data at a central place](/microsoft-365/compliance/information-protection).
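The **All of these** / **Any of these** behavior described above boils down to a simple predicate over the sensitive information types found on an asset. A minimal Python sketch of that logic (a hypothetical helper for illustration, not Purview's actual implementation):

```python
# Illustrative only: how "All of these" vs. "Any of these" detection criteria
# decide whether an autolabeling rule applies to an asset.
def label_applies(found_types, required_types, criteria="All of these"):
    if criteria == "All of these":
        # Every required sensitive information type must be present.
        return all(t in found_types for t in required_types)
    # "Any of these": one match is enough for the label to be applied.
    return any(t in found_types for t in required_types)

found = {"Credit Card Number"}
required = {"Credit Card Number", "EU Passport Number"}
print(label_applies(found, required, "All of these"))  # False
print(label_applies(found, required, "Any of these"))  # True
```

This is why many customers prefer **Any of these**: a single detected type is enough to trigger the label.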
The following sections walk you through the process of implementing labeling for
- [Force labeling by using autolabel functionality](./how-to-automatically-label-your-content.md). - Build groups of sensitivity labels and store them as a dedicated sensitivity label policy. For example, store all required sensitivity labels for regulatory rules by using the same sensitivity label policy to publish. - Capture all test cases for your labels. Test your label policies with all applications you want to secure.-- Promote sensitivity label policies to the Microsoft Purview data map.-- Run test scans from the Microsoft Purview data map on different data sources like hybrid cloud and on-premises to identify sensitivity labels.
+- Promote sensitivity label policies to the Microsoft Purview Data Map.
+- Run test scans from the Microsoft Purview Data Map on different data sources like hybrid cloud and on-premises to identify sensitivity labels.
- Gather and consider insights, for example, by using Microsoft Purview Insights. Use alerting mechanisms to mitigate potential breaches of regulations.
-By using sensitivity labels with the Microsoft Purview data map, you can extend Microsoft Purview Information Protection beyond the border of your Microsoft data estate to your on-premises, hybrid cloud, multicloud, and software as a service (SaaS) scenarios.
+By using sensitivity labels with the Microsoft Purview Data Map, you can extend Microsoft Purview Information Protection beyond the border of your Microsoft data estate to your on-premises, hybrid cloud, multicloud, and software as a service (SaaS) scenarios.
## Next steps
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing.md
Indirect costs impacting Microsoft Purview pricing to be considered are:
- Plan virtual machine sizing in order to distribute the scanning workload across VMs to optimize the v-cores utilized while running scans - [Microsoft 365 license](./create-sensitivity-label.md)
- - Microsoft Information Protection (MIP) sensitivity labels can be automatically applied to your Azure assets in Microsoft Purview.
- - MIP sensitivity labels are created and managed in the Microsoft 365 Security and Compliance Center.
+ - Microsoft Purview Information Protection sensitivity labels can be automatically applied to your Azure assets in the Microsoft Purview Data Map.
+ - Microsoft Purview Information Protection sensitivity labels are created and managed in the Microsoft Purview compliance portal.
- To create sensitivity labels for use in Microsoft Purview, you must have an active Microsoft 365 license, which offers the benefit of automatic labeling. For the full list of licenses, see the Sensitivity labels in Microsoft Purview FAQ. - [Azure Alerts](../azure-monitor/alerts/alerts-overview.md)
purview Concept Scans And Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-scans-and-ingestion.md
This article provides an overview of the Scanning and Ingestion features in Micr
## Scanning
-After data sources are [registered](manage-data-sources.md) in your Microsoft Purview account, the next step is to scan the data sources. The scanning process establishes a connection to the data source and captures technical metadata like names, file size, columns, and so on. It also extracts schema for structured data sources, applies classifications on schemas, and [applies sensitivity labels if your Microsoft Purview account is connected to a Microsoft 365 Security and Compliance Center (SCC)](create-sensitivity-label.md). The scanning process can be triggered to run immediately or can be scheduled to run on a periodic basis to keep your Microsoft Purview account up to date.
+After data sources are [registered](manage-data-sources.md) in your Microsoft Purview account, the next step is to scan the data sources. The scanning process establishes a connection to the data source and captures technical metadata like names, file size, columns, and so on. It also extracts schema for structured data sources, applies classifications on schemas, and [applies sensitivity labels if your Microsoft Purview Data Map is connected to the Microsoft Purview compliance portal](create-sensitivity-label.md). The scanning process can be triggered to run immediately or can be scheduled to run on a periodic basis to keep your Microsoft Purview account up to date.
For each scan there are customizations you can apply so that you're only scanning your sources for the information you need.
purview Concept Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-self-service-data-access-policy.md
This article helps you understand Microsoft Purview Self-service data access pol
## Important limitations
-The self-service data access policy is only supported when the prerequisites mentioned in [data use governance](./how-to-enable-data-use-governance.md#prerequisites) are satisfied.
+The self-service data access policy is only supported when the prerequisites mentioned in [Data Use Management](./how-to-enable-data-use-management.md#prerequisites) are satisfied.
## Overview
-Microsoft Purview Self-service data access workflow allows data consumer to request access to data when browsing or searching for data. Once the data access request is approved, a policy gets auto-generated to grant access to the requestor provided the data source is enabled for data use governance. Currently, self-service data access policy is supported for storage accounts, containers, folders, and files.
+The Microsoft Purview self-service data access workflow allows a data consumer to request access to data when browsing or searching for data. Once the data access request is approved, a policy gets auto-generated to grant access to the requestor, provided the data source is enabled for Data Use Management. Currently, self-service data access policy is supported for storage accounts, containers, folders, and files.
-A **workflow admin** will need to map a self-service data access workflow to a collection. Collection is logical grouping of data sources that are registered within Microsoft Purview. **Only data source(s) that are registered** for data use governance will have self-service policies auto-generated.
+A **workflow admin** will need to map a self-service data access workflow to a collection. A collection is a logical grouping of data sources that are registered within Microsoft Purview. **Only data source(s) that are registered** for Data Use Management will have self-service policies auto-generated.
## Terminology
A **workflow admin** will need to map a self-service data access workflow to a c
* **Self-service data access workflow** is the workflow that is initiated when a data consumer requests access to data.
-* **Approver** is either security group or AAD users that can approve self-service access requests.
+* **Approver** is either security group or Azure Active Directory (Azure AD) users that can approve self-service access requests.
## How to use Microsoft Purview self-service data access policy
Microsoft Purview allows organizations to catalog metadata about all registered
With self-service data access workflow, data consumers can not only find data assets but also request access to the data assets. When the data consumer requests access to a data asset, the associated self-service data access workflow is triggered.
-A default self-service data access workflow template is provided with every Microsoft Purview account.The default template can be amended to add more approvers and/or set the approver's email address. For more details refer [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).
+A default self-service data access workflow template is provided with every Microsoft Purview account. The default template can be amended to add more approvers and/or set the approver's email address. For more details refer [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).
-Whenever a data consumer requests access to a dataset, the notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from Microsft Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. Self-service data access policy gets auto-generated only if the data source is registered for **data use governance**. The pre-requisites mentioned within the [data use governance](./how-to-enable-data-use-governance.md#prerequisites) have to be satisfied.
+Whenever a data consumer requests access to a dataset, a notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from the Microsoft Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. A self-service data access policy gets auto-generated only if the data source is registered for **Data Use Management**. The prerequisites mentioned within [Data Use Management](./how-to-enable-data-use-management.md#prerequisites) have to be satisfied.
## Next steps If you would like to preview these features in your environment, follow the link below.-- [Enable data use governance](./how-to-enable-data-use-governance.md#prerequisites)
+- [Enable Data Use Management](./how-to-enable-data-use-management.md#prerequisites)
- [create self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - [working with policies at file level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166) - [working with policies at folder level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-sensitivity-label.md
Title: Labeling in the Microsoft Purview data map
+ Title: Labeling in the Microsoft Purview Data Map
description: Start utilizing sensitivity labels and classifications to enhance your Microsoft Purview assets Previously updated : 09/27/2021 Last updated : 04/22/2022
-# Labeling in the Microsoft Purview data map
+# Labeling in the Microsoft Purview Data Map
> [!IMPORTANT]
-> Labeling in the Microsoft Purview data map is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Labeling in the Microsoft Purview Data Map is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> To get work done, people in your organization collaborate with others both inside and outside the organization. Data doesn't always stay in your cloud, and often roams everywhere, across devices, apps, and services. When your data roams, you still want it to be secure in a way that meets your organization's business and compliance policies.</br>
For example, applying a sensitivity label 'highly confidential' to a documen
Microsoft Purview allows you to apply sensitivity labels to assets, enabling you to classify and protect your data.
-* **Label travels with the data:** The sensitivity labels created in Microsoft Purview Information Protection can also be extended to the Microsoft Purview data map, SharePoint, Teams, Power BI, and SQL. When you apply a label on an office document and then scan it into the Microsoft Purview data map, the label will be applied to the data asset. While the label is applied to the actual file in Microsoft Purview Information Protection, it's only added as metadata in the Microsoft Purview map. While there are differences in how a label is applied to an asset across various services/applications, labels travel with the data and is recognized by all the services you extend it to.
-* **Overview of your data estate:** Microsoft Purview provides insights into your data through pre-canned reports. When you scan data into the Microsoft Purview data map, we hydrate the reports with information on what assets you have, scan history, classifications found in your data, labels applied, glossary terms, etc.
+* **Label travels with the data:** The sensitivity labels created in Microsoft Purview Information Protection can also be extended to the Microsoft Purview Data Map, SharePoint, Teams, Power BI, and SQL. When you apply a label on an Office document and then scan it into the Microsoft Purview Data Map, the label will be applied to the data asset. While the label is applied to the actual file in Microsoft Purview Information Protection, it's only added as metadata in the Microsoft Purview Data Map. While there are differences in how a label is applied to an asset across various services/applications, labels travel with the data and are recognized by all the services you extend them to.
+* **Overview of your data estate:** Microsoft Purview provides insights into your data through pre-canned reports. When you scan data into the Microsoft Purview Data Map, we hydrate the reports with information on what assets you have, scan history, classifications found in your data, labels applied, glossary terms, etc.
* **Automatic labeling:** Labels can be applied automatically based on sensitivity of the data. When an asset is scanned for sensitive data, autolabeling rules are used to decide which sensitivity label to apply. You can create autolabeling rules for each sensitivity label, defining which classification/sensitive information type constitutes a label. * **Apply labels to files and database columns:** Labels can be applied to files in storage like Azure Data Lake, Azure Files, etc. and to schematized data like columns in Azure SQL DB, Cosmos DB, etc. Sensitivity labels are tags that you can apply on assets to classify and protect your data. Learn more about [sensitivity labels here](/microsoft-365/compliance/create-sensitivity-labels).
-## How to apply labels to assets in the Microsoft Purview data map
+## How to apply labels to assets in the Microsoft Purview Data Map
:::image type="content" source="media/create-sensitivity-label/apply-label-flow.png" alt-text="Applying labels to assets in Microsoft Purview flow. Create labels, register asset, scan asset, classifications found, labels applied."::: Being able to apply labels to your asset in the data map requires you to perform the following steps: 1. [Create new or apply existing sensitivity labels](how-to-automatically-label-your-content.md) in the Microsoft Purview compliance portal. Creating sensitivity labels include autolabeling rules that tell us which label should be applied based on the classifications found in your data.
-1. [Register and scan your asset](how-to-automatically-label-your-content.md#scan-your-data-to-apply-sensitivity-labels-automatically) in the Microsoft Purview data map.
+1. [Register and scan your asset](how-to-automatically-label-your-content.md#scan-your-data-to-apply-sensitivity-labels-automatically) in the Microsoft Purview Data Map.
1. Microsoft Purview applies **classifications**: When you schedule a scan on an asset, Microsoft Purview scans the type of data in your asset and applies classifications to it in the data map. Application of classifications is done automatically by Microsoft Purview, there's no action for you. 1. Microsoft Purview applies **labels**: Once classifications are found on an asset, Microsoft Purview will apply labels to the assets depending on autolabeling rules. Application of labels is done automatically by Microsoft Purview, there's no action for you as long as you have created labels with autolabeling rules in step 1.
Being able to apply labels to your asset in the data map requires you to perform
## Supported data sources
-Sensitivity labels are supported in the Microsoft Purview data map for the following data sources:
+Sensitivity labels are supported in the Microsoft Purview Data Map for the following data sources:
|Data type |Sources | |||
Sensitivity labels are supported in the Microsoft Purview data map for the follo
## Labeling for SQL databases
-In addition to labeling for schematized data assets, the Microsoft Purview data map also supports labeling for SQL database columns using the SQL data classification in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). While Microsoft Purview uses the global [sensitivity labels](/microsoft-365/compliance/sensitivity-labels), SSMS only uses labels defined locally.
+In addition to labeling for schematized data assets, the Microsoft Purview Data Map also supports labeling for SQL database columns using the SQL data classification in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). While Microsoft Purview uses the global [sensitivity labels](/microsoft-365/compliance/sensitivity-labels), SSMS only uses labels defined locally.
-Labeling in Microsoft Purview and labeling in SSMS are separate processes that don't currently interact with each other. Therefore, **labels applied in SSMS are not shown in Microsoft Purview, and vice versa**. We recommend Microsoft Purview for labeling SQL databases, as it uses global MIP labels that can be applied across multiple platforms.
+Labeling in Microsoft Purview and labeling in SSMS are separate processes that don't currently interact with each other. Therefore, **labels applied in SSMS are not shown in Microsoft Purview, and vice versa**. We recommend Microsoft Purview for labeling SQL databases, because the labels can be applied globally, across multiple platforms.
For more information, see the [SQL data discovery and classification documentation](/sql/relational-databases/security/sql-data-discovery-and-classification). </br></br>
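As a point of comparison with the database-local labeling described above, SSMS-style classification is set with the `ADD SENSITIVITY CLASSIFICATION` T-SQL statement (available in SQL Server 2019+ and Azure SQL Database). The following is a minimal helper that builds such a statement; the schema, table, and label names are made up, and executing the statement would require a live database connection, which is out of scope here.

```python
def add_classification_statement(schema, table, column, label, info_type):
    """Build the T-SQL used for SSMS-style (database-local) classification.
    ADD SENSITIVITY CLASSIFICATION sets labels that are local to the
    database -- these are separate from the global labels Purview applies."""
    return (
        f"ADD SENSITIVITY CLASSIFICATION TO [{schema}].[{table}].[{column}] "
        f"WITH (LABEL = '{label}', INFORMATION_TYPE = '{info_type}');"
    )

stmt = add_classification_statement("dbo", "Customers", "Email",
                                    "Confidential", "Contact Info")
```

Because these local labels and Purview's global labels don't interact, running a statement like this won't surface anything in Purview, and vice versa.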
purview How To Automatically Label Your Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-automatically-label-your-content.md
Title: How to automatically apply sensitivity labels to your data in Microsoft Purview data map
+ Title: How to automatically apply sensitivity labels to your data in Microsoft Purview Data Map
description: Learn how to create sensitivity labels and automatically apply them to your data during a scan.
Last updated 04/21/2021
-# How to automatically apply sensitivity labels to your data in the Microsoft Purview data map
+# How to automatically apply sensitivity labels to your data in the Microsoft Purview Data Map
## Create new or apply existing sensitivity labels in the data map

> [!IMPORTANT]
-> Labeling in the Microsoft Purview data map is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Labeling in the Microsoft Purview Data Map is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-If you don't already have sensitivity labels, you'll need to create them and make them available for the Microsoft Purview data map. Existing sensitivity labels from Microsoft Purview Information Protection can also be modified to make them available to the data map.
+If you don't already have sensitivity labels, you'll need to create them and make them available for the Microsoft Purview Data Map. Existing sensitivity labels from Microsoft Purview Information Protection can also be modified to make them available to the data map.
### Step 1: Licensing requirements
Sensitivity labels are created and managed in the Microsoft Purview compliance p
For the full list of licenses, see the [Sensitivity labels in Microsoft Purview FAQ](sensitivity-labels-frequently-asked-questions.yml). If you don't already have the required license, you can sign up for a trial of [Microsoft 365 E5](https://www.microsoft.com/microsoft-365/business/compliance-solutions#midpagectaregion).
-### Step 2: Consent to use sensitivity labels in the Microsoft Purview data map
+### Step 2: Consent to use sensitivity labels in the Microsoft Purview Data Map
The following steps extend your existing sensitivity labels and enable them to be available for use in the data map, where you can apply sensitivity labels to files and database columns.
For example:
>If you don't see the button, and you're not sure if consent has been granted to extend labeling to assets in the Microsoft Purview Data Map, see [this FAQ](sensitivity-labels-frequently-asked-questions.yml#how-can-i-determine-if-consent-has-been-granted-to-extend-labeling-to-the-microsoft-purview-data-map) item on how to determine the status. >
-After you've extended labeling to assets in the Microsoft Purview data map, all published sensitivity labels are available for use in the data map.
+After you've extended labeling to assets in the Microsoft Purview Data Map, all published sensitivity labels are available for use in the data map.
### Step 3: Create or modify existing label to automatically label content
After you've extended labeling to assets in the Microsoft Purview data map, all
1. Under **Solutions**, select **Information protection**, then select **Create a label**.
- :::image type="content" source="media/how-to-automatically-label-your-content/create-sensitivity-label-full-small.png" alt-text="Create sensitivity labels in the Microsoft 365 compliance center" lightbox="media/how-to-automatically-label-your-content/create-sensitivity-label-full.png":::
+ :::image type="content" source="media/how-to-automatically-label-your-content/create-sensitivity-label-full-small.png" alt-text="Create sensitivity labels in the Microsoft Purview compliance center" lightbox="media/how-to-automatically-label-your-content/create-sensitivity-label-full.png":::
1. Name the label. Then, under **Define the scope for this label**:
    - In all cases, select **Schematized data assets**.
    - To label files, also select **Files & emails**. This option isn't required if you're labeling schematized data assets only.
- :::image type="content" source="media/how-to-automatically-label-your-content/create-label-scope-small.png" alt-text="Automatically label in the Microsoft 365 compliance center" lightbox="media/how-to-automatically-label-your-content/create-label-scope.png":::
+ :::image type="content" source="media/how-to-automatically-label-your-content/create-label-scope-small.png" alt-text="Automatically label in the Microsoft Purview compliance center" lightbox="media/how-to-automatically-label-your-content/create-label-scope.png":::
1. Follow the rest of the prompts to configure the label settings.
On the **Auto-labeling for Office apps** page, enable **Auto-labeling for Office
For example:

For more information, see the documentation to [apply a sensitivity label to data automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically#how-to-configure-auto-labeling-for-office-apps).
At the **Schematized data assets** option:
For example:

### Step 4: Publish labels
-Once you create a label, you'll need to Scan your data in the Microsoft Purview data map to automatically apply the labels you've created, based on the autolabeling rules you've defined.
+Once you create a label, you'll need to scan your data in the Microsoft Purview Data Map to automatically apply the labels you've created, based on the autolabeling rules you've defined.
## Scan your data to apply sensitivity labels automatically

Scan your data in the data map to automatically apply the labels you've created, based on the autolabeling rules you've defined. Allow up to 24 hours for sensitivity label changes to reflect in the data map.
-For more information on how to set up scans on various assets in the Microsoft Purview data map, see:
+For more information on how to set up scans on various assets in the Microsoft Purview Data Map, see:
|Source |Reference |
|---|---|
For example:
## View Insight reports for the classifications and sensitivity labels
-Find insights on your classified and labeled data in the Microsoft Purview data map use the **Classification** and **Sensitivity labeling** reports.
+Find insights on your classified and labeled data in the Microsoft Purview Data Map by using the **Classification** and **Sensitivity labeling** reports.
> [!div class="nextstepaction"]
> [Classification insights](./classification-insights.md)
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-and-manage-collections.md
Collections in Microsoft Purview can be used to organize assets and sources by y
### Check permissions
-In order to create and manage collections in Microsoft Purview, you will need to be a **Collection Admin** within Microsoft Purview. We can check these permissions in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/). You can find Studio in the overview page of the Microsoft Purview account in [Azure portal](https://portal.azure.com).
+In order to create and manage collections in Microsoft Purview, you'll need to be a **Collection Admin** within Microsoft Purview. You can check these permissions in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). You can find the governance portal link on the overview page of the Microsoft Purview account in the [Azure portal](https://portal.azure.com).
1. Select Data Map > Collections from the left pane to open the collection management page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the Collections tab selected." border="true":::
1. Select your root collection. This is the top collection in your collection list and will have the same name as your Microsoft Purview account. In the following example, it's called Contoso Microsoft Purview. Alternatively, if collections already exist you can select any collection where you want to create a subcollection.
- :::image type="content" source="./media/how-to-create-and-manage-collections/select-root-collection.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/select-root-collection.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the root collection highlighted." border="true":::
1. Select **Role assignments** in the collection window.
- :::image type="content" source="./media/how-to-create-and-manage-collections/role-assignments.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/role-assignments.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Microsoft Purview account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant your permission.
- :::image type="content" source="./media/how-to-create-and-manage-collections/collection-admins.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/collection-admins.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the collection admin section highlighted." border="true":::
## Collection management
You'll need to be a collection admin in order to create a collection. If you are
1. Select Data Map > Collections from the left pane to open the collection management page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the Collections tab selected and open." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Microsoft Purview governance portal window, opened to the Data Map, with the Collections tab selected and open." border="true":::
1. Select **+ Add a collection**. Again, note that only [collection admins](#check-permissions) can manage collections.
- :::image type="content" source="./media/how-to-create-and-manage-collections/select-add-a-collection.png" alt-text="Screenshot of Microsoft Purview studio window, showing the new collection window, with the 'Add a collection' button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/select-add-a-collection.png" alt-text="Screenshot of Microsoft Purview governance portal window, showing the new collection window, with the 'Add a collection' button highlighted." border="true":::
1. In the right panel, enter the collection name and description. If needed, you can also add users or groups as collection admins for the new collection.
1. Select **Create**.
- :::image type="content" source="./media/how-to-create-and-manage-collections/create-collection.png" alt-text="Screenshot of Microsoft Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/create-collection.png" alt-text="Screenshot of Microsoft Purview governance portal window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
1. The new collection's information will reflect on the page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/created-collection.png" alt-text="Screenshot of Microsoft Purview studio window, showing the newly created collection window." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/created-collection.png" alt-text="Screenshot of Microsoft Purview governance portal window, showing the newly created collection window." border="true":::
### Edit a collection

1. Select **Edit** either from the collection detail page, or from the collection's dropdown menu.
- :::image type="content" source="./media/how-to-create-and-manage-collections/edit-collection.png" alt-text="Screenshot of Microsoft Purview studio window, open to collection window, with the 'edit' button highlighted both in the selected collection window, and under the ellipsis button next to the name of the collection." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/edit-collection.png" alt-text="Screenshot of Microsoft Purview governance portal window, open to collection window, with the 'edit' button highlighted both in the selected collection window, and under the ellipsis button next to the name of the collection." border="true":::
1. Currently collection description and collection admins can be edited. Make any changes, then select **Save** to save your change.
- :::image type="content" source="./media/how-to-create-and-manage-collections/edit-description.png" alt-text="Screenshot of Microsoft Purview studio window with the edit collection window open, a description added to the collection, and the save button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/edit-description.png" alt-text="Screenshot of Microsoft Purview governance portal window with the edit collection window open, a description added to the collection, and the save button highlighted." border="true":::
### View Collections

1. Select the triangle icon beside the collection's name to expand or collapse the collection hierarchy. Select the collection names to navigate.
- :::image type="content" source="./media/how-to-create-and-manage-collections/subcollection-menu.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the button next to the collection name highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/subcollection-menu.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the button next to the collection name highlighted." border="true":::
1. Type in the filter box at the top of the list to filter collections.
- :::image type="content" source="./media/how-to-create-and-manage-collections/filter-collections.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the filter above the collections highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/filter-collections.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the filter above the collections highlighted." border="true":::
1. Select **Refresh** in Root collection's contextual menu to reload the collection list.
- :::image type="content" source="./media/how-to-create-and-manage-collections/refresh-collections.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the button next to the Resource name selected, and the refresh button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/refresh-collections.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the button next to the Resource name selected, and the refresh button highlighted." border="true":::
1. Select **Refresh** in collection detail page to reload the single collection.
- :::image type="content" source="./media/how-to-create-and-manage-collections/refresh-single-collection.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the refresh button under the collection window highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/refresh-single-collection.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the refresh button under the collection window highlighted." border="true":::
### Delete a collection
You'll need to be a collection admin in order to delete a collection. If you are
1. Select **Delete** from the collection detail page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/delete-collections.png" alt-text="Screenshot of Microsoft Purview studio window to delete a collection" border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/delete-collections.png" alt-text="Screenshot of Microsoft Purview governance portal window to delete a collection" border="true":::
2. Select **Confirm** when prompted, **Are you sure you want to delete this collection?**
- :::image type="content" source="./media/how-to-create-and-manage-collections/delete-collection-confirmation.png" alt-text="Screenshot of Microsoft Purview studio window showing confirmation message to delete a collection" border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/delete-collection-confirmation.png" alt-text="Screenshot of Microsoft Purview governance portal window showing confirmation message to delete a collection" border="true":::
3. Verify deletion of the collection from your Microsoft Purview Data Map.
All assigned roles apply to sources, assets, and other objects within the collec
1. Select the **Role assignments** tab to see all the roles in a collection. Only a collection admin can manage role assignments.
- :::image type="content" source="./media/how-to-create-and-manage-collections/select-role-assignments.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/select-role-assignments.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the role assignments tab highlighted." border="true":::
1. Select **Edit role assignments** or the person icon to edit each role member.
- :::image type="content" source="./media/how-to-create-and-manage-collections/edit-role-assignments.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the edit role assignments dropdown list selected." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/edit-role-assignments.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the edit role assignments dropdown list selected." border="true":::
1. Type in the textbox to search for users you want to add as role members. Select **X** to remove members you don't want to add.
- :::image type="content" source="./media/how-to-create-and-manage-collections/search-user-permissions.png" alt-text="Screenshot of Microsoft Purview studio collection admin window with the search bar highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/search-user-permissions.png" alt-text="Screenshot of Microsoft Purview governance portal collection admin window with the search bar highlighted." border="true":::
1. Select **OK** to save your changes, and you will see the new users reflected in the role assignments list.
All assigned roles apply to sources, assets, and other objects within the collec
1. Select the **X** button next to a user's name to remove a role assignment.
- :::image type="content" source="./media/how-to-create-and-manage-collections/remove-role-assignment.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the role assignments tab selected, and the x button beside one of the names highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/remove-role-assignment.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the role assignments tab selected, and the x button beside one of the names highlighted." border="true":::
1. Select **Confirm** if you're sure you want to remove the user.
Once you restrict inheritance, you will need to add users directly to the restri
1. Navigate to the collection where you want to restrict inheritance and select the **Role assignments** tab.
1. Select **Restrict inherited permissions** and select **Restrict access** in the pop-up dialog to remove inherited permissions from this collection and any subcollections. Note that collection admin permissions won't be affected.
- :::image type="content" source="./media/how-to-create-and-manage-collections/restrict-access-inheritance.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the role assignments tab selected, and the restrict inherited permissions slide button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/restrict-access-inheritance.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the role assignments tab selected, and the restrict inherited permissions slide button highlighted." border="true":::
1. After restriction, inherited members are removed from the roles, except for collection admins.
1. Select the **Restrict inherited permissions** toggle button again to revert.
- :::image type="content" source="./media/how-to-create-and-manage-collections/remove-restriction.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the role assignments tab selected, and the unrestrict inherited permissions slide button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/remove-restriction.png" alt-text="Screenshot of Microsoft Purview governance portal collection window, with the role assignments tab selected, and the unrestrict inherited permissions slide button highlighted." border="true":::
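The inheritance behavior described above can be modeled as a conceptual sketch: roles flow down the collection hierarchy unless inheritance is restricted, and collection admin permissions survive the restriction. The class, user, and collection names below are hypothetical; this is not the Purview API.

```python
# Conceptual model of collection role inheritance -- not the Purview API.
class Collection:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.direct_roles = {}              # user -> set of role names
        self.restrict_inheritance = False

    def roles_for(self, user):
        roles = set(self.direct_roles.get(user, ()))
        if self.parent is not None:
            inherited = self.parent.roles_for(user)
            if self.restrict_inheritance:
                # Collection admin permissions aren't affected by restriction.
                inherited &= {"Collection admin"}
            roles |= inherited
        return roles

root = Collection("Contoso")
root.direct_roles["alice"] = {"Collection admin", "Data reader"}
finance = Collection("Finance", parent=root)

finance.restrict_inheritance = True     # restrict: inherited readers drop off
restricted = finance.roles_for("alice")
finance.restrict_inheritance = False    # toggle again to revert
reverted = finance.roles_for("alice")
```

This mirrors why, after restricting, you need to add users directly to the restricted collection: their inherited non-admin roles no longer apply there.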
## Register source to a collection

1. Select **Register**, or the register icon on the collection node, to register a data source. Only a data source admin can register sources.
- :::image type="content" source="./media/how-to-create-and-manage-collections/register-by-collection.png" alt-text="Screenshot of the data map Microsoft Purview studio window with the register button highlighted both at the top of the page and under a collection."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/register-by-collection.png" alt-text="Screenshot of the data map Microsoft Purview governance portal window with the register button highlighted both at the top of the page and under a collection."border="true":::
1. Fill in the data source name and other source information. The bottom of the form lists all the collections where you have scan permission. You can select one collection; all assets under this source will belong to the collection you select.
Once you restrict inheritance, you will need to add users directly to the restri
1. The created data source will be put under the selected collection. Select **View details** to see the data source.
- :::image type="content" source="./media/how-to-create-and-manage-collections/see-registered-source.png" alt-text="Screenshot of the data map Microsoft Purview studio window with the newly added source card highlighted."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/see-registered-source.png" alt-text="Screenshot of the data map Microsoft Purview governance portal window with the newly added source card highlighted."border="true":::
1. Select **New scan** to create a scan under the data source.
- :::image type="content" source="./media/how-to-create-and-manage-collections/new-scan.png" alt-text="Screenshot of a source Microsoft Purview studio window with the new scan button highlighted."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/new-scan.png" alt-text="Screenshot of a source Microsoft Purview governance portal window with the new scan button highlighted."border="true":::
1. Similarly, at the bottom of the form, you can select a collection, and all assets scanned will be included in the collection. The collections listed here are restricted to subcollections of the data source collection.
The collections listed here are restricted to subcollections of the data source
1. Back in the collection window, you will see the data sources linked to the collection on the sources card.
- :::image type="content" source="./media/how-to-create-and-manage-collections/source-under-collection.png" alt-text="Screenshot of the data map Microsoft Purview studio window with the newly added source card highlighted in the map."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/source-under-collection.png" alt-text="Screenshot of the data map Microsoft Purview governance portal window with the newly added source card highlighted in the map."border="true":::
## Add assets to collections
Assets and sources are also associated with collections. During a scan, if the s
1. Check the collection information in asset details. You can find collection information in the **Collection path** section in the upper-right corner of the asset details page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/collection-path.png" alt-text="Screenshot of Microsoft Purview studio asset window, with the collection path highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/collection-path.png" alt-text="Screenshot of Microsoft Purview governance portal asset window, with the collection path highlighted." border="true":::
1. Permissions in the asset details page:
    1. Check the collection-based permission model by following the [add roles and restricting access on collections guide above](#add-roles-and-restrict-access-through-collections).
    1. If you don't have read permission on a collection, the assets under that collection won't be listed in search results. If you open the direct URL of an asset, you'll see the no-access page. Contact your Microsoft Purview admin to grant you access. You can select the **Refresh** button to check the permission again.
- :::image type="content" source="./media/how-to-create-and-manage-collections/no-access.png" alt-text="Screenshot of Microsoft Purview studio asset window where the user has no permissions, and has no access to information or options." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/no-access.png" alt-text="Screenshot of Microsoft Purview governance portal asset window where the user has no permissions, and has no access to information or options." border="true":::
1. If you have read permission on a collection but don't have write permission, you can browse the asset details page, but the following operations are disabled:
    * Edit the asset. The **Edit** button will be disabled.
Assets and sources are also associated with collections. During a scan, if the s
    * Move the asset to another collection. The ellipsis button in the upper-right corner of the Collection path section will be hidden.
1. The assets in the **Hierarchy** section are also affected by permissions. Assets without read permission will be grayed out.
- :::image type="content" source="./media/how-to-create-and-manage-collections/hierarchy-permissions.png" alt-text="Screenshot of Microsoft Purview studio hierarchy window where the user has only read permissions, and has no access to options." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/hierarchy-permissions.png" alt-text="Screenshot of Microsoft Purview governance portal hierarchy window where the user has only read permissions, and has no access to options." border="true":::
### Move asset to another collection

1. Select the ellipsis button in the upper-right corner of the Collection path section.
- :::image type="content" source="./media/how-to-create-and-manage-collections/move-asset.png" alt-text="Screenshot of Microsoft Purview studio asset window with the collection path highlighted and the ellipsis button next to collection path selected." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/move-asset.png" alt-text="Screenshot of Microsoft Purview governance portal asset window with the collection path highlighted and the ellipsis button next to collection path selected." border="true":::
1. Select the **Move to another collection** button.
1. In the right-side panel, choose the target collection you want to move to. You can only see the collections where you have write permission. The asset can also only be added to subcollections of the data source collection.
- :::image type="content" source="./media/how-to-create-and-manage-collections/move-select-collection.png" alt-text="Screenshot of Microsoft Purview studio pop-up window with the select a collection dropdown menu highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/move-select-collection.png" alt-text="Screenshot of Microsoft Purview governance portal pop-up window with the select a collection dropdown menu highlighted." border="true":::
1. Select the **Move** button at the bottom of the window to move the asset.
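The move constraint above (an asset can only land in the data source's collection or one of its subcollections) can be sketched as a small hierarchy walk. The collection names are made up for illustration; this is not the Purview API.

```python
# Hypothetical collection hierarchy: child name -> parent name.
parents = {
    "Contoso": None,
    "Finance": "Contoso",
    "Payroll": "Finance",
    "Sales": "Contoso",
}

def ancestors(name):
    """Yield a collection and each of its ancestors up to the root."""
    while name is not None:
        yield name
        name = parents[name]

def allowed_move_targets(source_collection):
    """Collections that are source_collection itself or a descendant of it --
    the only valid move targets for an asset from that source."""
    return {c for c in parents if source_collection in ancestors(c)}

targets = allowed_move_targets("Finance")
```

For a source registered under "Finance", a move to "Sales" would be rejected because "Finance" isn't among its ancestors.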
Assets and sources are also associated with collections. During a scan, if the s
### Search by collection
-1. In Microsoft Purview, the search bar is located at the top of the Microsoft Purview studio UX.
+1. In Microsoft Purview, the search bar is located at the top of the Microsoft Purview governance portal UX.
:::image type="content" source="./media/how-to-create-and-manage-collections/purview-search-bar.png" alt-text="Screenshot showing the location of the Microsoft Purview search bar" border="true":::
Assets and sources are also associated with collections. During a scan, if the s
1. You can browse data assets by selecting **Browse assets** on the homepage.
- :::image type="content" source="./media/how-to-create-and-manage-collections/browse-by-collection.png" alt-text="Screenshot of the catalog Microsoft Purview studio window with the browse assets button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/browse-by-collection.png" alt-text="Screenshot of the catalog Microsoft Purview governance portal window with the browse assets button highlighted." border="true":::
1. On the Browse asset page, select the **By collection** pivot. Collections are listed in a hierarchical table view. To further explore assets in each collection, select the corresponding collection name.
- :::image type="content" source="./media/how-to-create-and-manage-collections/by-collection-view.png" alt-text="Screenshot of the asset Microsoft Purview studio window with the by collection tab selected."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/by-collection-view.png" alt-text="Screenshot of the asset Microsoft Purview governance portal window with the by collection tab selected." border="true":::
1. On the next page, the search results of the assets under the selected collection will be shown. You can narrow the results by selecting the facet filters. Or you can see the assets under other collections by selecting the sub/related collection names.
- :::image type="content" source="./media/how-to-create-and-manage-collections/search-results-by-collection.png" alt-text="Screenshot of the catalog Microsoft Purview studio window with the by collection tab selected."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/search-results-by-collection.png" alt-text="Screenshot of the catalog Microsoft Purview governance portal window with the by collection tab selected." border="true":::
1. To view the details of an asset, select the asset name in the search result. Or you can check the assets and bulk edit them.
- :::image type="content" source="./media/how-to-create-and-manage-collections/view-asset-details.png" alt-text="Screenshot of the catalog Microsoft Purview studio window with the by collection tab selected and asset check boxes highlighted."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/view-asset-details.png" alt-text="Screenshot of the catalog Microsoft Purview governance portal window with the by collection tab selected and asset check boxes highlighted." border="true":::
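The collection-scoped search steps above can also be expressed as a discovery query payload. This is an illustrative sketch only: the `keywords`, `limit`, and `filter`/`collectionId` fields are assumptions based on the preview Purview discovery query API, so check the current reference before relying on this shape.

```python
# Hedged sketch: build a discovery query payload scoped to one collection.
# Field names are assumptions based on the preview Catalog Data Plane API.

def collection_search_body(keywords: str, collection_id: str,
                           limit: int = 25) -> dict:
    """Build a search payload that narrows results to a single collection."""
    return {
        "keywords": keywords,
        "limit": limit,
        "filter": {"collectionId": collection_id},
    }

# Hypothetical keyword and collection name for illustration only.
body = collection_search_body("sales", "finance-collection")
print(body["filter"])  # {'collectionId': 'finance-collection'}
```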
## Next steps
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
# Resource group and subscription access provisioning by data owner (preview)

[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data use governance* in Microsoft Purview.
+[Policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data Use Management* in Microsoft Purview.
You can also [register an entire resource group or subscription](register-scan-azure-multiple-sources.md), and create a single policy that will manage access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards. This article describes how this is done.
This article describes how this is done.
## Configuration

[!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
-### Register the subscription or resource group for data use governance
+### Register the subscription or resource group for Data Use Management
The subscription or resource group needs to be registered with Microsoft Purview to later define access policies. To register your resource, follow the **Prerequisites** and **Register** sections of this guide:

- [Register multiple sources in Microsoft Purview](register-scan-azure-multiple-sources.md#prerequisites)
-After you've registered your resources, you'll need to enable data use governance. Data use governance affects the security of your data, as it delegates to certain users to manage access to data resources from within Microsoft Purview.
+After you've registered your resources, you'll need to enable Data Use Management. Data Use Management affects the security of your data, as it delegates to certain users to manage access to data resources from within Microsoft Purview.
-To ensure you securely enable data use governance, and follow best practices, follow this guide to enable data use governance for your resource group or subscription:
+To ensure you securely enable Data Use Management, and follow best practices, follow this guide to enable Data Use Management for your resource group or subscription:
-- [How to enable data use governance](./how-to-enable-data-use-governance.md)
+- [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-In the end, your resource will have the **Data use governance** toggle to **Enabled**, as shown in the picture:
+In the end, your resource will have the **Data Use Management** toggle to **Enabled**, as shown in the picture:
:::image type="content" source="./media/how-to-data-owner-policies-resource-group/register-resource-group-for-policy.png" alt-text="Screenshot that shows how to register a resource group or subscription for policy by toggling the enable tab in the resource editor.":::
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Access policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data use governance* in Microsoft Purview.
+[Access policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data Use Management* in Microsoft Purview.
This article describes how a data owner can delegate management of access to Azure Storage datasets in Microsoft Purview. Currently, these two Azure Storage sources are supported:

- Blob storage
This article describes how a data owner can delegate in Microsoft Purview manage
## Configuration

[!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
-### Register the data sources in Microsoft Purview for Data use governance
+### Register the data sources in Microsoft Purview for Data Use Management
The Azure Storage resources need to be registered first with Microsoft Purview to later define access policies. To register your resources, follow the **Prerequisites** and **Register** sections of these guides:
To register your resources, follow the **Prerequisites** and **Register** sectio
- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Microsoft Purview](register-scan-adls-gen2.md#prerequisites)
-After you've registered your resources, you'll need to enable *Data use governance*. Data use governance can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to data sources that have been registered. Secure practices related to *Data use governance* are described in this guide:
+After you've registered your resources, you'll need to enable *Data Use Management*. Data Use Management can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to data sources that have been registered. Secure practices related to *Data Use Management* are described in this guide:
-- [How to enable data use governance](./how-to-enable-data-use-governance.md)
+- [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-Once your data source has the **Data use governance** toggle **Enabled**, it will look like this picture:
+Once your data source has the **Data Use Management** toggle **Enabled**, it will look like this picture:
:::image type="content" source="./media/how-to-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows how to register a data source for policy by toggling the enable tab in the resource editor.":::
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Last updated 4/18/2022
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-Access policies allow a data owner to delegate in Microsoft Purview access management to a data source. These policies can be authored directly in the Microsoft Purview studio, and after publishing, they get enforced by the data source. This tutorial describes how to create, update, and publish access policies in the Microsoft Purview studio.
+Access policies allow a data owner to delegate access management for a data source in Microsoft Purview. These policies can be authored directly in the Microsoft Purview governance portal, and after publishing, they are enforced by the data source. This tutorial describes how to create, update, and publish access policies in the Microsoft Purview governance portal.
## Prerequisites

[!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
Access policies allow a data owner to delegate in Microsoft Purview access manag
### Data source configuration
-Before authoring data policies in Microsoft Purview Studio, you'll need to configure the data sources so that they can enforce those policies.
+Before authoring data policies in the Microsoft Purview governance portal, you'll need to configure the data sources so that they can enforce those policies.
1. Follow any policy-specific prerequisites for your source. Check the [Microsoft Purview supported data sources table](azure-purview-connector-overview.md#microsoft-purview-data-sources) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections.
1. Register the data source in Microsoft Purview. Follow the **Prerequisites** and **Register** sections of the [source pages](azure-purview-connector-overview.md) for your resources.
-1. [Enable the data use governance toggle on the data source](how-to-enable-data-use-governance.md#enable-data-use-governance). Additional permissions for this step are described in the linked document.
+1. [Enable the Data Use Management toggle on the data source](how-to-enable-data-use-management.md#enable-data-use-management). Additional permissions for this step are described in the linked document.
## Create a new policy This section describes the steps to create a new policy in Microsoft Purview.
-1. Sign in to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
A newly created policy is in the **draft** state. The process of publishing asso
The steps to publish a policy are as follows:
-1. Sign in to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
The steps to publish a policy are as follows:
Steps to update or delete a policy in Microsoft Purview are as follows.
-1. Sign in to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
purview How To Delete Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-delete-self-service-data-access-policy.md
Last updated 03/22/2022
# How to delete self-service data access policies
-In a Microsoft Purview catalog, you can now [request access](how-to-request-access.md) to data assets. If policies are currently available for the data source type and the data source has [data use governance enabled](how-to-enable-data-use-governance.md), a self-service policy is generated when a data access request is approved.
+In a Microsoft Purview catalog, you can now [request access](how-to-request-access.md) to data assets. If policies are currently available for the data source type and the data source has [Data Use Management enabled](how-to-enable-data-use-management.md), a self-service policy is generated when a data access request is approved.
This article describes how to delete self-service data access policies that have been auto-generated by approved access requests.
This article describes how to delete self-service data access policies that have
Self-service policies must exist to be deleted. To enable and create self-service policies, follow these articles:
-1. [Enable Data Use Governance](how-to-enable-data-use-governance.md) - this will allow Microsoft Purview to create policies for your sources.
+1. [Enable Data Use Management](how-to-enable-data-use-management.md) - this will allow Microsoft Purview to create policies for your sources.
1. [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - this will enable [users to request access to data sources from within Microsoft Purview](how-to-request-access.md).
1. [Approve a self-service data access request](how-to-workflow-manage-requests-approvals.md#approvals) - after approving a request, if your workflow from the previous step includes the ability to create a self-service data policy, your policy will be created and will be viewable.
Only users with **Policy Admin** privilege can delete self-service data access p
## Steps to delete self-service data access policies
-1. Open the Azure portal and launch the [Microsoft Purview Studio](https://web.purview.azure.com/resource/). The Microsoft Purview studio can be launched as shown below or by using the [url directly](https://web.purview.azure.com/resource/).
+1. Open the Azure portal and launch the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). The Microsoft Purview governance portal can be launched as shown below, or by using the [URL directly](https://web.purview.azure.com/resource/).
- :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-launch-pic-1.png" alt-text="Screenshot showing a Microsoft Purview account open in the Azure portal, with the Microsoft Purview studio button highlighted.":::
+ :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-launch-pic-1.png" alt-text="Screenshot showing a Microsoft Purview account open in the Azure portal, with the Microsoft Purview governance portal button highlighted.":::
1. Select the policy management tab to launch the self-service access policies.
- :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-2.png" alt-text="Screenshot of the Microsoft Purview studio with the leftmost menu open, and the Data policy page option highlighted.":::
+ :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-2.png" alt-text="Screenshot of the Microsoft Purview governance portal with the leftmost menu open, and the Data policy page option highlighted.":::
1. Open the self-service access policies tab.
- :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-3.png" alt-text="Screenshot of the Microsoft Purview studio open to the Data policy page with self-service access policies highlighted.":::
+ :::image type="content" source="./media/how-to-delete-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-3.png" alt-text="Screenshot of the Microsoft Purview governance portal open to the Data policy page with self-service access policies highlighted.":::
1. Here you'll see all your policies. Select the policies that need to be deleted. The policies can be sorted and filtered by any of the displayed columns to improve your search.
purview How To Enable Data Use Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-management.md
+
+ Title: Enabling Data Use Management on your Microsoft Purview sources
+description: Step-by-step guide on how to enable data use access for your registered sources.
+++++ Last updated : 4/21/2022+++
+# Enable Data Use Management on your Microsoft Purview sources
++
+*Data Use Management* (DUM) is an option within the data source registration in Microsoft Purview. This option lets Microsoft Purview manage data access for your resources. At a high level, the data owner makes a data resource available for access policies by enabling *DUM*.
+
+Currently, a data owner can enable DUM on a data resource for these types of access policies:
+
+* [Data owner access policies](concept-data-owner-policies.md) - access policies authored via Microsoft Purview data policy experience.
+* [Self-service access policies](concept-self-service-data-access-policy.md) - access policies automatically generated by Microsoft Purview after a [self-service access request](how-to-request-access.md) is approved.
+
+To be able to create any data policy on a resource, DUM must first be enabled on that resource. This article will explain how to enable DUM on your resources in Microsoft Purview.
+
+>[!IMPORTANT]
+>Because Data Use Management directly affects access to your data, it directly affects your data security. Review [**additional considerations**](#additional-considerations-related-to-data-use-management) and [**best practices**](#data-use-management-best-practices) below before enabling DUM in your environment.
+
+## Prerequisites
+
+## Enable Data Use Management
+
+To enable *Data Use Management* for a resource, the resource will first need to be registered in Microsoft Purview.
+To register a resource, follow the **Prerequisites** and **Register** sections of the [source pages](azure-purview-connector-overview.md) for your resources.
+
+Once you have your resource registered, follow the rest of the steps to enable an individual resource for *Data Use Management*.
+
+1. Go to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+
+1. Select the **Data map** tab in the left menu.
+
+1. Select the **Sources** tab in the left menu.
+
+1. Select the source where you want to enable *Data Use Management*.
+
+1. At the top of the source page, select **Edit source**.
+
+1. Set the *Data Use Management* toggle to **Enabled**, as shown in the image below.
++
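The portal toggle above is the supported way to enable Data Use Management. For auditing which sources are registered at all, the registration list can also be read programmatically. A minimal Python sketch follows; the endpoint shape, `api-version`, and response layout are assumptions based on the preview Purview Scanning REST API, not confirmed by this article, so verify them against the current REST reference before use.

```python
# Hedged sketch: list the data sources registered in a Microsoft Purview
# account via the Purview Scanning REST API. Endpoint and api-version are
# assumptions based on the preview Scanning API.

def scan_datasources_url(account_name: str,
                         api_version: str = "2022-02-01-preview") -> str:
    """Build the list-all data sources URL for a Purview account."""
    return (f"https://{account_name}.purview.azure.com"
            f"/scan/datasources?api-version={api_version}")

def list_registered_sources(account_name: str, token: str) -> list:
    # Requires the third-party `requests` package and a bearer token for the
    # https://purview.azure.net scope (for example, from azure-identity's
    # DefaultAzureCredential).
    import requests
    resp = requests.get(scan_datasources_url(account_name),
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json().get("value", [])

print(scan_datasources_url("contoso"))
```

Note that only registration is inspected here; the *Data Use Management* toggle itself is managed as described in the steps above.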
+## Disable Data Use Management
+
+To disable Data Use Management for a source, resource group, or subscription, a user needs to be either a resource IAM **Owner** or a Microsoft Purview **Data source admin**. Once you have either of those permissions, follow these steps:
+
+1. Go to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+
+1. Select the **Data map** tab in the left menu.
+
+1. Select the **Sources** tab in the left menu.
+
+1. Select the source you want to disable Data Use Management for.
+
+1. At the top of the source page, select **Edit source**.
+
+1. Set the **Data Use Management** toggle to **Disabled**.
+
+## Additional considerations related to Data Use Management
+
+- Make sure you write down the **Name** you use when registering in Microsoft Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.
+- To disable a source for *Data Use Management*, first remove it from any policy where it is bound (that is, published).
+- While a user needs both the data source *Owner* role and the Microsoft Purview *Data source admin* role to enable a source for *Data Use Management*, either of those roles can independently disable it.
+- Disabling *Data Use Management* for a subscription will disable it also for all assets registered in that subscription.
+
+> [!WARNING]
+> **Known issues** related to source registration
+> - Moving data sources to a different resource group or subscription is not yet supported. If you want to move a data source, de-register it in Microsoft Purview before the move, and then register it again afterward.
+> - Once a subscription is disabled for *Data Use Management*, any underlying assets that are enabled for *Data Use Management* will be disabled, which is the expected behavior. However, policy statements based on those assets will still be allowed after that.
+
+## Data Use Management best practices
+- We highly encourage registering data sources for *Data Use Management* and managing all associated access policies in a single Microsoft Purview account.
+- Should you have multiple Microsoft Purview accounts, be aware that **all** data sources belonging to a subscription must be registered for *Data Use Management* in a single Microsoft Purview account. That Microsoft Purview account can be in any subscription in the tenant. The *Data Use Management* toggle will become greyed out when there are invalid configurations. Some examples of valid and invalid configurations follow in the diagram below:
+ - **Case 1** shows a valid configuration where a Storage account is registered in a Microsoft Purview account in the same subscription.
+ - **Case 2** shows a valid configuration where a Storage account is registered in a Microsoft Purview account in a different subscription.
+ - **Case 3** shows an invalid configuration arising because Storage accounts S3SA1 and S3SA2 both belong to Subscription 3, but are registered to different Microsoft Purview accounts. In that case, the *Data Use Management* toggle can only be enabled in the Microsoft Purview account that registers a data source in that subscription first. The toggle will then be greyed out for the other data source.
+- If the *Data Use Management* toggle is greyed out and cannot be enabled, hover over it to see the name of the Microsoft Purview account that registered the data resource first.
+
+![Diagram shows valid and invalid configurations when using multiple Microsoft Purview accounts to manage policies.](./media/access-policies-common/valid-and-invalid-configurations.png)
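The single-account rule in the best practices above can be checked mechanically. The following is an illustrative Python sketch (not a Microsoft API) that flags subscriptions whose registered sources are split across more than one Microsoft Purview account, mirroring the cases in the diagram:

```python
# Illustrative check of the best-practice rule: all data sources in a
# subscription must be registered for Data Use Management in a single
# Microsoft Purview account.
from collections import defaultdict

def find_conflicting_subscriptions(registrations):
    """registrations: list of (subscription, source_name, purview_account).
    Returns subscriptions whose sources span multiple Purview accounts."""
    accounts_by_sub = defaultdict(set)
    for subscription, _source, purview_account in registrations:
        accounts_by_sub[subscription].add(purview_account)
    return sorted(s for s, accts in accounts_by_sub.items() if len(accts) > 1)

# Names mirror the diagram: Case 3 is invalid because S3SA1 and S3SA2 in
# Subscription 3 are registered to different Purview accounts.
regs = [
    ("Sub1", "S1SA1", "PurviewAcct1"),  # Case 1: same subscription
    ("Sub2", "S2SA1", "PurviewAcct1"),  # Case 2: cross-subscription, one account
    ("Sub3", "S3SA1", "PurviewAcct1"),  # Case 3: split across accounts
    ("Sub3", "S3SA2", "PurviewAcct2"),
]
print(find_conflicting_subscriptions(regs))  # ['Sub3']
```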
+
+## Next steps
+
+- [Create data owner policies for your resources](how-to-data-owner-policy-authoring-generic.md)
+- [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
+- [Enable Microsoft Purview data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md)
purview How To Monitor Scan Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-scan-runs.md
In Microsoft Purview, you can register and scan various types of data sources, a
## Monitor scan runs
-1. Go to your Microsoft Purview account -> open **Microsoft Purview Studio** -> **Data map** -> **Monitoring**.
+1. Go to your Microsoft Purview account -> open **Microsoft Purview governance portal** -> **Data map** -> **Monitoring**.
1. The high-level KPIs show total scan runs within a period. The time period defaults to the last 30 days; you can also select the last seven days. Based on the time filter selected, you can see the distribution of successful, failed, and canceled scan runs by week or by day in the graph.
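The rollup in this step can be sketched as a simple aggregation. This illustrative Python snippet (not the actual monitoring backend) counts scan runs by status within the selected window:

```python
# Illustrative sketch of the monitoring rollup: count scan runs by status
# within the selected time window (defaulting to the last 30 days).
from datetime import datetime, timedelta
from collections import Counter

def scan_run_kpis(runs, now, days=30):
    """runs: list of (started_at: datetime, status: str).
    Returns a Counter of statuses for runs inside the window."""
    cutoff = now - timedelta(days=days)
    return Counter(status for started_at, status in runs if started_at >= cutoff)

now = datetime(2022, 4, 21)
runs = [
    (now - timedelta(days=1), "Succeeded"),
    (now - timedelta(days=5), "Failed"),
    (now - timedelta(days=40), "Succeeded"),  # outside the 30-day window
]
print(scan_run_kpis(runs, now))
```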
purview How To Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md
This article outlines how to make an access request.
1. To find a data asset, use Microsoft Purview's [search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) functionality.
- :::image type="content" source="./media/how-to-request-access/search-or-browse.png" alt-text="Screenshot of the Microsoft Purview studio, with the search bar and browse buttons highlighted.":::
+ :::image type="content" source="./media/how-to-request-access/search-or-browse.png" alt-text="Screenshot of the Microsoft Purview governance portal, with the search bar and browse buttons highlighted.":::
1. Select the asset to go to asset details.
purview How To Search Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-search-catalog.md
The goal of search in Microsoft Purview is to speed up the process of quickly fi
## Searching the catalog
-The search bar can be quickly accessed from the top bar of the Microsoft Purview Studio UX. In the data catalog home page, the search bar is in the center of the screen.
+The search bar can be quickly accessed from the top bar of the Microsoft Purview governance portal UX. In the data catalog home page, the search bar is in the center of the screen.
:::image type="content" source="./media/how-to-search-catalog/purview-search-bar.png" alt-text="Screenshot showing the location of the Microsoft Purview search bar" border="true":::
purview How To View Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-view-self-service-data-access-policy.md
Last updated 03/22/2022
# How to view self-service data access policies
-In a Microsoft Purview catalog, you can now [request access](how-to-request-access.md) to data assets. If policies are currently available for the data source type and the data source has [data use governance enabled](how-to-enable-data-use-governance.md), a self-service policy is generated when a data access request is approved.
+In a Microsoft Purview catalog, you can now [request access](how-to-request-access.md) to data assets. If policies are currently available for the data source type and the data source has [Data Use Management enabled](how-to-enable-data-use-management.md), a self-service policy is generated when a data access request is approved.
This article describes how to view self-service data access policies that have been auto-generated by approved access requests.
This article describes how to view self-service data access policies that have b
Self-service policies must exist for them to be viewed. To enable and create self-service policies, follow these articles:
-1. [Enable Data Use Governance](how-to-enable-data-use-governance.md) - this will allow Microsoft Purview to create policies for your sources.
+1. [Enable Data Use Management](how-to-enable-data-use-management.md) - this will allow Microsoft Purview to create policies for your sources.
1. [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - this will enable [users to request access to data sources from within Microsoft Purview](how-to-request-access.md).
1. [Approve a self-service data access request](how-to-workflow-manage-requests-approvals.md#approvals) - after approving a request, if your workflow from the previous step includes the ability to create a self-service data policy, your policy will be created and will be viewable.
If you need to add or request permissions, follow the [Microsoft Purview permiss
## Steps to view self-service data access policies
-1. Open the Azure portal and launch the [Microsoft Purview Studio](https://web.purview.azure.com/resource/). The Microsoft Purview studio can be launched as shown below or by using the [url directly](https://web.purview.azure.com/resource/).
+1. Open the Azure portal and launch the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). The Microsoft Purview governance portal can be launched as shown below, or by using the [URL directly](https://web.purview.azure.com/resource/).
- :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-launch-pic-1.png" alt-text="Screenshot showing a Microsoft Purview account open in the Azure portal, with the Microsoft Purview studio button highlighted.":::
+ :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-launch-pic-1.png" alt-text="Screenshot showing a Microsoft Purview account open in the Azure portal, with the Microsoft Purview governance portal button highlighted.":::
1. Select the policy management tab to launch the self-service access policies.
- :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-2.png" alt-text="Screenshot of the Microsoft Purview studio with the leftmost menu open, and the Data policy page option highlighted.":::
+ :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-2.png" alt-text="Screenshot of the Microsoft Purview governance portal with the leftmost menu open, and the Data policy page option highlighted.":::
1. Open the self-service access policies tab.
- :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-3.png" alt-text="Screenshot of the Microsoft Purview studio open to the Data policy page with self-service access policies highlighted.":::
+ :::image type="content" source="./media/how-to-view-self-service-data-access-policy/Purview-Studio-self-service-tab-pic-3.png" alt-text="Screenshot of the Microsoft Purview governance portal open to the Data policy page with self-service access policies highlighted.":::
1. Here you'll see all your policies. The policies can be sorted and filtered by any of the displayed columns to improve your search.
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
It is important to register the data source in Microsoft Purview prior to settin
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-select-data-source.png" alt-text="Screenshot that allows selection of the data source":::
-1. Provide a suitable **Name** for the data source, select the relevant **Azure subscription**, existing **Data Lake Store account name** and the **collection** and select **Apply**. Leave the **Data use governance** toggle on the **disabled** position until you have a chance to carefully go over this [document](./how-to-access-policies-storage.md).
+1. Provide a suitable **Name** for the data source, select the relevant **Azure subscription**, existing **Data Lake Store account name** and the **collection** and select **Apply**. Leave the **Data Use Management** toggle on the **disabled** position until you have a chance to carefully go over this [document](./how-to-access-policies-storage.md).
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-data-source-details.png" alt-text="Screenshot that shows the details to be entered in order to register the data source":::
To create an access policy for Azure Data Lake Storage Gen 2, follow the guideli
[!INCLUDE [Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]
-### Enable data use governance
+### Enable Data Use Management
-Data use governance is an option on your Microsoft Purview sources that will allow you to manage access for that source from within Microsoft Purview.
-To enable data use governance, follow [the data use governance guide](how-to-enable-data-use-governance.md#enable-data-use-governance).
+Data Use Management is an option on your Microsoft Purview sources that will allow you to manage access for that source from within Microsoft Purview.
+To enable Data Use Management, follow [the Data Use Management guide](how-to-enable-data-use-management.md#enable-data-use-management).
### Create an access policy
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
It is important to register the data source in Microsoft Purview prior to settin
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-select-data-source.png" alt-text="Screenshot that allows selection of the data source":::
-1. Provide a suitable **Name** for the data source, select the relevant **Azure subscription**, existing **Azure Blob Storage account name** and the **collection** and select **Apply**. Leave the **Data use governance** toggle on the **disabled** position until you have a chance to carefully go over this [document](./how-to-access-policies-storage.md).
+1. Provide a suitable **Name** for the data source, select the relevant **Azure subscription**, existing **Azure Blob Storage account name** and the **collection** and select **Apply**. Leave the **Data Use Management** toggle on the **disabled** position until you have a chance to carefully go over this [document](./how-to-access-policies-storage.md).
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-data-source-details.png" alt-text="Screenshot that shows the details to be entered in order to register the data source":::
To create an access policy for an Azure Storage account, follow the guidelines b
[!INCLUDE [Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]
-### Enable data use governance
+### Enable Data Use Management
-Data use governance is an option on your Microsoft Purview sources that will allow you to manage access for that source from within Microsoft Purview.
-To enable data use governance, follow [the data use governance guide](how-to-enable-data-use-governance.md#enable-data-use-governance).
+Data Use Management is an option on your Microsoft Purview sources that will allow you to manage access for that source from within Microsoft Purview.
+To enable Data Use Management, follow [the Data Use Management guide](how-to-enable-data-use-management.md#enable-data-use-management).
### Create an access policy
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
It's important to register the data source in Microsoft Purview before setting u
If your database server has a firewall enabled, you'll need to update the firewall to allow access in one of two ways:
-1. [Allow Azure connections through the firewall](#allow-azure-connections).
-1. [Install a Self-Hosted Integration Runtime and give it access through the firewall](#self-hosted-integration-runtime).
+1. [Allow Azure connections through the firewall](#allow-azure-connections) - a straightforward option to route traffic through Azure networking, without needing to manage virtual machines.
+1. [Install a Self-Hosted Integration Runtime on a machine in your network and give it access through the firewall](#self-hosted-integration-runtime) - if you have a private VNet set up within Azure, or have any other closed network set up, using a self-hosted integration runtime on a machine within that network will allow you to fully manage traffic flow and utilize your existing network.
+1. [Use a managed virtual network](catalog-managed-vnet.md) - setting up a managed virtual network with your Microsoft Purview account will allow you to connect to Azure SQL using the Azure integration runtime in a closed network.
+
+For more information about the Azure SQL Firewall, see the [SQL Database firewall documentation](../azure-sql/database/firewall-configure.md). To connect Microsoft Purview through the firewall, follow the steps below.
#### Allow Azure Connections
Enabling Azure connections will allow Microsoft Purview to reach and connect the
A self-hosted integration runtime (SHIR) can be installed on a machine to connect with a resource in a private network. 1. [Create and install a self-hosted integration runtime](./manage-integration-runtimes.md) on a personal machine, or a machine inside the same VNet as your database server.
-1. Check your database server networking configuration to confirm that there is a private endpoint accessible to the SHIR machine. Add the IP of the machine if it doesn't already have access.
+1. Check your database server networking configuration to confirm that there's a private endpoint accessible to the SHIR machine. Add the IP of the machine if it doesn't already have access.
1. If your Azure SQL Server is behind a private endpoint or in a VNet, you can use an [ingestion private endpoint](catalog-private-link-ingestion.md#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) to ensure end-to-end network isolation. ### Authentication for a scan To scan your data source, you'll need to configure an authentication method in the Azure SQL Database.
-The following options are supported:
-* **SQL Authentication**
+>[!IMPORTANT]
+> If you are using a [self-hosted integration runtime](manage-integration-runtimes.md) to connect to your resource, **system-assigned and user-assigned managed identities will not work**. You need to use Service Principal authentication or SQL authentication.
-* **System-assigned managed identity** - As soon as the Microsoft Purview account is created, a system-assigned managed identity (SAMI) is created automatically in Azure AD tenant, and has the same name as your Microsoft Purview account. Depending on the type of resource, specific RBAC role assignments are required for the Microsoft Purview SAMI to be able to scan.
+The following options are supported:
-* **User-assigned managed identity** (preview) - Similar to a SAMI, a user-assigned managed identity (UAMI) is a credential resource that can be used to allow Microsoft Purview to authenticate against Azure Active Directory. Depending on the type of resource, specific RBAC role assignments are required when using a UAMI credential to run scans.
+* **System-assigned managed identity** (Recommended) - This is an identity associated directly with your Microsoft Purview account that allows you to authenticate directly with other Azure resources without needing to manage a go-between user or credential set. The **system-assigned** managed identity is created when your Microsoft Purview resource is created, is managed by Azure, and uses your Microsoft Purview account's name. The SAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see the [managed identity overview](/active-directory/managed-identities-azure-resources/overview).
-* **Service Principal**- In this method, you can create a new or use an existing service principal in your Azure Active Directory tenant.
+* **User-assigned managed identity** (preview) - Similar to a SAMI, a user-assigned managed identity (UAMI) is a credential resource that allows Microsoft Purview to authenticate against Azure Active Directory. The **user-assigned** managed identity is managed by users in Azure, rather than by Azure itself, which gives you more control over security. The UAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see our [guide for user-assigned managed identities](manage-credentials.md#create-a-user-assigned-managed-identity).
->[!IMPORTANT]
-> If you are using a [self-hosted integration runtime](manage-integration-runtimes.md) to connect to your resource, system-assigned and user-assigned managed identities will not work. You need to use SQL Authentication or Service Principal Authentication.
+* **Service Principal** - A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Its authentication has an expiration date, so it can be useful for temporary projects. For more information, see the [service principal documentation](/active-directory/develop/app-objects-and-service-principals).
+
+* **SQL Authentication** - connect to the SQL database with a username and password. For more information about SQL authentication, see the [SQL authentication documentation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). If you need to create a login, follow this [guide to query an Azure SQL database](../azure-sql/database/connect-query-portal.md), and use [this guide to create a login using T-SQL](/sql/t-sql/statements/create-login-transact-sql).
+ > [!NOTE]
+ > Be sure to select the Azure SQL Database option on the page.
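With SQL authentication, the scanner ultimately connects using an ODBC-style connection string carrying the username and password. As a minimal sketch only (the driver name, server, database, and credential values below are illustrative placeholders, not values taken from this article), such a connection string can be assembled like this:

```python
def build_azure_sql_conn_str(server: str, database: str, user: str, password: str) -> str:
    """Assemble an ODBC connection string for Azure SQL using SQL authentication.

    All argument values in the example call are placeholders. 'ODBC Driver 18
    for SQL Server' is one Microsoft ODBC driver name; verify which driver is
    installed on your machine before using it.
    """
    return (
        "DRIVER={ODBC Driver 18 for SQL Server};"
        f"SERVER=tcp:{server},1433;"  # Azure SQL listens on TCP port 1433
        f"DATABASE={database};"
        f"UID={user};PWD={password};"
        "Encrypt=yes;TrustServerCertificate=no;"  # encrypt traffic to Azure
    )

# Example with placeholder values; the result could be passed to pyodbc.connect(...)
conn_str = build_azure_sql_conn_str(
    "myserver.database.windows.net", "mydb", "scan_user", "P@ssw0rd!"
)
```

In a real setup, store the password in a credential vault rather than in code.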
-Select your method of authentication from the tabs below for steps to authenticate with your Azure SQL Database.
+Select your chosen method of authentication from the tabs below for steps to authenticate with your Azure SQL Database.
# [SQL authentication](#tab/sql-authentication)
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
When setting up scan, you can choose to scan an entire Cassandra instance, or sc
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
**If your data store is not publicly accessible** (if your data store limits access from on-premises network, private network or specific IPs, etc.) you need to configure a self-hosted integration runtime to connect to it:
When setting up scan, you can choose to scan an entire Cassandra instance, or sc
## Register
-This section describes how to register Cassandra in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Cassandra in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
When setting up scan, you can choose to scan an entire Db2 database, or scope th
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.12.7984.1.
When setting up scan, you can choose to scan an entire Db2 database, or scope th
## Register
-This section describes how to register Db2 in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Db2 in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register To register a new Db2 source in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Microsoft Purview account in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On Register sources, select **Db2**. Select **Continue**.
On the **Register sources (Db2)** screen, do the following:
1. Enter the **Server** name to connect to a Db2 source. This can either be: * A host name used to connect to the database server. For example: `MyDatabaseServer.com` * An IP address. For example: `192.169.1.2`
- * Its fully qualified JDBC connection string. For example:
-
- ```
- jdbc:db2://COMPUTER_NAME_OR_IP:PORT/DATABASE_NAME
- ```
1. Enter the **Port** used to connect to the database server (446 by default for Db2).
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-erwin-source.md
When setting up scan, you can choose to scan an entire erwin Mart server, or sco
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When setting up scan, you can choose to scan an entire erwin Mart server, or sco
## Register
-This section describes how to register erwin Mart servers in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register erwin Mart servers in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
The only supported authentication for an erwin Mart source is **Server Authentication** in the form of username and password. ### Steps to register
-1. Navigate to your Microsoft Purview account in the [Microsoft Purview Studio](https://web.purview.azure.com/).
+1. Navigate to your Microsoft Purview account in the [Microsoft Purview governance portal](https://web.purview.azure.com/).
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On Register sources, select **erwin**. Select **Continue.**
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
When setting up scan, you can choose to scan an entire Google BigQuery project,
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When setting up scan, you can choose to scan an entire Google BigQuery project,
## Register
-This section describes how to register a Google BigQuery project in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register a Google BigQuery project in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
When setting up scan, you can choose to scan an entire Hive metastore database,
* You must have an active [Microsoft Purview account](create-catalog-portal.md).
-* You need Data Source Administrator and Data Reader permissions to register a source and manage it in Microsoft Purview Studio. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md).
When setting up scan, you can choose to scan an entire Hive metastore database,
## Register
-This section describes how to register a Hive Metastore database in Microsoft Purview by using [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register a Hive Metastore database in Microsoft Purview by using [the Microsoft Purview governance portal](https://web.purview.azure.com/).
The only supported authentication for a Hive Metastore database is Basic Authentication.
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
When setting up scan, you can choose to scan an entire Looker server, or scope t
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
If your data store is publicly accessible, you can use the managed Azure integration runtime for scan without additional settings. Otherwise, if your data store limits access from on-premises network, private network or specific IPs, you need to configure a self-hosted integration runtime to connect to it:
If your data store is publicly accessible, you can use the managed Azure integra
## Register
-This section describes how to register Looker in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Looker in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
purview Register Scan Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mongodb.md
When setting up scan, you can choose to scan one or more MongoDB database(s) ent
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.16.8093.1.
When setting up scan, you can choose to scan one or more MongoDB database(s) ent
## Register
-This section describes how to register MongoDB in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register MongoDB in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register To register a new MongoDB source in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Microsoft Purview account in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On Register sources, select **MongoDB**. Select **Continue**.
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
When setting up scan, you can choose to scan an entire MySQL server, or scope th
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See [Microsoft Purview Permissions page](catalog-permissions.md) for details.
**If your data store is not publicly accessible** (if your data store limits access from on-premises network, private network or specific IPs, etc.) you need to configure a self-hosted integration runtime to connect to it:
The MySQL user must have the SELECT, SHOW VIEW and EXECUTE permissions for each
## Register
-This section describes how to register MySQL in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register MySQL in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register To register a new MySQL source in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Microsoft Purview account in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On Register sources, select **MySQL**. Select **Continue**.
On the **Register sources (MySQL)** screen, do the following:
1. Enter the **Server** name to connect to a MySQL source. This can either be: * A host name used to connect to the database server. For example: `MyDatabaseServer.com` * An IP address. For example: `192.169.1.2`
- * Its fully qualified JDBC connection string. For example:
-
- ```
- jdbc:mysql://COMPUTER_NAME_OR_IP/DATABASE_NAME
- ```
1. Enter the **Port** used to connect to the database server (3306 by default for MySQL).
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
When setting up scan, you can choose to specify the database name to scan one da
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). ## Register
-This section describes how to register an on-premises SQL server instance in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register an on-premises SQL server instance in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
Follow the steps below to scan on-premises SQL server instances to automatically
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the SQL Server source that you registered.
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
Currently, the Oracle service name isn't captured in the metadata or hierarchy.
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
Currently, the Oracle service name isn't captured in the metadata or hierarchy.
## Register
-This section describes how to register Oracle in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Oracle in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Prerequisites for registration
The only supported authentication for an Oracle source is **Basic authentication
To register a new Oracle source in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Microsoft Purview account in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On Register sources, select **Oracle**. Select **Continue**.
On the **Register sources (Oracle)** screen, do the following:
1. Enter the **Host** name to connect to an Oracle source. This can either be: * A host name used by JDBC to connect to the database server. For example: `MyDatabaseServer.com` * An IP address. For example: `192.169.1.2`
- * Its fully qualified JDBC connection string. For example:
-
- ```
- jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=oracleserver1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=oracleserver2)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=oracleserver3)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)))
- ```
1. Enter the **Port number** used by JDBC to connect to the database server (1521 by default for Oracle).
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
When setting up scan, you can choose to scan an entire PostgreSQL database, or s
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
**If your data store is not publicly accessible** (if your data store limits access from on-premises network, private network or specific IPs, etc.) you need to configure a self-hosted integration runtime to connect to it:
The PostgreSQL user must have read access to system tables in order to access ad
## Register
-This section describes how to register PostgreSQL in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register PostgreSQL in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register To register a new PostgreSQL source in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Microsoft Purview account in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On Register sources, select **PostgreSQL**. Select **Continue**.
On the **Register sources (PostgreSQL)** screen, do the following:
1. Enter the **Server** name to connect to a PostgreSQL source. This can either be: * A host name used to connect to the database server. For example: `MyDatabaseServer.com` * An IP address. For example: `192.169.1.2`
- * Its fully qualified JDBC connection string. For example:
-
- ```
- jdbc:postgresql://COMPUTER_NAME_OR_IP:PORT/DATABASE_NAME
- ```
1. Enter the **Port** used to connect to the database server (5432 by default for PostgreSQL).
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
This article outlines how to register a Power BI tenant, and how to authenticate
- An active [Microsoft Purview account](create-catalog-portal.md). -- You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+- You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
- If delegated auth is used: - Make sure proper [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to Power BI admin user that is used for the scan.
This is a suitable scenario, if both Microsoft Purview and Power BI tenant are c
To create and run a new scan, do the following:
-1. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
+1. In the Microsoft Purview governance portal, navigate to the **Data map** in the left menu.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
-8. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
+8. In the Microsoft Purview governance portal, navigate to the **Data map** in the left menu.
9. Navigate to **Sources**.
To create and run a new scan using Azure runtime, perform the following steps:
11. Under **Advanced settings**, enable **Allow Public client flows**.
-12. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu. Navigate to **Sources**.
+12. In the Microsoft Purview governance portal, navigate to the **Data map** in the left menu. Navigate to **Sources**.
13. Select the registered Power BI source from cross tenant.
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
On the **Register sources (Teradata)** screen, do the following:
1. Enter a **Name** that the data source will be listed with in the Catalog.
-1. Enter the **Host** name to connect to a Teradata source. It can also be an IP address or a fully qualified connection string to the server.
+1. Enter the **Host** name to connect to a Teradata source. It can also be an IP address of the server.
1. Select a collection or create a new one (Optional)
purview Sensitivity Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/sensitivity-insights.md
Title: Sensitivity label reporting on your data in Microsoft Purview using Microsoft Purview Insights
-description: This how-to guide describes how to view and use Microsoft Purview Sensitivity label reporting on your data.
+description: This how-to guide describes how to view and use sensitivity label reporting on your data.
Previously updated : 09/27/2021 Last updated : 04/22/2022 # Customer intent: As a security officer, I need to understand how to use Microsoft Purview Insights to learn about sensitive data identified and classified and labeled during scanning.
This how-to guide describes how to access, view, and filter security insights provided by sensitivity labels applied to your data. > [!IMPORTANT]
-> Microsoft Purview Sensitivity Label Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Sensitivity labels in Microsoft Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Supported data sources include: Azure Blob Storage, Azure Data Lake Storage (ADLS) GEN 1, Azure Data Lake Storage (ADLS) GEN 2, SQL Server, Azure SQL Database, Azure SQL Managed Instance, Amazon S3 buckets, Amazon RDS databases (public preview), Power BI
In this how-to guide, you'll learn how to:
## Prerequisites
-Before getting started with Microsoft Purview insights, make sure that you've completed the following steps:
+Before getting started with Microsoft Purview Insights, make sure that you've completed the following steps:
- Set up your Azure resources and populated the relevant accounts with test data -- [Extended Microsoft 365 sensitivity labels to assets in Microsoft Purview](create-sensitivity-label.md), and created or selected the labels you want to apply to your data.
+- [Extended sensitivity labels to assets in the Microsoft Purview Data Map](how-to-automatically-label-your-content.md), and created or selected the labels you want to apply to your data.
- Set up and completed a scan on the test data in each data source. For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md) and [Create a scan rule set](create-a-scan-rule-set.md).
Before getting started with Microsoft Purview insights, make sure that you've co
For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md) and [Automatically label your data in Microsoft Purview](create-sensitivity-label.md).
-## Use Microsoft Purview Sensitivity labeling insights
+## Use Microsoft Purview Insights for sensitivity labels
-In Microsoft Purview, classifications are similar to subject tags, and are used to mark and identify data of a specific type that's found within your data estate during scanning.
+Classifications are similar to subject tags, and are used to mark and identify data of a specific type that's found within your data estate during scanning.
Sensitivity labels enable you to state how sensitive certain data is in your organization. For example, a specific project name might be highly confidential within your organization, while that same term is not confidential to other organizations.
Classifications are matched directly, such as a social security number, which ha
In contrast, sensitivity labels are applied when one or more classifications and conditions are found together. In this context, [conditions](/microsoft-365/compliance/apply-sensitivity-label-automatically) refer to all the parameters that you can define for unstructured data, such as **proximity to another classification**, and **% confidence**.
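The distinction above can be sketched in toy code: a classification matches a single pattern directly, while a label applies only when all of its required classifications are present and each meets a condition such as a minimum confidence. This is an illustrative model only, not the Microsoft Purview labeling engine:

```python
def label_applies(found: dict, label_rule: dict) -> bool:
    """Toy model: a sensitivity label applies only when every classification
    the rule requires was found AND meets the rule's minimum % confidence.
    Illustrative sketch; not Microsoft Purview's actual matching logic."""
    for classification, min_confidence in label_rule.items():
        if found.get(classification, 0) < min_confidence:
            return False
    return True

# A hypothetical "Highly confidential" rule: SSN matched at >= 85% confidence
rule = {"U.S. Social Security Number (SSN)": 85}
print(label_applies({"U.S. Social Security Number (SSN)": 92}, rule))  # True
print(label_applies({"U.S. Social Security Number (SSN)": 60}, rule))  # False
```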
-Microsoft Purview uses the same classifications, also known as [sensitive information types](/microsoft-365/compliance/sensitive-information-type-entity-definitions), as Microsoft 365. This enables you to extend your existing sensitivity labels across your Microsoft Purview assets.
+Microsoft Purview Insights uses the same classifications, also known as [sensitive information types](/microsoft-365/compliance/sensitive-information-type-entity-definitions), as those used with Microsoft 365 apps and services. This enables you to extend your existing sensitivity labels to assets in the data map.
> [!NOTE] > After you have scanned your source types, give **Sensitivity labeling** Insights a couple of hours to reflect the new assets.
Microsoft Purview uses the same classifications, also known as [sensitive inform
1. In the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Sensitivity labels** to display the Microsoft Purview **Sensitivity labeling insights** report. > [!NOTE]
- > If this report is empty, you may not have extended your sensitivity labels to Microsoft Purview. For more information, see [Automatically label your data in Microsoft Purview](create-sensitivity-label.md).
+ > If this report is empty, you may not have extended your sensitivity labels to Microsoft Purview Data Map. For more information, see [Labeling in the Microsoft Purview Data Map](create-sensitivity-label.md).
:::image type="content" source="media/insights/sensitivity-labeling-insights-small.png" alt-text="Sensitivity labeling insights":::
Do any of the following to learn more:
|**Browse assets** | To browse through the assets found with a specific label or source, select one or more labels or sources, depending on the report you're viewing, and then select **Browse assets** :::image type="icon" source="medi). | | | |
-## Sensitivity label integration with Microsoft 365 compliance
+## Sensitivity label integration with Microsoft Purview Information Protection
-Close integration with [Microsoft Information Protection](/microsoft-365/compliance/information-protection) offered in Microsoft 365 means that Microsoft Purview enables direct ways to extend visibility into your data estate, and classify and label your data.
+Close integration with [Microsoft Purview Information Protection](/microsoft-365/compliance/information-protection) means that you have direct ways to extend visibility into your data estate, and classify and label your data.
-For your Microsoft 365 sensitivity labels to be extended to your assets in Microsoft Purview, you must actively turn on Information Protection for Microsoft Purview, in the Microsoft 365 compliance center.
+For sensitivity labels to be extended to your assets in the data map, you must actively turn on this capability in the Microsoft Purview compliance portal.
-For more information, see [Automatically label your data in Microsoft Purview](create-sensitivity-label.md).
+For more information, see [How to automatically apply sensitivity labels to your data in the Microsoft Purview Data Map](how-to-automatically-label-your-content.md).
## Next steps
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
Previously updated : 03/15/2022 Last updated : 04/22/2022 # Customer Intent: As a Data and Data Security administrator, I want to deploy Microsoft Purview as a unified data governance solution.
This article lists prerequisites that help you get started quickly on Microsoft
|:|:|:|:| |1 | Azure Active Directory Tenant |N/A |An [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) should be associated with your subscription. <ul><li>*Global Administrator* or *Information Protection Administrator* role is required, if you plan to [extend Microsoft 365 Sensitivity Labels to Microsoft Purview for files and db columns](create-sensitivity-label.md)</li><li> *Global Administrator* or *Power BI Administrator* role is required, if you're planning to [scan Power BI tenants](register-scan-power-bi-tenant.md).</li></ul> | |2 |An active Azure Subscription |*Subscription Owner* |An Azure subscription is needed to deploy Microsoft Purview and its managed resources. If you don't have an Azure subscription, create a [free subscription](https://azure.microsoft.com/free/) before you begin. |
-|3 |Define whether you plan to deploy a Microsoft Purview with managed Event Hub | N/A |A managed Event Hub is created as part of Microsoft Purview account creation, see Microsoft Purview account creation. You can publish messages to the Event Hub kafka topic ATLAS_HOOK and Microsoft Purview will consume and process it. Microsoft Purview will notify entity changes to Event Hub kafka topic ATLAS_ENTITIES and user can consume and process it. |
+|3 |Define whether you plan to deploy Microsoft Purview with a managed event hub | N/A |A managed event hub is created as part of Microsoft Purview account creation, see Microsoft Purview account creation. You can publish messages to the event hub Kafka topic ATLAS_HOOK and Microsoft Purview will consume and process them. Microsoft Purview will notify entity changes to the event hub Kafka topic ATLAS_ENTITIES and users can consume and process them. |
|4 |Register the following resource providers: <ul><li>Microsoft.Storage</li><li>Microsoft.EventHub (optional)</li><li>Microsoft.Purview</li></ul> |*Subscription Owner* or custom role to register Azure resource providers (_/register/action_) | [Register required Azure Resource Providers](/azure/azure-resource-manager/management/resource-providers-and-types) in the Azure Subscription that is designated for Microsoft Purview Account. Review [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md). |
-|5 |Update Azure Policy to allow deployment of the following resources in your Azure subscription: <ul><li>Microsoft Purview</li><li>Azure Storage</li><li>Azure Event Hub (optional)</li></ul> |*Subscription Owner* |Use this step if an existing Azure Policy prevents deploying such Azure resources. If a blocking policy exists and needs to remain in place, please follow our [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) and follow the steps to create an exception for Microsoft Purview accounts. |
+|5 |Update Azure Policy to allow deployment of the following resources in your Azure subscription: <ul><li>Microsoft Purview</li><li>Azure Storage</li><li>Azure Event Hubs (optional)</li></ul> |*Subscription Owner* |Use this step if an existing Azure Policy prevents deploying such Azure resources. If a blocking policy exists and needs to remain in place, please follow our [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) and follow the steps to create an exception for Microsoft Purview accounts. |
|6 | Define your network security requirements. | Network and Security architects. |<ul><li> Review [Microsoft Purview network architecture and best practices](concept-best-practices-network.md) to define what scenario is more relevant to your network requirements. </li><li>If private network is needed, use [Microsoft Purview Managed IR](catalog-managed-vnet.md) to scan Azure data sources when possible to reduce complexity and administrative overhead. </li></ul> | |7 |An Azure Virtual Network and Subnet(s) for Microsoft Purview private endpoints. | *Network Contributor* to create or update Azure VNet. |Use this step if you're planning to deploy [private endpoint connectivity with Microsoft Purview](catalog-private-link.md): <ul><li>Private endpoints for **Ingestion**.</li><li>Private endpoint for Microsoft Purview **Account**.</li><li>Private endpoint for Microsoft Purview **Portal**.</li></ul> <br> Deploy [Azure Virtual Network](../virtual-network/quick-create-portal.md) if you need one. | |8 |Deploy private endpoint for Azure data sources. |*Network Contributor* to set up private endpoints for each data source. |Perform this step, if you're planning to use [Private Endpoint for Ingestion](catalog-private-link-end-to-end.md). |
This article lists prerequisites that help you get started quickly on Microsoft
|13 |Deploy Self-hosted integration runtime VMs inside your network. |Azure: *Virtual Machine Contributor* <br> On-prem: Application owner |Use this step if you're planning to perform any scans using [Self-hosted Integration Runtime](manage-integration-runtimes.md). | |14 |Create a Self-hosted integration runtime inside Microsoft Purview. |Data curator <br> VM Administrator or application owner |Use this step if you're planning to use Self-hosted Integration Runtime instead of Managed Integration Runtime or Azure Integration Runtime. <br><br> [download](https://www.microsoft.com/en-us/download/details.aspx?id=39717) | |15 |Register your Self-hosted integration runtime | Virtual machine administrator |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server). <br> Use this step if you're using **Private Endpoint** to scan **any** data sources. |
-|16 |Grant Azure RBAC **Reader** role to **Microsoft Purview MSI** at data sources' Subscriptions |*Subscription owner* or *User Access Administrator* |Use this step if you're planning to register [multiple](register-scan-azure-multiple-sources.md) or **any** of the following data sources: <ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md)</li><li>[Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)</li><li>[Azure SQL Database](register-scan-azure-sql-database.md)</li><li>[Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md)</li><li>[Azure Synapse Analytics](register-scan-synapse-workspace.md)</li></ul> |
+|16 |Grant Azure RBAC **Reader** role to **Microsoft Purview MSI** at data sources' Subscriptions |*Subscription owner* or *User Access Administrator* |Use this step if you're planning to register [multiple](register-scan-azure-multiple-sources.md) or **any** of the following data sources: <ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md)</li><li>[Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)</li><li>[Azure SQL Database](register-scan-azure-sql-database.md)</li><li>[Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md)</li><li>[Azure Synapse Analytics](register-scan-synapse-workspace.md)</li></ul> |
|17 |Grant Azure RBAC **Storage Blob Data Reader** role to **Microsoft Purview MSI** at data sources Subscriptions. |*Subscription owner* or *User Access Administrator* | **Skip** this step if you are using Private Endpoint to connect to data sources. Use this step if you have these data sources:<ul><li>[Azure Blob Storage](register-scan-azure-blob-storage-source.md#using-a-system-or-user-assigned-managed-identity-for-scanning)</li><li>[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#using-a-system-or-user-assigned-managed-identity-for-scanning)</li></ul> | |18 |Enable network connectivity to allow AzureServices to access data sources: <br> e.g. Enable "**Allow trusted Microsoft services to access this storage account**". |*Owner* or *Contributor* at Data source |Use this step if **Service Endpoint** is used in your data sources. (Don't use this step if Private Endpoint is used) |
-|19 |Enable **Azure Active Directory Authentication** on **Azure SQL Servers**, **Azure SQL Database Managed Instance** and **Azure Synapse Analytics** |Azure SQL Server Contributor |Use this step if you have **Azure SQL DB** or **Azure SQL Database Managed Instance** or **Azure Synapse Analytics** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
-|20 |Grant **Microsoft Purview MSI** account with **db_datareader** role to Azure SQL databases and Azure SQL Database Managed Instance databases |Azure SQL Administrator |Use this step if you have **Azure SQL DB** or **Azure SQL Database Managed Instance** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
+|19 |Enable **Azure Active Directory Authentication** on **Azure SQL Servers**, **Azure SQL Managed Instance** and **Azure Synapse Analytics** |Azure SQL Server Contributor |Use this step if you have **Azure SQL DB** or **Azure SQL Managed Instance** or **Azure Synapse Analytics** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
+|20 |Grant **Microsoft Purview MSI** account with **db_datareader** role to Azure SQL databases and Azure SQL Managed Instance databases |Azure SQL Administrator |Use this step if you have **Azure SQL DB** or **Azure SQL Managed Instance** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
|21 |Grant Azure RBAC **Storage Blob Data Reader** to **Synapse SQL Server** for staging Storage Accounts |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you are using Private Endpoint to connect to data sources. | |22 |Grant Azure RBAC **Reader** role to **Microsoft Purview MSI** at **Synapse workspace** resources |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you are using Private Endpoint to connect to data sources. | |23 |Grant Azure **Purview MSI account** with **db_datareader** role |Azure SQL Administrator |Use this step if you have **Azure Synapse Analytics (Dedicated SQL databases)**. <br> **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
This article lists prerequisites that help you get started quickly on Microsoft
|29 | Create a new connection to Azure Key Vault from the Microsoft Purview governance portal | *Data source admin* | Use this step if you are planning to use any of the following [authentication options](manage-credentials.md#create-a-new-credential) to scan a data source in Microsoft Purview: <ul><li>Account key</li><li>Basic Authentication</li><li>Delegated Auth</li><li>SQL Authentication</li><li>Service Principal</li><li>Consumer Key</li></ul> |30 |Deploy a private endpoint for Power BI tenant |*Power BI Administrator* <br> *Network contributor* |Use this step if you're planning to register a Power BI tenant as data source and your Microsoft Purview account is set to **deny public access**. <br> For more information, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links). | |31 |Connect Azure Data Factory to Microsoft Purview from Azure Data Factory Portal. **Manage** -> **Microsoft Purview**. Select **Connect to a Purview account**. <br> Validate if Azure resource tag **catalogUri** exists in ADF Azure resource. |Azure Data Factory Contributor / Data curator |Use this step if you have **Azure Data Factory**. |
-|32 |Verify if you have at least one **Microsoft 365 required license** in your Azure Active Directory tenant to use sensitivity labels in Microsoft Purview. |Azure Active Directory *Global Reader* |Perform this step if you're planning in extending **Sensitivity Labels from Microsoft 365 to Microsoft Purview** <br> For more information, see [licensing requirements to use sensitivity labels on files and database columns in Microsoft Purview](sensitivity-labels-frequently-asked-questions.yml) |
-|33 |Consent "**Extend labeling to assets in Microsoft Purview**" |Compliance Administrator <br> Azure Information Protection Administrator |Use this step if you are interested in extending Sensitivity Labels from Microsoft 365 to Microsoft Purview. <br> Use this step if you are interested in extending **Sensitivity Labels** from Microsoft 365 to Microsoft Purview. |
+|32 |Verify if you have at least one **Microsoft 365 required license** in your Azure Active Directory tenant to use sensitivity labels in Microsoft Purview. |Azure Active Directory *Global Reader* |Perform this step if you're planning to extend **sensitivity labels to Microsoft Purview Data Map** <br> For more information, see [licensing requirements to use sensitivity labels on files and database columns in Microsoft Purview](sensitivity-labels-frequently-asked-questions.yml) |
+|33 |Consent "**Extend labeling to assets in Microsoft Purview Data Map**" |Compliance Administrator <br> Azure Information Protection Administrator |Use this step if you are interested in extending sensitivity labels to your data in the data map. <br> For more information, see [Labeling in the Microsoft Purview Data Map](create-sensitivity-label.md). |
|34 |Create new collections and assign roles in Microsoft Purview |*Collection admin* | [Create a collection and assign permissions in Microsoft Purview](./quickstart-create-collection.md). | |35 |Register and scan Data Sources in Microsoft Purview |*Data Source admin* <br> *Data Reader* or *Data Curator* | For more information, see [supported data sources and file types](azure-purview-connector-overview.md) | |36 |Grant access to data roles in the organization |*Collection admin* |Provide access to other teams to use Microsoft Purview: <ul><li> Data curator</li><li>Data reader</li><li>Collection admin</li><li>Data source admin</li><li>Policy Author</li><li>Workflow admin</li></ul> <br> For more information, see [Access control in Microsoft Purview](catalog-permissions.md). |
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
In this tutorial, you learn how to:
> [!div class="checklist"] > * Prepare your Azure environment > * Configure permissions to allow Microsoft Purview to connect to your resources
-> * Register your Azure Storage resource for data use governance
+> * Register your Azure Storage resource for Data Use Management
> * Create and publish a policy for your resource group or subscription ## Prerequisites
In this tutorial, you learn how to:
[!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
-### Register the data sources in Microsoft Purview for data use governance
+### Register the data sources in Microsoft Purview for Data Use Management
-Your Azure Storage account needs to be registered in Microsoft Purview to later define access policies, and during registration we'll enable data use governance. **Data use governance** is an available feature in Microsoft Purview that allows users to manage access to a resource from within Microsoft Purview. This allows you to centralize data discovery and access management, however it's a feature that directly impacts your data security.
+Your Azure Storage account needs to be registered in Microsoft Purview to later define access policies, and during registration we'll enable Data Use Management. **Data Use Management** is a feature in Microsoft Purview that allows users to manage access to a resource from within Microsoft Purview. This allows you to centralize data discovery and access management; however, it's a feature that directly impacts your data security.
> [!WARNING]
-> Before enabling data use governance for any of your resources, read through our [**data use governance article**](how-to-enable-data-use-governance.md).
+> Before enabling Data Use Management for any of your resources, read through our [**Data Use Management article**](how-to-enable-data-use-management.md).
>
-> This article includes data use governance best practices to help you ensure that your information is secure.
+> This article includes Data Use Management best practices to help you ensure that your information is secure.
-To register your resource and enable data use governance, follow these steps:
+To register your resource and enable Data Use Management, follow these steps:
> [!Note] > You need to be an owner of the subscription or resource group to be able to add a managed identity on an Azure resource.
To register your resource and enable data use governance, follow these steps:
:::image type="content" source="media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows the boxes for selecting a storage account."::: 1. In the **Select a collection** box, select a collection or create a new one (optional).
- 1. Set the *Data use governance* toggle to **Enabled**, as shown in the image below.
+ 1. Set the *Data Use Management* toggle to **Enabled**, as shown in the image below.
- :::image type="content" source="./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows Data use governance toggle set to active on the registered resource page.":::
+ :::image type="content" source="./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows Data Use Management toggle set to active on the registered resource page.":::
>[!TIP]
- >If the data use governance toggle is greyed out and unable to be selected:
- > 1. Confirm you have followed all prerequisites to enable Data use governance across your resources.
+ >If the Data Use Management toggle is greyed out and unable to be selected:
+ > 1. Confirm you have followed all prerequisites to enable Data Use Management across your resources.
> 1. Confirm that you have selected a storage account to be registered.
- > 1. It may be that this resource is already registered in another Microsoft Purview account. Hover over it to know the name of the Microsoft Purview account that has registered the data resource.first. Only one Microsoft Purview account can register a resource for data use governance at at time.
+ > 1. It may be that this resource is already registered in another Microsoft Purview account. Hover over it to see the name of the Microsoft Purview account that registered the data resource first. Only one Microsoft Purview account can register a resource for Data Use Management at a time.
- 1. Select **Register** to register the resource group or subscription with Microsoft Purview with data use governance enabled.
+ 1. Select **Register** to register the resource group or subscription with Microsoft Purview with Data Use Management enabled.
>[!TIP]
-> For more information about data use governance, including best practices or known issues, see our [data use governance article](how-to-enable-data-use-governance.md).
+> For more information about Data Use Management, including best practices or known issues, see our [Data Use Management article](how-to-enable-data-use-management.md).
## Create a data owner policy
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
Service name requirements:
Azure Cognitive Search is available in most regions, as listed in the [**Products available by region**](https://azure.microsoft.com/global-infrastructure/services/?products=search) page.
-As a rule, if you're using multiple Azure services, putting all of them in the same region minimizes or voids bandwidth charges. There are no charges for outbound data when services are in the same region.
+As a rule, if you're using multiple Azure services, putting all of them in the same region minimizes or voids bandwidth charges. There are no charges for data exchanges among services when all of them are in the same region.
Two notable exceptions might lead to provisioning one or more search services in a separate region:
search Search Howto Managed Identities Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-sql.md
In this section you'll give your Azure Cognitive Search service permission to re
:::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select the appropriate **Reader** role.
+1. On the **Role** tab, select the appropriate **Reader** role.
1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Previously updated : 04/08/2022 Last updated : 04/22/2022
In PowerShell, use [New-AzRoleAssignment](/powershell/module/az.resources/new-az
If [built-in roles](#built-in-roles-used-in-search) don't provide the right combination of permissions, you can create a [custom role](../role-based-access-control/custom-roles.md) to support the operations you require.
-For example, you might want to augment a read-only role to include listing the indexes on the search service (Microsoft.Search/searchServices/indexes/read), or create a role that can fully manage indexes, including the ability to create indexes and read data.
+For example, you might want to augment a query execution (reader) role to include listing indexes by name. Normally, listing the indexes on a search service is considered an administrative right.
-The PowerShell example shows the JSON syntax for creating a custom role.
+### [**Azure portal**](#tab/custom-role-portal)
+
+These steps are derived from [Create or update Azure custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md). Cloning from an existing role is supported in a search service page.
+
+These steps create a custom role that augments search query rights to include listing indexes by name. Typically, listing indexes is considered an admin function.
+
+1. In the Azure portal, navigate to your search service.
+
+1. In the left-navigation pane, select **Access Control (IAM)**.
+
+1. In the action bar, select **Roles**.
+
+1. Right-click **Search Index Data Reader** (or another role) and select **Clone** to open the **Create a custom role** wizard.
+
+1. On the Basics tab, provide a name for the custom role, such as "Search Index Explorer", and then click **Next**.
+
+1. On the Permissions tab, select **Add permission**.
+
+1. On the Add permissions tab, search for and then select the **Microsoft Search** tile.
+
+1. Set the permissions for your custom role:
+
+ + Under Microsoft.Search/operations, select **Read : List all available operations**.
+ + Under Microsoft.Search/searchServices/indexes, select **Read : Read Index**.
+
+ The JSON definition looks like the following example:
+
+ ```json
+ {
+ "properties": {
+ "roleName": "search index explorer",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/a5b1ca8b-bab3-4c26-aebe-4cf7ec4791a0/resourceGroups/heidist-free-search-svc/providers/Microsoft.Search/searchServices/demo-search-svc"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Search/operations/read",
+ "Microsoft.Search/searchServices/indexes/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.Search/searchServices/indexes/documents/read"
+ ],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
+
+1. Select **Review + create** to create the role. You can now assign users and groups to the role.
### [**Azure PowerShell**](#tab/custom-role-ps)
+The PowerShell example shows the JSON syntax for creating a custom role.
+ 1. Review the [list of atomic permissions](../role-based-access-control/resource-provider-operations.md#microsoftsearch) to determine which ones you need. 1. Set up a PowerShell session to create the custom role. For detailed instructions, see [Azure PowerShell](../role-based-access-control/custom-roles-powershell.md)
The PowerShell example shows the JSON syntax for creating a custom role.
} ```
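Once the JSON definition is saved to a local file, the final step might look like the following PowerShell sketch (the file name `search-role.json` is a placeholder, and this assumes you've already signed in with `Connect-AzAccount`):

```powershell
# Sketch: create the custom role from a saved JSON definition file.
# "search-role.json" is a placeholder for a file containing the definition shown above.
New-AzRoleDefinition -InputFile "search-role.json"
```

Afterward, you can run `Get-AzRoleDefinition` with the role name to confirm the role was created.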
-### [**Azure portal**](#tab/custom-role-portal)
-
-1. Review the [list of atomic permissions](../role-based-access-control/resource-provider-operations.md#microsoftsearch) to determine which ones you need.
-
-1. See [Create or update Azure custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) for steps.
-
-1. Clone or create a role, or use JSON to specify the custom role (see the PowerShell tab for JSON syntax).
- ### [**REST API**](#tab/custom-role-rest) 1. Review the [list of atomic permissions](../role-based-access-control/resource-provider-operations.md#microsoftsearch) to determine which ones you need.
security Pen Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/pen-testing.md
documentationcenter: na - ms.assetid: 695d918c-a9ac-4eba-8692-af4526734ccc
One type of pen test that you can't perform is any kind of [Denial of Service
> [!Note] > You may only simulate attacks using Microsoft approved testing partners:
-> - [Red Button](https://www.red-button.net/): Work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment.
> - [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud): A self-service traffic generator where your customers can generate traffic against DDoS Protection-enabled public endpoints for simulations.
+> - [Red Button](https://www.red-button.net/): Work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment.
>
-> To learn more about the BreakingPoint Cloud simulation, see [testing with simulation partners](../../ddos-protection/test-through-simulations.md).
+> To learn more about these simulation partners, see [testing with simulation partners](../../ddos-protection/test-through-simulations.md).
## Next steps
service-fabric Service Fabric Application Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-lifecycle.md
See the [Application upgrade tutorial](service-fabric-application-upgrade-tutori
## Remove 1. An *operator* can delete a specific instance of a running service in the cluster without removing the entire application using the [**DeleteServiceAsync** method](/dotnet/api/system.fabric.fabricclient.servicemanagementclient), the [**Remove-ServiceFabricService** cmdlet](/powershell/module/servicefabric/remove-servicefabricservice), or the [**Delete Service** REST operation](/rest/api/servicefabric/delete-a-service). 2. An *operator* can also delete an application instance and all of its services using the [**DeleteApplicationAsync** method](/dotnet/api/system.fabric.fabricclient.applicationmanagementclient), the [**Remove-ServiceFabricApplication** cmdlet](/powershell/module/servicefabric/remove-servicefabricapplication), or the [**Delete Application** REST operation](/rest/api/servicefabric/delete-an-application).
-3. Once the application and services have stopped, the *operator* can unprovision the application type using the [**UnprovisionApplicationAsync** method](/dotnet/api/system.fabric.fabricclient.applicationmanagementclient), the [**Unregister-ServiceFabricApplicationType** cmdlet](/powershell/module/servicefabric/unregister-servicefabricapplicationtype), or the [**Unprovision an Application** REST operation](/rest/api/servicefabric/unprovision-an-application). Unprovisioning the application type does not remove the application package from the ImageStore. You must remove the application package manually.
+3. Once the application and services have stopped, the *operator* can unprovision the application type using the [**UnprovisionApplicationAsync** method](/dotnet/api/system.fabric.fabricclient.applicationmanagementclient), the [**Unregister-ServiceFabricApplicationType** cmdlet](/powershell/module/servicefabric/unregister-servicefabricapplicationtype), or the [**Unprovision an Application** REST operation](/rest/api/servicefabric/unprovision-an-application). Unprovisioning the application type does not remove the application package from the ImageStore.
4. An *operator* removes the application package from the ImageStore using the [**RemoveApplicationPackage** method](/dotnet/api/system.fabric.fabricclient.applicationmanagementclient) or the [**Remove-ServiceFabricApplicationPackage** cmdlet](/powershell/module/servicefabric/remove-servicefabricapplicationpackage). See [Deploy an application](service-fabric-deploy-remove-applications.md) for examples.
+## Cleaning up files and data on nodes
+
+Replication eventually distributes application files to all nodes, depending on balancing actions. This can create disk pressure, depending on the number of applications and their file size.
+Even when no active instance is running on a node, the files from a former instance are kept. The same is true for data from reliable collections used by stateful services. This improves availability: a new application instance on the same node doesn't need the files copied again, and for reliable collections only the delta must be replicated.
+
+To remove the application binaries completely, you have to unregister the application type.
+
+Recommendations to reduce disk pressure:
+
+1. [Remove-ServiceFabricApplicationPackage](service-fabric-deploy-remove-applications.md#remove-an-application-package-from-the-image-store) removes the package from the temporary upload location.
+1. [Unregister-ServiceFabricApplicationType](service-fabric-deploy-remove-applications.md#unregister-an-application-type) releases storage space by removing the application type files from the ImageStore and from all nodes. The deletion manager runs every hour by default.
+1. [CleanupUnusedApplicationTypes](service-fabric-cluster-fabric-settings.md)
+ cleans up old unused application versions automatically.
+   ```json
+ {
+ "name": "Management",
+ "parameters": [
+ {
+ "name": "CleanupUnusedApplicationTypes",
+ "value": true
+ },
+ {
+ "name": "MaxUnusedAppTypeVersionsToKeep",
+ "value": "3"
+ }
+ ]
+ }
+ ```
+1. [Remove-ServiceFabricClusterPackage](/powershell/module/servicefabric/remove-servicefabricclusterpackage) removes old unused runtime installation binaries.
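The first two recommendations above might look like the following PowerShell sketch; the endpoint, package path, type name, and version are placeholders for your own cluster and application:

```powershell
# Sketch: clean up an application package and unregister its application type.
# All names below are placeholders; adjust them for your cluster.
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000"

# Remove the package from the temporary upload location in the image store.
Remove-ServiceFabricApplicationPackage -ApplicationPackagePathInImageStore "MyAppTypePkg"

# Unregister the application type to release storage on the image store and nodes.
Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyAppType" -ApplicationTypeVersion "1.0.0"
```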
+
+>[!Note]
+> A feature is under development to allow Service Fabric to delete application folders once the application is evacuated from the node.
++ ## Next steps For more information on developing, testing, and managing Service Fabric applications and services, see:
service-fabric Service Fabric Tutorial Deploy App To Party Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-app-to-party-cluster.md
When the new cluster is ready, you can deploy the Voting application directly fr
In Solution Explorer, right-click on **Voting** and select **Publish**. The **Publish** dialog box appears.
-In **Connection Endpoint**, select the endpoint for the cluster you created in the previous step. For example, "mytestcluster.southcentral.cloudapp.azure.com:19000". If you select **Advanced Connection Parameters**, the certificate information should be auto-filled.
+In **Connection Endpoint**, select the endpoint for the cluster you created in the previous step. For example, "mytestcluster.southcentralus.cloudapp.azure.com:19000". If you select **Advanced Connection Parameters**, the certificate information should be auto-filled.
![Publish a Service Fabric application](./media/service-fabric-tutorial-deploy-app-to-party-cluster/publish-app.png) Select **Publish**.
-Once the application is deployed, open a browser and enter the cluster address followed by **:8080**. Or enter another port if one is configured. An example is `http://mytestcluster.southcentral.cloudapp.azure.com:8080`. You see the application running in the cluster in Azure. In the voting web page, try adding and deleting voting options and voting for one or more of these options.
+Once the application is deployed, open a browser and enter the cluster address followed by **:8080**. Or enter another port if one is configured. An example is `http://mytestcluster.southcentralus.cloudapp.azure.com:8080`. You see the application running in the cluster in Azure. In the voting web page, try adding and deleting voting options and voting for one or more of these options.
![Service Fabric voting sample](./media/service-fabric-tutorial-deploy-app-to-party-cluster/application-screenshot-new-azure.png)
site-recovery Azure To Azure How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md
The following steps describe how to add a role assignment to your storage accoun
:::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select one of the roles listed in the beginning of this section.
+1. On the **Role** tab, select one of the roles listed in the beginning of this section.
1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+1. Select your Azure subscription.
+ 1. Select **System-assigned managed identity**, search for a vault, and then select it. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
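For automation, the same assignment can be sketched with Azure PowerShell. The object ID, role name, and scope below are placeholders: use the vault's system-assigned identity object ID, one of the roles listed at the beginning of this section, and your storage account's resource ID.

```powershell
# Sketch: assign a role to the vault's managed identity on the storage account.
# All three values are placeholders for your environment.
New-AzRoleAssignment -ObjectId "<vault-managed-identity-object-id>" `
    -RoleDefinitionName "<role-name-from-this-section>" `
    -Scope "<storage-account-resource-id>"
```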
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Storage firewalls for virtual networks | Supported | If restrict virtual
General purpose V2 storage accounts (Both Hot and Cool tier) | Supported | Transaction costs increase substantially compared to General purpose V1 storage accounts Generation 2 (UEFI boot) | Supported NVMe disks | Not supported
-Azure shared disks | Not supported
+Azure Shared Disks | Not supported
+Ultra Disks | Not supported
Secure transfer option | Supported Write accelerator enabled disks | Not supported Tags | Supported | User-generated tags are replicated every 24 hours.
site-recovery Hybrid How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md
The following steps describe how to add a role assignment to your storage accoun
:::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Roles** tab, select one of the roles listed in the beginning of this section.
+1. On the **Role** tab, select one of the roles listed in the beginning of this section.
1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+1. Select your Azure subscription.
+ 1. Select **System-assigned managed identity**, search for a vault, and then select it. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
spatial-anchors Tutorial Share Anchors Across Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md
In this tutorial, you'll learn how to:
Follow the instructions [here](../how-tos/setup-unity-project.md#download-asa-packages) to download and import the ASA SDK packages required for the HoloLens platform. ## Deploy the Sharing Anchors service
+> [!NOTE]
+> In this tutorial, we use the free tier of Azure App Service. The free tier times out after [20 minutes](https://docs.microsoft.com/azure/architecture/framework/services/compute/azure-app-service/reliability#configuration-recommendations) of inactivity and resets the memory cache.
## [Visual Studio](#tab/VS)
static-web-apps Build Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/build-configuration.md
If you want to skip building the API, you can bypass the automatic build and dep
Steps to skip building the API: -- In the *staticwebapp.config.json* file, set `apiRuntime` to the correct language and version. Refer to [Configure Azure Static Web Apps](configuration.md#selecting-the-api-language-runtime-version) for the list of supported languages and versions.
+- In the *staticwebapp.config.json* file, set `apiRuntime` to the correct runtime and version. Refer to [Configure Azure Static Web Apps](configuration.md#selecting-the-api-language-runtime-version) for the list of supported runtimes and versions.
```json { "platform": {
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
For example, the following configuration shows how you can add a unique identifi
- Keys are case insensitive - Values are case-sensitive
+## Trailing slash
+
+A trailing slash is the `/` at the end of a URL. Conventionally, a URL with a trailing slash refers to a directory on the web server, while a URL without one indicates a file.
+
+Search engines treat the two URLs separately, regardless of whether the resource is a file or a directory. When the same content is rendered at both of these URLs, your website serves duplicate content, which can negatively impact search engine optimization (SEO). When explicitly configured, Static Web Apps applies a set of URL normalization and redirect rules that help improve your website's performance and SEO.
+
+The following normalization and redirect rules will apply for each of the available configurations:
+
+### Always
+
+When setting `trailingSlash` to `always`, all requests that don't include a trailing slash are redirected to a trailing slash URL. For example, `/contact` is redirected to `/contact/`.
+
+```json
+"trailingSlash": "always"
+```
+
+| Requests to... | returns... | with the status... | and path... |
+|--|--|--|--|
+| _/about_ | The _/about/index.html_ file | `301` | _/about/_ |
+| _/about/_ | The _/about/index.html_ file | `200` | _/about/_ |
+| _/about/index.html_ | The _/about/index.html_ file | `301` | _/about/_ |
+| _/contact_ | The _/contact.html_ file | `301` | _/contact/_ |
+| _/contact/_ | The _/contact.html_ file | `200` | _/contact/_ |
+| _/contact.html_ | The _/contact.html_ file | `301` | _/contact/_ |
+
+### Never
+
+When setting `trailingSlash` to `never`, all requests ending in a trailing slash are redirected to a non-trailing slash URL. For example, `/contact/` is redirected to `/contact`.
+
+```json
+"trailingSlash": "never"
+```
+
+| Requests to... | returns... | with the status... | and path... |
+|--|--|--|--|
+| _/about_ | The _/about/index.html_ file | `200` | _/about_ |
+| _/about/_ | The _/about/index.html_ file | `301` | _/about_ |
+| _/about/index.html_ | The _/about/index.html_ file | `301` | _/about_ |
+| _/contact_ | The _/contact.html_ file | `200` | _/contact_ |
+| _/contact/_ | The _/contact.html_ file | `301` | _/contact_ |
+| _/contact.html_ | The _/contact.html_ file | `301` | _/contact_ |
+
+### Auto
+
+When setting `trailingSlash` to `auto`, all requests to folders are redirected to a URL with a trailing slash. All requests to files are redirected to a non-trailing slash URL.
+
+```json
+"trailingSlash": "auto"
+```
+
+| Requests to... | returns... | with the status... | and path... |
+|--|--|--|--|
+| _/about_ | The _/about/index.html_ file | `301` | _/about/_ |
+| _/about/_ | The _/about/index.html_ file | `200` | _/about/_ |
+| _/about/index.html_ | The _/about/index.html_ file | `301` | _/about/_ |
+| _/contact_ | The _/contact.html_ file | `200` | _/contact_ |
+| _/contact/_ | The _/contact.html_ file | `301` | _/contact_ |
+| _/contact.html_ | The _/contact.html_ file | `301` | _/contact_ |
+
+For optimal website performance, configure a trailing slash strategy using one of the `always`, `never`, or `auto` modes.
+
+By default, when the `trailingSlash` configuration is omitted, Static Web Apps applies the following rules:
+
+| Requests to... | returns... | with the status... | and path... |
+|--|--|--|--|
+| _/about_ | The _/about/index.html_ file | `200` | _/about_ |
+| _/about/_ | The _/about/index.html_ file | `200` | _/about/_ |
+| _/about/index.html_ | The _/about/index.html_ file | `200` | _/about/index.html_ |
+| _/contact_ | The _/contact.html_ file | `200` | _/contact_ |
+| _/contact/_ | The _/contact.html_ file | `301` | _/contact_ |
+| _/contact.html_ | The _/contact.html_ file | `200` | _/contact.html_ |
++ ## Example configuration file ```json
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
Immutable storage for Azure Blob Storage enables users to store business-critical data in a WORM (Write Once, Read Many) state. While in a WORM state, data cannot be modified or deleted for a user-specified interval. By configuring immutability policies for blob data, you can protect your data from overwrites and deletes.
-Immutable storage for Azure Blob storage supports two types of immutability policies:
+Immutable storage for Azure Blob Storage supports two types of immutability policies:
- **Time-based retention policies**: With a time-based retention policy, users can set policies to store data for a specified interval. When a time-based retention policy is set, objects can be created and read, but not modified or deleted. After the retention period has expired, objects can be deleted but not overwritten. To learn more about time-based retention policies, see [Time-based retention policies for immutable blob data](immutable-time-based-retention-policy-overview.md).
Immutable storage helps healthcare organizations, financial institutions, and rel
Typical applications include: -- **Regulatory compliance**: Immutable storage for Azure Blob storage helps organizations address SEC 17a-4(f), CFTC 1.31(d), FINRA, and other regulations.
+- **Regulatory compliance**: Immutable storage for Azure Blob Storage helps organizations address SEC 17a-4(f), CFTC 1.31(d), FINRA, and other regulations.
- **Secure document retention**: Immutable storage for blobs ensures that data can't be modified or deleted by any user, not even by users with account administrative privileges.
Immutable storage support for accounts with a hierarchical namespace is in previ
Keep in mind that you cannot rename or move a blob when the blob is in the immutable state and the account has a hierarchical namespace enabled. Both the blob name and the directory structure provide essential container-level data that cannot be modified once the immutable policy is in place. > [!IMPORTANT]
-> Immutable storage for Azure Blob storage in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Immutable storage for Azure Blob Storage in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Recommended blob types
For more information about blob inventory, see [Azure Storage blob inventory (pr
## Pricing
-There is no additional capacity charge for using immutable storage. Immutable data is priced in the same way as mutable data. For pricing details on Azure Blob storage, see the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
+There is no additional capacity charge for using immutable storage. Immutable data is priced in the same way as mutable data. For pricing details on Azure Blob Storage, see the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
Creating, modifying, or deleting a time-based retention policy or legal hold on a blob version results in a write transaction charge.
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/partner-overview.md
description: List of Microsoft partner companies that build customer solutions f
Previously updated : 04/12/2021 Last updated : 04/21/2022
This article highlights Microsoft partners that are integrated with Azure Storag
| - | -- | -- | |![Commvault company logo](./medi)| |![Datadobi company logo](./media/datadob-logo.png) |**Datadobi**<br> Datadobi can optimize your unstructured storage environments. DobiProtect helps you keep a "golden copy" of your most business-critical network attached storage (NAS) data on Azure. This helps protect against cyberthreats, ransomware, accidental deletions, and software vulnerabilities. To keep storage costs to a minimum, select just the data that you'll need when disaster strikes. When disaster does occur, recover your data entirely, restore just a subset of data, or fail over to your golden copy. |[Partner page](https://datadobi.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobiprotect?tab=Overview)|
+|![Rubrik company logo](./media/rubrik-logo.png) |**Rubrik**<br>Rubrik and Microsoft deliver Zero Trust Data Security solutions. These solutions keep your data safe and enable business recovery in the face of cyber attacks and operational failures. Rubrik tightly integrates with Microsoft Azure Storage to ensure your data and applications are available for rapid recovery, immutable and trusted to keep your business running without interruptions. Choose from the multiple solutions offered by Rubrik to protect your data and applications across on-premises and Microsoft Azure.|[Partner page](https://www.rubrik.com/partners/technology-partners/microsoft)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/rubrik_inc.rubrik_cloud_data_management?tab=Overview)|
![Tiger Technology company logo](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure, data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a non-proprietary, software-only data, and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including: file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)| | ![Veeam company logo](./medi)| | ![Veritas company logo](./media/veritas-logo.png) |**Veritas**<br>Veritas Technologies solutions cover multi-cloud data management, data protection, storage optimization as well as compliance readiness and workload portability.<br><br>NetBackup offers a unified data protection and recovery solution. NetBackup helps you standardize across your environment, greatly reducing complexity and risk regardless of workload or cloud. NetBackup offers push-button orchestrated disaster recovery, seamless workload and data portability, and resiliency and mobility between Azure Stack environments, or between Azure regions.<br><br>Veritas Backup Exec provides simple, rapid, and secure offsite backup to Azure for your in-house virtual and physical environments. 
It also protects cloud-based workloads in Azure.|[Partner page](https://www.veritas.com/partners/microsoft-azure)<br>Azure Marketplace:<br>[NetBackup](https://azuremarketplace.microsoft.com/marketplace/apps/veritas.veritas-netbackup-8-s?tab=Overview)<br>[Backup Exec](https://azuremarketplace.microsoft.com/marketplace/apps/veritas.backup-exec-20?tab=Overview)| | ![Zerto company logo](./media/zerto-logo.png) |**Zerto**<br>Zerto helps customers accelerate IT transformation through a single platform for cloud data management and protection. Zerto enables an always-on customer experience by simplifying the protection, recovery, and mobility of applications and data across private, public, hybrid clouds and is optimized for Azure.|[Partner page](https://www.zerto.com/azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/zerto.zerto?tab=overview)|
-Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).
+Are you a storage partner but your solution isn't listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).
## Next steps
stream-analytics Sql Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-reference-data.md
Title: Use SQL Database reference data in an Azure Stream Analytics job description: This article describes how to use a SQL Database as reference data input for an Azure Stream Analytics job in the Azure portal and in Visual Studio.--++ Previously updated : 01/29/2019 Last updated : 04/20/2022 # Use reference data from a SQL Database for an Azure Stream Analytics job
Use the following steps to add Azure SQL Database as a reference input source us
![Inputs is selected in the left navigation pane. On Inputs, + Add reference input is selected, revealing a drop-down list that shows the values Blob storage and SQL Database.](./media/sql-reference-data/stream-analytics-inputs.png)
-2. Fill out the Stream Analytics Input Configurations. Choose the database name, server name, username and password. If you want your reference data input to refresh periodically, choose ΓÇ£OnΓÇ¥ to specify the refresh rate in DD:HH:MM. If you have large data sets with a short refresh rate, you can use a [delta query](sql-reference-data.md#delta-query).
+2. Fill out the Stream Analytics input configuration. Choose the database name, server name, username, and password. If you want your reference data input to refresh periodically, choose "On" to specify the refresh rate in DD:HH:MM. If you have large data sets with a short refresh rate, use a delta query, which tracks changes within your reference data by retrieving all of the rows in SQL Database that were inserted or deleted between a start time, @deltaStartTime, and an end time, @deltaEndTime.
+
+For more information, see [delta query](sql-reference-data.md#delta-query).
![When SQL Database is selected, the SQL Database New input page appears. There is a configuration form in the left pane, and a Snapshot query in the right pane.](./media/sql-reference-data/sql-input-config.png)
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
Previously updated : 03/31/2022 Last updated : 04/22/2022
The following table describes the built-in roles and the scopes at which they ca
|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace |Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace |Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime|
+|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for notebooks and pipeline runs. Includes ability to list and view details of serverless SQL pools, Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace |
|Synapse Credential User|Runtime and configuration-time use of secrets within credentials and linked services in activities like pipeline runs. To run pipelines, this role is required, scoped to the workspace system identity. </br></br>_Scoped to a credential, permits access to data via a linked service that is protected by the credential (also requires compute use permission) </br>Allows execution of pipelines protected by the workspace system identity credential(with additional compute use permission)_|Workspace </br>Linked Service</br>Credential |Synapse Linked Data Manager|Creation and management of managed private endpoints, linked services, and credentials. Can create managed private endpoints that use linked services protected by credentials|Workspace| |Synapse User|List and view details of SQL pools, Apache Spark pools, Integration runtimes, and published linked services and credentials. Doesn't include other published code artifacts.  Can create new artifacts but can't run or publish without additional permissions. </br></br>_Can list and read Spark pools, Integration runtimes._|Workspace, Spark pool</br>Linked service </br>Credential|
Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, dele
|Synapse Artifact Publisher|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action|
|Synapse Artifact User|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action|
|Synapse Compute Operator |workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action|
+|Synapse Monitoring Operator |workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/bigDataPools/viewLogs/action|
|Synapse Credential User|workspaces/read</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action|
|Synapse Linked Data Manager|workspaces/read</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete|
|Synapse User|workspaces/read|
The following table lists Synapse actions and the built-in roles that permit these actions.
Action|Role
--|--
-workspaces/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
+workspaces/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Monitoring Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
workspaces/roleAssignments/write, delete|Synapse Administrator
workspaces/managedPrivateEndpoint/write, delete|Synapse Administrator</br>Synapse Linked Data Manager
-workspaces/bigDataPools/useCompute/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator
+workspaces/bigDataPools/useCompute/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator </br>Synapse Monitoring Operator
workspaces/bigDataPools/viewLogs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator
-workspaces/integrationRuntimes/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator
-workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator
+workspaces/integrationRuntimes/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator
+workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator
workspaces/artifacts/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User
workspaces/notebooks/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
workspaces/sparkJobDefinitions/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
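The action table above maps each Synapse action to the roles that grant it. As a quick sanity check of how such a mapping is consulted, here is a small illustrative lookup built from a couple of the rows above; `ROLE_ACTIONS` and `permits` are hypothetical helpers for this sketch, not part of any Synapse API.

```python
# Illustrative only: a tiny permission lookup seeded from a few table rows above.
ROLE_ACTIONS = {
    "Synapse User": {"workspaces/read"},
    "Synapse Monitoring Operator": {
        "workspaces/read",
        "workspaces/artifacts/read",
        "workspaces/integrationRuntimes/viewLogs/action",
        "workspaces/bigDataPools/viewLogs/action",
    },
}

def permits(role: str, action: str) -> bool:
    """Return True when the table grants `action` to `role`."""
    return action in ROLE_ACTIONS.get(role, set())

print(permits("Synapse Monitoring Operator", "workspaces/artifacts/read"))  # True
print(permits("Synapse User", "workspaces/artifacts/read"))                 # False
```

A real authorization check would of course evaluate role assignments at the relevant scope as well; this only illustrates the role-to-action mapping itself.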
The table below lists Synapse RBAC scopes and the roles that can be assigned at each scope.
Scope|Roles
--|--
-Workspace |Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
+Workspace |Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Monitoring Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
Apache Spark pool | Synapse Administrator </br>Synapse Contributor </br> Synapse Compute Operator
Integration runtime | Synapse Administrator </br>Synapse Contributor </br> Synapse Compute Operator
Linked service |Synapse Administrator </br>Synapse Credential User
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
Previously updated : 03/31/2022 Last updated : 04/22/2022
Commit changes to a KQL script to the Git repo|Requires Git permissions on the r
APACHE SPARK POOLS|
Create an Apache Spark pool|Azure Owner or Contributor on the workspace|
Monitor Apache Spark applications|Synapse User|read
-View the logs for notebook and job execution |Synapse Compute Operator|
+View the logs for notebook and job execution |Synapse Monitoring Operator|
Cancel any notebook or Spark job running on an Apache Spark pool|Synapse Compute Operator on the Apache Spark pool.|bigDataPools/useCompute
Create a notebook or job definition|Synapse User, or </br>Azure Owner, Contributor, or Reader on the workspace</br> *Additional permissions are required to run, publish, or commit changes*|read</br></br></br></br></br>
-List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor on the workspace|artifacts/read
+List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User, Synapse Monitoring Operator on the workspace|artifacts/read
Run a notebook and review its output, or submit a Spark job|Synapse Apache Spark Administrator, Synapse Compute Operator on the selected Apache Spark pool|bigDataPools/useCompute
Publish or delete a notebook or job definition (including output) to the service|Artifact Publisher on the workspace, Synapse Apache Spark Administrator|notebooks/write, delete
Commit changes to a notebook or job definition to the Git repo|Git permissions|none
PIPELINES, INTEGRATION RUNTIMES, DATAFLOWS, DATASETS & TRIGGERS|
Create, update, or delete an Integration runtime|Azure Owner or Contributor on the workspace|
-Monitor Integration runtime status|Synapse Compute Operator|read, integrationRuntimes/viewLogs
-Review pipeline runs|Synapse Artifact Publisher/Synapse Contributor|read, pipelines/viewOutputs
+Monitor Integration runtime status|Synapse Monitoring Operator|read, integrationRuntimes/viewLogs
+Review pipeline runs|Synapse Monitoring Operator|read, pipelines/viewOutputs
Create a pipeline |Synapse User</br>*Additional Synapse permissions are required to debug, add triggers, publish, or commit changes*|read
Create a dataflow or dataset |Synapse User</br>*Additional Synapse permissions are required to publish, or commit changes*|read
-List and open a published pipeline |Synapse Artifact User | artifacts/read
+List and open a published pipeline |Synapse Artifact User, Synapse Monitoring Operator | artifacts/read
Preview dataset data|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity|
Debug a pipeline using the default Integration runtime|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity credential|read, </br>credentials/useSecret
Create a trigger, including trigger now (requires permission to execute the pipeline)|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity|read, credentials/useSecret/action
traffic-manager Traffic Manager Nested Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-nested-profiles.md
description: This article explains the 'Nested Profiles' feature of Azure Traffic Manager documentationcenter: ''--++ na Previously updated : 10/22/2018- Last updated : 04/22/2022+ # Nested Traffic Manager profiles
The monitoring settings in a Traffic Manager profile apply to all endpoints with
![Traffic Manager endpoint monitoring with per-endpoint settings][10]
+## Example 6: Endpoint monitoring with Multivalue Nested Profiles using IPv4 and IPv6 endpoints
+
+Suppose you have both IPv4 and IPv6 nested child endpoints and want to set a minimum-healthy-children threshold for each address family. The **Minimum IPv4 endpoints** and **Minimum IPv6 endpoints** parameters define the minimum number of healthy child endpoints of each type that are required for the parent endpoint to be marked as healthy.
+
+The default for the total minimum child endpoints is 1, and the default for the IPv4 and IPv6 minimums is 0, to ensure backward compatibility.
+
+![Traffic Manager min-child behavior][11]
+
+In this example, the **East US** endpoint is unhealthy, because it doesn't satisfy the requirement to have at least 1 healthy IPv4 endpoint, which is set by the **ipv4-min-child** property.
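The health evaluation described above can be sketched as a small function; this is an illustrative model of the documented behavior, not Traffic Manager's actual implementation, and `parent_is_healthy` is a hypothetical name.

```python
def parent_is_healthy(healthy_ipv4, healthy_ipv6,
                      min_child=1, min_ipv4=0, min_ipv6=0):
    """Illustrative: a nested (parent) endpoint is healthy only when the
    total healthy children and each per-family count meet their minimums.
    Defaults mirror the documented ones: 1 total, 0 per address family."""
    total = healthy_ipv4 + healthy_ipv6
    return (total >= min_child
            and healthy_ipv4 >= min_ipv4
            and healthy_ipv6 >= min_ipv6)

# The East US case above: healthy IPv6 children but no healthy IPv4 child,
# while the IPv4 minimum requires at least 1 -> the parent is unhealthy.
print(parent_is_healthy(healthy_ipv4=0, healthy_ipv6=2, min_ipv4=1))  # False
print(parent_is_healthy(healthy_ipv4=1, healthy_ipv6=2, min_ipv4=1))  # True
```

With the backward-compatible defaults (`min_child=1`, per-family minimums 0), any single healthy child of either family keeps the parent healthy.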
+ ## FAQs * [How do I configure nested profiles?](./traffic-manager-faqs.md#traffic-manager-nested-profiles)
Learn how to [create a Traffic Manager profile](./quickstart-create-traffic-mana
[7]: ./media/traffic-manager-nested-profiles/figure-7.png [8]: ./media/traffic-manager-nested-profiles/figure-8.png [9]: ./media/traffic-manager-nested-profiles/figure-9.png
-[10]: ./media/traffic-manager-nested-profiles/figure-10.png
+[10]: ./media/traffic-manager-nested-profiles/figure-10.png
+[11]: ./media/traffic-manager-nested-profiles/figure-11.png
virtual-desktop Shortpath Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/shortpath-public.md
Use the following table for reference when configuring firewalls for RDP Shortpa
| Name | Source | Destination Port | Protocol | Destination | Action |
|------|--------|------------------|----------|-------------|--------|
| RDP Shortpath Server Endpoint | VM Subnet | 1024-65535 | UDP | * | Allow |
-| STUN Access | VM Subnet | 3478 | UDP | 13.107.17.41/24, 13.107.64.0/18, 20.202.0.0/16, 52.112.0.0/14, 52.120.0.0/14 | Allow |
+| STUN Access | VM Subnet | 3478 | UDP | 13.107.17.41/32, 13.107.64.0/18, 20.202.0.0/16, 52.112.0.0/14, 52.120.0.0/14 | Allow |
#### Client network

| Name | Source | Destination Port | Protocol | Destination | Action |
|------|--------|------------------|----------|-------------|--------|
| RDP Shortpath Server Endpoint | Client network | 1024-65535 | UDP | Public IP addresses assigned to NAT Gateway or Azure Firewall | Allow |
-| STUN Access | Client network | 3478 | UDP | 13.107.17.41/24, 13.107.64.0/18, 20.202.0.0/16, 52.112.0.0/14, 52.120.0.0/14 | Allow |
+| STUN Access | Client network | 3478 | UDP | 13.107.17.41/32, 13.107.64.0/18, 20.202.0.0/16, 52.112.0.0/14, 52.120.0.0/14 | Allow |
> [!NOTE]
> The IP ranges for STUN servers used in preview would change at the feature's release to General Availability.
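The change from `13.107.17.41/24` to `13.107.17.41/32` fixes an invalid CIDR: a /24 prefix on that address has host bits set, and a /24 would cover 256 addresses rather than the single STUN server intended. Python's standard `ipaddress` module (used here purely for illustration, not as Azure tooling) makes the difference concrete:

```python
import ipaddress

# With the default strict=True, a prefix whose host bits are set is rejected:
# 13.107.17.41/24 is not a valid network, which is why the rule now uses /32.
try:
    ipaddress.ip_network("13.107.17.41/24")
    rejected = False
except ValueError:
    rejected = True
print("13.107.17.41/24 rejected:", rejected)                  # True

print(ipaddress.ip_network("13.107.17.41/32").num_addresses)  # 1
print(ipaddress.ip_network("13.107.17.0/24").num_addresses)   # 256
```

Firewalls that silently mask host bits would have treated the old entry as `13.107.17.0/24`, opening UDP 3478 to 255 unintended addresses.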
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md
description: Overview of Azure managed disks, which handle the storage accounts
Previously updated : 02/03/2022 Last updated : 04/22/2022
virtual-machines Premium Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/premium-storage-performance.md
Before you begin, if you are new to Premium Storage, first read the [Select an A
We assess whether an application is performing well by using performance indicators such as: how fast the application processes a user request, how much data it processes per request, how many requests it processes in a given period of time, and how long a user waits for a response after submitting a request. The technical terms for these indicators are IOPS, throughput (or bandwidth), and latency.
-In this section, we will discuss the common performance indicators in the context of Premium Storage. In the following section, Gathering Application Requirements, you will learn how to measure these performance indicators for your application. Later in Optimizing Application Performance, you will learn about the factors affecting these performance indicators and recommendations to optimize them.
+In this section, we will discuss the common performance indicators in the context of Premium Storage. In the following section, Gathering Application Requirements, you will learn how to measure these performance indicators for your application. Later in [Optimize application performance](#optimize-application-performance), you will learn about the factors affecting these performance indicators and recommendations to optimize them.
## IOPS
-IOPS, or Input/output Operations Per Second, is the number of requests that your application is sending to the storage disks in one second. An input/output operation could be read or write, sequential, or random. Online Transaction Processing (OLTP) applications like an online retail website need to process many concurrent user requests immediately. The user requests are insert and update intensive database transactions, which the application must process quickly. Therefore, OLTP applications require very high IOPS. Such applications handle millions of small and random IO requests. If you have such an application, you must design the application infrastructure to optimize for IOPS. In the later section, *Optimizing Application Performance*, we discuss in detail all the factors that you must consider to get high IOPS.
+IOPS, or Input/output Operations Per Second, is the number of requests that your application is sending to the storage disks in one second. An input/output operation could be read or write, sequential, or random. Online Transaction Processing (OLTP) applications like an online retail website need to process many concurrent user requests immediately. The user requests are insert and update intensive database transactions, which the application must process quickly. Therefore, OLTP applications require very high IOPS. Such applications handle millions of small and random IO requests. If you have such an application, you must design the application infrastructure to optimize for IOPS. In [Optimize application performance](#optimize-application-performance), we discuss in detail all the factors that you must consider to get high IOPS.
When you attach a premium storage disk to your high scale VM, Azure provisions for you a guaranteed number of IOPS as per the disk specification. For example, a P50 disk provisions 7500 IOPS. Each high scale VM size also has a specific IOPS limit that it can sustain. For example, a Standard GS5 VM has 80,000 IOPS limit.
There is a relation between throughput and IOPS as shown in the formula below.
![Relation of IOPS and throughput](linux/media/premium-storage-performance/image1.png)
-Therefore, it is important to determine the optimal throughput and IOPS values that your application requires. As you try to optimize one, the other also gets affected. In a later section, *Optimizing Application Performance*, we will discuss in more details about optimizing IOPS and Throughput.
+Therefore, it is important to determine the optimal throughput and IOPS values that your application requires. As you try to optimize one, the other also gets affected. In [Optimize application performance](#optimize-application-performance), we will discuss in more details about optimizing IOPS and Throughput.
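The relationship in the formula above is simply throughput = IOPS × IO size, which is why optimizing one affects the other. A quick sketch (the helper function name is ours, and the IO sizes are example values, not prescriptions):

```python
def throughput_mb_per_s(iops: int, io_size_kb: int) -> float:
    """Throughput (MB/s) = IOPS x IO size; 1 MB = 1,024 KB for simplicity."""
    return iops * io_size_kb / 1024

# A P50 disk provisions 7,500 IOPS (from the text above). The bandwidth those
# IOPS consume depends entirely on the IO size the application issues:
print(throughput_mb_per_s(7500, 8))   # 58.59375 MB/s with 8 KB IOs
print(throughput_mb_per_s(7500, 64))  # 468.75 MB/s with 64 KB IOs
```

So a workload issuing large IOs can hit the disk's throughput cap long before it exhausts its provisioned IOPS, and vice versa for small random IOs.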
## Latency
-Latency is the time it takes an application to receive a single request, send it to the storage disks and send the response to the client. This is a critical measure of an application's performance in addition to IOPS and Throughput. The Latency of a premium storage disk is the time it takes to retrieve the information for a request and communicate it back to your application. Premium Storage provides consistent low latencies. Premium Disks are designed to provide single-digit millisecond latencies for most IO operations. If you enable ReadOnly host caching on premium storage disks, you can get much lower read latency. We will discuss Disk Caching in more detail in later section on *Optimizing Application Performance*.
+Latency is the time it takes an application to receive a single request, send it to the storage disks and send the response to the client. This is a critical measure of an application's performance in addition to IOPS and Throughput. The Latency of a premium storage disk is the time it takes to retrieve the information for a request and communicate it back to your application. Premium Storage provides consistent low latencies. Premium Disks are designed to provide single-digit millisecond latencies for most IO operations. If you enable ReadOnly host caching on premium storage disks, you can get much lower read latency. We discuss Disk Caching in more detail in [Disk caching](#disk-caching).
When you are optimizing your application to get higher IOPS and Throughput, it will affect the latency of your application. After tuning the application performance, always evaluate the latency of the application to avoid unexpected high latency behavior.
Next, measure the maximum performance requirements of your application throughou
> [!NOTE]
> You should consider scaling these numbers based on expected future growth of your application. It is a good idea to plan for growth ahead of time, because it could be harder to change the infrastructure to improve performance later.
-If you have an existing application and want to move to Premium Storage, first build the checklist above for the existing application. Then, build a prototype of your application on Premium Storage and design the application based on guidelines described in *Optimizing Application Performance* in a later section of this document. The next article describes the tools you can use to gather the performance measurements.
+If you have an existing application and want to move to Premium Storage, first build the checklist above for the existing application. Then, build a prototype of your application on Premium Storage and design the application based on guidelines described in [Optimize application performance](#optimize-application-performance). The next article describes the tools you can use to gather the performance measurements.
### Counters to measure application performance requirements
virtual-machines Snapshot Copy Managed Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/snapshot-copy-managed-disk.md
Previously updated : 09/16/2021 Last updated : 04/22/2022 # Create a snapshot of a virtual hard disk
Follow these steps to take a snapshot with the `az snapshot create` command and
## Next steps
-Deploy a virtual machine from a snapshot. Create a managed disk from a snapshot and then attach the new managed disk as the OS disk.
+To recover using a snapshot, first create a new managed disk from the snapshot. Then either deploy a new VM that uses the new managed disk as its OS disk, or attach the disk as a data disk to an existing VM.
# [Portal](#tab/portal)
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
Information on Red Hat support policies for all versions of RHEL can be found on
* RHEL SAP PAYG images in Azure (RHEL for SAP, RHEL for SAP HANA, and RHEL for SAP Business Applications) are connected to dedicated RHUI channels that remain on the specific RHEL minor version as required for SAP certification.
-* Access to Azure-hosted RHUI is limited to the VMs within the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653). If you're proxying all VM traffic via an on-premises network infrastructure, you might need to set up user-defined routes for the RHEL PAYG VMs to access the Azure RHUI. If that is the case, user-defined routes will need to be added for _all_ RHUI IP addresses.
+* Access to Azure-hosted RHUI is limited to the VMs within the [Azure datacenter IP ranges](https://www.microsoft.com/en-us/download/details.aspx?id=56519). If you're proxying all VM traffic via an on-premises network infrastructure, you might need to set up user-defined routes for the RHEL PAYG VMs to access the Azure RHUI. If that is the case, user-defined routes will need to be added for _all_ RHUI IP addresses.
## Image update behavior
If you experience problems connecting to Azure RHUI from your Azure RHEL PAYG VM
1. If it points to a location with the following pattern, `mirrorlist.*cds[1-4].cloudapp.net`, a configuration update is required. You're using the old VM snapshot, and you need to update it to point to the new Azure RHUI.
-1. Access to Azure-hosted RHUI is limited to VMs within the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653).
+1. Access to Azure-hosted RHUI is limited to VMs within the [Azure datacenter IP ranges](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
1. If you're using the new configuration, have verified that the VM connects from the Azure IP range, and still can't connect to Azure RHUI, file a support case with Microsoft or Red Hat.
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
### Why am I seeing a message and button called "Update router to latest software version" in portal?
-The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets (VMSS) based deployments. This will enable the virtual hub router to now be availability zone aware and have enhanced scaling out capabilities during high CPU usage. If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. The Cloud Services infrastructure will be deprecated soon. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router using the Azure portal.
+The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets (VMSS) based deployments. This upgrade makes the virtual hub router availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. The Cloud Services infrastructure will be deprecated soon. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router using the Azure portal.
You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, because this operation requires deployment of new VMSS based virtual hub routers, you'll face an expected downtime of 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", the hub is done updating. There will be no routing behavior changes after this update. If the update fails for any reason, your hub will be auto-recovered to the old version to ensure there is still a working setup.
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Title: 'Tutorial - Create and manage a VPN gateway: Azure portal'
-description: In this tutorial, learn how to create, deploy, and manage an Azure VPN gateway using the portal.
+ Title: 'Tutorial – Create & manage a VPN gateway – Azure portal'
+description: In this tutorial, learn how to create, deploy, and manage an Azure VPN gateway using the portal.
Previously updated : 03/18/2022 Last updated : 04/22/2022
-#Customer intent: I want to create a VPN gateway for my virtual network so that I can connect to my VNet and communicate with resources remotely.
-# Tutorial: Create and manage a VPN gateway using Azure portal
+# Tutorial: Create and manage a VPN gateway using the Azure portal
-Azure VPN gateways provide cross-premises connectivity between customer premises and Azure. This tutorial covers basic Azure VPN gateway deployment items such as creating and managing a VPN gateway. You can also create a gateway using [Azure CLI](create-routebased-vpn-gateway-cli.md) or [Azure PowerShell](create-routebased-vpn-gateway-powershell.md). If you want to learn more about the configuration settings used in this tutorial, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
+Azure VPN gateways provide cross-premises connectivity between customer premises and Azure. This tutorial covers basic Azure VPN Gateway deployment items, such as creating and managing a VPN gateway. You can also create a gateway using [Azure CLI](create-routebased-vpn-gateway-cli.md) or [Azure PowerShell](create-routebased-vpn-gateway-powershell.md). If you want to learn more about the configuration settings used in this tutorial, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
In this tutorial, you learn how to:
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
Previously updated : 07/26/2021 Last updated : 04/22/2022 ms.devlang: azurecli
A VPN gateway is a type of virtual network gateway that sends encrypted traffic
A VPN gateway connection relies on the configuration of multiple resources, each of which contains configurable settings. The sections in this article discuss the resources and settings that relate to a VPN gateway for a virtual network created in [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can find descriptions and topology diagrams for each connection solution in the [About VPN Gateway](vpn-gateway-about-vpngateways.md) article.
-The values in this article apply VPN gateways (virtual network gateways that use the -GatewayType Vpn). This article does not cover all gateway types or zone-redundant gateways.
+The values in this article apply to VPN gateways (virtual network gateways that use the -GatewayType Vpn). This article doesn't cover all gateway types or zone-redundant gateways.
* For values that apply to -GatewayType 'ExpressRoute', see [Virtual Network Gateways for ExpressRoute](../expressroute/expressroute-about-virtual-network-gateways.md).
The values in this article apply VPN gateways (virtual network gateways that use
## <a name="gwtype"></a>Gateway types
-Each virtual network can only have one virtual network gateway of each type. When you are creating a virtual network gateway, you must make sure that the gateway type is correct for your configuration.
+Each virtual network can only have one virtual network gateway of each type. When you're creating a virtual network gateway, you must make sure that the gateway type is correct for your configuration.
The available values for -GatewayType are:
New-AzVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName testrg `
**Azure portal**
-If you use the Azure portal to create a Resource Manager virtual network gateway, you can select the gateway SKU by using the dropdown. The options you are presented with correspond to the Gateway type and VPN type that you select.
+If you use the Azure portal to create a Resource Manager virtual network gateway, you can select the gateway SKU by using the dropdown. The options you're presented with correspond to the Gateway type and VPN type that you select.
**PowerShell**
az network vnet-gateway create --name VNet1GW --public-ip-address VNet1GWPIP --r
### <a name="resizechange"></a>Resizing or changing a SKU
-If you have a VPN gateway and you want to use a different gateway SKU, your options are to either resize your gateway SKU, or to change to another SKU. When you change to another gateway SKU, you delete the existing gateway entirely and build a new one. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. In comparison, when you resize a gateway SKU, there is not much downtime because you do not have to delete and rebuild the gateway. If you have the option to resize your gateway SKU, rather than change it, you will want to do that. However, there are rules regarding resizing:
+If you have a VPN gateway and you want to use a different gateway SKU, your options are to either resize your gateway SKU, or to change to another SKU. When you change to another gateway SKU, you delete the existing gateway entirely and build a new one. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. In comparison, when you resize a gateway SKU, there isn't much downtime because you don't have to delete and rebuild the gateway. If you have the option to resize your gateway SKU, rather than change it, you will want to do that. However, there are rules regarding resizing:
1. With the exception of the Basic SKU, you can resize a VPN gateway SKU to another VPN gateway SKU within the same generation (Generation1 or Generation2). For example, VpnGw1 of Generation1 can be resized to VpnGw2 of Generation1 but not to VpnGw2 of Generation2.
2. When working with the old gateway SKUs, you can resize between Standard and HighPerformance SKUs.
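The resize rules above can be encoded as a small check; this is an illustrative sketch of the documented rules only, and `can_resize` is a hypothetical helper, not an Azure API.

```python
# Illustrative only: the resize rules from the list above.
LEGACY = {"Standard", "HighPerformance"}  # the "old" gateway SKUs

def can_resize(src, dst, src_gen=None, dst_gen=None):
    """Sketch of the documented resize rules for VPN gateway SKUs."""
    if "Basic" in (src, dst):
        return False                            # Basic can only be changed, not resized
    if src in LEGACY or dst in LEGACY:
        return src in LEGACY and dst in LEGACY  # rule 2: old SKUs among themselves
    return src_gen == dst_gen                   # rule 1: same generation required

print(can_resize("VpnGw1", "VpnGw2", src_gen=1, dst_gen=1))  # True
print(can_resize("VpnGw1", "VpnGw2", src_gen=1, dst_gen=2))  # False
print(can_resize("Standard", "HighPerformance"))             # True
```

Any combination that fails this check requires a SKU change (delete and rebuild the gateway) rather than a resize.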
New-AzVirtualNetworkGatewayConnection -Name localtovon -ResourceGroupName testrg
## <a name="vpntype"></a>VPN types
-When you create the virtual network gateway for a VPN gateway configuration, you must specify a VPN type. The VPN type that you choose depends on the connection topology that you want to create. For example, a P2S connection requires a RouteBased VPN type. A VPN type can also depend on the hardware that you are using. S2S configurations require a VPN device. Some VPN devices only support a certain VPN type.
+When you create the virtual network gateway for a VPN gateway configuration, you must specify a VPN type. The VPN type that you choose depends on the connection topology that you want to create. For example, a P2S connection requires a RouteBased VPN type. A VPN type can also depend on the hardware that you're using. S2S configurations require a VPN device. Some VPN devices only support a certain VPN type.
The VPN type you select must satisfy all the connection requirements for the solution you want to create. For example, if you want to create a S2S VPN gateway connection and a P2S VPN gateway connection for the same virtual network, you would use VPN type *RouteBased* because P2S requires a RouteBased VPN type. You would also need to verify that your VPN device supported a RouteBased VPN connection.
There are two VPN types:
[!INCLUDE [vpn-gateway-vpntype](../../includes/vpn-gateway-vpntype-include.md)]
-The following PowerShell example specifies the `-VpnType` as *RouteBased*. When you are creating a gateway, you must make sure that the -VpnType is correct for your configuration.
+The following PowerShell example specifies the `-VpnType` as *RouteBased*. When you're creating a gateway, you must make sure that the -VpnType is correct for your configuration.
```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName testrg `
-GatewayType Vpn -VpnType RouteBased
```
Before you create a VPN gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use.
When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
-When you are planning your gateway subnet size, refer to the documentation for the configuration that you are planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Additionally, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future additional configurations. While you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.) if you have the available address space to do so. This will accommodate most configurations.
+When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Additionally, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future additional configurations. While you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.) if you have the available address space to do so. This will accommodate most configurations.
The following Resource Manager PowerShell example shows a gateway subnet named GatewaySubnet. You can see the CIDR notation specifies a /27, which allows for enough IP addresses for most configurations that currently exist.
```azurepowershell-interactive
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.0.3.0/27
```
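To see why /29 is considered too tight, a quick back-of-the-envelope calculation in plain PowerShell, assuming the standard Azure rule that five IP addresses are reserved in every subnet:

```azurepowershell-interactive
# Azure reserves five IPs in every subnet (network address, default
# gateway, two for Azure DNS, and broadcast), so the usable count is
# the total address count minus five.
foreach ($bits in 29, 27, 26) {
    $total = [math]::Pow(2, 32 - $bits)
    '/{0}: {1} total addresses, {2} usable' -f $bits, $total, ($total - 5)
}
```

A /29 yields only 3 usable addresses, while a /27 yields 27, which is why /27 or larger is recommended when address space allows.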
A local network gateway is different than a virtual network gateway. When creating a VPN gateway configuration, the local network gateway usually represents your on-premises network and the corresponding VPN device. In the classic deployment model, the local network gateway was referred to as a Local Site.
-You give the local network gateway a name, the public IP address or the fully qualified domain name (FQDN) of the on-premises VPN device, and specify the address prefixes that are located on the on-premises location. Azure looks at the destination address prefixes for network traffic, consults the configuration that you have specified for your local network gateway, and routes packets accordingly. If you use Border Gateway Protocol (BGP) on your VPN device, you will provide the BGP peer IP address of your VPN device and the autonomous system number (ASN) of your on premises network. You also specify local network gateways for VNet-to-VNet configurations that use a VPN gateway connection.
+You give the local network gateway a name, the public IP address or the fully qualified domain name (FQDN) of the on-premises VPN device, and specify the address prefixes that are located on the on-premises location. Azure looks at the destination address prefixes for network traffic, consults the configuration that you've specified for your local network gateway, and routes packets accordingly. If you use Border Gateway Protocol (BGP) on your VPN device, you'll provide the BGP peer IP address of your VPN device and the autonomous system number (ASN) of your on-premises network. You also specify local network gateways for VNet-to-VNet configurations that use a VPN gateway connection.
The following PowerShell example creates a new local network gateway:
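This is a minimal sketch; the resource names, location, on-premises device IP, and address prefixes below are hypothetical placeholders:

```azurepowershell-interactive
# Create a local network gateway representing the on-premises site:
# its VPN device's public IP and the on-premises address prefixes.
New-AzLocalNetworkGateway -Name Site1 -ResourceGroupName testrg `
-Location 'West US' -GatewayIpAddress '203.0.113.1' `
-AddressPrefix @('10.101.0.0/24','10.102.0.0/24')
```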
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
# What is VPN Gateway?
-A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.
+VPN Gateway sends encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use VPN Gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. A VPN gateway is a specific type of virtual network gateway. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.
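Multiple connections to one gateway can be sketched as follows. This is a hedged example with hypothetical names; `$site1` is assumed to be a previously created local network gateway object, and the shared key is a placeholder:

```azurepowershell-interactive
# All tunnels created against the same gateway share its bandwidth.
$gw = Get-AzVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName testrg

New-AzVirtualNetworkGatewayConnection -Name tosite1 -ResourceGroupName testrg `
-VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $site1 `
-ConnectionType IPsec -SharedKey 'placeholder-key'
```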
## <a name="whatis"></a>What is a virtual network gateway?
Because you can create multiple connection configurations using VPN Gateway, you need to determine which configuration best fits your needs.
### <a name="planningtable"></a>Planning table
-The following table can help you decide the best connectivity option for your solution. Note that ExpressRoute is not a part of VPN Gateway, but is included in the table.
+The following table can help you decide the best connectivity option for your solution. Note that ExpressRoute isn't a part of VPN Gateway, but is included in the table.
[!INCLUDE [cross-premises](../../includes/vpn-gateway-cross-premises-include.md)]