Updates from: 05/10/2021 03:03:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/conditional-access-user-flow.md
Previously updated : 04/22/2021 Last updated : 05/06/2021
The following example shows a Conditional Access technical profile that is used
</TechnicalProfile> ```
+To ensure that Identity Protection signals are evaluated properly, you'll want to call the `ConditionalAccessEvaluation` technical profile for all users, including both [local and social accounts](technical-overview.md#consumer-accounts). Otherwise, Identity Protection will indicate an incorrect degree of risk associated with users.
+ ::: zone-end
-In the **Remediation** phase that follows, the user is challenged with MFA. Once complete, Azure AD B2C informs Identity Protection that the identified sign-in threat has been remediated and by which method. In this example, Azure AD B2C signals that the user has successfully completed the multi-factor authentication challenge.
+In the *Remediation* phase that follows, the user is challenged with MFA. Once complete, Azure AD B2C informs Identity Protection that the identified sign-in threat has been remediated and by which method. In this example, Azure AD B2C signals that the user has successfully completed the multi-factor authentication challenge.
+
+Remediation can also happen through other channels, for example, when the account's password is reset by either the administrator or the user. You can check the user's *Risk state* in the [risky users report](identity-protection-investigate-risk.md#navigating-the-risky-users-report).
+
+> [!IMPORTANT]
+> To remediate the risk successfully within the journey, make sure the *Remediation* technical profile is called after the *Evaluation* technical profile is executed. If *Evaluation* is invoked without *Remediation*, the risk state will be *At risk*.
+
+When the *Evaluation* technical profile recommendation returns `Block`, the call to the *Remediation* technical profile is not required. The risk state is set to *At risk*.
::: zone pivot="b2c-custom-policy"
To add a Conditional Access policy:
||||
|**Report-only**|P1, P2| Report-only allows administrators to evaluate the impact of Conditional Access policies before enabling them in their environment. We recommend you check the policy with this state and determine the impact on end users without requiring multi-factor authentication or blocking users. For more information, see [Review Conditional Access outcomes in the audit report](#review-conditional-access-outcomes-in-the-audit-report)|
| **On**| P1, P2| The access policy is evaluated and enforced. |
- | **Off** | P1, P2| The access policy is not activated and has no affect on the users. |
+ | **Off** | P1, P2| The access policy is not activated and has no effect on the users. |
1. Enable your test Conditional Access policy by selecting **Create**.
-## Add Conditional Access to a user flow
-
-After you've added the Azure AD Conditional Access policy, enable conditional access in your user flow or custom policy. When you enable conditional access, you don't need to specify a policy name.
-
-Multiple Conditional Access policies may apply to an individual user at any time. In this case, the most strict access control policy takes precedence. For example, if one policy requires multi-factor authentication (MFA), while the other blocks access, the user will be blocked.
- ## Conditional Access Template 1: Sign-in risk-based Conditional Access Most users have a normal behavior that can be tracked; when they fall outside of this norm, it could be risky to allow them to just sign in. You may want to block that user, or maybe just ask them to perform multi-factor authentication to prove that they are really who they say they are.
-A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Organizations with P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](../active-directory/identity-protection/concept-identity-protection-risks.md#sign-in-risk). Please note the [limitations on Identity Protection detections for B2C](./identity-protection-investigate-risk.md?pivots=b2c-user-flow#service-limitations-and-considerations).
+A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Azure AD B2C tenants with P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](../active-directory/identity-protection/concept-identity-protection-risks.md#sign-in-risk). Please note the [limitations on Identity Protection detections for B2C](./identity-protection-investigate-risk.md?pivots=b2c-user-flow#service-limitations-and-considerations).
If risk is detected, users can perform multi-factor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators.
-Organizations should choose one of the following options to enable a sign-in risk-based Conditional Access policy requiring multi-factor authentication (MFA) when sign-in risk is medium OR high.
+Configure Conditional Access through the Azure portal or Microsoft Graph APIs to enable a sign-in risk-based Conditional Access policy requiring MFA when the sign-in risk is *medium* or *high*.
### Enable with Conditional Access policy
Organizations should choose one of the following options to enable a sign-in ris
9. Confirm your settings and set **Enable policy** to **On**. 10. Select **Create** to create and enable your policy.
-### Enable with Conditional Access APIs
+### Enable with Conditional Access APIs (optional)
-To create a Sign-in risk-based Conditional Access policy with Conditional Access APIs, please refer to the documentation for [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#graph-api).
+Create a sign-in risk-based Conditional Access policy with the Microsoft Graph APIs. For more information, see [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#graph-api).
-The following template can be used to create a Conditional Access policy with display name "CA002: Require MFA for medium+ sign-in risk" in report-only mode.
+The following template can be used to create a Conditional Access policy with display name "Template 1: Require MFA for medium+ sign-in risk" in report-only mode.
```json {
The following template can be used to create a Conditional Access policy with di
} ```
+## Add Conditional Access to a user flow
+
+After you've added the Azure AD Conditional Access policy, enable Conditional Access in your user flow or custom policy. When you enable Conditional Access, you don't need to specify a policy name.
+
+Multiple Conditional Access policies may apply to an individual user at any time. In this case, the most strict access control policy takes precedence. For example, if one policy requires MFA while the other blocks access, the user will be blocked.
+ ## Enable multi-factor authentication (optional) When adding Conditional Access to a user flow, consider the use of **Multi-factor authentication (MFA)**. Users can use a one-time code via SMS or voice, or a one-time password via email for multi-factor authentication. MFA settings are independent from Conditional Access settings. You can choose from these MFA options:
When adding Conditional Access to a user flow, consider the use of **Multi-facto
- **Always on** - MFA is always required regardless of your Conditional Access setup. If users aren't already enrolled in MFA, they're prompted to enroll during sign-in. During sign-up, users are prompted to enroll in MFA. - **Conditional (Preview)** - MFA is required only when an active Conditional Access Policy requires it. If the result of the Conditional Access evaluation is an MFA challenge with no risk, MFA is enforced during sign-in. If the result is an MFA challenge due to risk *and* the user is not enrolled in MFA, sign-in is blocked. During sign-up, users aren't prompted to enroll in MFA.
-> [!IMPORTANT]
-> If your Conditional Access policy grants access with MFA but the user hasn't enrolled a phone number, the user may be blocked.
- ::: zone pivot="b2c-user-flow" To enable Conditional Access for a user flow, make sure the version supports Conditional Access. These user flow versions are labeled **Recommended**.
To enable Conditional Access for a user flow, make sure the version supports Con
1. Get the example of a conditional access policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/conditional-access). 1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`. 1. Upload the policy files.
+
+### Configure claim other than phone number to be used for MFA
+
+In the Conditional Access policy above, the `DoesClaimExist` claim transformation method checks if a claim contains a value, for example if the `strongAuthenticationPhoneNumber` claim contains a phone number.
+
+The claims transformation isn't limited to the `strongAuthenticationPhoneNumber` claim. Depending on the scenario, you can use any other claim. In the following XML snippet, the `strongAuthenticationEmailAddress` claim is checked instead. The claim you choose must have a valid value; otherwise, the `IsMfaRegistered` claim is set to `False`. When set to `False`, the Conditional Access policy evaluation returns a `Block` grant type, preventing the user from completing the user flow.
+
+```XML
+ <ClaimsTransformation Id="IsMfaRegisteredCT" TransformationMethod="DoesClaimExist">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="strongAuthenticationEmailAddress" TransformationClaimType="inputClaim" />
+ </InputClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="IsMfaRegistered" TransformationClaimType="outputClaim" />
+ </OutputClaims>
+ </ClaimsTransformation>
+```
## Test your custom policy
To review the result of a Conditional Access event:
## Next steps
-[Customize the user interface in an Azure AD B2C user flow](customize-ui-with-html.md)
+[Customize the user interface in an Azure AD B2C user flow](customize-ui-with-html.md)
active-directory-b2c Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-protection-investigate-risk.md
Azure AD B2C Premium P2 is required for some Identity Protection features. If ne
With the information provided by the risky users report, administrators can find: -- Which users are at risk, have had risk remediated, or have had risk dismissed?
+- The **Risk state**, showing which users are **At risk**, have had risk **Remediated**, or have had risk **Dismissed**
- Details about detections - History of all risky sign-ins - Risk history
Administrators can then choose to take action on these events. Administrators ca
- Block user from signing in - Investigate further using Azure ATP
+An administrator can choose to dismiss a user's risk in the Azure portal or programmatically through the Microsoft Graph API [Dismiss User Risk](https://docs.microsoft.com/graph/api/riskyusers-dismiss?view=graph-rest-beta&preserve-view=true). Administrator privileges are required to dismiss a user's risk. A risk can be remediated by the risky user or by an administrator on the user's behalf, for example through a password reset.
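As a purely illustrative sketch (not part of the original article), the following JavaScript shows what a call to the beta Dismiss User Risk endpoint referenced above might look like. It assumes you already hold a Graph access token with an appropriate Identity Protection permission (for example, `IdentityRiskyUser.ReadWrite.All`); the token variable and user object IDs are placeholders.

```javascript
// Hypothetical example: dismiss the risk of one or more users through the
// Microsoft Graph beta endpoint linked above.
async function dismissUserRisk(accessToken, userIds) {
    const response = await fetch("https://graph.microsoft.com/beta/riskyUsers/dismiss", {
        method: "POST",
        headers: {
            "Authorization": `Bearer ${accessToken}`,
            "Content-Type": "application/json"
        },
        // userIds is an array of Azure AD object IDs for the risky users.
        body: JSON.stringify({ userIds })
    });

    // A 204 No Content response indicates the dismissal was accepted.
    if (!response.ok) {
        throw new Error(`Dismiss request failed: ${response.status}`);
    }
}
```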
+ ### Navigating the risky users report 1. Sign in to the [Azure portal](https://portal.azure.com/).
Administrators can then choose to return to the user's risk or sign-ins report t
## Next steps -- [Add Conditional Access to a user flow](conditional-access-user-flow.md).
+- [Add Conditional Access to a user flow](conditional-access-user-flow.md).
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
In this quickstart, you download and run a code sample that demonstrates how a J
See [How the sample works](#how-the-sample-works) for an illustration.
-> [!IMPORTANT]
-> MSAL React [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
- ## Prerequisites * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
active-directory Tutorial V2 React https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-v2-react.md
+
+ Title: "Tutorial: Create a React single-page app that uses auth code flow | Azure"
+
+description: In this tutorial, you create a React SPA that can sign in users and use the auth code flow to obtain an access token from the Microsoft identity platform and call the Microsoft Graph API.
+Last updated : 04/16/2021
+# Tutorial: Sign in users and call the Microsoft Graph API from a React single-page app (SPA) using auth code flow
+
+In this tutorial, you build a React single-page application (SPA) that signs in users and calls Microsoft Graph by using the authorization code flow with PKCE. The SPA you build uses the Microsoft Authentication Library (MSAL) for React.
+
+In this tutorial:
+> [!div class="checklist"]
+> * Create a React project with `npm`
+> * Register the application in the Azure portal
+> * Add code to support user sign-in and sign-out
+> * Add code to call Microsoft Graph API
+> * Test the app
+
+MSAL React supports the authorization code flow with PKCE in the browser; it does **not** support the implicit grant flow.
+
+## Prerequisites
+
+* [Node.js](https://nodejs.org/en/download/) for running a local webserver
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+
+## How the tutorial app works
++
+The application you create in this tutorial enables a React SPA to query the Microsoft Graph API by acquiring security tokens from the Microsoft identity platform. It uses the Microsoft Authentication Library (MSAL) for React, a wrapper of the MSAL.js v2 library. MSAL React enables React 16+ applications to authenticate enterprise users by using Azure Active Directory (Azure AD), and also users with Microsoft accounts and social identities like Facebook, Google, and LinkedIn. The library also enables applications to get access to Microsoft cloud services and Microsoft Graph.
+
+In this scenario, after a user signs in, an access token is requested and added to HTTP requests in the authorization header. Token acquisition and renewal are handled by the Microsoft Authentication Library for React (MSAL React).
+
+### Libraries
+
+This tutorial uses the following libraries:
+
+|Library|Description|
+|||
+|[MSAL React](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react)|Microsoft Authentication Library for JavaScript React Wrapper|
+|[MSAL Browser](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser)|Microsoft Authentication Library for JavaScript v2 browser package|
+
+## Get the completed code sample
+
+Prefer to download this tutorial's completed sample project instead? To run the project by using a local web server, such as Node.js, clone the [ms-identity-javascript-react-spa](https://github.com/Azure-Samples/ms-identity-javascript-react-spa) repository:
+
+`git clone https://github.com/Azure-Samples/ms-identity-javascript-react-spa`
+
+Then, to configure the code sample before you execute it, skip to the [configuration step](#register-your-application).
+
+To continue with the tutorial and build the application yourself, move on to the next section, [Prerequisites](#prerequisites).
+
+## Create your project
+
+Once you have [Node.js](https://nodejs.org/en/download/) installed, open up a terminal window and then run the following commands:
+
+```console
+npx create-react-app msal-react-tutorial # Create a new React app
+cd msal-react-tutorial # Change to the app directory
+npm install @azure/msal-browser @azure/msal-react # Install the MSAL packages
+npm install react-bootstrap bootstrap # Install Bootstrap for styling
+```
+
+You have now bootstrapped a small React project using [Create React App](https://create-react-app.dev/docs/getting-started). This is the starting point that the rest of this tutorial builds on. If you would like to see the changes to your app as you work through this tutorial, you can run the following command:
+
+```console
+npm start
+```
+
+A browser window should open to your app automatically. If it does not, open your browser and navigate to `http://localhost:3000`. Each time you save a file with updated code, the page reloads to reflect the changes.
+
+## Register your application
+
+Follow the steps in [Single-page application: App registration](./scenario-spa-app-registration.md) to create an app registration for your SPA by using the Azure portal.
+
+In the [Redirect URI: MSAL.js 2.0 with auth code flow](scenario-spa-app-registration.md#redirect-uri-msaljs-20-with-auth-code-flow) step, enter `http://localhost:3000`, the default location where create-react-app will serve your application.
+
+### Configure your JavaScript SPA
+
+1. Create a file named *authConfig.js* in the *src* folder to contain your configuration parameters for authentication, and then add the following code:
+
+ ```javascript
+ export const msalConfig = {
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
+ redirectUri: "Enter_the_Redirect_Uri_Here",
+ },
+ cache: {
+ cacheLocation: "sessionStorage", // This configures where your cache will be stored
+ storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
+ }
+ };
+
+ // Add scopes here for ID token to be used at Microsoft identity platform endpoints.
+ export const loginRequest = {
+ scopes: ["User.Read"]
+ };
+
+ // Add the endpoints here for Microsoft Graph API services you'd like to use.
+ export const graphConfig = {
+ graphMeEndpoint: "Enter_the_Graph_Endpoint_Here/v1.0/me"
+ };
+ ```
+
+1. Modify the values in the `msalConfig` section as described here:
+
+ |Value name| About|
+ |-||
+ |`Enter_the_Application_Id_Here`| The **Application (client) ID** of the application you registered.|
+ |`Enter_the_Cloud_Instance_Id_Here`| The Azure cloud instance in which your application is registered. For the main (or *global*) Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), you can find appropriate values in [National clouds](authentication-national-cloud.md).
+ |`Enter_the_Tenant_Info_Here`| Set to one of the following options: If your application supports *accounts in this organizational directory*, replace this value with the directory (tenant) ID or tenant name (for example, **contoso.microsoft.com**). If your application supports *accounts in any organizational directory*, replace this value with **organizations**. If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**. |
+ |`Enter_the_Redirect_Uri_Here`|Replace with **http://localhost:3000**.|
+ |`Enter_the_Graph_Endpoint_Here`| The instance of the Microsoft Graph API the application should communicate with. For the **global** Microsoft Graph API endpoint, replace both instances of this string with `https://graph.microsoft.com`. For endpoints in **national** cloud deployments, see [National cloud deployments](/graph/deployments) in the Microsoft Graph documentation.|
+
+ For more information about available configurable options, see [Initialize client applications](msal-js-initializing-client-applications.md).
+
+1. Open up the *src/index.js* file and add the following imports:
+
+ ```javascript
+ import "bootstrap/dist/css/bootstrap.min.css";
+ import { PublicClientApplication } from "@azure/msal-browser";
+ import { MsalProvider } from "@azure/msal-react";
+ import { msalConfig } from "./authConfig";
+ ```
+
+1. Underneath the imports in *src/index.js* create a `PublicClientApplication` instance using the configuration from step 1.
+
+ ```javascript
+ const msalInstance = new PublicClientApplication(msalConfig);
+ ```
+
+1. Find the `<App />` component in *src/index.js* and wrap it in the `MsalProvider` component. Your render function should look like this:
+
+ ```jsx
+ ReactDOM.render(
+ <React.StrictMode>
+ <MsalProvider instance={msalInstance}>
+ <App />
+ </MsalProvider>
+ </React.StrictMode>,
+ document.getElementById("root")
+ );
+ ```
++
+## Sign in users
+
+Create a folder in *src* called *components* and create a file inside this folder named *SignInButton.jsx*. Add the code from either of the following sections to invoke login using a popup window or a full-frame redirect:
+
+### Sign in using popups
+
+Add the following code to *src/components/SignInButton.jsx* to create a button component that will invoke a popup login when selected:
+
+```jsx
+import React from "react";
+import { useMsal } from "@azure/msal-react";
+import { loginRequest } from "../authConfig";
+import Button from "react-bootstrap/Button";
+
+function handleLogin(instance) {
+ instance.loginPopup(loginRequest).catch(e => {
+ console.error(e);
+ });
+}
+
+/**
+ * Renders a button which, when selected, will open a popup for login
+ */
+export const SignInButton = () => {
+ const { instance } = useMsal();
+
+ return (
+ <Button variant="secondary" className="ml-auto" onClick={() => handleLogin(instance)}>Sign in using Popup</Button>
+ );
+}
+```
+
+### Sign in using redirects
+
+Add the following code to *src/components/SignInButton.jsx* to create a button component that will invoke a redirect login when selected:
+
+```jsx
+import React from "react";
+import { useMsal } from "@azure/msal-react";
+import { loginRequest } from "../authConfig";
+import Button from "react-bootstrap/Button";
+
+function handleLogin(instance) {
+ instance.loginRedirect(loginRequest).catch(e => {
+ console.error(e);
+ });
+}
+
+/**
+ * Renders a button which, when selected, will redirect the page to the login prompt
+ */
+export const SignInButton = () => {
+ const { instance } = useMsal();
+
+ return (
+ <Button variant="secondary" className="ml-auto" onClick={() => handleLogin(instance)}>Sign in using Redirect</Button>
+ );
+}
+```
+
+### Add the sign-in button
+
+1. Create another file in the *components* folder named *PageLayout.jsx* and add the following code to create a navbar component that will contain the sign-in button you just created:
+
+ ```jsx
+ import React from "react";
+ import Navbar from "react-bootstrap/Navbar";
+ import { useIsAuthenticated } from "@azure/msal-react";
+ import { SignInButton } from "./SignInButton";
+
+ /**
+ * Renders the navbar component with a sign-in button if a user is not authenticated
+ */
+ export const PageLayout = (props) => {
+ const isAuthenticated = useIsAuthenticated();
+
+ return (
+ <>
+ <Navbar bg="primary" variant="dark">
+ <a className="navbar-brand" href="/">MSAL React Tutorial</a>
+ { isAuthenticated ? <span>Signed In</span> : <SignInButton /> }
+ </Navbar>
+ <h5><center>Welcome to the Microsoft Authentication Library For React Tutorial</center></h5>
+ <br />
+ <br />
+ {props.children}
+ </>
+ );
+ };
+ ```
+
+2. Now open *src/App.js* and replace the existing content with the following code:
+
+ ```jsx
+ import React from "react";
+ import { PageLayout } from "./components/PageLayout";
+
+ function App() {
+ return (
+ <PageLayout>
+ <p>This is the main app content!</p>
+ </PageLayout>
+ );
+ }
+
+ export default App;
+ ```
+
+Your app now has a sign-in button which is only displayed for unauthenticated users!
+
+When a user selects the **Sign in using Popup** or **Sign in using Redirect** button for the first time, the `onClick` handler calls `loginPopup` (or `loginRedirect`) to sign in the user. The `loginPopup` method opens a pop-up window with the *Microsoft identity platform endpoint* to prompt and validate the user's credentials. After a successful sign-in, *msal.js* initiates the [authorization code flow](v2-oauth2-auth-code-flow.md).
+
+At this point, a PKCE-protected authorization code is sent to the CORS-protected token endpoint and is exchanged for tokens. An ID token, access token, and refresh token are received by your application and processed by *msal.js*, and the information contained in the tokens is cached.
+
+## Sign users out
+
+In *src/components* create a file named *SignOutButton.jsx*. Add the code from either of the following sections to invoke logout using a popup window or a full-frame redirect:
+
+### Sign out using popups
+
+Add the following code to *src/components/SignOutButton.jsx* to create a button component that will invoke a popup logout when selected:
+
+```jsx
+import React from "react";
+import { useMsal } from "@azure/msal-react";
+import Button from "react-bootstrap/Button";
+
+function handleLogout(instance) {
+ instance.logoutPopup().catch(e => {
+ console.error(e);
+ });
+}
+
+/**
+ * Renders a button which, when selected, will open a popup for logout
+ */
+export const SignOutButton = () => {
+ const { instance } = useMsal();
+
+ return (
+ <Button variant="secondary" className="ml-auto" onClick={() => handleLogout(instance)}>Sign out using Popup</Button>
+ );
+}
+```
+
+### Sign out using redirects
+
+Add the following code to *src/components/SignOutButton.jsx* to create a button component that will invoke a redirect logout when selected:
+
+```jsx
+import React from "react";
+import { useMsal } from "@azure/msal-react";
+import Button from "react-bootstrap/Button";
+
+function handleLogout(instance) {
+ instance.logoutRedirect().catch(e => {
+ console.error(e);
+ });
+}
+
+/**
+ * Renders a button which, when selected, will redirect the page to the logout prompt
+ */
+export const SignOutButton = () => {
+ const { instance } = useMsal();
+
+ return (
+ <Button variant="secondary" className="ml-auto" onClick={() => handleLogout(instance)}>Sign out using Redirect</Button>
+ );
+}
+```
+
+### Add the sign-out button
+
+Update your `PageLayout` component in *src/components/PageLayout.jsx* to render the new `SignOutButton` component for authenticated users. Your code should look like this:
+
+```jsx
+import React from "react";
+import Navbar from "react-bootstrap/Navbar";
+import { useIsAuthenticated } from "@azure/msal-react";
+import { SignInButton } from "./SignInButton";
+import { SignOutButton } from "./SignOutButton";
+
+/**
+ * Renders the navbar component with a sign-in button if a user is not authenticated
+ */
+export const PageLayout = (props) => {
+ const isAuthenticated = useIsAuthenticated();
+
+ return (
+ <>
+ <Navbar bg="primary" variant="dark">
+ <a className="navbar-brand" href="/">MSAL React Tutorial</a>
+ { isAuthenticated ? <SignOutButton /> : <SignInButton /> }
+ </Navbar>
+ <h5><center>Welcome to the Microsoft Authentication Library For React Tutorial</center></h5>
+ <br />
+ <br />
+ {props.children}
+ </>
+ );
+};
+```
+
+## Conditionally render components
+
+To render certain components only for authenticated or unauthenticated users, use the `AuthenticatedTemplate` and/or `UnauthenticatedTemplate` components as demonstrated below.
+
+1. Add the following import to *src/App.js*:
+
+ ```javascript
+ import { AuthenticatedTemplate, UnauthenticatedTemplate } from "@azure/msal-react";
+ ```
+
+1. To render certain components only for authenticated users, update your `App` function in *src/App.js* with the following code:
+
+ ```jsx
+ function App() {
+ return (
+ <PageLayout>
+ <AuthenticatedTemplate>
+ <p>You are signed in!</p>
+ </AuthenticatedTemplate>
+ </PageLayout>
+ );
+ }
+ ```
+
+1. To render certain components only for unauthenticated users, such as a suggestion to sign in, update your `App` function in *src/App.js* with the following code:
+
+ ```jsx
+ function App() {
+ return (
+ <PageLayout>
+ <AuthenticatedTemplate>
+ <p>You are signed in!</p>
+ </AuthenticatedTemplate>
+ <UnauthenticatedTemplate>
+ <p>You are not signed in! Please sign in.</p>
+ </UnauthenticatedTemplate>
+ </PageLayout>
+ );
+ }
+ ```
+
+## Acquire a token
+
+1. Before calling an API, such as Microsoft Graph, you'll need to acquire an access token. Add a new component to *src/App.js* called `ProfileContent` with the following code:
+
+ ```jsx
+ function ProfileContent() {
+ const { instance, accounts, inProgress } = useMsal();
+ const [accessToken, setAccessToken] = useState(null);
+
+ const name = accounts[0] && accounts[0].name;
+
+ function RequestAccessToken() {
+ const request = {
+ ...loginRequest,
+ account: accounts[0]
+ };
+
+ // Silently acquires an access token which is then attached to a request for Microsoft Graph data
+ instance.acquireTokenSilent(request).then((response) => {
+ setAccessToken(response.accessToken);
+ }).catch((e) => {
+ instance.acquireTokenPopup(request).then((response) => {
+ setAccessToken(response.accessToken);
+ });
+ });
+ }
+
+ return (
+ <>
+ <h5 className="card-title">Welcome {name}</h5>
+ {accessToken ?
+ <p>Access Token Acquired!</p>
+ :
+ <Button variant="secondary" onClick={RequestAccessToken}>Request Access Token</Button>
+ }
+ </>
+ );
+ };
+ ```
+
+1. Update your imports in *src/App.js* to match the following:
+
+ ```js
+ import React, { useState } from "react";
+ import { PageLayout } from "./components/PageLayout";
+ import { AuthenticatedTemplate, UnauthenticatedTemplate, useMsal } from "@azure/msal-react";
+ import { loginRequest } from "./authConfig";
+ import Button from "react-bootstrap/Button";
+ ```
+
+1. Finally, add your new `ProfileContent` component as a child of the `AuthenticatedTemplate` in your `App` component in *src/App.js*. Your `App` component should look like this:
+
+ ```javascript
+ function App() {
+ return (
+ <PageLayout>
+ <AuthenticatedTemplate>
+ <ProfileContent />
+ </AuthenticatedTemplate>
+ <UnauthenticatedTemplate>
+ <p>You are not signed in! Please sign in.</p>
+ </UnauthenticatedTemplate>
+ </PageLayout>
+ );
+ }
+ ```
+
+The code above will render a button for signed-in users, allowing them to request an access token for Microsoft Graph when the button is selected.
+
+After a user signs in, your app shouldn't ask users to reauthenticate every time they need to access a protected resource (that is, request a token). To prevent such reauthentication requests, call `acquireTokenSilent`, which first looks for a cached, unexpired access token and then, if needed, uses the refresh token to obtain a new one. There are some situations, however, where you might need to force users to interact with the Microsoft identity platform. For example:
+
+- Users need to re-enter their credentials because the session has expired.
+- The refresh token has expired.
+- Your application is requesting access to a resource and you need the user's consent.
+- Two-factor authentication is required.
+
+Calling `acquireTokenPopup` opens a pop-up window (or `acquireTokenRedirect` redirects users to the Microsoft identity platform). In that window, users need to interact by confirming their credentials, giving consent to the required resource, or completing the two-factor authentication.
+
+> [!NOTE]
+> If you're using Internet Explorer, we recommend that you use the `loginRedirect` and `acquireTokenRedirect` methods due to a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups) with Internet Explorer and pop-up windows.
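For example, here is a minimal sketch (an assumption-based variation, not code from the sample) of how the silent token request shown earlier could fall back to a redirect instead of a popup when interaction is required:

```javascript
import { InteractionRequiredAuthError } from "@azure/msal-browser";

// Hypothetical variant of RequestAccessToken that uses a redirect fallback.
function requestAccessTokenWithRedirect(instance, request, setAccessToken) {
    instance.acquireTokenSilent(request)
        .then((response) => setAccessToken(response.accessToken))
        .catch((error) => {
            // Only fall back to interactive auth when the library says it's needed.
            if (error instanceof InteractionRequiredAuthError) {
                instance.acquireTokenRedirect(request);
            } else {
                console.error(error);
            }
        });
}
```

After the redirect round trip completes, MSAL React processes the response and the cached token becomes available to subsequent `acquireTokenSilent` calls.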
+
+## Call the Microsoft Graph API
+
+1. Create a file named *graph.js* in the *src* folder and add the following code for making REST calls to the Microsoft Graph API:
+
+ ```javascript
+ import { graphConfig } from "./authConfig";
+
+ /**
+ * Attaches a given access token to a Microsoft Graph API call. Returns information about the user
+ */
+ export async function callMsGraph(accessToken) {
+ const headers = new Headers();
+ const bearer = `Bearer ${accessToken}`;
+
+ headers.append("Authorization", bearer);
+
+ const options = {
+ method: "GET",
+ headers: headers
+ };
+
+ return fetch(graphConfig.graphMeEndpoint, options)
+ .then(response => response.json())
+ .catch(error => console.log(error));
+ }
+ ```
+
+1. Next create a file named *ProfileData.jsx* in *src/components* and add the following code:
+
+ ```javascript
+ import React from "react";
+
+ /**
+ * Renders information about the user obtained from Microsoft Graph
+ */
+ export const ProfileData = (props) => {
+ return (
+ <div id="profile-div">
+ <p><strong>First Name: </strong> {props.graphData.givenName}</p>
+ <p><strong>Last Name: </strong> {props.graphData.surname}</p>
+ <p><strong>Email: </strong> {props.graphData.userPrincipalName}</p>
+ <p><strong>Id: </strong> {props.graphData.id}</p>
+ </div>
+ );
+ };
+ ```
+
+1. Next, open *src/App.js* and add these to the imports:
+
+ ```javascript
+ import { ProfileData } from "./components/ProfileData";
+ import { callMsGraph } from "./graph";
+ ```
+
+1. Finally, update your `ProfileContent` component in *src/App.js* to call Microsoft Graph and display the profile data after acquiring the token. Your `ProfileContent` component should look like this:
+
+ ```javascript
+ function ProfileContent() {
+ const { instance, accounts } = useMsal();
+ const [graphData, setGraphData] = useState(null);
+
+ const name = accounts[0] && accounts[0].name;
+
+ function RequestProfileData() {
+ const request = {
+ ...loginRequest,
+ account: accounts[0]
+ };
+
+ // Silently acquires an access token which is then attached to a request for Microsoft Graph data
+ instance.acquireTokenSilent(request).then((response) => {
+ callMsGraph(response.accessToken).then(response => setGraphData(response));
+ }).catch((e) => {
+ instance.acquireTokenPopup(request).then((response) => {
+ callMsGraph(response.accessToken).then(response => setGraphData(response));
+ });
+ });
+ }
+
+ return (
+ <>
+ <h5 className="card-title">Welcome {name}</h5>
+ {graphData ?
+ <ProfileData graphData={graphData} />
+ :
+ <Button variant="secondary" onClick={RequestProfileData}>Request Profile Information</Button>
+ }
+ </>
+ );
+ };
+ ```
+
+In the changes made above, the `callMsGraph()` method is used to make an HTTP `GET` request against a protected resource that requires a token, and the content is then returned to the caller. This method adds the acquired token in the *HTTP Authorization header*. In the sample application created in this tutorial, the protected resource is the Microsoft Graph API *me* endpoint, which returns the signed-in user's profile information.
+
+## Test your application
+
+You've completed creation of the application and are now ready to launch the web server and test the app's functionality.
+
+1. Serve your app by running the following command from within the root of your project folder:
+
+ ```console
+ npm start
+ ```
+1. A browser window should be opened to your app automatically. If it does not, open your browser and navigate to `http://localhost:3000`. You should see a page that looks like the one below.
+
+ :::image type="content" source="media/tutorial-v2-react/react-01-unauthenticated.png" alt-text="Web browser displaying sign-in dialog":::
+
+1. Select the sign-in button to sign in.
+
+### Provide consent for application access
+
+The first time you sign in to your application, you're prompted to grant it access to your profile and sign you in:
++
+If you consent to the requested permissions, the web application displays your name, signifying a successful login:
++
+### Call the Graph API
+
+After you sign in, select **See Profile** to view the user profile information returned in the response from the call to the Microsoft Graph API:
++
+### More information about scopes and delegated permissions
+
+The Microsoft Graph API requires the *user.read* scope to read a user's profile. By default, this scope is automatically added in every application that's registered in the Azure portal. Other APIs for Microsoft Graph, as well as custom APIs for your back-end server, might require additional scopes. For example, the Microsoft Graph API requires the *Mail.Read* scope in order to list the user's email.
+
+As you add scopes, your users might be prompted to provide additional consent for the added scopes.
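As a hypothetical illustration (the *Mail.Read* scope is real, but this change is not part of the tutorial's sample), adding it to the `loginRequest` in *authConfig.js* might look like this:

```javascript
// Hypothetical: request Mail.Read in addition to User.Read.
// Users will see an extra consent prompt for the new scope at their next sign-in.
export const loginRequest = {
    scopes: ["User.Read", "Mail.Read"]
};
```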
++
+## Next steps
+
+If you'd like to dive deeper into JavaScript single-page application development on the Microsoft identity platform, see our multi-part scenario series:
+
+> [!div class="nextstepaction"]
+> [Scenario: Single-page application](scenario-spa-overview.md)
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Title: Sign in to Windows virtual machine in Azure using Azure Active Directory (Preview)
+ Title: Sign in to Windows virtual machine in Azure using Azure Active Directory
description: Azure AD sign in to an Azure VM running Windows
-# Sign in to Windows virtual machine in Azure using Azure Active Directory authentication (Preview)
+# Log in to Windows virtual machine in Azure using Azure Active Directory authentication
-Organizations can now utilize Azure Active Directory (AD) authentication for their Azure virtual machines (VMs) running **Windows Server 2019 Datacenter edition** or **Windows 10 1809** and later. Using Azure AD to authenticate to VMs provides you with a way to centrally control and enforce policies. Tools like Azure role-based access control (Azure RBAC) and Azure AD Conditional Access allow you to control who can access a VM. This article shows you how to create and configure a Windows Server 2019 VM to use Azure AD authentication.
+Organizations can now improve the security of Windows virtual machines (VMs) in Azure by integrating with Azure Active Directory (AD) authentication. You can now use Azure AD as a core authentication platform to RDP into VMs running **Windows Server 2019 Datacenter edition** or **Windows 10 1809** and later. Additionally, you can centrally control and enforce Azure RBAC and Conditional Access policies that allow or deny access to the VMs. This article shows you how to create and configure a Windows VM and log in with Azure AD-based authentication.
-> [!NOTE]
-> Azure AD sign in for Azure Windows VMs is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-There are many benefits of using Azure AD authentication to log in to Windows VMs in Azure, including:
+There are many security benefits of using Azure AD-based authentication to log in to Windows VMs in Azure, including:
+- Use your corporate AD credentials to log in to Windows VMs in Azure.
+- Reduce your reliance on local administrator accounts; you do not need to worry about credential loss or theft, users configuring weak credentials, and so on.
+- Password complexity and password lifetime policies configured for your Azure AD directory help secure Windows VMs as well.
+- With Azure role-based access control (Azure RBAC), specify who can log in to a VM as a regular user or with administrator privileges. When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources.
+- With Conditional Access, configure policies to require multi-factor authentication and other signals, such as low user and sign-in risk, before users can RDP to Windows VMs.
+- Use Azure deploy and audit policies to require Azure AD login for Windows VMs and to flag the use of unapproved local accounts on the VMs.
+- Logging in to Windows VMs with Azure Active Directory also works for customers that use Federation Services.
+- Automate and scale Azure AD join with MDM auto-enrollment with Intune for Azure Windows VMs that are part of your VDI deployments. MDM enrollment does not apply to Windows Server 2019 VM deployments.
-- Utilize the same federated or managed Azure AD credentials you normally use.-- No longer have to manage local administrator accounts.-- Azure RBAC allows you to grant the appropriate access to VMs based on need and remove it when it is no longer needed.-- Before allowing access to a VM, Azure AD Conditional Access can enforce additional requirements such as:
- - Multi-factor authentication
- - Sign-in risk check
-- Automate and scale Azure AD join of Azure Windows VMs that are part for your VDI deployments. > [!NOTE] > Once you enable this capability, your Windows VMs in Azure will be Azure AD joined. You cannot join it to another domain, like on-premises AD or Azure AD DS. If you need to do so, you will need to disconnect the VM from your Azure AD tenant by uninstalling the extension.
There are many benefits of using Azure AD authentication to log in to Windows VM
### Supported Azure regions and Windows distributions
-The following Windows distributions are currently supported during the preview of this feature:
+The following Windows distributions are currently supported for this feature:
- Windows Server 2019 Datacenter - Windows 10 1809 and later
The following Windows distributions are currently supported during the preview o
> [!IMPORTANT] > Remote connection to VMs joined to Azure AD is only allowed from Windows 10 PCs that are either Azure AD registered (starting Windows 10 20H1), Azure AD joined or hybrid Azure AD joined to the **same** directory as the VM.
-The following Azure regions are currently supported during the preview of this feature:
+This feature is now available in the following Azure clouds:
+
+- Azure Global
+- Azure Government
+- Azure China
-- All Azure global regions
-> [!IMPORTANT]
-> To use this preview feature, only deploy a supported Windows distribution and in a supported Azure region. The feature is currently not supported in Azure Government or sovereign clouds.
### Network requirements To enable Azure AD authentication for your Windows VMs in Azure, you need to ensure your VMs network configuration permits outbound access to the following endpoints over TCP port 443: -- `https://enterpriseregistration.windows.net`-- `https://login.microsoftonline.com`-- `https://device.login.microsoftonline.com`-- `https://pas.windows.net`
+For Azure Global
+- `https://enterpriseregistration.windows.net` - for device registration.
+- `http://169.254.169.254` - for the Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.com` - for authentication flows.
+- `https://pas.windows.net` - for Azure RBAC flows.
++
+For Azure Government
+- `https://enterpriseregistration.microsoftonline.us` - for device registration.
+- `http://169.254.169.254` - for the Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.us` - for authentication flows.
+- `https://pasff.usgovcloudapi.net` - for Azure RBAC flows.
++
+For Azure China
+- `https://enterpriseregistration.partner.microsoftonline.cn` - for device registration.
+- `http://169.254.169.254` - for the Azure Instance Metadata Service endpoint.
+- `https://login.chinacloudapi.cn` - for authentication flows.
+- `https://pas.chinacloudapi.cn` - for Azure RBAC flows.
+ ## Enabling Azure AD login for Windows VM in Azure
To create a Windows Server 2019 Datacenter VM in Azure with Azure AD logon:
1. Type **Windows Server** in Search the Marketplace search bar. 1. Click **Windows Server** and choose **Windows Server 2019 Datacenter** from Select a software plan dropdown. 1. Click on **Create**.
-1. On the "Management" tab, enable the option to **Login with AAD credentials (Preview)** under the Azure Active Directory section from Off to **On**.
+1. On the "Management" tab, enable the option to **Login with AAD credentials** under the Azure Active Directory section from Off to **On**.
1. Make sure **System assigned managed identity** under the Identity section is set to **On**. This action should happen automatically once you enable Login with Azure AD credentials.
-1. Go through the rest of the experience of creating a virtual machine. During this preview, you will have to create an administrator username and password for the VM.
+1. Go through the rest of the experience of creating a virtual machine. You will have to create an administrator username and password for the VM.
![Login with Azure AD credentials create a VM](./media/howto-vm-sign-in-azure-ad-windows/azure-portal-login-with-azure-ad.png)
This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension
Exit code 51 translates to "This extension is not supported on the VM's operating system".
-At Public Preview, the AADLoginForWindows extension is only intended to be installed on Windows Server 2019 or Windows 10 (Build 1809 or later). Ensure the version of Windows is supported. If the build of Windows is not supported, uninstall the VM Extension.
+The AADLoginForWindows extension is only intended to be installed on Windows Server 2019 or Windows 10 (Build 1809 or later). Ensure the version of Windows is supported. If the build of Windows is not supported, uninstall the VM Extension.
### Troubleshoot sign-in issues
If you have not deployed Windows Hello for Business and if that is not an option
> [!NOTE] > Windows Hello for Business PIN authentication with RDP has been supported by Windows 10 for several versions, however support for Biometric authentication with RDP was added in Windows 10 version 1809. Using Windows Hello for Business authentication during RDP is only available for deployments that use cert trust model and currently not available for key trust model.
-## Preview feedback
-
-Share your feedback about this preview feature or report issues using it on the [Azure AD feedback forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032).
+Share your feedback about this feature or report issues using it on the [Azure AD feedback forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032).
## Next steps
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-bindings.md
Title: Bindings for Durable Functions - Azure description: How to use triggers and bindings for the Durable Functions extension for Azure Functions. Previously updated : 12/17/2019 Last updated : 05/07/2021 # Bindings for Durable Functions (Azure Functions)
-The [Durable Functions](durable-functions-overview.md) extension introduces two new trigger bindings that control the execution of orchestrator and activity functions. It also introduces an output binding that acts as a client for the Durable Functions runtime.
+The [Durable Functions](durable-functions-overview.md) extension introduces three trigger bindings that control the execution of orchestrator, entity, and activity functions. It also introduces an output binding that acts as a client for the Durable Functions runtime.
## Orchestration trigger
-The orchestration trigger enables you to author [durable orchestrator functions](durable-functions-types-features-overview.md#orchestrator-functions). This trigger supports starting new orchestrator function instances and resuming existing orchestrator function instances that are "awaiting" a task.
+The orchestration trigger enables you to author [durable orchestrator functions](durable-functions-types-features-overview.md#orchestrator-functions). This trigger executes when a new orchestration instance is scheduled and when an existing orchestration instance receives an event. Examples of events that can trigger orchestrator functions include durable timer expirations, activity function responses, and events raised by external clients.
-When you use the Visual Studio tools for Azure Functions, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute.
+When you author functions in .NET, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute.
-When you write orchestrator functions in scripting languages (for example, JavaScript or C# scripting), the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
+When you write orchestrator functions in scripting languages, like JavaScript, Python, or PowerShell, the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
```json {
When you write orchestrator functions in scripting languages (for example, JavaS
* `orchestration` is the name of the orchestration that clients must use when they want to start new instances of this orchestrator function. This property is optional. If not specified, the name of the function is used.
-Internally this trigger binding polls a series of queues in the default storage account for the function app. These queues are internal implementation details of the extension, which is why they are not explicitly configured in the binding properties.
+Internally, this trigger binding polls the configured durable store for new orchestration events, such as orchestration start events, durable timer expiration events, activity function response events, and external events raised by other functions.
### Trigger behavior Here are some notes about the orchestration trigger:
-* **Single-threading** - A single dispatcher thread is used for all orchestrator function execution on a single host instance. For this reason, it is important to ensure that orchestrator function code is efficient and doesn't perform any I/O. It is also important to ensure that this thread does not do any async work except when awaiting on Durable Functions-specific task types.
-* **Poison-message handling** - There is no poison message support in orchestration triggers.
+* **Single-threading** - A single dispatcher thread is used for all orchestrator function execution on a single host instance. For this reason, it's important to ensure that orchestrator function code is efficient and doesn't perform any I/O. It is also important to ensure that this thread does not do any async work except when awaiting on Durable Functions-specific task types.
+* **Poison-message handling** - There's no poison message support in orchestration triggers.
* **Message visibility** - Orchestration trigger messages are dequeued and kept invisible for a configurable duration. The visibility of these messages is renewed automatically as long as the function app is running and healthy. * **Return values** - Return values are serialized to JSON and persisted to the orchestration history table in Azure Table storage. These return values can be queried by the orchestration client binding, described later. > [!WARNING]
-> Orchestrator functions should never use any input or output bindings other than the orchestration trigger binding. Doing so has the potential to cause problems with the Durable Task extension because those bindings may not obey the single-threading and I/O rules. If you'd like to use other bindings, add them to an Activity function called from your Orchestrator function.
+> Orchestrator functions should never use any input or output bindings other than the orchestration trigger binding. Doing so has the potential to cause problems with the Durable Task extension because those bindings may not obey the single-threading and I/O rules. If you'd like to use other bindings, add them to an activity function called from your orchestrator function. For more information about coding constraints for orchestrator functions, see the [Orchestrator function code constraints](durable-functions-code-constraints.md) documentation.
> [!WARNING]
-> JavaScript orchestrator functions should never be declared `async`.
+> JavaScript and Python orchestrator functions should never be declared `async`.
-### Trigger usage (.NET)
+### Trigger usage
The orchestration trigger binding supports both inputs and outputs. Here are some things to know about input and output handling:
-* **inputs** - .NET orchestration functions support only `DurableOrchestrationContext` as a parameter type. Deserialization of inputs directly in the function signature is not supported. Code must use the `GetInput<T>` (.NET) or `getInput` (JavaScript) method to fetch orchestrator function inputs. These inputs must be JSON-serializable types.
-* **outputs** - Orchestration triggers support output values as well as inputs. The return value of the function is used to assign the output value and must be JSON-serializable. If a .NET function returns `Task` or `void`, a `null` value will be saved as the output.
+* **inputs** - Orchestration triggers can be invoked with inputs, which are accessed through the context input object. All inputs must be JSON-serializable.
+* **outputs** - Orchestration triggers support output values as well as inputs. The return value of the function is used to assign the output value and must be JSON-serializable.
### Trigger sample The following example code shows what the simplest "Hello World" orchestrator function might look like:
-#### C#
+# [C#](#tab/csharp)
```csharp [FunctionName("HelloWorld")] public static string Run([OrchestrationTrigger] IDurableOrchestrationContext context) { string name = context.GetInput<string>();
+ // ... do some work ...
return $"Hello {name}!"; } ```+ > [!NOTE] > The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `DurableOrchestrationContext` instead of `IDurableOrchestrationContext`. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
-#### JavaScript (Functions 2.0 only)
+# [JavaScript](#tab/javascript)
```javascript const df = require("durable-functions"); module.exports = df.orchestrator(function*(context) { const name = context.df.getInput();
+ // ... do some work ...
return `Hello ${name}!`; }); ``` > [!NOTE]
-> The `context` object in JavaScript does not represent the DurableOrchestrationContext, but the [function context as a whole](../functions-reference-node.md#context-object). You can access orchestration methods via the `context` object's `df` property.
+> The `durable-functions` library takes care of calling the `context.done` method when the generator function exits.
-> [!NOTE]
-> JavaScript orchestrators should use `return`. The `durable-functions` library takes care of calling the `context.done` method.
+# [Python](#tab/python)
+
+```python
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+    name = context.get_input()
+ # Do some work
+ return f"Hello {name}!"
+
+main = df.Orchestrator.create(orchestrator_function)
+```
++ Most orchestrator functions call activity functions, so here is a "Hello World" example that demonstrates how to call an activity function:
-#### C#
+# [C#](#tab/csharp)
```csharp [FunctionName("HelloWorld")]
public static async Task<string> Run(
> [!NOTE] > The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `DurableOrchestrationContext` instead of `IDurableOrchestrationContext`. For more information about the differences between versions, see the [Durable Functions versions](durable-functions-versions.md) article.
-#### JavaScript (Functions 2.0 only)
+# [JavaScript](#tab/javascript)
```javascript const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
}); ```
+# [Python](#tab/python)
+
+```python
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+    name = context.get_input()
+ result = yield context.call_activity('SayHello', name)
+ return result
+
+main = df.Orchestrator.create(orchestrator_function)
+```
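The samples above are re-triggered when their activity responses arrive. As a purely illustrative sketch (not from the article), a JavaScript orchestrator that relies on the other event types mentioned earlier - a durable timer expiration and an external event raised by a client - might look like this:

```javascript
const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
    // Durable timer: the orchestrator is re-triggered when the timer expires.
    const deadline = new Date(context.df.currentUtcDateTime.getTime() + 60 * 60 * 1000);
    yield context.df.createTimer(deadline);

    // External event: the orchestrator is re-triggered when a client raises
    // an "Approval" event for this instance.
    const approved = yield context.df.waitForExternalEvent("Approval");
    return approved;
});
```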
+++ ## Activity trigger The activity trigger enables you to author functions that are called by orchestrator functions, known as [activity functions](durable-functions-types-features-overview.md#activity-functions).
-If you're using Visual Studio, the activity trigger is configured using the `ActivityTriggerAttribute` .NET attribute.
+If you're authoring functions in .NET, the activity trigger is configured using the [ActivityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.activitytriggerattribute) .NET attribute.
-If you're using VS Code or the Azure portal for development, the activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+If you're using JavaScript, Python, or PowerShell, the activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
```json
{
If you're using VS Code or the Azure portal for development, the activity trigge
* `activity` is the name of the activity. This value is the name that orchestrator functions use to invoke this activity function. This property is optional. If not specified, the name of the function is used.
-Internally this trigger binding polls a queue in the default storage account for the function app. This queue is an internal implementation detail of the extension, which is why it is not explicitly configured in the binding properties.
+Internally, this trigger binding polls the configured durable store for new activity execution events.
### Trigger behavior

Here are some notes about the activity trigger:

* **Threading** - Unlike the orchestration trigger, activity triggers don't have any restrictions around threading or I/O. They can be treated like regular functions.
-* **Poison-message handling** - There is no poison message support in activity triggers.
+* **Poison-message handling** - There's no poison message support in activity triggers.
* **Message visibility** - Activity trigger messages are dequeued and kept invisible for a configurable duration. The visibility of these messages is renewed automatically as long as the function app is running and healthy.
-* **Return values** - Return values are serialized to JSON and persisted to the orchestration history table in Azure Table storage.
-
-> [!WARNING]
-> The storage backend for activity functions is an implementation detail and user code should not interact with these storage entities directly.
+* **Return values** - Return values are serialized to JSON and persisted to the configured durable store.
-### Trigger usage (.NET)
+### Trigger usage
The activity trigger binding supports both inputs and outputs, just like the orchestration trigger. Here are some things to know about input and output handling:
-* **inputs** - .NET activity functions natively use `DurableActivityContext` as a parameter type. Alternatively, an activity function can be declared with any parameter type that is JSON-serializable. When you use `DurableActivityContext`, you can call `GetInput<T>` to fetch and deserialize the activity function input.
-* **outputs** - Activity functions support output values as well as inputs. The return value of the function is used to assign the output value and must be JSON-serializable. If a .NET function returns `Task` or `void`, a `null` value will be saved as the output.
-* **metadata** - .NET activity functions can bind to a `string instanceId` parameter to get the instance ID of the parent orchestration.
+* **inputs** - Activity triggers can be invoked with inputs from an orchestrator function. All inputs must be JSON-serializable.
+* **outputs** - Activity functions support output values as well as inputs. The return value of the function is used to assign the output value and must be JSON-serializable.
+* **metadata** - .NET activity functions can bind to a `string instanceId` parameter to get the instance ID of the calling orchestration.
### Trigger sample
-The following example code shows what a simple "Hello World" activity function might look like:
+The following example code shows what a simple `SayHello` activity function might look like:
-#### C#
+# [C#](#tab/csharp)
```csharp
[FunctionName("SayHello")]
public static string SayHello([ActivityTrigger] IDurableActivityContext helloCon
}
```
-> [!NOTE]
-> The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `DurableActivityContext` instead of `IDurableActivityContext`. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
-
-The default parameter type for the .NET `ActivityTriggerAttribute` binding is `IDurableActivityContext`. However, .NET activity triggers also support binding directly to JSON-serializeable types (including primitive types), so the same function could be simplified as follows:
+The default parameter type for the .NET `ActivityTriggerAttribute` binding is [IDurableActivityContext](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableactivitycontext) (or [DurableActivityContext](/dotnet/api/microsoft.azure.webjobs.durableactivitycontext?view=azure-dotnet-legacy&preserve-view=true) for Durable Functions v1). However, .NET activity triggers also support binding directly to JSON-serializable types (including primitive types), so the same function could be simplified as follows:
```csharp
[FunctionName("SayHello")]
public static string SayHello([ActivityTrigger] string name)
}
```
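The *metadata* behavior described above isn't demonstrated by these samples. As a rough sketch (the function name is illustrative and not taken from this article), an activity can also keep the `IDurableActivityContext` parameter and read the calling orchestration's instance ID from its `InstanceId` property:

```csharp
[FunctionName("SayHelloWithInstanceId")]
public static string SayHelloWithInstanceId([ActivityTrigger] IDurableActivityContext context)
{
    // Deserialize the activity input.
    string name = context.GetInput<string>();

    // InstanceId identifies the orchestration instance that invoked this activity.
    return $"Hello {name}! (called by orchestration {context.InstanceId})";
}
```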
-#### JavaScript (Functions 2.0 only)
+# [JavaScript](#tab/javascript)
```javascript
module.exports = async function(context) {
module.exports = async function(context, name) {
};
```
+# [Python](#tab/python)
+
+```python
+def main(name: str) -> str:
+ return f"Hello {name}!"
+```
### Using input and output bindings
module.exports = async function (context) {
## Orchestration client
-The orchestration client binding enables you to write functions that interact with orchestrator functions. These functions are sometimes referred to as [client functions](durable-functions-types-features-overview.md#client-functions). For example, you can act on orchestration instances in the following ways:
+The orchestration client binding enables you to write functions that interact with orchestrator functions. These functions are often referred to as [client functions](durable-functions-types-features-overview.md#client-functions). For example, you can act on orchestration instances in the following ways:
* Start them. * Query their status.
The orchestration client binding enables you to write functions that interact wi
* Send events to them while they're running. * Purge instance history.
-If you're using Visual Studio, you can bind to the orchestration client by using the `OrchestrationClientAttribute` .NET attribute for Durable Functions 1.0. Starting in the Durable Functions 2.0, you can bind to the orchestration client by using the `DurableClientAttribute` .NET attribute.
+If you're using .NET, you can bind to the orchestration client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) attribute ([OrchestrationClientAttribute](/dotnet/api/microsoft.azure.webjobs.orchestrationclientattribute?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x).
-If you're using scripting languages (for example, *.csx* or *.js* files) for development, the orchestration trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+If you're using scripting languages, like JavaScript, Python, or PowerShell, the durable client binding is defined by the following JSON object in the `bindings` array of *function.json*:
```json
{
If you're using scripting languages (for example, *.csx* or *.js* files) for dev
### Client usage
-In .NET functions, you typically bind to `IDurableOrchestrationClient`, which gives you full access to all orchestration client APIs supported by Durable Functions. In the older Durable Functions 2.x releases, you instead bind to the `DurableOrchestrationClient` class. In JavaScript, the same APIs are exposed by the object returned from `getClient`. APIs on the client object include:
-
-* `StartNewAsync`
-* `GetStatusAsync`
-* `TerminateAsync`
-* `RaiseEventAsync`
-* `PurgeInstanceHistoryAsync`
-* `CreateCheckStatusResponse`
-* `CreateHttpManagementPayload`
-
-Alternatively, .NET functions can bind to `IAsyncCollector<T>` where `T` is `StartOrchestrationArgs` or `JObject`.
+In .NET functions, you typically bind to [IDurableClient](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableclient) ([DurableOrchestrationClient](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclient?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x), which gives you full access to all orchestration client APIs supported by Durable Functions. In other languages, you must use the language-specific SDK to get access to a client object.
-For more information on these operations, see the `IDurableOrchestrationClient` API documentation.
+Here's an example queue-triggered function that starts a "HelloWorld" orchestration.
-### Client sample (Visual Studio development)
-
-Here is an example queue-triggered function that starts a "HelloWorld" orchestration.
+# [C#](#tab/csharp)
```csharp
[FunctionName("QueueStart")]
public static Task Run(
> [!NOTE]
> The previous C# code is for Durable Functions 2.x. For Durable Functions 1.x, you must use the `OrchestrationClient` attribute instead of the `DurableClient` attribute, and you must use the `DurableOrchestrationClient` parameter type instead of `IDurableOrchestrationClient`. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
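Queue triggers are only one option for client functions. As a hedged sketch (the function name and trigger are illustrative, not from this article), an HTTP-triggered client can start the same orchestration and use `CreateCheckStatusResponse` to hand back status-query URLs to the caller:

```csharp
[FunctionName("HttpStart")]
public static async Task<HttpResponseMessage> HttpStart(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient starter)
{
    // Use the request body as the orchestration input.
    string input = await req.Content.ReadAsStringAsync();

    // Start a new "HelloWorld" orchestration instance with a generated instance ID.
    string instanceId = await starter.StartNewAsync("HelloWorld", null, input);

    // Return an HTTP 202 response that includes URLs for querying the instance status.
    return starter.CreateCheckStatusResponse(req, instanceId);
}
```

This pattern is useful for long-running orchestrations because the caller can poll the returned URLs instead of holding a connection open.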
-### Client sample (not Visual Studio)
-
-If you're not using Visual Studio for development, you can create the following *function.json* file. This example shows how to configure a queue-triggered function that uses the durable orchestration client binding:
+# [JavaScript](#tab/javascript)
+**function.json**
```json
{
  "bindings": [
If you're not using Visual Studio for development, you can create the following
}
```
-> [!NOTE]
-> The previous JSON is for Durable Functions 2.x. For Durable Functions 1.x, you must use `orchestrationClient` instead of the `durableClient` as the trigger type. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
-
-Following are language-specific samples that start new orchestrator function instances.
-
-#### C# Script Sample
-
-The following sample shows how to use the durable orchestration client binding to start a new function instance from a queue-triggered C# function:
-
-```csharp
-#r "Microsoft.Azure.WebJobs.Extensions.DurableTask"
-
-using Microsoft.Azure.WebJobs.Extensions.DurableTask;
-
-public static Task Run(string input, IDurableOrchestrationClient starter)
-{
- return starter.StartNewAsync("HelloWorld", input);
-}
-```
-
-> [!NOTE]
-> The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use the `DurableOrchestrationClient` parameter type instead of `IDurableOrchestrationClient`. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
-
-#### JavaScript Sample
-
-The following sample shows how to use the durable orchestration client binding to start a new function instance from a JavaScript function:
-
+**index.js**
```javascript
const df = require("durable-functions");
module.exports = async function (context) {
};
```
-More details on starting instances can be found in [Instance management](durable-functions-instance-management.md).
-
-## Entity trigger
-
-Entity triggers allow you to author [entity functions](durable-functions-entities.md). This trigger supports processing events for a specific entity instance.
-
-When you use the Visual Studio tools for Azure Functions, the entity trigger is configured using the `EntityTriggerAttribute` .NET attribute.
-
-> [!NOTE]
-> Entity triggers are available starting in Durable Functions 2.x.
-
-Internally this trigger binding polls a series of queues in the default storage account for the function app. These queues are internal implementation details of the extension, which is why they are not explicitly configured in the binding properties.
-
-### Trigger behavior
-
-Here are some notes about the entity trigger:
-
-* **Single-threaded**: A single dispatcher thread is used to process operations for a particular entity. If multiple messages are sent to a single entity concurrently, the operations will be processed one-at-a-time.
-* **Poison-message handling** - There is no poison message support in entity triggers.
-* **Message visibility** - Entity trigger messages are dequeued and kept invisible for a configurable duration. The visibility of these messages is renewed automatically as long as the function app is running and healthy.
-* **Return values** - Entity functions do not support return values. There are specific APIs that can be used to save state or pass values back to orchestrations.
-
-Any state changes made to an entity during its execution will be automatically persisted after execution has completed.
-
-### Trigger usage (.NET)
+# [Python](#tab/python)
-Every entity function has a parameter type of `IDurableEntityContext`, which has the following members:
-
-* **EntityName**: the name of the currently executing entity.
-* **EntityKey**: the key of the currently executing entity.
-* **EntityId**: the ID of the currently executing entity.
-* **OperationName**: the name of the current operation.
-* **HasState**: whether the entity exists, that is, has some state.
-* **GetState\<TState>()**: gets the current state of the entity. If it does not already exist, it is created and initialized to `default<TState>`. The `TState` parameter must be a primitive or JSON-serializeable type.
-* **GetState\<TState>(initfunction)**: gets the current state of the entity. If it does not already exist, it is created by calling the provided `initfunction` parameter. The `TState` parameter must be a primitive or JSON-serializeable type.
-* **SetState(arg)**: creates or updates the state of the entity. The `arg` parameter must be a JSON-serializeable object or primitive.
-* **DeleteState()**: deletes the state of the entity.
-* **GetInput\<TInput>()**: gets the input for the current operation. The `TInput` type parameter must be a primitive or JSON-serializeable type.
-* **Return(arg)**: returns a value to the orchestration that called the operation. The `arg` parameter must be a primitive or JSON-serializeable object.
-* **SignalEntity(EntityId, scheduledTimeUtc, operation, input)**: sends a one-way message to an entity. The `operation` parameter must be a non-null string, the optional `scheduledTimeUtc` must be a UTC datetime at which to invoke the operation, and the `input` parameter must be a primitive or JSON-serializeable object.
-* **CreateNewOrchestration(orchestratorFunctionName, input)**: starts a new orchestration. The `input` parameter must be a primitive or JSON-serializeable object.
-
-The `IDurableEntityContext` object passed to the entity function can be accessed using the `Entity.Current` async-local property. This approach is convenient when using the class-based programming model.
-
-### Trigger sample (C# function-based syntax)
-
-The following code is an example of a simple *Counter* entity implemented as a durable function. This function defines three operations, `add`, `reset`, and `get`, each of which operate on an integer state.
-
-```csharp
-[FunctionName("Counter")]
-public static void Counter([EntityTrigger] IDurableEntityContext ctx)
+**function.json**
+```json
{
- switch (ctx.OperationName.ToLowerInvariant())
+ "bindings": [
{
- case "add":
- ctx.SetState(ctx.GetState<int>() + ctx.GetInput<int>());
- break;
- case "reset":
- ctx.SetState(0);
- break;
- case "get":
- ctx.Return(ctx.GetState<int>()));
- break;
+ "name": "input",
+ "type": "queueTrigger",
+ "queueName": "durable-function-trigger",
+ "direction": "in"
+ },
+ {
+ "name": "starter",
+ "type": "durableClient",
+ "direction": "in"
}
+ ]
} ```
-For more information on the function-based syntax and how to use it, see [Function-Based Syntax](durable-functions-dotnet-entities.md#function-based-syntax).
-
-### Trigger sample (C# class-based syntax)
-
-The following example is an equivalent implementation of the `Counter` entity using classes and methods.
+**__init__.py**
+```python
+import json
+import azure.functions as func
+import azure.durable_functions as df
-```csharp
-[JsonObject(MemberSerialization.OptIn)]
-public class Counter
-{
- [JsonProperty("value")]
- public int CurrentValue { get; set; }
-
- public void Add(int amount) => this.CurrentValue += amount;
-
- public void Reset() => this.CurrentValue = 0;
+async def main(msg: func.QueueMessage, starter: str) -> None:
+ client = df.DurableOrchestrationClient(starter)
+ payload = msg.get_body().decode('utf-8')
+ instance_id = await client.start_new("HelloWorld", client_input=payload)
+```
- public int Get() => this.CurrentValue;
+
- [FunctionName(nameof(Counter))]
- public static Task Run([EntityTrigger] IDurableEntityContext ctx)
- => ctx.DispatchAsync<Counter>();
-}
-```
+More details on starting instances can be found in [Instance management](durable-functions-instance-management.md).
-The state of this entity is an object of type `Counter`, which contains a field that stores the current value of the counter. To persist this object in storage, it is serialized and deserialized by the [Json.NET](https://www.newtonsoft.com/json) library.
+## Entity trigger
-For more information on the class-based syntax and how to use it, see [Defining entity classes](durable-functions-dotnet-entities.md#defining-entity-classes).
+Entity triggers allow you to author [entity functions](durable-functions-entities.md). This trigger supports processing events for a specific entity instance.
> [!NOTE]
-> The function entry point method with the `[FunctionName]` attribute *must* be declared `static` when using entity classes. Non-static entry point methods may result in multiple object initialization and potentially other undefined behaviors.
-
-Entity classes have special mechanisms for interacting with bindings and .NET dependency injection. For more information, see [Entity construction](durable-functions-dotnet-entities.md#entity-construction).
+> Entity triggers are available starting in Durable Functions 2.x.
-### Trigger sample (JavaScript)
+Internally, this trigger binding polls the configured durable store for new entity operations that need to be executed.
-The following code is an example of a simple *Counter* entity implemented as a durable function written in JavaScript. This function defines three operations, `add`, `reset`, and `get`, each of which operate on an integer state.
+### Trigger behavior
-**function.json**
-```json
-{
- "bindings": [
- {
- "name": "context",
- "type": "entityTrigger",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
+Here are some notes about the entity trigger:
-**index.js**
-```javascript
-const df = require("durable-functions");
+* **Single-threaded**: A single dispatcher thread is used to process operations for a particular entity. If multiple messages are sent to a single entity concurrently, the operations will be processed one-at-a-time.
+* **Poison-message handling** - There's no poison message support in entity triggers.
+* **Message visibility** - Entity trigger messages are dequeued and kept invisible for a configurable duration. The visibility of these messages is renewed automatically as long as the function app is running and healthy.
+* **Return values** - Entity functions don't support return values. There are specific APIs that can be used to save state or pass values back to orchestrations.
-module.exports = df.entity(function(context) {
- const currentValue = context.df.getState(() => 0);
- switch (context.df.operationName) {
- case "add":
- const amount = context.df.getInput();
- context.df.setState(currentValue + amount);
- break;
- case "reset":
- context.df.setState(0);
- break;
- case "get":
- context.df.return(currentValue);
- break;
- }
-});
-```
+Any state changes made to an entity during its execution will be automatically persisted after execution has completed.
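To make the persistence and return-value behaviors above concrete, here's a minimal sketch of a *Counter* entity using the .NET function-based syntax (see the Durable Entities article for the complete examples):

```csharp
[FunctionName("Counter")]
public static void Counter([EntityTrigger] IDurableEntityContext ctx)
{
    switch (ctx.OperationName.ToLowerInvariant())
    {
        case "add":
            // Read the current state (default(int) if it doesn't exist yet),
            // add the operation input, and persist the new value.
            ctx.SetState(ctx.GetState<int>() + ctx.GetInput<int>());
            break;
        case "get":
            // Pass the current value back to a calling orchestration.
            ctx.Return(ctx.GetState<int>());
            break;
    }
}
```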
-> [!NOTE]
-> Durable entities are available in JavaScript starting with version **1.3.0** of the `durable-functions` npm package.
+For more information and examples on defining and interacting with entity triggers, see the [Durable Entities](durable-functions-entities.md) documentation.
## Entity client

The entity client binding enables you to asynchronously trigger [entity functions](#entity-trigger). These functions are sometimes referred to as [client functions](durable-functions-types-features-overview.md#client-functions).
-If you're using Visual Studio, you can bind to the entity client by using the `DurableClientAttribute` .NET attribute.
+If you're using .NET precompiled functions, you can bind to the entity client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) .NET attribute.
> [!NOTE] > The `[DurableClientAttribute]` can also be used to bind to the [orchestration client](#orchestration-client).
-If you're using scripting languages (for example, *.csx* or *.js* files) for development, the entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+If you're using scripting languages (like C# scripting, JavaScript, or Python) for development, the entity client is defined by the following JSON object in the `bindings` array of *function.json*:
```json
{
If you're using scripting languages (for example, *.csx* or *.js* files) for dev
> [!NOTE]
> In most cases, we recommend that you omit the optional properties and rely on the default behavior.
-### Entity client usage
-
-In .NET functions, you typically bind to `IDurableEntityClient`, which gives you full access to all client APIs supported by Durable Entities. You can also bind to the `IDurableOrchestrationClient` interface, which provides access to client APIs for both entities and orchestrations. APIs on the client object include:
-
-* **ReadEntityStateAsync\<T>**: reads the state of an entity. It returns a response that indicates whether the target entity exists, and if so, what its state is.
-* **SignalEntityAsync**: sends a one-way message to an entity, and waits for it to be enqueued.
-* **ListEntitiesAsync**: queries for the state of multiple entities. Entities can be queried by *name* and *last operation time*.
-
-There is no need to create the target entity before sending a signal - the entity state can be created from within the entity function that handles the signal.
-
-> [!NOTE]
-> It's important to understand that the "signals" sent from the client are simply enqueued, to be processed asynchronously at a later time. In particular, the `SignalEntityAsync` usually returns before the entity even starts the operation, and it is not possible to get back the return value or observe exceptions. If stronger guarantees are required (e.g. for workflows), *orchestrator functions* should be used, which can wait for entity operations to complete, and can process return values and observe exceptions.
-
-### Example: client signals entity directly - C#
-
-Here is an example queue-triggered function that invokes a "Counter" entity.
-
-```csharp
-[FunctionName("AddFromQueue")]
-public static Task Run(
- [QueueTrigger("durable-function-trigger")] string input,
- [DurableClient] IDurableEntityClient client)
-{
- // Entity operation input comes from the queue message content.
- var entityId = new EntityId(nameof(Counter), "myCounter");
- int amount = int.Parse(input);
- return client.SignalEntityAsync(entityId, "Add", amount);
-}
-```
-
-### Example: client signals entity via interface - C#
-
-Where possible, we recommend [accessing entities through interfaces](durable-functions-dotnet-entities.md#accessing-entities-through-interfaces) because it provides more type checking. For example, suppose the `Counter` entity mentioned earlier implemented an `ICounter` interface, defined as follows:
-
-```csharp
-public interface ICounter
-{
- void Add(int amount);
- void Reset();
- Task<int> Get();
-}
-
-public class Counter : ICounter
-{
- // ...
-}
-```
-
-Client code can then use `SignalEntityAsync<ICounter>` to generate a type-safe proxy:
-
-```csharp
-[FunctionName("UserDeleteAvailable")]
-public static async Task AddValueClient(
- [QueueTrigger("my-queue")] string message,
- [DurableClient] IDurableEntityClient client)
-{
- var target = new EntityId(nameof(Counter), "myCounter");
- int amount = int.Parse(message);
- await client.SignalEntityAsync<ICounter>(target, proxy => proxy.Add(amount));
-}
-```
-
-The `proxy` parameter is a dynamically generated instance of `ICounter`, which internally translates the call to `Add` into the equivalent (untyped) call to `SignalEntityAsync`.
-
-> [!NOTE]
-> The `SignalEntityAsync` APIs represent one-way operations. If an entity interfaces returns `Task<T>`, the value of the `T` parameter will always be null or `default`.
-
-In particular, it does not make sense to signal the `Get` operation, as no value is returned. Instead, clients can use either `ReadStateAsync` to access the counter state directly, or can start an orchestrator function that calls the `Get` operation.
-
-### Example: client signals entity - JavaScript
-
-Here is an example queue-triggered function that signals a "Counter" entity in JavaScript.
-
-**function.json**
-```json
-{
- "bindings": [
- {
- "name": "input",
- "type": "queueTrigger",
- "queueName": "durable-entity-trigger",
- "direction": "in",
- },
- {
- "name": "starter",
- "type": "durableClient",
- "direction": "in"
- }
- ],
- "disabled": false
- }
-```
-
-**index.js**
-```javascript
-const df = require("durable-functions");
-
-module.exports = async function (context) {
- const client = df.getClient(context);
- const entityId = new df.EntityId("Counter", "myCounter");
- await client.signalEntity(entityId, "add", 1);
-};
-```
-
-> [!NOTE]
-> Durable entities are available in JavaScript starting with version **1.3.0** of the `durable-functions` npm package.
+For more information and examples on interacting with entities as a client, see the [Durable Entities](durable-functions-entities.md#access-entities) documentation.
<a name="host-json"></a>

## host.json settings
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
public static async Task Run(
```

> [!NOTE]
-> Dynamically generated proxies are also available in .NET for signaling entities in a type-safe way. And in addition to signaling, clients can also query for the state of an entity function using [type-safe methods](durable-functions-bindings.md#entity-client-usage) on the orchestration client binding.
+> Dynamically generated proxies are also available in .NET for signaling entities in a type-safe way. And in addition to signaling, clients can also query for the state of an entity function using [type-safe methods](durable-functions-dotnet-entities.md#accessing-entities-through-interfaces) on the orchestration client binding.
# [JavaScript](#tab/javascript)
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
+
+ Title: Access on-premises SQL Server from Data Factory Managed VNET using Private Endpoint
+description: This tutorial provides steps for using the Azure portal to set up a Private Link Service and access an on-premises SQL Server from a Managed VNET using Private Endpoint.
++++ Last updated : 05/06/2021++
+# Tutorial: How to access on-premises SQL Server from Data Factory Managed VNET using Private Endpoint
+
+This tutorial provides steps for using the Azure portal to set up a Private Link Service and access an on-premises SQL Server from a Managed VNET using Private Endpoint.
+++
+## Prerequisites
+
+* **Azure subscription**. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+* **Virtual Network**. If you don't have a Virtual Network, create one by following [Create Virtual Network](https://docs.microsoft.com/azure/virtual-network/quick-create-portal).
+* **Virtual network to on-premises network**. Create a connection between the virtual network and your on-premises network by using either [ExpressRoute](https://docs.microsoft.com/azure/expressroute/expressroute-howto-linkvnet-portal-resource-manager?toc=/azure/virtual-network/toc.json) or [VPN](https://docs.microsoft.com/azure/vpn-gateway/tutorial-site-to-site-portal?toc=/azure/virtual-network/toc.json).
+* **Data Factory with Managed VNET enabled**. If you don't have a Data Factory, or if Managed VNET is not enabled, create one by following [Create Data Factory with Managed VNET](https://docs.microsoft.com/azure/data-factory/tutorial-copy-data-portal-private).
+
+## Create subnets for resources
+
+**Use the portal to create subnets in your virtual network.**
+
+| Subnet | Description |
+|: |: |
+|be-subnet |subnet for backend servers|
+|fe-subnet |subnet for standard internal load balancer|
+|pls-subnet|subnet for Private Link Service|
++
+## Create a standard load balancer
+
+Use the portal to create a standard internal load balancer.
+
+1. On the top left-hand side of the screen, select **Create a resource > Networking > Load Balancer**.
+2. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+
+ | Setting | Value |
+ |: |: |
+ |Subscription|Select your subscription.|
+ |Resource group|Select your resource group.|
+ |Name|Enter **myLoadBalancer**.|
+ |Region|Select **East US**.|
+ |Type|Select **Internal**.|
+ |SKU|Select **Standard**.|
+ |Virtual network|Select your virtual network.|
+ |Subnet|Select **fe-subnet** created in the previous step.|
+ |IP address assignment|Select **Dynamic**.|
+ |Availability zone|Select **Zone-redundant**.|
+
+3. Accept the defaults for the remaining settings, and then select **Review + create**.
+4. In the **Review + create** tab, select **Create**.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/create-load-balancer.png" alt-text="Screenshot that shows the step to create standard load balancer.":::
+
+## Create load balancer resources
+
+### Create a backend pool
+
+A backend address pool contains the IP addresses of the virtual machine network interfaces (NICs) connected to the load balancer.
+
+Create the backend address pool **myBackendPool** to include virtual machines for load-balancing internet traffic.
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+2. Under **Settings**, select **Backend pools**, then select **Add**.
+3. On the **Add a backend pool** page, enter **myBackendPool** as the name for your backend pool, and then select **Add**.
+
+### Create a health probe
+
+The load balancer monitors the status of your app with a health probe.
+
+The health probe adds or removes VMs from the load balancer based on their response to health checks.
+
+Create a health probe named **myHealthProbe** to monitor the health of the VMs.
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+2. Under **Settings**, select **Health probes**, then select **Add**.
+
+ | Setting | Value |
+ |: |: |
+ |Name|Enter **myHealthProbe**.|
+ |Protocol|Select **TCP**.|
+ |Port|Enter **22**.|
+ |Interval|Enter **15** for the number of seconds between probe attempts.|
+ |Unhealthy threshold|Select **2** for the number of consecutive probe failures that must occur before a VM is considered unhealthy.|
+
+3. Leave the rest of the defaults and select **OK**.
+
+### Create a load balancer rule
+
+A load balancer rule is used to define how traffic is distributed to the VMs. You define
+the frontend IP configuration for the incoming traffic and the backend IP pool to receive
+the traffic. The source and destination port are defined in the rule.
+
+In this section, you'll create a load balancer rule:
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+2. Under **Settings**, select **Load-balancing rules**, then select **Add**.
+3. Use these values to configure the load-balancing rule:
+
+ | Setting | Value |
+ |: |: |
+ |Name|Enter **myRule**.|
+ |IP Version|Select **IPv4**.|
+ |Frontend IP address|Select **LoadBalancerFrontEnd**.|
+ |Protocol|Select **TCP**.|
+ |Port|Enter **1433**.|
+ |Backend port|Enter **1433**.|
+ |Backend pool|Select **myBackendPool**.|
+ |Health probe|Select **myHealthProbe**.|
+ |Idle timeout (minutes)|Move the slider to **15** minutes.|
+ |TCP reset|Select **Disabled**.|
+
+4. Leave the rest of the defaults and then select **OK**.
+
+## Create a private link service
+
+In this section, you'll create a Private Link service behind a standard load balancer.
+
+1. On the upper-left part of the page in the Azure portal, select **Create a resource**.
+2. Search for **Private Link** in the **Search the Marketplace** box.
+3. Select **Create**.
+4. In **Overview** under **Private Link Center**, select the blue **Create private link service** button.
+5. In the **Basics** tab under **Create private link service**, enter, or select the following
+information:
+
+ |Setting |Value |
+ ||--|
+ |**Project details**||
+ |Subscription |Select your subscription.|
+ |Resource Group |Select your resource group.|
+ |**Instance details**||
+ |Name |Enter **myPrivateLinkService**.|
+ |Region |Select **East US**.|
+
+6. Select the **Outbound settings** tab or select **Next: Outbound settings** at the
+bottom of the page.
+7. In the **Outbound settings** tab, enter or select the following information:
+
+ | Setting | Value |
+ |: |: |
+ |Load balancer|Select **myLoadBalancer**.|
+ |Load balancer frontend IP address|Select **LoadBalancerFrontEnd**.|
+ |Source NAT subnet|Select **pls-subnet**.|
+ |Enable TCP proxy V2|Leave the default of **No**.|
+ |**Private IP address settings**||
+ |Leave the default settings.||
+
+8. Select the **Access security** tab or select **Next: Access security** at the bottom of
+the page.
+9. Leave the default of **Role-based access control only** in the **Access security** tab.
+10. Select the **Tags** tab or select **Next: Tags** at the bottom of the page.
+11. Select the **Review + create** tab or select **Next: Review + create** at the bottom of
+the page.
+12. Select **Create** in the **Review + create** tab.
++
+## Create backend servers
+
+1. On the upper-left side of the portal, select **Create a resource > Compute > Virtual machine**.
+2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+
+ |Setting |Value|
+ ||--|
+ |**Project details**||
+ |Subscription |Select your Azure subscription.|
+ |Resource Group |Select your resource group.|
+ |**Instance details**||
+ |Virtual machine name |Enter **myVM1**.|
+ |Region |Select **East US**.|
+ |Availability Options |Select **Availability zones**.|
+ |Availability zone |Select **1**.|
+ |Image |Select **Ubuntu Server 18.04 LTS - Gen1**.|
+ |Azure Spot instance |Select **No**.|
+ |Size |Choose VM size or take default setting.|
+ |**Administrator account**||
+ |Username |Enter a username.|
+ |SSH public key source |Generate new key pair.|
+ |Key pair name |mySSHKey.|
+ |**Inbound port rules**||
+ |Public inbound ports |None|
+
+3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+4. In the Networking tab, select or enter:
+
+ | Setting |Value|
+ ||--|
+ |**Network interface**||
+ |Virtual network |Select your virtual network.|
+ |Subnet |**be-subnet**.|
+ |Public IP |Select **None**.|
+ |NIC network security group |Select **None**.|
+ |**Load balancing**||
+ |Place this virtual machine behind an existing load balancing solution?|Select **Yes**.|
+ |**Load balancing settings**||
+ |Load balancing options |Select **Azure load balancing**.|
+ |Select a load balancer |Select **myLoadBalancer**.|
+ |Select a backend pool |Select **myBackendPool**.|
+
+5. Select **Review + create**.
+6. Review the settings, and then select **Create**.
+7. You can repeat steps 1 to 6 to add more than one backend server VM for high availability (HA).
+
+## Create a forwarding rule to the endpoint
+
+1. Sign in and copy the [ip_fwd.sh](https://github.com/sajitsasi/az-ip-fwd/blob/main/ip_fwd.sh) script to your backend server VMs.
+2. Run the script with the following options:<br/>
+ **sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433**<br/>
+ <FQDN/IP> is your target SQL Server IP.<br/>
+ >[!Note]
+ >FQDN doesn't work for an on-premises SQL Server unless you add a record in an Azure DNS zone.
+3. Run the following command and check the iptables in your backend server VMs. You should see one record in your iptables with your target IP.<br/>
+ **sudo iptables -t nat -v -L PREROUTING -n --line-number**
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/command-record-1.png" alt-text="Screenshot that shows the command record.":::
+
+ >[!Note]
+ > If you have more than one SQL Server or other data sources, you need to define multiple load balancer rules and IP table records with different ports. Otherwise, there will be a conflict. For example,<br/>
+ >
+ >| |Port in load balancer rule|Backend port in load balance rule|Command run in backend server VM|
+ >|||--||
+ >|**SQL Server 1**|1433 |1433 |sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433|
+ >|**SQL Server 2**|1434 |1434 |sudo ./ip_fwd.sh -i eth0 -f 1434 -a <FQDN/IP> -b 1433|
+
+## Create a Private Endpoint to Private Link Service
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select your data factory from the resources list.
+2. Select **Author & Monitor** to launch the Data Factory UI in a separate tab.
+3. Go to the **Manage** tab and then go to the **Managed private endpoints** section.
+4. Select + **New** under **Managed private endpoints**.
+5. Select the **Private Link Service** tile from the list and select **Continue**.
+6. Enter the name of the private endpoint and select **myPrivateLinkService** in the private link service list.
+7. Add the FQDN of your target on-premises SQL Server and the NAT IPs of your Private Link service.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/link-service-nat-ip.png" alt-text="Screenshot that shows the NAT IP in the linked service." lightbox="./media/tutorial-managed-virtual-network/link-service-nat-ip-expanded.png":::
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint.png" alt-text="Screenshot that shows the private endpoint settings.":::
+
+8. Create private endpoint.
+
+## Create a linked service and test the connection
+
+1. Go to the **Manage** tab and then go to the **Linked services** section.
+2. Select **+ New** under **Linked services**.
+3. Select the **SQL Server** tile from the list and select **Continue**.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-1.png" alt-text="Screenshot that shows the linked service creation page.":::
+
+4. Enable **Interactive Authoring**.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-2.png" alt-text="Screenshot that shows how to enable Interactive Authoring.":::
+
+5. Enter the **FQDN** of your on-premises SQL Server, along with the **user name** and **password**.
+6. Then select **Test connection**.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-3.png" alt-text="Screenshot that shows the SQL server linked service creation page.":::
+
+## Troubleshooting
+
+Go to the backend server VM and confirm that you can reach the SQL Server by using telnet: **telnet <FQDN> 1433**.
+
+## Next steps
+
+Advance to the following tutorial to learn about accessing Microsoft Azure SQL Managed Instance from Data Factory Managed VNET using Private Endpoint:
+
+> [!div class="nextstepaction"]
+> [Access SQL Managed Instance from Data Factory Managed VNET](tutorial-managed-virtual-network-sql-managed-instance.md)
data-factory Tutorial Managed Virtual Network Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md
+
+ Title: Access Microsoft Azure SQL Managed Instance from Data Factory Managed VNET using Private Endpoint
+description: This tutorial provides steps for using the Azure portal to set up a Private Link Service and access SQL Managed Instance from a Managed VNET using Private Endpoint.
++++ Last updated : 05/06/2021++
+# Tutorial: How to access SQL Managed Instance from Data Factory Managed VNET using Private Endpoint
+
+This tutorial provides steps for using the Azure portal to set up a Private Link Service and access SQL Managed Instance from a Managed VNET using Private Endpoint.
++
+## Prerequisites
+
+* **Azure subscription**. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+* **Virtual Network**. If you don't have a Virtual Network, create one by following [Create Virtual Network](https://docs.microsoft.com/azure/virtual-network/quick-create-portal).
+* **Virtual network to on-premises network**. Create a connection between the virtual network and your on-premises network by using either [ExpressRoute](https://docs.microsoft.com/azure/expressroute/expressroute-howto-linkvnet-portal-resource-manager?toc=/azure/virtual-network/toc.json) or [VPN](https://docs.microsoft.com/azure/vpn-gateway/tutorial-site-to-site-portal?toc=/azure/virtual-network/toc.json).
+* **Data Factory with Managed VNET enabled**. If you don't have a Data Factory, or if Managed VNET is not enabled, create one by following [Create Data Factory with Managed VNET](https://docs.microsoft.com/azure/data-factory/tutorial-copy-data-portal-private).
+
+## Create subnets for resources
+
+**Use the portal to create subnets in your virtual network.**
+
+| Subnet | Description |
+|: |: |
+|be-subnet |subnet for backend servers|
+|fe-subnet |subnet for standard internal load balancer|
+|pls-subnet|subnet for Private Link Service|
++
+## Create a standard load balancer
+
+Use the portal to create a standard internal load balancer.
+
+1. On the top left-hand side of the screen, select **Create a resource > Networking > Load Balancer**.
+2. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+
+ | Setting | Value |
+ |: |: |
+ |Subscription|Select your subscription.|
+ |Resource group|Select your resource group.|
+ |Name|Enter **myLoadBalancer**.|
+ |Region|Select **East US**.|
+ |Type|Select **Internal**.|
+ |SKU|Select **Standard**.|
+ |Virtual network|Select your virtual network.|
+ |Subnet|Select **fe-subnet** created in the previous step.|
+ |IP address assignment|Select **Dynamic**.|
+ |Availability zone|Select **Zone-redundant**.|
+
+3. Accept the defaults for the remaining settings, and then select **Review + create**.
+4. In the **Review + create** tab, select **Create**.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/create-load-balancer.png" alt-text="Screenshot that shows the step to create standard load balancer.":::
+
+## Create load balancer resources
+
+### Create a backend pool
+
+A backend address pool contains the IP addresses of the virtual machine network interfaces (NICs) connected to the load balancer.
+
+Create the backend address pool **myBackendPool** to include virtual machines for load-balancing internet traffic.
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+2. Under **Settings**, select **Backend pools**, then select **Add**.
+3. On the **Add a backend pool** page, enter **myBackendPool** as the name for your backend pool, and then select **Add**.
+
+### Create a health probe
+
+The load balancer monitors the status of your app with a health probe.
+
+The health probe adds or removes VMs from the load balancer based on their response to health checks.
+
+Create a health probe named **myHealthProbe** to monitor the health of the VMs.
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+2. Under **Settings**, select **Health probes**, then select **Add**.
+
+ | Setting | Value |
+ |: |: |
+ |Name|Enter **myHealthProbe**.|
+ |Protocol|Select **TCP**.|
+ |Port|Enter **22**.|
+ |Interval|Enter **15** for the number of seconds between probe attempts.|
+ |Unhealthy threshold|Select **2** for the number of consecutive probe failures that must occur before a VM is considered unhealthy.|
+
+3. Leave the rest of the defaults and select **OK**.
+
+### Create a load balancer rule
+
+A load balancer rule is used to define how traffic is distributed to the VMs. You define
+the frontend IP configuration for the incoming traffic and the backend IP pool to receive
+the traffic. The source and destination port are defined in the rule.
+
+In this section, you'll create a load balancer rule:
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer** from the resources list.
+2. Under **Settings**, select **Load-balancing rules**, then select **Add**.
+3. Use these values to configure the load-balancing rule:
+
+ |Setting |Value |
+ |: |: |
+ |Name|Enter **myRule**.|
+ |IP Version|Select **IPv4**.|
+ |Frontend IP address|Select **LoadBalancerFrontEnd**.|
+ |Protocol|Select **TCP**.|
+ |Port|Enter **1433**.|
+ |Backend port|Enter **1433**.|
+ |Backend pool|Select **myBackendPool**.|
+ |Health probe|Select **myHealthProbe**.|
+ |Idle timeout (minutes)|Move the slider to **15** minutes.|
+ |TCP reset|Select **Disabled**.|
+
+4. Leave the rest of the defaults and then select **OK**.
+
+## Create a private link service
+
+In this section, you'll create a Private Link service behind a standard load balancer.
+
+1. On the upper-left part of the page in the Azure portal, select **Create a resource**.
+2. Search for **Private Link** in the **Search the Marketplace** box.
+3. Select **Create**.
+4. In **Overview** under **Private Link Center**, select the blue **Create private link service** button.
+5. In the **Basics** tab under **Create private link service**, enter, or select the following
+information:
+
+ |Setting |Value|
+ ||--|
+ |**Project details**||
+ |Subscription |Select your subscription.|
+ |Resource Group |Select your resource group.|
+ |**Instance details**||
+ |Name |Enter **myPrivateLinkService**.|
+ |Region |Select **East US**.|
+
+6. Select the **Outbound settings** tab or select **Next: Outbound settings** at the
+bottom of the page.
+7. In the **Outbound settings** tab, enter or select the following information:
+
+ | Setting | Value |
+ |: |: |
+ |Load balancer|Select **myLoadBalancer**.|
+ |Load balancer frontend IP address|Select **LoadBalancerFrontEnd**.|
+ |Source NAT subnet|Select **pls-subnet**.|
+ |Enable TCP proxy V2|Leave the default of **No**.|
+ |**Private IP address settings**||
+ |Leave the default settings.||
+
+8. Select the **Access security** tab or select **Next: Access security** at the bottom of
+the page.
+9. Leave the default of **Role-based access control only** in the **Access security** tab.
+10. Select the **Tags** tab or select **Next: Tags** at the bottom of the page.
+11. Select the **Review + create** tab or select **Next: Review + create** at the bottom of
+the page.
+12. Select **Create** in the **Review + create** tab.
++
+## Create backend servers
+
+1. On the upper-left side of the portal, select **Create a resource > Compute > Virtual machine**.
+2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+
+ |Setting |Value|
+ ||--|
+ |**Project details**||
+ |Subscription |Select your Azure subscription.|
+ |Resource Group |Select your resource group.|
+ |**Instance details**||
+ |Virtual machine name |Enter **myVM1**.|
+ |Region |Select **East US**.|
+ |Availability Options |Select **Availability zones**.|
+ |Availability zone |Select **1**.|
+ |Image |Select **Ubuntu Server 18.04 LTS - Gen1**.|
+ |Azure Spot instance |Select **No**.|
+ |Size |Choose VM size or take default setting.|
+ |**Administrator account**||
+ |Username |Enter a username.|
+ |SSH public key source |Generate new key pair.|
+ |Key pair name |mySSHKey.|
+ |**Inbound port rules**||
+ |Public inbound ports |None.|
+
+3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+4. In the Networking tab, select or enter:
+
+ | Setting |Value|
+ ||--|
+ |**Network interface**||
+ |Virtual network |Select your virtual network.|
+ |Subnet |**be-subnet**.|
+ |Public IP |Select **None**.|
+ |NIC network security group |Select **None**.|
+ |**Load balancing**||
+ |Place this virtual machine behind an existing load balancing solution?|Select **Yes**.|
+ |**Load balancing settings**||
+ |Load balancing options |Select **Azure load balancing**.|
+ |Select a load balancer |Select **myLoadBalancer**.|
+ |Select a backend pool |Select **myBackendPool**.|
+
+5. Select **Review + create**.
+6. Review the settings, and then select **Create**.
+7. You can repeat steps 1 to 6 to add more than one backend server VM for high availability (HA).
+
+## Create a forwarding rule to the endpoint
+
+1. Sign in and copy the [ip_fwd.sh](https://github.com/sajitsasi/az-ip-fwd/blob/main/ip_fwd.sh) script to your backend server VMs.
+2. Run the script with the following options:<br/>
+ **sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433**<br/>
+ <FQDN/IP> is the host of your SQL Managed Instance.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/sql-mi-host.png" alt-text="Screenshot that shows SQL MI host." lightbox="./media/tutorial-managed-virtual-network/sql-mi-host-expanded.png":::
+
+3. Run the following command and check the iptables in your backend server VMs. You should see one record in your iptables with your target IP.<br/>
+ **sudo iptables -t nat -v -L PREROUTING -n --line-number**
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/command-record-2.png" alt-text="Screenshot that shows the command record.":::
+
+ >[!Note]
+ > If you have more than one SQL MI or other data sources, you need to define multiple load balancer rules and IP table records with different ports. Otherwise, there will be a conflict. For example,<br/>
+ >
+ >| |Port in load balancer rule|Backend port in load balance rule|Command run in backend server VM|
+ >|||--||
+ >|**SQL MI 1**|1433 |1433 |sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433|
+ >|**SQL MI 2**|1434 |1434 |sudo ./ip_fwd.sh -i eth0 -f 1434 -a <FQDN/IP> -b 1433|
+
+## Create a Private Endpoint to Private Link Service
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then select your data factory from the resources list.
+2. Select **Author & Monitor** to launch the Data Factory UI in a separate tab.
+3. Go to the **Manage** tab and then go to the **Managed private endpoints** section.
+4. Select + **New** under **Managed private endpoints**.
+5. Select the **Private Link Service** tile from the list and select **Continue**.
+6. Enter the name of the private endpoint and select **myPrivateLinkService** in the private link service list.
+7. Add the FQDN of your target SQL Managed Instance and the NAT IPs of your Private Link service.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/sql-mi-host.png" alt-text="Screenshot that shows SQL MI host." lightbox="./media/tutorial-managed-virtual-network/sql-mi-host-expanded.png":::
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/link-service-nat-ip.png" alt-text="Screenshot that shows the NAT IP in the linked service." lightbox="./media/tutorial-managed-virtual-network/link-service-nat-ip-expanded.png":::
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-2.png" alt-text="Screenshot that shows the private endpoint settings.":::
+
+8. Create private endpoint.
+
+## Create a linked service and test the connection
+
+1. Go to the **Manage** tab and then go to the **Linked services** section.
+2. Select **+ New** under **Linked services**.
+3. Select the **Azure SQL Database Managed Instance** tile from the list and select **Continue**.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-mi-1.png" alt-text="Screenshot that shows the linked service creation page.":::
+
+4. Enable **Interactive Authoring**.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-mi-2.png" alt-text="Screenshot that shows how to enable Interactive Authoring.":::
+
+5. Enter the **Host** of your SQL Managed Instance, along with the **user name** and **password**.
+
+ >[!Note]
+ >Enter the SQL Managed Instance host manually. The host offered in the selection list is not a fully qualified domain name.
+
+6. Then select **Test connection**.
+
+ :::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-mi-3.png" alt-text="Screenshot that shows the SQL MI linked service creation page.":::
+
+## Next steps
+
+Advance to the following tutorial to learn about accessing on-premises SQL Server from Data Factory Managed VNET using Private Endpoint:
+
+> [!div class="nextstepaction"]
+> [Access on premises SQL Server from Data Factory Managed VNET](tutorial-managed-virtual-network-on-premise-sql-server.md)
ddos-protection Ddos Rapid Response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-rapid-response.md
# Azure DDoS Rapid Response
-During an active access, Azure DDoS Protection Standard customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis.
+During an active attack, Azure DDoS Protection Standard customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis.
## Prerequisites
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/getting-started.md
This article provides an overview of the steps you'll take to set up Azure Defen
## Permission requirements
+### For sensors and on-premises management consoles
+ Some of the setup steps require specific user permissions. Administrative user permissions are required to activate the sensor and management console, upload SSL/TLS certificates, and generate new passwords.
+### For the Defender for IoT portal
The following table describes user access permissions to Azure Defender for IoT portal tools:
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-sensor.md
Your sensor was onboarded to Azure Defender for IoT in a specific management mod
| Mode type | Description |
|--|--|
-| **Cloud connected mode** | Information that the sensor detects is displayed in the sensor console. Alert information is also delivered through the IoT hub and can be shared with other Azure services, such as Azure Sentinel. |
+| **Cloud connected mode** | Information that the sensor detects is displayed in the sensor console. Alert information is also delivered through the IoT hub and can be shared with other Azure services, such as Azure Sentinel. You can also enable automatic threat intelligence updates. |
| **Locally connected mode** | Information that the sensor detects is displayed in the sensor console. Detection information is also shared with the on-premises management console, if the sensor is connected to it. | A locally connected, or cloud-connected activation file was generated and downloaded for this sensor during onboarding. The activation file contains instructions for the management mode of the sensor. *A unique activation file should be uploaded to each sensor you deploy.* The first time you sign in, you need to upload the relevant activation file for this sensor.
You access console tools from the side menu.
## See also
+[Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
+ [Onboard a sensor](getting-started.md#onboard-a-sensor) [Manage sensor activation files](how-to-manage-individual-sensors.md#manage-sensor-activation-files)
defender-for-iot How To Create Risk Assessment Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-risk-assessment-reports.md
Risk Assessment scores are based on information learned from packet inspection,
**Vulnerable Devices** are devices with a security score below 70%.
+### About backup and anti-virus servers
+
+The risk assessment score may be negatively affected if you don't define backup and anti-virus server addresses in your sensor. By default, these addresses aren't defined; adding them improves your score.
+The Risk Assessment report cover page will indicate if backup servers and anti-virus servers are not defined.
+
+**To add servers:**
+
+1. Select **System Settings** and then select **System Properties**.
+1. Select **Vulnerability Assessment** and add the addresses to the **backup_servers** and **AV_addresses** fields. Separate multiple addresses with commas.
+1. Select **Save**.
## Create risk assessment reports Create a PDF risk assessment report. The report name is automatically generated as risk-assessment-report-1.pdf. The number is updated for each new report you create. The time and day of creation are displayed.
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-on-the-cloud.md
You onboard a sensor by registering it with Azure Defender for IoT and downloadi
1. Choose a sensor connection mode by using the **Cloud connected** toggle. If the toggle is on, the sensor is cloud connected. If the toggle is off, the sensor is locally managed. - **Cloud-connected sensors**: Information that the sensor detects is displayed in the sensor console. Alert information is delivered through an IoT hub and can be shared with other Azure services, such as Azure Sentinel. In addition, threat intelligence packages can be pushed from the Azure Defender for IoT portal to sensors. Conversely, when the sensor isn't cloud connected, you must download threat intelligence packages and then upload them to your enterprise sensors. To allow Defender for IoT to push packages to sensors, enable the **Automatic Threat Intelligence Updates** toggle. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
+
Choose an IoT hub to serve as a gateway between this sensor and the Azure Defender for IoT portal. Define a site name and zone. You can also add descriptive tags. The site name, zone, and tags are descriptive entries on the [Sites and Sensors page](#view-onboarded-sensors). - **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-set-up-snmp-mib-monitoring.md
Before you begin configuring SNMP monitoring, you need to open the port UDP 161
| Management console and sensor | OID | Format | Description | |--|--|--|--|
-| Appliance name | 1.3.6.1.2.1.1.5.0 | DISPLAYSTRING | Appliance name for the on-premises management console |
-| Vendor | 1.3.6.1.2.1.1.4.0 | DISPLAYSTRING | Microsoft Support (support.microsoft.com) |
-| Platform | 1.3.6.1.2.1.1.1.0 | DISPLAYSTRING | Sensor or on-premises management console |
-| Serial number | 1.3.6.1.4.1.9.9.53313.1 | DISPLAYSTRING | String that the license uses |
-| Software version | 1.3.6.1.4.1.9.9.53313.2 | DISPLAYSTRING | Xsense full-version string and management full-version string |
-| CPU usage | 1.3.6.1.4.1.9.9.53313.3.1 | GAUGE32 | Indication for zero to 100 |
-| CPU temperature | 1.3.6.1.4.1.9.9.53313.3.2 | DISPLAYSTRING | Celsius indication for zero to 100 based on Linux input |
-| Memory usage | 1.3.6.1.4.1.9.9.53313.3.3 | GAUGE32 | Indication for zero to 100 |
-| Disk Usage | 1.3.6.1.4.1.9.9.53313.3.4 | GAUGE32 | Indication for zero to 100 |
-| Service Status | 1.3.6.1.4.1.9.9.53313.5 | DISPLAYSTRING | Online or offline if one of the four crucial components is down |
-| Bandwidth | Out of scope for 2.4 | | The bandwidth received on each monitor interface in Xsense |
+| Appliance name | 1.3.6.1.2.1.1.5.0 | STRING | Appliance name for the on-premises management console |
+| Vendor | 1.3.6.1.2.1.1.4.0 | STRING | Microsoft Support (support.microsoft.com) |
+| Platform | 1.3.6.1.2.1.1.1.0 | STRING | Sensor or on-premises management console |
+| Serial number | 1.3.6.1.4.1.53313.1 | STRING | String that the license uses |
+| Software version | 1.3.6.1.4.1.53313.2 | STRING | Xsense full-version string and management full-version string |
+| CPU usage | 1.3.6.1.4.1.53313.3.1 | GAUGE32 | Indication for zero to 100 |
+| CPU temperature | 1.3.6.1.4.1.53313.3.2 | STRING | Celsius indication for zero to 100 based on Linux input. "No sensors found" will be returned from any machine that has no actual physical temperature sensor (for example VMs)|
+| Memory usage | 1.3.6.1.4.1.53313.3.3 | GAUGE32 | Indication for zero to 100 |
+| Disk Usage | 1.3.6.1.4.1.53313.3.4 | GAUGE32 | Indication for zero to 100 |
+| Service Status | 1.3.6.1.4.1.53313.5 | STRING | Online or offline if one of the four crucial components is down |
+| Locally/cloud connected | 1.3.6.1.4.1.53313.6 | STRING | Indicates whether the sensor is connected to the Defender for IoT portal or managed on-premises only |
+| License status | 1.3.6.1.4.1.53313.5 | STRING | Indicates whether the activation file has expired |
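+As an illustration only (not part of the original table), you can poll any of these OIDs with a standard SNMP client such as net-snmp's `snmpget`. The following minimal sketch assumes SNMP v2c, a community string you've already configured for the appliance, and the appliance IP address:
+
+```bash
+# Query the appliance name OID (1.3.6.1.2.1.1.5.0) from the monitoring station over UDP 161.
+# <community-string> and <appliance-ip> are placeholders for your environment.
+snmpget -v2c -c <community-string> <appliance-ip> 1.3.6.1.2.1.1.5.0
+```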
- Non-existing keys respond with null, HTTP 200, based on [Stack Overflow](https://stackoverflow.com/questions/51419026/querying-for-non-existing-record-returns-null-with-http-200).
dns Dns Protect Private Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-protect-private-zones-recordsets.md
Title: Protecting private DNS Zones and Records - Azure DNS description: In this learning path, get started protecting private DNS zones and record sets in Microsoft Azure DNS. -+ Previously updated : 02/18/2020- Last updated : 05/07/2021+ # How to protect private DNS zones and records
The simplest way to assign Azure RBAC permissions is [via the Azure portal](../r
Open **Access control (IAM)** for the resource group, select **Add**, then select the **Private DNS Zone Contributor** role. Select the required users or groups to grant permissions.
-![Resource group level Azure RBAC via the Azure portal](./media/dns-protect-private-zones-recordsets/rbac1.png)
Permissions can also be [granted using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md):
For example, the resource group *myPrivateDNS* contains the zone *private.contos
Zone-level Azure RBAC permissions can be granted via the Azure portal. Open **Access control (IAM)** for the zone, select **Add**, then select the **Private DNS Zone Contributor** role. Select the required users or groups to grant permissions.
-![DNS Zone level Azure RBAC via the Azure portal](./media/dns-protect-private-zones-recordsets/rbac2.png)
Permissions can also be [granted using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md):
Permissions are applied at the record set level. The user is granted control to
Record-set level Azure RBAC permissions can be configured via the Azure portal, using the **Access Control (IAM)** button in the record set page:
-![Screenshot shows the Access Control (I A M) button.](./media/dns-protect-private-zones-recordsets/rbac3.png)
-![Screenshot shows Access Control with Add role assignment selected.](./media/dns-protect-private-zones-recordsets/rbac4.png)
Record-set level Azure RBAC permissions can also be [granted using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md):
To prevent changes being made, apply a ReadOnly lock to the zone. This lock prev
Zone level resource locks can be created via the Azure portal. From the DNS zone page, select **Locks**, then select **+Add**:
-![Zone level resource locks via the Azure portal](./media/dns-protect-private-zones-recordsets/locks1.png)
Zone-level resource locks can also be created via [Azure PowerShell](/powershell/module/az.resources/new-azresourcelock):
hdinsight Apache Hbase Migrate New Version New Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/apache-hbase-migrate-new-version-new-storage-account.md
+
+ Title: Migrate an HBase cluster to a new version and Storage account - Azure HDInsight
+description: Learn how to migrate an Apache HBase cluster in Azure HDInsight to a newer version with a different Azure Storage account.
+++ Last updated : 05/06/2021++
+# Migrate Apache HBase to a new version and storage account
+
+This article discusses how to update your Apache HBase cluster on Azure HDInsight to a newer version with a different Azure Storage account.
+
+This article applies only if you need to use different Storage accounts for your source and destination clusters. To upgrade versions with the same Storage account for your source and destination clusters, see [Migrate Apache HBase to a new version](apache-hbase-migrate-new-version.md).
+
+The downtime while upgrading should be only a few minutes. This downtime is caused by the steps to flush all in-memory data, and the time to configure and restart the services on the new cluster. Your results will vary, depending on the number of nodes, amount of data, and other variables.
+
+## Review Apache HBase compatibility
+
+Before upgrading Apache HBase, ensure the HBase versions on the source and destination clusters are compatible. Review the HBase version compatibility matrix and release notes in the [HBase Reference Guide](https://hbase.apache.org/book.html#upgrading) to make sure your application is compatible with the new version.
+
+Here is an example compatibility matrix. Y indicates compatibility and N indicates a potential incompatibility:
+
+| Compatibility type | Major version| Minor version | Patch |
+| | | | |
+| Client-Server wire compatibility | N | Y | Y |
+| Server-Server compatibility | N | Y | Y |
+| File format compatibility | N | Y | Y |
+| Client API compatibility | N | Y | Y |
+| Client binary compatibility | N | N | Y |
+| **Server-side limited API compatibility** | | | |
+| Stable | N | Y | Y |
+| Evolving | N | N | Y |
+| Unstable | N | N | N |
+| Dependency compatibility | N | Y | Y |
+| Operational compatibility | N | N | Y |
+
+The HBase version release notes should describe any breaking incompatibilities. Test your application in a cluster running the target version of HDInsight and HBase.
+
+For more information about HDInsight versions and compatibility, see [Azure HDInsight versions](../hdinsight-component-versioning.md).
+
+## Apache HBase cluster migration overview
+
+To upgrade and migrate your Apache HBase cluster on Azure HDInsight to a new storage account, you complete the following basic steps. For detailed instructions, see the detailed steps and commands.
+
+Prepare the source cluster:
+1. Stop data ingestion.
+1. Flush memstore data.
+1. Stop HBase from Ambari.
+1. For clusters with accelerated writes, back up the Write Ahead Log (WAL) directory.
+
+Prepare the destination cluster:
+1. Create the destination cluster.
+1. Stop HBase from Ambari.
+1. Clean Zookeeper data.
+1. Switch user to HBase.
+
+Complete the migration:
+1. Clean the destination file system, migrate the data, and remove `/hbase/hbase.id`.
+1. Clean and migrate the WAL.
+1. Start all services from the Ambari destination cluster.
+1. Verify HBase.
+1. Delete the source cluster.
+
+## Detailed migration steps and commands
+
+Use these detailed steps and commands to migrate your Apache HBase cluster with a new storage account.
+
+### Prepare the source cluster
+
+1. Stop ingestion to the source HBase cluster.
+
+1. Flush the source HBase cluster you're upgrading.
+
+ HBase writes incoming data to an in-memory store called a *memstore*. After the memstore reaches a certain size, HBase flushes it to disk for long-term storage in the cluster's storage account. Deleting the source cluster after an upgrade also deletes any data in the memstores. To retain the data, manually flush each table's memstore to disk before upgrading.
+
+ You can flush the memstore data by running the [flush_all_tables.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/flush_all_tables.sh) script from the [hbase-utils GitHub repository](https://github.com/Azure/hbase-utils/).
+
+ You can also flush the memstore data by running the following HBase shell command from inside the HDInsight cluster:
+
+ ```bash
+ hbase shell
+ flush "<table-name>"
+ ```
+
+1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the source cluster with `https://<OLDCLUSTERNAME>.azurehdinsight.net`, and stop the HBase services.
+
+1. At the confirmation prompt, select the box to turn on maintenance mode for HBase.
+
+ For more information on connecting to and using Ambari, see [Manage HDInsight clusters by using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
+
+1. If your source HBase cluster doesn't have the [Accelerated Writes](apache-hbase-accelerated-writes.md) feature, skip this step. For source HBase clusters with Accelerated Writes, back up the WAL directory under HDFS by running the following commands from an SSH session on any source cluster Zookeeper node or worker node.
+
+ ```bash
+ hdfs dfs -mkdir /hbase-wal-backup
+ hdfs dfs -cp hdfs://mycluster/hbasewal /hbase-wal-backup
+ ```
+
+### Prepare the destination cluster
+
+1. In the Azure portal, [set up a new destination HDInsight cluster](../hdinsight-hadoop-provision-linux-clusters.md) that uses a different storage account than your source cluster.
+
+1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the new cluster at `https://<NEWCLUSTERNAME>.azurehdinsight.net`, and stop the HBase services.
+
+1. Clean the Zookeeper data on the destination cluster by running the following commands in any Zookeeper node or worker node:
+
+ ```bash
+ hbase zkcli
+ rmr /hbase-unsecure
+ quit
+ ```
+
+1. Switch the user to HBase by running `sudo su hbase`.
+
+### Clean and migrate the file system and WAL
+
+Run the following commands, depending on your source HDI version and whether the source and destination clusters have Accelerated Writes. The destination cluster is always HDI version 4.0, since HDI 3.6 is in Basic support and isn't recommended for new clusters.
+
+- [The source cluster is HDI 3.6 with Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdi-36-or-hdi-40-with-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDI 3.6 without Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdi-36-without-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDI 3.6 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes](#the-source-cluster-is-hdi-36-without-accelerated-writes-and-the-destination-cluster-doesnt-have-accelerated-writes).
+- [The source cluster is HDI 4.0 with Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdi-36-or-hdi-40-with-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDI 4.0 without Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdi-40-without-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDI 4.0 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes](#the-source-cluster-is-hdi-40-without-accelerated-writes-and-the-destination-cluster-doesnt-have-accelerated-writes).
+
+The `<container-endpoint-url>` for the storage account is `https://<storageaccount>.blob.core.windows.net/<container-name>`. Pass the SAS token for the storage account at the very end of the URL.
+
+- The `<container-fullpath>` for storage type WASB is `wasbs://<container-name>@<storageaccount>.blob.core.windows.net`
+- The `<container-fullpath>` for storage type Azure Data Lake Storage Gen2 is `abfs://<container-name>@<storageaccount>.dfs.core.windows.net`.
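+For example, with a hypothetical storage account named `mystorageaccount` and a container named `hbase-src` (illustrative names only, not from this article), the placeholders expand as follows:
+
+```bash
+# <container-endpoint-url>, with the SAS token appended at the very end of the URL:
+SOURCE_CONTAINER_ENDPOINT_URL='https://mystorageaccount.blob.core.windows.net/hbase-src?<SAS-token>'
+
+# <container-fullpath> for WASB (Azure Blob storage):
+SOURCE_CONTAINER_FULLPATH='wasbs://hbase-src@mystorageaccount.blob.core.windows.net'
+
+# <container-fullpath> for Azure Data Lake Storage Gen2:
+# SOURCE_CONTAINER_FULLPATH='abfs://hbase-src@mystorageaccount.dfs.core.windows.net'
+```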
+
+#### Copy commands
+
+The HDFS copy command is `hdfs dfs <copy properties starting with -D> -cp`
+
+Use `hadoop distcp` for better performance when copying files not in a page blob: `hadoop distcp <copy properties starting with -D>`
+
+To pass the key of the storage account, use:
+- `-Dfs.azure.account.key.<storageaccount>.blob.core.windows.net='<storage account key>'`
+- `-Dfs.azure.account.keyprovider.<storageaccount>.blob.core.windows.net=org.apache.hadoop.fs.azure.SimpleKeyProvider`
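+For example, a copy of the HBase data directory with `hadoop distcp` might look like the following sketch. The storage account and container names are placeholders, and `<storage account key>` is left for you to supply:
+
+```bash
+# Hypothetical example: copy /hbase from a source WASB container to the
+# destination cluster's default file system, authenticating with the account key.
+hadoop distcp \
+  -Dfs.azure.account.key.mystorageaccount.blob.core.windows.net='<storage account key>' \
+  -Dfs.azure.account.keyprovider.mystorageaccount.blob.core.windows.net=org.apache.hadoop.fs.azure.SimpleKeyProvider \
+  wasbs://hbase-src@mystorageaccount.blob.core.windows.net/hbase /
+```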
+
+You can also use [AzCopy](/azure/storage/common/storage-ref-azcopy) for better performance when copying HBase data files.
+
+1. Run the AzCopy command:
+
+ ```bash
+ azcopy cp "<source-container-endpoint-url>/hbase" "<target-container-endpoint-url>" --recursive
+ ```
+
+1. If the destination storage account is Azure Blob storage, do this step after the copy. If the destination storage account is Data Lake Storage Gen2, skip this step.
+
+ The Hadoop WASB driver uses special 0-sized blobs corresponding to every directory. AzCopy skips these files when doing the copy. Some WASB operations use these blobs, so you must create them in the destination cluster. To create the blobs, run the following Hadoop command from any node in the destination cluster:
+
+ ```bash
+ sudo -u hbase hadoop fs -chmod -R 0755 /hbase
+ ```
+
+You can download AzCopy from [Get started with AzCopy](/azure/storage/common/storage-use-azcopy-v10). For more information about using AzCopy, see [azcopy copy](/azure/storage/common/storage-ref-azcopy-copy).
+
+#### The source cluster is HDI 3.6 or HDI 4.0 with Accelerated Writes, and the destination cluster has Accelerated Writes
+
+1. To clean the file system and migrate data, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase
+ hadoop distcp <source-container-fullpath>/hbase /
+ ```
+
+1. Remove `hbase.id` by running `hdfs dfs -rm /hbase/hbase.id`
+
+1. To clean and migrate the WAL, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r hdfs://<destination-cluster>/hbasewal
+ hdfs dfs -cp <source-container-fullpath>/hbase-wal-backup/hbasewal hdfs://<destination-cluster>/hbasewal
+ ```
+
+#### The source cluster is HDI 3.6 without Accelerated Writes, and the destination cluster has Accelerated Writes
+
+1. To clean the file system and migrate data, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase
+ hdfs dfs -Dfs.azure.page.blob.dir="/hbase/WALs,/hbase/MasterProcWALs,/hbase/oldWALs,/hbase-wals" -cp <source-container-fullpath>/hbase /
+ hdfs dfs -rm -r /hbase/*WALs
+ ```
+
+1. Remove `hbase.id` by running `hdfs dfs -rm /hbase/hbase.id`
+
+1. To clean and migrate the WAL, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r hdfs://<destination-cluster>/hbasewal/*
+ hdfs dfs -Dfs.azure.page.blob.dir="/hbase/WALs,/hbase/MasterProcWALs,/hbase/oldWALs,/hbase-wals" -cp <source-container-fullpath>/hbase/*WALs hdfs://<destination-cluster>/hbasewal
+ ```
+
+#### The source cluster is HDI 3.6 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes
+
+1. To clean the file system and migrate data, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase
+ hdfs dfs -Dfs.azure.page.blob.dir="/hbase/WALs,/hbase/MasterProcWALs,/hbase/oldWALs,/hbase-wals" -cp <source-container-fullpath>/hbase /
+ hdfs dfs -rm -r /hbase/*WALs
+ ```
+
+1. Remove `hbase.id` by running `hdfs dfs -rm /hbase/hbase.id`
+
+1. To clean and migrate the WAL, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase-wals/*
+ hdfs dfs -Dfs.azure.page.blob.dir="/hbase/WALs,/hbase/MasterProcWALs,/hbase/oldWALs,/hbase-wals" -cp <source-container-fullpath>/hbase/*WALs /hbase-wals
+ ```
+
+#### The source cluster is HDI 4.0 without Accelerated Writes, and the destination cluster has Accelerated Writes
+
+1. To clean the file system and migrate data, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase
+ hadoop distcp <source-container-fullpath>/hbase /
+ ```
+
+1. Remove `hbase.id` by running `hdfs dfs -rm /hbase/hbase.id`
+
+1. To clean and migrate the WAL, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r hdfs://<destination-cluster>/hbasewal
+ hdfs dfs -Dfs.azure.page.blob.dir="/hbase-wals" -cp <source-container-fullpath>/hbase-wals hdfs://<destination-cluster>/hbasewal
+ ```
+
+#### The source cluster is HDI 4.0 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes
+
+1. To clean the file system and migrate data, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase
+ hadoop distcp <source-container-fullpath>/hbase /
+ ```
+
+1. Remove `hbase.id` by running `hdfs dfs -rm /hbase/hbase.id`
+
+1. To clean and migrate the WAL, run the following commands:
+
+ ```bash
+ hdfs dfs -rm -r /hbase-wals/*
+ hdfs dfs -Dfs.azure.page.blob.dir="/hbase-wals" -cp <source-container-fullpath>/hbase-wals /
+ ```
+
+### Complete the migration
+
+1. On the destination cluster, save your changes and restart all required services as indicated by Ambari.
+
+1. Point your application to the destination cluster.
+
+ > [!NOTE]
+ > The static DNS name for your application changes when you upgrade. Rather than hard-coding this DNS name, you can configure a CNAME in your domain name's DNS settings that points to the cluster's name. Another option is to use a configuration file for your application that you can update without redeploying.
+
+1. Start the ingestion.
+
+1. Verify HBase consistency and simple Data Definition Language (DDL) and Data Manipulation Language (DML) operations.
+
+1. If the destination cluster is satisfactory, delete the source cluster.
+
+## Next steps
+
+To learn more about [Apache HBase](https://hbase.apache.org/) and upgrading HDInsight clusters, see the following articles:
+
+- [Upgrade an HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md)
+- [Monitor and manage Azure HDInsight using the Apache Ambari Web UI](../hdinsight-hadoop-manage-ambari.md)
+- [Azure HDInsight versions](../hdinsight-component-versioning.md)
+- [Optimize Apache HBase](../optimize-hbase-ambari.md)
+
hdinsight Apache Hbase Migrate New Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/apache-hbase-migrate-new-version.md
Title: Migrate an HBase cluster to a new version - Azure HDInsight
-description: How to migrate Apache HBase clusters to a newer version in Azure HDInsight.
+ Title: Migrate an HBase cluster to a new version - Azure HDInsight
+description: Learn how to migrate Apache HBase clusters in Azure HDInsight to a newer version.
Previously updated : 01/02/2020 Last updated : 05/06/2021 # Migrate an Apache HBase cluster to a new version
-This article discusses the steps required to update your Apache HBase cluster on Azure HDInsight to a newer version.
+This article discusses how to update your Apache HBase cluster on Azure HDInsight to a newer version.
-The downtime while upgrading should be minimal, on the order of minutes. This downtime is caused by the steps to flush all in-memory data, then the time to configure and restart the services on the new cluster. Your results will vary, depending on the number of nodes, amount of data, and other variables.
+This article applies only if you use the same Azure Storage account for your source and destination clusters. To upgrade with a new or different Storage account for your destination cluster, see [Migrate Apache HBase to a new version with a new Storage account](apache-hbase-migrate-new-version-new-storage-account.md).
-## Review Apache HBase compatibility
+The downtime while upgrading should be only a few minutes. This downtime is caused by the steps to flush all in-memory data, and the time to configure and restart the services on the new cluster. Your results will vary, depending on the number of nodes, amount of data, and other variables.
-Before upgrading Apache HBase, ensure the HBase versions on the source and destination clusters are compatible. For more information, see [Apache Hadoop components and versions available with HDInsight](../hdinsight-component-versioning.md).
+## Review Apache HBase compatibility
-> [!NOTE]
-> We highly recommend that you review the version compatibility matrix in the [HBase book](https://hbase.apache.org/book.html#upgrading). Any breaking incompatibilities should be described in the HBase version release notes.
+Before upgrading Apache HBase, ensure the HBase versions on the source and destination clusters are compatible. Review the HBase version compatibility matrix and release notes in the [HBase Reference Guide](https://hbase.apache.org/book.html#upgrading) to make sure your application is compatible with the new version.
-Here is an example version compatibility matrix. Y indicates compatibility and N indicates a potential incompatibility:
+Here is an example compatibility matrix. Y indicates compatibility and N indicates a potential incompatibility:
| Compatibility type | Major version| Minor version | Patch | | | | | |
Here is an example version compatibility matrix. Y indicates compatibility and N
| Dependency compatibility | N | Y | Y | | Operational compatibility | N | N | Y |
-## Upgrade with same Apache HBase major version
-
-To upgrade your Apache HBase cluster on Azure HDInsight, complete the following steps:
-
-1. Make sure that your application is compatible with the new version, as shown in the HBase compatibility matrix and release notes. Test your application in a cluster running the target version of HDInsight and HBase.
-
-1. [Set up a new destination HDInsight cluster](../hdinsight-hadoop-provision-linux-clusters.md) using the same storage account, but with a different container name:
-
- :::image type="content" source="./media/apache-hbase-migrate-new-version/same-storage-different-container.png" alt-text="Use the same Storage account, but create a different Container" border="true":::
-
-1. Flush your source HBase cluster, which is the cluster you're upgrading. HBase writes incoming data to an in-memory store, called a _memstore_. After the memstore reaches a certain size, HBase flushes it to disk for long-term storage in the cluster's storage account. When deleting the old cluster, the memstores are recycled, potentially losing data. To manually flush the memstore for each table to disk, run the following script. The latest version of this script is on Azure's [GitHub](https://raw.githubusercontent.com/Azure/hbase-utils/master/scripts/flush_all_tables.sh).
-
- ```bash
- #!/bin/bash
-
- #-#
- # SCRIPT TO FLUSH ALL HBASE TABLES.
- #-#
-
- LIST_OF_TABLES=/tmp/tables.txt
- HBASE_SCRIPT=/tmp/hbase_script.txt
- TARGET_HOST=$1
-
- usage ()
- {
- if [[ "$1" == "-h" ]] || [[ "$1" == "--help" ]]
- then
- cat << ...
-
- Usage:
-
- $0 [hostname]
-
- Providing hostname is optional and not required when the script is executed within HDInsight cluster with access to 'hbase shell'.
-
- However hostname should be provided when executing the script as a script-action from HDInsight portal.
-
- For Example:
-
- 1. Executing script inside HDInsight cluster (where 'hbase shell' is
- accessible):
-
- $0
-
- [No need to provide hostname]
-
- 2. Executing script from HDinsight Azure portal:
-
- Provide Script URL.
-
- Provide hostname as a parameter (i.e. hn* or wn* etc.).
- ...
- exit
- fi
- }
-
- validate_machine ()
- {
- THIS_HOST=`hostname`
-
- if [[ ! -z "$TARGET_HOST" ]] && [[ $THIS_HOST != $TARGET_HOST* ]]
- then
- echo "[INFO] This machine '$THIS_HOST' is not the right machine ($TARGET_HOST) to execute the script."
- exit 0
- fi
- }
-
- get_tables_list ()
- {
- hbase shell << ... > $LIST_OF_TABLES 2>
- list
- exit
- ...
- }
-
- add_table_for_flush ()
- {
- TABLE_NAME=$1
- echo "[INFO] Adding table '$TABLE_NAME' to flush list..."
- cat << ... >> $HBASE_SCRIPT
- flush '$TABLE_NAME'
- ...
- }
-
- clean_up ()
- {
- rm -f $LIST_OF_TABLES
- rm -f $HBASE_SCRIPT
- }
-
- ########
- # MAIN #
- ########
-
- usage $1
-
- validate_machine
-
- clean_up
-
- get_tables_list
-
- START=false
-
- while read LINE
- do
- if [[ $LINE == TABLE ]]
- then
- START=true
- continue
- elif [[ $LINE == *row*in*seconds ]]
- then
- break
- elif [[ $START == true ]]
- then
- add_table_for_flush $LINE
- fi
-
- done < $LIST_OF_TABLES
-
- cat $HBASE_SCRIPT
-
- hbase shell $HBASE_SCRIPT << ... 2>
- exit
- ...
-
- ```
-
-1. Stop ingestion to the old HBase cluster.
-
-1. To ensure that any recent data in the memstore is flushed, run the previous script again.
-
-1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the old cluster (`https://OLDCLUSTERNAME.azurehdidnsight.net`) and stop the HBase services. When you prompted to confirm that you'd like to stop the services, check the box to turn on maintenance mode for HBase. For more information on connecting to and using Ambari, see [Manage HDInsight clusters by using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
-
- :::image type="content" source="./media/apache-hbase-migrate-new-version/stop-hbase-services1.png" alt-text="In Ambari, click Services > HBase > Stop under Service Actions" border="true":::
-
- :::image type="content" source="./media/apache-hbase-migrate-new-version/turn-on-maintenance-mode.png" alt-text="Check the Turn On Maintenance Mode for HBase checkbox, then confirm" border="true":::
-
-1. If you aren't using HBase clusters with the Enhanced Writes feature, skip this step. It's needed only for HBase clusters with Enhanced Writes feature.
-
- Backup the WAL dir under HDFS by running below commands from an ssh session on any of the zookeeper nodes or worker nodes of the original cluster.
-
- ```bash
- hdfs dfs -mkdir /hbase-wal-backup**
- hdfs dfs -cp hdfs://mycluster/hbasewal /hbase-wal-backup**
- ```
-
-1. Sign in to Ambari on the new HDInsight cluster. Change the `fs.defaultFS` HDFS setting to point to the container name used by the original cluster. This setting is under **HDFS > Configs > Advanced > Advanced core-site**.
+For more information about HDInsight versions and compatibility, see [Azure HDInsight versions](../hdinsight-component-versioning.md).
- :::image type="content" source="./media/apache-hbase-migrate-new-version/hdfs-advanced-settings.png" alt-text="In Ambari, click Services > HDFS > Configs > Advanced" border="true":::
+## Apache HBase cluster migration overview
- :::image type="content" source="./media/apache-hbase-migrate-new-version/change-container-name.png" alt-text="In Ambari, change the container name" border="true":::
+To upgrade your Apache HBase cluster on Azure HDInsight, you complete the following basic steps. For detailed instructions, see the detailed steps and commands.
-1. If you aren't using HBase clusters with the Enhanced Writes feature, skip this step. It's needed only for HBase clusters with Enhanced Writes feature.
+Prepare the source cluster:
+1. Stop data ingestion.
+1. Flush memstore data.
+1. Stop HBase from Ambari.
+1. For clusters with accelerated writes, back up the Write Ahead Log (WAL) directory.
- Change the `hbase.rootdir` path to point to the container of the original cluster.
+Prepare the destination cluster:
+1. Create the destination cluster.
+1. Stop HBase from Ambari.
+1. Update `fs.defaultFS` in HDFS service configs to refer to the original source cluster container.
+1. For clusters with accelerated writes, update `hbase.rootdir` in HBase service configs to refer to the original source cluster container.
+1. Clean Zookeeper data.
- :::image type="content" source="./media/apache-hbase-migrate-new-version/change-container-name-for-hbase-rootdir.png" alt-text="In Ambari, change the container name for HBase rootdir" border="true":::
-
-1. If you aren't using HBase clusters with the Enhanced Writes feature, skip this step. It's needed only for HBase clusters with Enhanced Writes feature, and only in cases where your original cluster was an HBase cluster with Enhanced Writes feature .
+Complete the migration:
+1. Clean and migrate the WAL.
+1. Copy apps from the destination cluster's default container to the original source container.
+1. Start all services from the Ambari destination cluster.
+1. Verify HBase.
+1. Delete the source cluster.
- Clean the zookeeper and WAL FS data for this new cluster. Issue the following commands in any of the zookeeper nodes or worker nodes:
+## Detailed migration steps and commands
- ```bash
- hbase zkcli
- rmr /hbase-unsecure
- quit
+Use these detailed steps and commands to migrate your Apache HBase cluster.
- hdfs dfs -rm -r hdfs://mycluster/hbasewal**
+### Prepare the source cluster
+
+1. Stop ingestion to the source HBase cluster.
+
+1. Flush the source HBase cluster you're upgrading.
+
+ HBase writes incoming data to an in-memory store called a *memstore*. After the memstore reaches a certain size, HBase flushes it to disk for long-term storage in the cluster's storage account. Deleting the source cluster after an upgrade also deletes any data in the memstores. To retain the data, manually flush each table's memstore to disk before upgrading.
+
+ You can flush the memstore data by running the [flush_all_tables.sh](https://github.com/Azure/hbase-utils/blob/master/scripts/flush_all_tables.sh) script from the [Azure hbase-utils GitHub repository](https://github.com/Azure/hbase-utils/).
+
+ You can also flush memstore data by running the following HBase shell command from the HDInsight cluster:
+
+ ```bash
+ hbase shell
+ flush "<table-name>"
+ ```
+
+1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the source cluster with `https://<OLDCLUSTERNAME>.azurehdinsight.net`, and stop the HBase services.
+
+1. At the confirmation prompt, select the box to turn on maintenance mode for HBase.
+
+ For more information on connecting to and using Ambari, see [Manage HDInsight clusters by using the Ambari Web UI](../hdinsight-hadoop-manage-ambari.md).
+
+1. If your source HBase cluster doesn't have the [Accelerated Writes](apache-hbase-accelerated-writes.md) feature, skip this step. For source HBase clusters with Accelerated Writes, back up the WAL directory under HDFS by running the following commands from an SSH session on any of the Zookeeper nodes or worker nodes of the source cluster.
+
+ ```bash
+ hdfs dfs -mkdir /hbase-wal-backup
+ hdfs dfs -cp hdfs://mycluster/hbasewal /hbase-wal-backup
```
+
+### Prepare the destination cluster
+
+1. In the Azure portal, [set up a new destination HDInsight cluster](../hdinsight-hadoop-provision-linux-clusters.md) using the same storage account as the source cluster, but with a different container name:
-1. If you aren't using HBase clusters with the Enhanced Writes feature, skip this step. It's needed only for HBase clusters with Enhanced Writes feature.
- Restore the WAL dir to the new cluster's HDFS from an ssh session on any of the zookeeper nodes or worker nodes of the new cluster.
+1. Sign in to [Apache Ambari](https://ambari.apache.org/) on the new cluster at `https://<NEWCLUSTERNAME>.azurehdinsight.net`, and stop the HBase services.
+
+1. Under **Services** > **HDFS** > **Configs** > **Advanced** > **Advanced core-site**, change the `fs.defaultFS` HDFS setting to point to the original source cluster container name. For example, the setting in the following screenshot should be changed to `wasbs://hbase-upgrade-old-2021-03-22`.
+
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/hdfs-advanced-settings.png" alt-text="In Ambari, select Services > HDFS > Configs > Advanced > Advanced core-site and change the container name." border="false":::
+
+1. If your destination cluster has the Accelerated Writes feature, change the `hbase.rootdir` path to point to the original source cluster container name. For example, the following path should be changed to `hbase-upgrade-old-2021-03-22`. If your cluster doesn't have Accelerated Writes, skip this step.
+
+ :::image type="content" source="./media/apache-hbase-migrate-new-version/change-container-name-for-hbase-rootdir.png" alt-text="In Ambari, change the container name for the HBase rootdir." border="true":::
+
+1. Clean the Zookeeper data on the destination cluster by running the following commands in any Zookeeper node or worker node:
```bash
- hdfs dfs -cp /hbase-wal-backup/hbasewal hdfs://mycluster/**
+ hbase zkcli
+ rmr /hbase-unsecure
+ quit
```
-1. If you're upgrading HDInsight 3.6 to 4.0, follow the steps below, otherwise skip to step 13:
+### Clean and migrate WAL
+
+Run the following commands, depending on your source HDI version and whether the source and destination clusters have Accelerated Writes.
+
+- The destination cluster is always HDI version 4.0, since HDI 3.6 is in Basic support and isn't recommended for new clusters.
+- The HDFS copy command is `hdfs dfs <copy properties starting with -D> -cp <source> <destination> # Serial execution`.
+
+> [!NOTE]
+> - The `<source-container-fullpath>` for storage type WASB is `wasbs://<source-container-name>@<storageaccountname>.blob.core.windows.net`.
+> - The `<source-container-fullpath>` for storage type Azure Data Lake Storage Gen2 is `abfs://<source-container-name>@<storageaccountname>.dfs.core.windows.net`.
+
+- [The source cluster is HDI 3.6 with Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdi-36-or-hdi-40-with-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDI 3.6 without Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdi-36-without-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDI 3.6 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes](#the-source-cluster-is-hdi-36-without-accelerated-writes-and-the-destination-cluster-doesnt-have-accelerated-writes).
+- [The source cluster is HDI 4.0 with Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdi-36-or-hdi-40-with-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDI 4.0 without Accelerated Writes, and the destination cluster has Accelerated Writes](#the-source-cluster-is-hdi-40-without-accelerated-writes-and-the-destination-cluster-has-accelerated-writes).
+- [The source cluster is HDI 4.0 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes](#the-source-cluster-is-hdi-40-without-accelerated-writes-and-the-destination-cluster-doesnt-have-accelerated-writes).
+
+#### The source cluster is HDI 3.6 or HDI 4.0 with Accelerated Writes, and the destination cluster has Accelerated Writes
+
+Clean the WAL FS data for the destination cluster, and copy the WAL directory from the source cluster into the destination cluster's HDFS. Copy the directory by running the following commands in any Zookeeper node or worker node on the destination cluster:
+
+```bash
+sudo -u hbase hdfs dfs -rm -r hdfs://mycluster/hbasewal
+sudo -u hbase hdfs dfs -cp <source-container-fullpath>/hbase-wal-backup/hbasewal hdfs://mycluster/
+```
+
+#### The source cluster is HDI 3.6 without Accelerated Writes, and the destination cluster has Accelerated Writes
+
+Clean the WAL FS data for the destination cluster, and copy the WAL directory from the source cluster into the destination cluster's HDFS. Copy the directory by running the following commands in any Zookeeper node or worker node on the destination cluster:
+
+```bash
+sudo -u hbase hdfs dfs -rm -r hdfs://mycluster/hbasewal
+sudo -u hbase hdfs dfs -Dfs.azure.page.blob.dir="/hbase/WALs,/hbase/MasterProcWALs,/hbase/oldWALs" -cp <source-container-fullpath>/hbase/*WALs hdfs://mycluster/hbasewal
+```
+
+#### The source cluster is HDI 3.6 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes
+
+Clean the WAL FS data for the destination cluster, and copy the source cluster WAL directory into the destination cluster's HDFS. To copy the directory, run the following commands in any Zookeeper node or worker node on the destination cluster:
- 1. Restart all required services in Ambari by selecting **Services** > **Restart All Required**.
- 1. Stop the HBase service.
- 1. SSH to the Zookeeper node, and execute the [zkCli](https://github.com/go-zkcli/zkcli) command `rmr /hbase-unsecure` to remove the HBase root znode from Zookeeper.
- 1. Restart HBase.
+```bash
+sudo -u hbase hdfs dfs -rm -r /hbase-wals/*
+sudo -u hbase hdfs dfs -Dfs.azure.page.blob.dir="/hbase/WALs,/hbase/MasterProcWALs,/hbase/oldWALs" -cp <source-container-fullpath>/hbase/*WALs /hbase-wals
+```
-1. If you're upgrading to any other HDInsight version besides 4.0, follow these steps:
- 1. Save your changes.
- 1. Restart all required services as indicated by Ambari.
+#### The source cluster is HDI 4.0 without Accelerated Writes, and the destination cluster has Accelerated Writes
-1. Point your application to the new cluster.
+Clean the WAL FS data for the destination cluster, and copy the WAL directory from the source cluster into the destination cluster's HDFS. Copy the directory by running the following commands in any Zookeeper node or worker node on the destination cluster:
- > [!NOTE]
- > The static DNS for your application changes when upgrading. Rather than hard-coding this DNS, you can configure a CNAME in your domain name's DNS settings that points to the cluster's name. Another option is to use a configuration file for your application that you can update without redeploying.
+```bash
+sudo -u hbase hdfs dfs -rm -r hdfs://mycluster/hbasewal
+sudo -u hbase hdfs dfs -cp <source-container-fullpath>/hbase-wals/* hdfs://mycluster/hbasewal
+```
+
+#### The source cluster is HDI 4.0 without Accelerated Writes, and the destination cluster doesn't have Accelerated Writes
+
+Clean the WAL FS data for the destination cluster, and copy the source cluster WAL directory into the destination cluster's HDFS. To copy the directory, run the following commands in any Zookeeper node or worker node on the destination cluster:
+
+```bash
+sudo -u hbase hdfs dfs -rm -r /hbase-wals/*
+sudo -u hbase hdfs dfs -Dfs.azure.page.blob.dir="/hbase-wals" -cp <source-container-fullpath>/hbase-wals /
+```
-1. Start the ingestion to see if everything is functioning as expected.
+### Complete the migration
-1. If the new cluster is satisfactory, delete the original cluster.
+1. Using the `sudo -u hdfs` user context, copy the folder `/hdp/apps/<new-version-name>` and its contents from the `<destination-container-fullpath>` to the `/hdp/apps` folder under `<source-container-fullpath>`. You can copy the folder by running the following commands on the destination cluster:
+
+ ```bash
+ sudo -u hdfs hdfs dfs -cp /hdp/apps/<hdi-version> <source-container-fullpath>/hdp/apps
+ ```
+
+ For example:
+ ```bash
+ sudo -u hdfs hdfs dfs -cp /hdp/apps/4.1.3.6 wasbs://hbase-upgrade-old-2021-03-22@hbaseupgrade.blob.core.windows.net/hdp/apps
+ ```
+
+1. On the destination cluster, save your changes, and restart all required services as Ambari indicates.
+
+1. Point your application to the destination cluster.
+
+ > [!NOTE]
+ > The static DNS name for your application changes when you upgrade. Rather than hard-coding this DNS name, you can configure a CNAME in your domain name's DNS settings that points to the cluster's name. Another option is to use a configuration file for your application that you can update without redeploying.
+
+1. Start the ingestion.
+
+1. Verify HBase consistency and simple Data Definition Language (DDL) and Data Manipulation Language (DML) operations.
+
+1. If the destination cluster is satisfactory, delete the source cluster.
## Next steps To learn more about [Apache HBase](https://hbase.apache.org/) and upgrading HDInsight clusters, see the following articles:
-* [Upgrade an HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md)
-* [Monitor and manage Azure HDInsight using the Apache Ambari Web UI](../hdinsight-hadoop-manage-ambari.md)
-* [Apache Hadoop components and versions](../hdinsight-component-versioning.md)
-* [Optimize Apache HBase](../optimize-hbase-ambari.md)
+- [Upgrade an HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md)
+- [Monitor and manage Azure HDInsight using the Apache Ambari Web UI](../hdinsight-hadoop-manage-ambari.md)
+- [Azure HDInsight versions](../hdinsight-component-versioning.md)
+- [Optimize Apache HBase](../optimize-hbase-ambari.md)
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-features-supported.md
All the operations that are supported that extend the RESTful API.
| Patient/$export | Yes | Yes | Yes | | | Group/$export | Yes | Yes | Yes | | | $convert-data | Yes | Yes | Yes | |-
+| $validate | Yes | Yes | Yes | |
## Persistence
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/validation-against-profiles.md
+
+ Title: $validate FHIR resources against profiles on Azure API for FHIR
+description: $validate FHIR resources against profiles
++++ Last updated : 05/06/2021+++
+# How to validate FHIR resources against profiles
+
+HL7 FHIR defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define additional rules or extensions based on the context in which FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications.
+
+A [FHIR profile](https://www.hl7.org/fhir/profiling.html) describes additional context, such as constraints or extensions, on a resource, represented as a `StructureDefinition`. The HL7 FHIR standard defines a set of base resources, and these standard base resources have generic definitions. A FHIR profile allows you to narrow down and customize resource definitions by using constraints and extensions.
+
+Azure API for FHIR allows validating resources against profiles to see whether the resources conform to the profiles. This article walks through the basics of FHIR profiles, and how to use `$validate` to validate resources against profiles when creating and updating resources.
+
+## FHIR profile: the basics
+
+A profile sets additional context on the resource, usually represented as a `StructureDefinition` resource. `StructureDefinition` defines a set of rules on the content of a resource or a data type, such as what fields a resource has and what values these fields can take. For example, profiles can restrict cardinality (e.g. setting the maximum cardinality to 0 to rule out the element), restrict the contents of an element to a single fixed value, or define required extensions for the resource. It can also specify additional constraints on an existing profile. A `StructureDefinition` is identified by its canonical URL:
+
+```rest
+http://hl7.org/fhir/StructureDefinition/{profile}
+```
+
+In the `{profile}` field, specify the name of the profile.
+
+For example:
+
+- `http://hl7.org/fhir/StructureDefinition/patient-birthPlace` is a base profile that requires information on the registered address of birth of the patient.
+- `http://hl7.org/fhir/StructureDefinition/bmi` is another base profile that defines how to represent Body Mass Index (BMI) observations.
+- `http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance` is a US Core profile that sets minimum expectations for `AllergyIntolerance` resource associated with a patient, and identifies mandatory fields such as extensions and value sets.
+
+### Base profile and custom profile
+
+There are two types of profiles: base profiles and custom profiles. A base profile is the base `StructureDefinition` that a resource needs to conform to, as defined for base resources such as `Patient` or `Observation`. For example, a Body Mass Index (BMI) `Observation` profile would start like this:
+
+```json
+{
+ "resourceType" : "StructureDefinition",
+ "id" : "bmi",
+...
+}
+```
+
+A custom profile is a set of additional constraints on top of a base profile, restricting or adding resource parameters that are not part of the base specification. A custom profile is useful because you can customize your own resource definitions by specifying constraints and extensions on the existing base resource. For example, you might want to build a profile that shows `AllergyIntolerance` resource instances based on `Patient` genders, in which case you would create a custom profile on top of an existing `Patient` profile with the `AllergyIntolerance` profile.
+
+> [!NOTE]
+> Custom profiles must build on top of the base resource and cannot conflict with the base resource. For example, if an element has a cardinality of 1..1, the custom profile cannot make it optional.
+
+Custom profiles are also specified by various Implementation Guides. Some common Implementation Guides are:
+
+|Name |URL
+|- |-
US Core |<https://www.hl7.org/fhir/us/core/>
+CARIN Blue Button |<http://hl7.org/fhir/us/carin-bb/>
+Da Vinci Payer Data Exchange |<http://hl7.org/fhir/us/davinci-pdex/>
+Argonaut |<http://www.fhir.org/guides/argonaut/pd/>
+
+## Accessing profiles and storing profiles
+
+### Viewing profiles
+
+You can access your existing custom profiles in the server using a `GET` request. All valid profiles, such as the profiles with valid canonical URLs in Implementation Guides, should be accessible by querying:
+
+```rest
+GET http://<your FHIR service base URL>/StructureDefinition?url={canonicalUrl}
+```
+
+In the `{canonicalUrl}` field, provide the canonical URL of your profile.
+
+For example, if you want to view the US Core `Goal` resource profile:
+
+```rest
+GET http://my-fhir-server.azurewebsites.net/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-goal
+```
+
+This request returns the `StructureDefinition` resource for the US Core Goal profile, which starts like this:
+
+```json
+{
+ "resourceType" : "StructureDefinition",
+ "id" : "us-core-goal",
+ "url" : "http://hl7.org/fhir/us/core/StructureDefinition/us-core-goal",
+ "version" : "3.1.1",
+ "name" : "USCoreGoalProfile",
+ "title" : "US Core Goal Profile",
+ "status" : "active",
+ "experimental" : false,
+ "date" : "2020-07-21",
+ "publisher" : "HL7 US Realm Steering Committee",
+ "contact" : [
+ {
+ "telecom" : [
+ {
+ "system" : "url",
+ "value" : "http://www.healthit.gov"
+ }
+ ]
+ }
+ ],
+ "description" : "Defines constraints and extensions on the Goal resource for the minimal set of data to query and retrieve a patient's goal(s).",
+...
+```
+
+Our FHIR server does not return `StructureDefinition` instances for the base profiles, but they can be found easily on the HL7 website, such as:
+
+- `http://hl7.org/fhir/Observation.profile.json.html`
+- `http://hl7.org/fhir/Patient.profile.json.html`
+
+### Storing profiles
+
+For storing profiles to the server, you can do a `POST` request:
+
+```rest
+POST http://<your FHIR service base URL>/{Resource}
+```
+
+Here, replace the `{Resource}` field with `StructureDefinition`, and include the `StructureDefinition` resource you want to store in the request body, in JSON or XML format.
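+For example, here's a minimal curl sketch of this request. The file name `my-profile.json` and the bearer token are placeholders, not values from this article; the file would contain the `StructureDefinition` resource to store:
+
+```bash
+# Hypothetical example: store a StructureDefinition profile on the FHIR server.
+curl -X POST 'http://<your FHIR service base URL>/StructureDefinition' \
+  -H "Content-Type: application/fhir+json" \
+  -H "Authorization: Bearer <access-token>" \
+  --data @my-profile.json
+```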
+
+Most profiles have the resource type `StructureDefinition`, but they can also be of the types `ValueSet` and `CodeSystem`, which are [terminology](http://hl7.org/fhir/terminologies.html) resources. For example, if you `POST` a `ValueSet` profile in a JSON form, the server will return the stored profile with the assigned `id` for the profile, just as it would with `StructureDefinition`. Below is an example you would get when you upload a [Condition Severity](https://www.hl7.org/fhir/valueset-condition-severity.html) profile, which specifies the criteria for a condition/diagnosis severity grading:
+
+```json
+{
+ "resourceType": "ValueSet",
+ "id": "35ab90e5-c75d-45ca-aa10-748fefaca7ee",
+ "meta": {
+ "versionId": "1",
+ "lastUpdated": "2021-05-07T21:34:28.781+00:00",
+ "profile": [
+ "http://hl7.org/fhir/StructureDefinition/shareablevalueset"
+ ]
+ },
+ "text": {
+ "status": "generated",
+ "div": "<div>!-- Snipped for Brevity --></div>"
+ },
+ "extension": [
+ {
+ "url": "http://hl7.org/fhir/StructureDefinition/structuredefinition-wg",
+ "valueCode": "pc"
+ }
+ ],
+ "url": "http://hl7.org/fhir/ValueSet/condition-severity",
+ "identifier": [
+ {
+ "system": "urn:ietf:rfc:3986",
+ "value": "urn:oid:2.16.840.1.113883.4.642.3.168"
+ }
+ ],
+ "version": "4.0.1",
+ "name": "Condition/DiagnosisSeverity",
+ "title": "Condition/Diagnosis Severity",
+ "status": "draft",
+ "experimental": false,
+ "date": "2019-11-01T09:29:23+11:00",
+ "publisher": "FHIR Project team",
+...
+```
+
+### Profiles in the capability statement
+
+The `Capability Statement` lists all possible behaviors of your FHIR server to be used as a statement of the server functionality, such as Structure Definitions and Value Sets. Azure API for FHIR updates the capability statement with information on the uploaded and stored profiles in the forms of:
+
+- `CapabilityStatement.rest.resource.profile`
+- `CapabilityStatement.rest.resource.supportedProfile`
+
+These fields show the specifications for the profile, describing the overall support for the resource, including any constraints on cardinality, bindings, extensions, or other restrictions. Therefore, when you `POST` a profile in the form of a `StructureDefinition` and then `GET` the resource metadata to see the full capability statement, you'll see all the details of the profile you uploaded next to the `supportedProfile` parameter.
+
+For example, if you `POST` a US Core Patient profile, which starts like this:
+
+```json
+{
+ "resourceType": "StructureDefinition",
+ "id": "us-core-patient",
+ "url": "http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient",
+ "version": "3.1.1",
+ "name": "USCorePatientProfile",
+ "title": "US Core Patient Profile",
+ "status": "active",
+ "experimental": false,
+ "date": "2020-06-27",
+ "publisher": "HL7 US Realm Steering Committee",
+...
+```
+
+And send a `GET` request for your `metadata`:
+
+```rest
+GET http://<your FHIR service base URL>/metadata
+```
+
+You will be returned with a `CapabilityStatement` that includes the following information on the US Core Patient profile you uploaded to your FHIR server:
+
+```json
+...
+{
+ "type": "Patient",
+ "profile": "http://hl7.org/fhir/StructureDefinition/Patient",
+ "supportedProfile":[
+ "http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient"
+ ],
+...
+```
+
+## Validating resources against the profiles
+
+FHIR resources, such as `Patient` or `Observation`, can express their conformance to specific profiles. This allows our FHIR server to **validate** given resources against the associated profiles or the specified profiles. Validating a resource against profiles means checking if your resource conforms to the profiles, including the specifications listed in `Resource.meta.profile` or in an Implementation Guide.
+
+There are two ways for you to validate your resource. First, you can use the `$validate` operation against a resource that is already in the FHIR server. Second, you can `POST` the resource to the server as part of a resource `Update` or `Create` operation. In both cases, you can decide via your FHIR server configuration what to do when the resource does not conform to your desired profile.
+
+### Using $validate
+
+The `$validate` operation checks whether the provided profile is valid, and whether the resource conforms to the specified profile. As mentioned in the [HL7 FHIR specifications](https://www.hl7.org/fhir/resource-operation-validate.html), you can also specify the `mode` for `$validate`, such as `create` and `update`:
+
+- `create`: The server checks that the profile content is unique from the existing resources and is acceptable to be created as a new resource.
+- `update`: The server checks that the profile is an update against the nominated existing resource (for example, that no changes are made to the immutable fields).
+
+The server always returns an `OperationOutcome` as the validation result.
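+
+As a hedged sketch following the HL7 operation definition (not specific to Azure API for FHIR), the `mode` can be supplied in a `Parameters` resource on a `POST` to `{Resource}/$validate`; the `Patient` content below is a placeholder:
+
+```json
+{
+  "resourceType": "Parameters",
+  "parameter": [
+    {
+      "name": "mode",
+      "valueCode": "create"
+    },
+    {
+      "name": "resource",
+      "resource": {
+        "resourceType": "Patient",
+        "gender": "female"
+      }
+    }
+  ]
+}
+```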
+
+#### Validating an existing resource
+
+To validate an existing resource, use `$validate` in a `GET` request:
+
+```rest
+GET http://<your FHIR service base URL>/{resource}/{resource ID}/$validate
+```
+
+For example:
+
+```rest
+GET http://my-fhir-server.azurewebsites.net/Patient/a6e11662-def8-4dde-9ebc-4429e68d130e/$validate
+```
+
+In the example above, you would be validating the existing `Patient` resource `a6e11662-def8-4dde-9ebc-4429e68d130e`. If it is valid, you will get an `OperationOutcome` such as the following:
+
+```json
+{
+ "resourceType": "OperationOutcome",
+ "issue": [
+ {
+ "severity": "information",
+ "code": "informational",
+ "diagnostics": "All OK"
+ }
+ ]
+}
+```
+
+If the resource is not valid, you will get an error code and an error message with details on why the resource is invalid. A `4xx` or `5xx` error means that the validation itself could not be performed, and it is unknown whether the resource is valid or not. An example `OperationOutcome` returned with error messages could look like the following:
+
+```json
+{
+ "resourceType": "OperationOutcome",
+ "issue": [
+ {
+ "severity": "error",
+ "code": "invalid",
+ "details": {
+ "coding": [
+ {
+ "system": "http://hl7.org/fhir/dotnet-api-operation-outcome",
+ "code": "1028"
+ }
+ ],
+ "text": "Instance count for 'Patient.identifier.value' is 0, which is not within the specified cardinality of 1..1"
+ },
+ "location": [
+ "Patient.identifier[1]"
+ ]
+ },
+ {
+ "severity": "error",
+ "code": "invalid",
+ "details": {
+ "coding": [
+ {
+ "system": "http://hl7.org/fhir/dotnet-api-operation-outcome",
+ "code": "1028"
+ }
+ ],
+ "text": "Instance count for 'Patient.gender' is 0, which is not within the specified cardinality of 1..1"
+ },
+ "location": [
+ "Patient"
+ ]
+ }
+ ]
+}
+```
+
+In the example above, the resource did not conform to the provided `Patient` profile, which requires a patient identifier value and gender.
+
+If you would like to validate against a specific profile, you can pass the canonical URL of the profile as a parameter, as shown below with the US Core `Patient` profile and the base `heartrate` profile:
+
+```rest
+GET http://<your FHIR service base URL>/{Resource}/{Resource ID}/$validate?profile={canonicalUrl}
+```
+
+For example:
+
+```rest
+GET http://my-fhir-server.azurewebsites.net/Patient/a6e11662-def8-4dde-9ebc-4429e68d130e/$validate?profile=http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient
+GET http://my-fhir-server.azurewebsites.net/Observation/12345678/$validate?profile=http://hl7.org/fhir/StructureDefinition/heartrate
+```
+
+#### Validating a new resource
+
+If you would like to validate a new resource that you are uploading to the server, you can do a `POST` request:
+
+```rest
+POST http://<your FHIR service base URL>/{Resource}/$validate
+```
+
+For example:
+
+```rest
+POST http://my-fhir-server.azurewebsites.net/Patient/$validate
+```
+
+This request will create the new resource you are specifying in the request payload, whether it is in JSON or XML format, and validate the uploaded resource. It then returns an `OperationOutcome` as the result of the validation on the new resource.
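+
+For instance, a minimal, hypothetical `Patient` payload for the request above might look like the following (the identifier system and value are placeholders):
+
+```json
+{
+  "resourceType": "Patient",
+  "identifier": [
+    {
+      "system": "http://example.org/mrn",
+      "value": "12345"
+    }
+  ],
+  "gender": "female"
+}
+```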
+
+### Validate on resource CREATE or resource UPDATE
+
+You can choose when you would like to validate your resource, such as on resource CREATE or UPDATE. You can specify this in the server configuration setting, under the `CoreFeatures`:
+
+```json
+{
+ "FhirServer": {
+ "CoreFeatures": {
+ "ProfileValidationOnCreate": true,
+ "ProfileValidationOnUpdate": false
+ }
+  }
+}
+```
+
+If the resource conforms to the profiles provided in `Resource.meta.profile` and those profiles are present in the system, the server acts according to the configuration settings above. If a provided profile is not present in the server, the validation request for it is ignored and the profile reference is left in `Resource.meta.profile`.
+
+Validation is usually an expensive operation, so it is typically run only on test servers or on a small subset of resources. This is why it is important to be able to turn validation on or off on the server side. If the server configuration opts out of validation on resource Create/Update, a user can override that behavior by specifying the following header in the Create/Update request:
+
+```rest
+x-ms-profile-validation: true
+```
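+
+For example, a hedged sketch of a create request that opts in to validation for a single call (the base URL is a placeholder, and the request body would carry the resource to create):
+
+```rest
+POST http://<your FHIR service base URL>/Patient
+x-ms-profile-validation: true
+```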
+
+## Next steps
+
+In this article, you have learned about FHIR profiles, and how to validate resources against profiles using `$validate`. To learn about Azure API for FHIR's other supported features, check out:
+
+>[!div class="nextstepaction"]
+>[FHIR supported features](fhir-features-supported.md)
iot-hub Iot Hub Arduino Iot Devkit Az3166 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-get-started.md
arduino Previously updated : 06/25/2019 Last updated : 04/29/2021
[!INCLUDE [iot-hub-get-started-device-selector](../../includes/iot-hub-get-started-device-selector.md)]
-You can use the [MXChip IoT DevKit](https://microsoft.github.io/azure-iot-developer-kit/) to develop and prototype Internet of Things (IoT) solutions that take advantage of Microsoft Azure services. It includes an Arduino-compatible board with rich peripherals and sensors, an open-source board package, and a rich [sample gallery](https://microsoft.github.io/azure-iot-developer-kit/docs/projects/).
+You can use the [MXChip IoT DevKit](https://microsoft.github.io/azure-iot-developer-kit/) to develop and prototype Internet of Things (IoT) solutions that take advantage of Microsoft Azure services. The kit includes an Arduino-compatible board with rich peripherals and sensors, an open-source board package, and a rich [sample gallery](https://microsoft.github.io/azure-iot-developer-kit/docs/projects/).
## What you learn
You can find the source code for all DevKit tutorials from [code samples gallery
- A computer running Windows 10, macOS 10.10+ or Ubuntu 18.04+. - An active Azure subscription. [Activate a free 30-day trial Microsoft Azure account](https://azureinfo.microsoft.com/us-freetrial.html). [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
-
-## Prepare your hardware
+
+## Prepare the development environment
-Hook up the following hardware to your computer:
+Follow these steps to prepare the development environment for the DevKit:
-* DevKit board
-* Micro-USB cable
+#### Install Visual Studio Code with Azure IoT Tools extension package
-![Required hardware](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/hardware.jpg)
+1. Install [Arduino IDE](https://www.arduino.cc/en/Main/Software). It provides the necessary toolchain for compiling and uploading Arduino code.
+ * **Windows**: Use Windows Installer version. Do not install from the App Store.
+ * **macOS**: Drag and drop the extracted **Arduino.app** into `/Applications` folder.
+ * **Ubuntu**: Unzip it into a folder such as `$HOME/Downloads/arduino-1.8.13`
-To connect the DevKit to your computer, follow these steps:
+2. Install [Visual Studio Code](https://code.visualstudio.com/), a cross-platform source code editor with powerful IntelliSense, code completion, and debugging support, as well as rich extensions that can be installed from the marketplace.
-1. Connect the USB end to your computer.
+3. Launch VS Code, look for **Arduino** in the extension marketplace and install it. This extension provides enhanced experiences for developing on the Arduino platform.
-2. Connect the Micro-USB end to the DevKit.
+ ![Install Arduino](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-arduino.png)
-3. The green LED for power confirms the connection.
+4. Look for [Azure IoT Tools](https://aka.ms/azure-iot-tools) in the extension marketplace and install it.
- ![Hardware connections](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/connect.jpg)
+ ![Screenshot that shows Azure IoT Tools in the extension marketplace.](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-azure-iot-tools.png)
+
+ Or copy and paste this URL into a browser window: `vscode:extension/vsciot-vscode.azure-iot-tools`
+
+ > [!NOTE]
+ > The Azure IoT Tools extension pack contains the [Azure IoT Device Workbench](https://aka.ms/iot-workbench) which is used to develop and debug on various IoT devkit devices. The [Azure IoT Hub extension](https://aka.ms/iot-toolkit), also included with the Azure IoT Tools extension pack, is used to manage and interact with Azure IoT Hubs.
+
+5. Configure VS Code with Arduino settings.
+
+ In Visual Studio Code, click **File > Preferences > Settings** (on macOS, **Code > Preferences > Settings**). Then click the **Open Settings (JSON)** icon in the upper-right corner of the *Settings* page.
+
+ ![Install Azure IoT Tools](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/user-settings-arduino.png)
+
+   The correct path to your Arduino installation must be configured in VS Code. Add the following lines to configure Arduino, depending on your platform and the directory path where you installed the Arduino IDE:
+
+ * **Windows**:
+
+ ```json
+ "arduino.path": "C:\\Program Files (x86)\\Arduino",
+ "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
+ ```
+
+ * **macOS**:
+
+ ```json
+ "arduino.path": "/Applications",
+ "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
+ ```
+
+ * **Ubuntu**:
+
+ Replace the **{username}** placeholder below with your username.
-## Quickstart: Send telemetry from DevKit to an IoT Hub
+ ```json
+ "arduino.path": "/home/{username}/Downloads/arduino-1.8.13",
+ "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
+ ```
+
+6. Click `F1` to open the command palette, type and select **Arduino: Board Manager**. Search for **AZ3166** and install the latest version.
+
+ ![Install DevKit SDK](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-az3166-sdk.png)
+
+#### Install ST-Link drivers
+
+[ST-Link/V2](https://www.st.com/en/development-tools/st-link-v2.html) is the USB interface that IoT DevKit uses to communicate with your development machine. You need to install it on Windows to flash the compiled device code to the DevKit. Follow the OS-specific steps to allow the machine access to your device.
+
+* **Windows**: Download and install USB driver from [STMicroelectronics website](https://www.st.com/en/development-tools/stsw-link009.html).
+* **macOS**: No driver is required for macOS.
+* **Ubuntu**: Run the following commands in a terminal, then sign out and sign back in for the group change to take effect:
+
+ ```bash
+ # Copy the default rules. This grants permission to the group 'plugdev'
+ sudo cp ~/.arduino15/packages/AZ3166/tools/openocd/0.10.0/linux/contrib/60-openocd.rules /etc/udev/rules.d/
+ sudo udevadm control --reload-rules
+
+ # Add yourself to the group 'plugdev'
+ # Logout and log back in for the group to take effect
+ sudo usermod -a -G plugdev $(whoami)
+ ```
+
+Your development environment is now prepared and configured.
+
+## Prepare Azure resources
-The quickstart uses pre-compiled DevKit firmware to send the telemetry to the IoT Hub. Before you run it, you create an IoT hub and register a device with the hub.
+For this article, you need an IoT hub and a device registered to that hub. The following subsections show how to create these resources with the Azure CLI.
-### Create an IoT hub
+You can also create an IoT hub and register a device from within Visual Studio Code using the Azure IoT Tools extension. For more information on creating the hub and device within VS Code, see [Use Azure IoT Tools for VS Code](iot-hub-create-use-iot-toolkit.md).
+#### Create an IoT hub
-### Register a device
+If you haven't already created an IoT hub, follow the steps in [Create an IoT Hub using Azure CLI](iot-hub-create-using-cli.md).
-A device must be registered with your IoT hub before it can connect. In this quickstart, you use the Azure Cloud Shell to register a simulated device.
+#### Register a device
+
+The IoT DevKit must be registered as a device in your IoT hub before it can connect. Use the following steps in Azure Cloud Shell to register a new device.
1. Run the following command in Azure Cloud Shell to create the device identity. **YourIoTHubName**: Replace this placeholder below with the name you choose for your IoT hub.
- **MyNodeDevice**: The name of the device you're registering. Use **MyNodeDevice** as shown. If you choose a different name for your device, you need to use that name throughout this article, and update the device name in the sample applications before you run them.
+ **AZ3166Device**: The name of the device you're registering. Use **AZ3166Device** as shown for the example in this article. If you choose a different name for your device, you need to use that name throughout this article, and update the device name in the sample applications before you run them.
```azurecli-interactive
- az iot hub device-identity create --hub-name YourIoTHubName --device-id MyNodeDevice
+ az iot hub device-identity create --hub-name YourIoTHubName --device-id AZ3166Device
``` > [!NOTE]
A device must be registered with your IoT hub before it can connect. In this qui
**YourIoTHubName**: Replace this placeholder below with the name you choose for your IoT hub. ```azurecli-interactive
- az iot hub device-identity connection-string show --hub-name YourIoTHubName --device-id MyNodeDevice --output table
+ az iot hub device-identity connection-string show --hub-name YourIoTHubName --device-id AZ3166Device --output table
``` Make a note of the device connection string, which looks like:
- `HostName={YourIoTHubName}.azure-devices.net;DeviceId=MyNodeDevice;SharedAccessKey={YourSharedAccessKey}`
+ `HostName={YourIoTHubName}.azure-devices.net;DeviceId=AZ3166Device;SharedAccessKey={YourSharedAccessKey}`
- You use this value later in the quickstart.
+ You use this value later.
-### Send DevKit telemetry
-The DevKit connects to a device-specific endpoint on your IoT hub and sends temperature and humidity telemetry.
+## Prepare your hardware
-1. Download the latest version of [GetStarted firmware](https://aka.ms/devkit/prod/getstarted/latest) for IoT DevKit.
+Hook up the following hardware to your computer:
+
+* DevKit board
+* Micro-USB cable
+
+![Required hardware](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/hardware.jpg)
+
+To connect the DevKit to your computer, follow these steps:
+
+1. Connect the USB end to your computer.
+
+2. Connect the Micro-USB end to the DevKit.
+
+3. The green LED for power confirms the connection.
+
+ ![Hardware connections](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/connect.jpg)
+
+#### Update firmware
+
+The DevKit Get Started firmware (`devkit-getstarted-*.*.*.bin`) connects to a device-specific endpoint on your IoT hub and sends temperature and humidity telemetry.
+
+1. Download the latest version of [GetStarted firmware (devkit-getstarted-*.*.*.bin)](https://github.com/microsoft/devkit-sdk/releases/) for IoT DevKit. At the time of this update, the latest filename is **devkit-getstarted-2.0.0.bin**.
1. Make sure the IoT DevKit is connected to your computer via USB. Open File Explorer; there is a USB mass storage device called **AZ3166**.
The DevKit connects to a device-specific endpoint on your IoT hub and sends temp
![Copy firmware](media/iot-hub-arduino-devkit-az3166-get-started/quickstarts/copy-firmware.png)
-1. On the DevKit, Hold down button **B**, push and release the **Reset** button, and then release button **B**. Your DevKit enters AP mode. To confirm, the screen displays the service set identifier (SSID) of the DevKit and the configuration portal IP address.
+1. On the DevKit, hold down the **B** button and keep it pressed. Push and release the **Reset** button, then release button **B**. Your DevKit enters AP mode. To confirm, the screen displays the service set identifier (SSID) of the DevKit and the configuration portal IP address.
![Reset button, button B, and SSID](media/iot-hub-arduino-devkit-az3166-get-started/quickstarts/wifi-ap.jpg)
The DevKit connects to a device-specific endpoint on your IoT hub and sends temp
![Connect SSID](media/iot-hub-arduino-devkit-az3166-get-started/quickstarts/connect-ssid.png)
-1. Open **192.168.0.1** in the browser. Select the Wi-Fi that you want the IoT DevKit connect to, type the Wi-Fi password, then paste the device connection string you made note of previously. Then click Save.
+1. Open **192.168.0.1** in the browser. Select the Wi-Fi that you want the IoT DevKit to connect to, type the Wi-Fi password, then paste the device connection string you made note of previously. Then click **Configure Device**.
![Configuration UI](media/iot-hub-arduino-devkit-az3166-get-started/quickstarts/configuration-ui.png)
The DevKit connects to a device-specific endpoint on your IoT hub and sends temp
az iot hub monitor-events --hub-name YourIoTHubName --output table ```
-## Prepare the development environment
-
-Follow these steps to prepare the development environment for the DevKit:
-
-### Install Visual Studio Code with Azure IoT Tools extension package
-
-1. Install [Arduino IDE](https://www.arduino.cc/en/Main/Software). It provides the necessary toolchain for compiling and uploading Arduino code.
- * **Windows**: Use Windows Installer version. Do not install from the App Store.
- * **macOS**: Drag and drop the extracted **Arduino.app** into `/Applications` folder.
- * **Ubuntu**: Unzip it into folder such as `$HOME/Downloads/arduino-1.8.8`
-
-2. Install [Visual Studio Code](https://code.visualstudio.com/), a cross platform source code editor with powerful intellisense, code completion and debugging support as well as rich extensions can be installed from marketplace.
-
-3. Launch VS Code, look for **Arduino** in the extension marketplace and install it. This extension provides enhanced experiences for developing on Arduino platform.
-
- ![Install Arduino](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-arduino.png)
-
-4. Look for [Azure IoT Tools](https://aka.ms/azure-iot-tools) in the extension marketplace and install it.
-
- ![Screenshot that shows Azure IoT Tools in the extension marketplace.](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-azure-iot-tools.png)
-
- Or copy and paste this URL into a browser window: `vscode:extension/vsciot-vscode.azure-iot-tools`
-
- > [!NOTE]
- > The Azure IoT Tools extension pack contains the [Azure IoT Device Workbench](https://aka.ms/iot-workbench) which is used to develop and debug on various IoT devkit devices. The [Azure IoT Hub extension](https://aka.ms/iot-toolkit), also included with the Azure IoT Tools extension pack, is used to manage and interact with Azure IoT Hubs.
-
-5. Configure VS Code with Arduino settings.
-
- In Visual Studio Code, click **File > Preferences > Settings** (on macOS, **Code > Preferences > Settings**). Then click the **Open Settings (JSON)** icon in the upper-right corner of the *Settings* page.
-
- ![Install Azure IoT Tools](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/user-settings-arduino.png)
-
- Add following lines to configure Arduino depending on your platform:
-
- * **Windows**:
-
- ```json
- "arduino.path": "C:\\Program Files (x86)\\Arduino",
- "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
- ```
-
- * **macOS**:
-
- ```json
- "arduino.path": "/Applications",
- "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
- ```
-
- * **Ubuntu**:
-
- Replace the **{username}** placeholder below with your username.
-
- ```json
- "arduino.path": "/home/{username}/Downloads/arduino-1.8.8",
- "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
- ```
-
-6. Click `F1` to open the command palette, type and select **Arduino: Board Manager**. Search for **AZ3166** and install the latest version.
-
- ![Install DevKit SDK](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-az3166-sdk.png)
-
-### Install ST-Link drivers
-
-[ST-Link/V2](https://www.st.com/en/development-tools/st-link-v2.html) is the USB interface that IoT DevKit uses to communicate with your development machine. You need to install it on Windows to flash the compiled device code to the DevKit. Follow the OS-specific steps to allow the machine access to your device.
-
-* **Windows**: Download and install USB driver from [STMicroelectronics website](https://www.st.com/en/development-tools/stsw-link009.html).
-* **macOS**: No driver is required for macOS.
-* **Ubuntu**: Run the commands in terminal and sign out and sign in for the group change to take effect:
-
- ```bash
- # Copy the default rules. This grants permission to the group 'plugdev'
- sudo cp ~/.arduino15/packages/AZ3166/tools/openocd/0.10.0/linux/contrib/60-openocd.rules /etc/udev/rules.d/
- sudo udevadm control --reload-rules
-
- # Add yourself to the group 'plugdev'
- # Logout and log back in for the group to take effect
- sudo usermod -a -G plugdev $(whoami)
- ```
-
-Now you are all set with preparing and configuring your development environment. Let us build the GetStarted sample you just ran.
+   This command monitors device-to-cloud (D2C) messages sent to your IoT Hub.
## Build your first project
-### Open sample code from sample gallery
+#### Open sample code from sample gallery
The IoT DevKit contains a rich gallery of samples that you can use to learn how to connect the DevKit to various Azure services. 1. Make sure your IoT DevKit is **not connected** to your computer. Start VS Code first, and then connect the DevKit to your computer.
-1. Click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Open Examples...**. Then select **IoT DevKit** as board.
+1. Click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Open Examples...**. Then select **MXChip IoT DevKit** as the board.
-1. In the IoT Workbench Examples page, find **Get Started** and click **Open Sample**. Then selects the default path to download the sample code.
+1. In the IoT Workbench Examples page, find **Get Started** and click **Open Sample**. Then select the default path to download the sample code.
![Open sample](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/open-sample.png)
-### Provision Azure IoT Hub and device
-
-Instead of provisioning Azure IoT Hub and device from the Azure portal, you can do it in the VS Code without leaving the development environment.
-
-1. In the new opened project window, click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Provision Azure Services...**. Follow the step by step guide to finish provisioning your Azure IoT Hub and creating the IoT Hub device.
-
- ![Provision command](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/provision.png)
-
- > [!NOTE]
- > If you have not signed in Azure. Follow the pop-up notification for signing in.
-
-1. Select the subscription you want to use.
- ![Select sub](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/select-subscription.png)
+#### Review the code
-1. Then select or create a new [resource group](../azure-resource-manager/management/overview.md#terminology).
-
- ![Select resource group](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/select-resource-group.png)
-
-1. In the resource group you specified, follow the guide to select or create a new Azure IoT Hub.
-
- ![Select IoT Hub steps](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/iot-hub-provision.png)
-
- ![Select IoT Hub](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/select-iot-hub.png)
-
- ![Selected IoT Hub](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/iot-hub-selected.png)
-
-1. In the output window, you will see the Azure IoT Hub provisioned.
-
- ![IoT Hub Provisioned](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/iot-hub-provisioned.png)
-
-1. Select or create a new device in Azure IoT Hub you provisioned.
+`GetStarted.ino` is the main Arduino sketch file.
- ![Select IoT Device steps](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/iot-device-provision.png)
+![D2C message](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/code.png)
- ![Select IoT Device Provisioned](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/select-iot-device.png)
+To see how device telemetry is sent to the Azure IoT Hub, open the `utility.cpp` file in the same folder. View [API Reference](https://microsoft.github.io/azure-iot-developer-kit/docs/apis/arduino-language-reference/) to learn how to use sensors and peripherals on IoT DevKit.
-1. Now you have Azure IoT Hub provisioned and device created in it. Also the device connection string will be saved in VS Code for configuring the IoT DevKit later.
+The `DevKitMQTTClient` used here is a wrapper around the **iothub_client** from the [Microsoft Azure IoT SDKs and libraries for C](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client), which is used to interact with Azure IoT Hub.
- ![Provision done](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/provision-done.png)
-### Configure and compile device code
+#### Configure and compile device code
1. In the bottom-right status bar, check that **MXCHIP AZ3166** is shown as the selected board and that the serial port with **STMicroelectronics** is used. ![Select board and COM](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/select-com.png)
-1. Click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Configure Device Settings...**, then select **Config Device Connection String > Select IoT Hub Device Connection String**.
+1. Click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Configure Device Settings...**, and then select **Config Device Connection String**. Paste the connection string for your IoT device.
1. On DevKit, hold down **button A**, push and release the **reset** button, and then release **button A**. Your DevKit enters configuration mode and saves the connection string.
Instead of provisioning Azure IoT Hub and device from the Azure portal, you can
![Arduino upload](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/arduino-upload.png)
-The DevKit reboots and starts running the code.
+The DevKit reboots, starts running the code, and begins sending telemetry to the hub.
> [!NOTE] > If there are any errors or interruptions, you can always recover by running the command again.
-## Test the project
-
-### View the telemetry sent to Azure IoT Hub
-Click the power plug icon on the status bar to open the Serial Monitor:
+## Use the Serial Monitor
-![Serial monitor](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/serial-monitor.png)
+The Serial Monitor in VS Code displays logging information sent from the code. This logging information is generated using the `LogInfo()` API and is useful for debugging.
-The sample application is running successfully when you see the following results:
+1. Click the power plug icon on the status bar to open the Serial Monitor:
-* The Serial Monitor displays the message sent to the IoT Hub.
-* The LED on the MXChip IoT DevKit is blinking.
+ ![Serial monitor](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/serial-monitor.png)
-![Serial monitor output](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/result-serial-output.png)
+2. The sample application is running successfully when you see the following results:
-> [!NOTE]
-> You might encounter an error during testing in which the LED isn't blinking, the Azure portal doesn't show incoming data from the device, but the device OLED screen shows as **Running...**. To resolve the issue, in the Azure portal, go to the device in the IoT hub and send a message to the device. If you see the following response in the serial monitor in VS Code, it's possible that direct communication from the device is blocked at the router level. Check firewall and router rules that are configured for the connecting devices. Also, ensure that outbound port 1833 is open.
->
-> ERROR: mqtt_client.c (ln 454): Error: failure opening connection to endpoint
-> INFO: >>>Connection status: disconnected
-> ERROR: tlsio_mbedtls.c (ln 604): Underlying IO open failed
-> ERROR: mqtt_client.c (ln 1042): Error: io_open failed
-> ERROR: iothubtransport_mqtt_common.c (ln 2283): failure connecting to address atcsliothub.azure-devices.net.
-> INFO: >>>Re-connect.
-> INFO: IoThub Version: 1.3.6
-
-### View the telemetry received by Azure IoT Hub
-
-You can use [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) to monitor device-to-cloud (D2C) messages in IoT Hub.
+ * The Serial Monitor displays the message sent to the IoT Hub.
+ * The LED on the MXChip IoT DevKit is blinking.
+
+ ![Serial monitor output](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/result-serial-output.png)
+
+ > [!NOTE]
+   > You might encounter an error during testing in which the LED isn't blinking, the Azure portal doesn't show incoming data from the device, but the device OLED screen shows as **Running...**. To resolve the issue, in the Azure portal, go to the device in the IoT hub and send a message to the device. If you see the following response in the serial monitor in VS Code, it's possible that direct communication from the device is blocked at the router level. Check firewall and router rules that are configured for the connecting devices. Also, ensure that outbound port 8883 is open.
+ >
+ > ERROR: mqtt_client.c (ln 454): Error: failure opening connection to endpoint
+ > INFO: >>>Connection status: disconnected
+ > ERROR: tlsio_mbedtls.c (ln 604): Underlying IO open failed
+ > ERROR: mqtt_client.c (ln 1042): Error: io_open failed
+ > ERROR: iothubtransport_mqtt_common.c (ln 2283): failure connecting to address atcsliothub.azure-devices.net.
+ > INFO: >>>Re-connect.
+ > INFO: IoThub Version: 1.3.6
+
+3. On DevKit, hold down button **A**, push and release the **Reset** button, and then release button **A**. Your DevKit stops sending telemetry and enters configuration mode. A full command list for configuration mode is shown in the Serial Monitor output window in VS Code.
+
+ ```output
+ ************************************************
+ ** MXChip - Microsoft IoT Developer Kit **
+ ************************************************
+ Configuration console:
+ - help: Help document.
+ - version: System version.
+ - exit: Exit and reboot.
+ - scan: Scan Wi-Fi AP.
+ - set_wifissid: Set Wi-Fi SSID.
+ - set_wifipwd: Set Wi-Fi password.
+ - set_az_iothub: Set IoT Hub device connection string.
+ - set_dps_uds: Set DPS Unique Device Secret (UDS) for X.509 certificates..
+ - set_az_iotdps: Set DPS Symmetric Key. Format: "DPSEndpoint=global.azure-devices-provisioning.net;IdScope=XXX;DeviceId=XXX;SymmetricKey=XXX".
+ - enable_secure: Enable secure channel between AZ3166 and secure chip.
+ ```
+
+ This mode supports commands like changing the IoT Hub device connection string you want to use. You can send text commands in the Serial Monitor by pressing `F1` and choosing **Arduino: Send Text to Serial Port**.
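+
+   For example, a hypothetical command for setting the device connection string from configuration mode (the values are placeholders, in the same format as the connection string you noted earlier):
+
+   ```output
+   set_az_iothub "HostName={YourIoTHubName}.azure-devices.net;DeviceId=AZ3166Device;SharedAccessKey={YourSharedAccessKey}"
+   ```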
+
+4. On DevKit, press and release the **Reset** button. This reboots the device so it can start sending telemetry again.
+
+## Use VS Code to view hub telemetry
+
+At the end of the [Prepare your hardware](#prepare-your-hardware) section, you used the Azure CLI to monitor device-to-cloud (D2C) messages in your IoT hub. You can also monitor these messages in VS Code using the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) extension.
1. Sign in to the [Azure portal](https://portal.azure.com/) and find the IoT hub you created. ![Azure portal](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/azure-iot-hub-portal.png)
-1. In the **Shared access policies** pane, click the **iothubowner policy**, and write down the Connection string of your IoT hub.
+1. In the **Shared access policies** pane, click the **iothubowner** policy, and copy the connection string of your IoT hub.
![Azure IoT Hub connection string](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/azure-portal-conn-string.png)
-1. In VS Code, click `F1`, type and select **Azure IoT Hub: Set IoT Hub Connection String**. Copy the connection string into it.
+1. In VS Code, click `F1`, type and select **Azure IoT Hub: Set IoT Hub Connection String**. Paste your IoT Hub connection string into the field.
![Set Azure IoT Hub connection string](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/set-iothub-connection-string.png)
-1. Expand the **AZURE IOT HUB DEVICES** pane on the left, right click on the device name you created and select **Start Monitoring Built-in Event Endpoint**.
+1. Expand the **AZURE IOT HUB** pane on the left, right click on the device name you created and select **Start Monitoring Built-in Event Endpoint**.
![Monitor D2C Message](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/monitor-d2c.png)
-1. In **OUTPUT** pane, you can see the incoming D2C messages to the IoT Hub.
-
- ![Screenshot that shows the incoming D2C messages to the IoT Hub.](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/d2c-output.png)
-
-## Review the code
+ If you are prompted for a built-in endpoint, use the **Event Hub-compatible endpoint** shown by clicking **Built-in endpoints** for your IoT Hub in the Azure portal.
-The `GetStarted.ino` is the main Arduino sketch file.
+1. In **OUTPUT** pane, you can see the incoming D2C messages to the IoT Hub.
-![D2C message](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/code.png)
+ Make sure the output is filtered correctly by selecting **Azure IoT Hub**. This filtering can be used to switch between hub monitoring and the Serial Monitor output.
-To see how device telemetry is sent to the Azure IoT Hub, open the `utility.cpp` file in the same folder. View [API Reference](https://microsoft.github.io/azure-iot-developer-kit/docs/apis/arduino-language-reference/) to learn how to use sensors and peripherals on IoT DevKit.
+ ![Screenshot that shows the incoming D2C messages to the IoT Hub.](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/d2c-output.png)
-The `DevKitMQTTClient` used is a wrapper of the **iothub_client** from the [Microsoft Azure IoT SDKs and libraries for C](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client) to interact with Azure IoT Hub.
## Problems and feedback
iot-hub Iot Hub Create Use Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-create-use-iot-toolkit.md
To complete this article, you need the following:
- [Visual Studio Code](https://code.visualstudio.com/) -- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) for Visual Studio Code.
+- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) installed for Visual Studio Code.
-## Create an IoT hub
+
+## Create an IoT hub and device in an IoT Project
+
+The following steps show how you can create an IoT Hub and register a device to the hub within an IoT Project in Visual Studio Code.
+
+Instead of provisioning an Azure IoT Hub and device from the Azure portal, you can do it in VS Code without leaving the development environment. The steps in this section show how to do this.
+
+1. In the newly opened project window, click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Provision Azure Services...**. Follow the step-by-step guide to finish provisioning your Azure IoT Hub and creating the IoT Hub device.
+
+ ![Provision command](media/iot-hub-create-use-iot-toolkit/provision.png)
+
+ > [!NOTE]
+   > If you have not signed in to Azure, follow the pop-up notification to sign in.
+
+1. Select the subscription you want to use.
+
+ ![Select sub](media/iot-hub-create-use-iot-toolkit/select-subscription.png)
+
+1. Then select an existing resource group or create a new [resource group](../azure-resource-manager/management/overview.md#terminology).
+
+ ![Select resource group](media/iot-hub-create-use-iot-toolkit/select-resource-group.png)
+
+1. In the resource group you specified, follow the prompts to select an existing IoT Hub or create a new Azure IoT Hub.
+
+ ![Select IoT Hub steps](media/iot-hub-create-use-iot-toolkit/iot-hub-provision.png)
+
+ ![Select IoT Hub](media/iot-hub-create-use-iot-toolkit/select-iot-hub.png)
+
+ ![Selected IoT Hub](media/iot-hub-create-use-iot-toolkit/iot-hub-selected.png)
+
+1. In the output window, you will see the Azure IoT Hub provisioned.
+
+ ![IoT Hub Provisioned](media/iot-hub-create-use-iot-toolkit/iot-hub-provisioned.png)
+
+1. Select or create a new IoT Hub Device in the Azure IoT Hub you provisioned.
+
+ ![Select IoT Device steps](media/iot-hub-create-use-iot-toolkit/iot-device-provision.png)
+
+ ![Select IoT Device Provisioned](media/iot-hub-create-use-iot-toolkit/select-iot-device.png)
+
+1. Now you have an Azure IoT hub provisioned and a device created in it. The device connection string is also saved in VS Code.
+
+ ![Provision done](media/iot-hub-create-use-iot-toolkit/provision-done.png)
+++
+## Create an IoT hub without an IoT Project
+
+The following steps show how you can create an IoT Hub without an IoT Project in Visual Studio Code.
1. In Visual Studio Code, open the **Explorer** view.
-2. At the bottom of the Explorer, expand the **Azure IoT Hub Devices** section.
+2. At the bottom of the Explorer, expand the **Azure IoT Hub** section.
![Expand Azure IoT Hub Devices](./media/iot-hub-create-use-iot-toolkit/azure-iot-hub-devices.png)
-3. Click on the **...** in the **Azure IoT Hub Devices** section header. If you don't see the ellipsis, hover over the header.
+3. Click on the **...** in the **Azure IoT Hub** section header. If you don't see the ellipsis, hover over the header.
4. Choose **Create IoT Hub**.
-5. A pop-up will show in the bottom right corner to let you sign in to Azure for the first time.
+5. A pop-up will show in the bottom-right corner to let you sign in to Azure for the first time.
6. Select Azure subscription.
iot-hub Iot Hub Devguide Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-query-language.md
Previously updated : 10/29/2018 Last updated : 05/07/2021
This grouping query would return a result similar to the following example:
In this example, three devices reported successful configuration, two are still applying the configuration, and one reported an error.
-Projection queries allow developers to return only the properties they care about. For example, to retrieve the last activity time of all disconnected devices use the following query:
+Projection queries allow developers to return only the properties they care about. For example, to retrieve the last activity time along with the device ID of all enabled devices that are disconnected, use the following query:
```sql
-SELECT LastActivityTime FROM devices WHERE status = 'enabled'
+SELECT DeviceId, LastActivityTime FROM devices WHERE status = 'enabled' AND connectionState = 'Disconnected'
```
+Here is an example result of this query in **Query Explorer** for an IoT hub:
+
+```json
+[
+ {
+ "deviceId": "AZ3166Device",
+ "lastActivityTime": "2021-05-07T00:50:38.0543092Z"
+ }
+]
+```
++ ### Module twin queries Querying on module twins is similar to querying on device twins, but using a different collection/namespace; instead of from **devices**, you query from **devices.modules**:
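A minimal sketch of such a module twin query, using a hypothetical reported property, might look like this:

```sql
SELECT * FROM devices.modules WHERE properties.reported.batteryLevel < 20
```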
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Title: Content Performance http://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-quotas-throttling
+ Title: Understand Azure IoT Hub quotas and throttling
description: Developer guide - description of the quotas that apply to IoT Hub and the expected throttling behavior.
For an in-depth discussion of IoT Hub throttling behavior, see the blog post [Io
Other reference topics in this IoT Hub developer guide include: * [IoT Hub endpoints](iot-hub-devguide-endpoints.md)
-* [Monitor IoT Hub](monitor-iot-hub.md)
+* [Monitor IoT Hub](monitor-iot-hub.md)
marketplace Azure Vm Create Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-plans.md
Previously updated : 04/01/2021 Last updated : 05/06/2021
-# How to create plans for a virtual machine offer
+# Create plans for a virtual machine offer
On the **Plan overview** page (select from the left-nav menu in Partner Center) you can provide a variety of plan options within the same offer. An offer requires at least one plan (formerly called a SKU), which can vary by monetization audience, Azure region, features, or VM images.
-You can create up to 100 plans for each offer: up to 45 of these can be private. Learn more about private plans in [Private offers in the Microsoft commercial marketplace](private-offers.md).
+You can create up to 100 plans for each offer; up to 45 of these can be private. Learn more about private plans in [Private offers in the Microsoft commercial marketplace](private-offers.md).
After you create your plans, select the **Plan overview** tab to display:
After you create your plans, select the **Plan overview** tab to display:
- Current publishing status - Available actions
-The actions available on the **Plan overview** pane vary depending on the current status of your plan.
+The actions available on this pane vary depending on the current status of your plan.
- If the plan status is a draft, select **Delete draft**. - If the plan status is published live, select **Stop sell plan** or **Sync private audience**.
Select **Create**. This opens the **Plan setup** page.
Set the high-level configuration for the type of plan, specify whether it reuses a technical configuration from another plan, and identify the Azure regions where the plan should be available. Your selections here determine which fields are displayed on other panes for the same plan.
-### Reuse technical configuration
-
-If you have more than one plan of the same type, and the packages are identical between them, you can select **This plan reuses technical configuration from another plan**. This option lets you select one of the other plans of the same type for this offer and lets you reuse its technical configuration.
-
-> [!NOTE]
-> When you reuse the technical configuration from another plan, the entire **Technical configuration** tab disappears from this plan. The technical configuration details from the other plan, including any updates you make in the future, will be used for this plan as well. This setting can't be changed after the plan is published.
- ### Azure regions Your plan must be made available in at least one Azure region.
Provide the images and other technical properties associated with this plan.
> [!NOTE] > This tab doesn't display if you configured this plan to reuse packages from another on the **Plan setup** tab.
+### Reuse technical configuration
+
+If you have more than one plan of the same type, and the packages are identical between them, select **This plan reuses the technical configuration from another plan**. This option lets you select one of the other plans of the same type for this offer and reuse its technical configuration.
+ ### Operating system Select the **Windows** or **Linux** operating system family.
purview Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/frequently-asked-questions.md
Previously updated : 10/20/2020 Last updated : 05/08/2021 # Frequently asked questions (FAQ) about Azure Purview
Yes, Azure Purview supports column level lineage.
### Does Azure Purview support Soft-Delete?
-Yes, Azure Purview supports Soft Delete for Azure subscription status management perspective. Purview can read subscription states (disabled/warned etc.) and put the account in soft-delete state until the account is restored/deleted. All the data plane API calls will be blocked when the account is in soft delete state and only GET/DELETE control plane API calls will be allowed. You can find additional information in Azure subscription states page [Azure Subscription Status](../cost-management-billing/manage/subscription-states.md)
+Yes, Azure Purview supports soft delete from the Azure subscription status management perspective. Purview can read subscription states (disabled, warned, and so on) and put the account in a soft-delete state until the account is restored or deleted. All data plane API calls are blocked when the account is in the soft-delete state, and only GET/DELETE control plane API calls are allowed. You can find additional information on the [Azure Subscription Status](../cost-management-billing/manage/subscription-states.md) page.
+
+### Does Azure Purview currently support Data Loss Prevention capabilities?
+
+No, Azure Purview does not provide Data Loss Prevention capabilities at this point.
+
+Read about [Data Loss Prevention in Microsoft Information Protection](https://docs.microsoft.com/microsoft-365/compliance/information-protection?view=o365-worldwide#prevent-data-loss) if you are interested in Data Loss Prevention features inside Microsoft 365.
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-credentials.md
Previously updated : 02/11/2021 Last updated : 05/08/2021 # Credentials for source authentication in Azure Purview
Before you can create a Credential, first associate one or more of your existing
## Grant the Purview managed identity access to your Azure Key Vault
+Currently Azure Key Vault supports two permission models:
+
+- Option 1 - Access Policies
+- Option 2 - Role-based Access Control
+
+Before assigning access to the Purview managed identity, first identify your Azure Key Vault permission model from the **Access policies** page of your Key Vault resource. Then follow the steps below based on the relevant permission model.
++
+### Option 1 - Assign access using Key Vault Access Policy
+
+Follow these steps only if the permission model in your Azure Key Vault resource is set to **Vault Access Policy**:
+ 1. Navigate to your Azure Key Vault. 2. Select the **Access policies** page.
Before you can create a Credential, first associate one or more of your existing
:::image type="content" source="media/manage-credentials/save-access-policy.png" alt-text="Save access policy":::
+### Option 2 - Assign access using Key Vault Azure role-based access control
+
+Follow these steps only if the permission model in your Azure Key Vault resource is set to **Azure role-based access control**:
+
+1. Navigate to your Azure Key Vault.
+
+2. Select **Access Control (IAM)** from the left navigation menu.
+
+3. Select **+ Add**.
+
+4. Set the **Role** to **Key Vault Secrets User** and enter your Azure Purview account name in the **Select** input box. Then, select **Save** to give this role assignment to your Purview account.
+
+ :::image type="content" source="media/manage-credentials/akv-add-rbac.png" alt-text="Azure Key Vault RBAC":::
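+
+If you prefer scripting, a hedged Azure CLI sketch of the same role assignment might look like the following (the object ID, subscription, resource group, and vault name are placeholders):
+
+```azurecli
+# Assign the Key Vault Secrets User role to the Purview managed identity (placeholder values)
+az role assignment create \
+  --role "Key Vault Secrets User" \
+  --assignee-object-id <purview-managed-identity-object-id> \
+  --assignee-principal-type ServicePrincipal \
+  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<key-vault-name>
+```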
++ ## Create a new credential These credential types are supported in Purview:
purview Manage Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-data-sources.md
Use the following steps to register a new source.
:::image type="content" source="media/manage-data-sources/select-source-type.png" alt-text="Select a data source type in the Register sources page":::
-1. Fill out the form on the **Register sources** page. Select a name for your source and enter the relevant information. If you chose **From Azure subscription** as your account selection method, the sources in your subscription appear in a dropdown list. Alternatively, you can enter your source information manually.
+2. Fill out the form on the **Register sources** page. Select a name for your source and enter the relevant information. If you chose **From Azure subscription** as your account selection method, the sources in your subscription appear in a dropdown list.
:::image type="content" source="media/manage-data-sources/register-sources-form.png" alt-text="Form for data source information":::
-1. Select **Finish**.
+3. Select **Register**.
## View sources
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-adls-gen1.md
Previously updated : 11/30/2020 Last updated : 05/08/2021 # Customer intent: As a data steward or catalog administrator, I need to understand how to scan data from Azure Data Lake Storage Gen1 into the catalog. # Register and scan Azure Data Lake Storage Gen1
To register a new ADLS Gen1 account in your data catalog, do the following:
On the Register sources (Azure Data Lake Storage Gen1) screen, do the following: 1. Enter a **Name** that the data source will be listed with in the Catalog.
-2. Choose your subscription to filter down storage accounts
-3. Select a storage account
-4. Select a collection or create a new one (Optional)
-5. Finish to register the data source.
+2. Choose your subscription to filter down storage accounts.
+3. Select a storage account.
+4. Select a collection or create a new one (Optional).
+5. Select **Register** to register the data source.
+ [!INCLUDE [create and manage scans](includes/manage-scans.md)]
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-adls-gen2.md
Previously updated : 11/17/2020 Last updated : 05/08/2021 # Customer intent: As a data steward or catalog administrator, I need to understand how to scan data from Azure Data Lake Storage Gen2 into the catalog. # Register and scan Azure Data Lake Storage Gen2
It is required to get the Service Principal's application ID and secret:
> [!NOTE] > If you have firewall enabled for the storage account, you must use **Managed Identity** authentication method when setting up a scan.
-1. Go into your ASLS Gen2 storage account in [Azure portal](https://portal.azure.com)
+1. Go into your ADLS Gen2 storage account in [Azure portal](https://portal.azure.com)
1. Navigate to **Settings > Networking** and 1. Choose **Selected Networks** under **Allow access from** 1. In the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account** and hit **Save**
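If you prefer scripting, a hedged Azure CLI sketch of the trusted-services exception above might look like the following (the account and resource group names are placeholders):

```azurecli
# Allow trusted Microsoft services to access the storage account (placeholder values)
az storage account update \
  --name <storage-account-name> \
  --resource-group <resource-group> \
  --bypass AzureServices
```
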
To register a new ADLS Gen2 account in your data catalog, do the following:
On the **Register sources (Azure Data Lake Storage Gen2)** screen, do the following: 1. Enter a **Name** that the data source will be listed with in the Catalog.
-2. Choose your subscription to filter down storage accounts
-3. Select a storage account
-4. Select a collection or create a new one (Optional)
-5. **Finish** to register the data source.
+2. Choose your subscription to filter down storage accounts.
+3. Select a storage account.
+4. Select a collection or create a new one (Optional).
+5. Select **Register** to register the data source.
:::image type="content" source="media/register-scan-adls-gen2/register-sources.png" alt-text="register sources options" border="true":::
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-blob-storage-source.md
Previously updated : 11/25/2020 Last updated : 05/08/2021 # Register and scan Azure Blob Storage
To register a new blob account in your data catalog, do the following:
On the **Register sources (Azure Blob Storage)** screen, do the following: 1. Enter a **Name** that the data source will be listed with in the Catalog.
-1. Choose your subscription to filter down storage accounts
-1. Select a storage account
-1. Select a collection or create a new one (Optional)
-1. **Finish** to register the data source.
+1. Choose your subscription to filter down storage accounts.
+1. Select a storage account.
+1. Select a collection or create a new one (Optional).
+1. Select **Register** to register the data source.
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-sources.png" alt-text="register sources options" border="true":::
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-cosmos-database.md
Previously updated : 10/9/2020 Last updated : 05/08/2021 # Register and scan Azure Cosmos Database (SQL API)
To register a new Azure Cosmos Database (SQL API) account in your data catalog,
On the **Register sources (Azure Cosmos DB (SQL API))** screen, do the following: 1. Enter a **Name** that the data source will be listed with in the Catalog.
-1. Choose how you want to point to your desired storage account:
- 1. Select **From Azure subscription**, select the appropriate subscription from the **Azure subscription** drop down box and the appropriate cosmosDB account from the **Cosmos DB account name** drop down box.
- 1. Or, you can select **Enter manually** and enter a service endpoint (URL).
-1. **Finish** to register the data source.
+2. Choose your Azure subscription to filter down Azure Cosmos DBs.
+3. Select an appropriate Cosmos DB Account name.
+4. Select a collection or create a new one (Optional).
+5. Select **Register** to register the data source.
+ :::image type="content" source="media/register-scan-azure-cosmos-database/register-sources.png" alt-text="register sources options" border="true":::
purview Register Scan Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-data-explorer.md
Previously updated : 10/9/2020 Last updated : 05/08/2021 # Register and scan Azure Data Explorer
To register a new Azure Data Explorer (Kusto) account in your data catalog, do t
On the **Register sources (Azure Data Explorer (Kusto))** screen, do the following: 1. Enter a **Name** that the data source will be listed with in the Catalog.
-1. Choose how you want to point to your desired storage account:
- 1. Select **From Azure subscription**, select the appropriate subscription from the **Azure subscription** drop down box and the appropriate cluster from the **Cluster** drop down box.
- 1. Or, you can select **Enter manually** and enter a service endpoint (URL).
-1. **Finish** to register the data source.
+2. Choose your Azure subscription to filter down Azure Data Explorer clusters.
+3. Select an appropriate cluster.
+4. Select a collection or create a new one (Optional).
+5. Select **Register** to register the data source.
:::image type="content" source="media/register-scan-azure-data-explorer/register-sources.png" alt-text="register sources options" border="true":::
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-files-storage-source.md
Previously updated : 10/01/2020 Last updated : 05/08/2021 # Register and scan Azure Files
To register a new Azure Files account in your data catalog, do the following:
On the **Register sources (Azure Files)** screen, do the following: 1. Enter a **Name** that the data source will be listed with in the Catalog.
-1. Choose how you want to point to your desired storage account:
- 1. Select **From Azure subscription**, select the appropriate subscription from the **Azure subscription** drop down box and the appropriate storage account from the **Storage account name** drop down box.
- 1. Or, you can select **Enter manually** and enter a service endpoint (URL).
-1. **Finish** to register the data source.
+2. Choose your Azure subscription to filter down Azure Storage Accounts.
+3. Select an Azure Storage Account.
+4. Select a collection or create a new one (Optional).
+5. Select **Register** to register the data source.
:::image type="content" source="media/register-scan-azure-files/register-sources.png" alt-text="register sources options" border="true":::
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-multiple-sources.md
Previously updated : 2/26/2021 Last updated : 05/08/2021 # Register and scan multiple sources in Azure Purview
To register new multiple sources in your data catalog, do the following:
:::image type="content" source="media/register-scan-azure-multiple-sources/azure-multiple-source-setup.png" alt-text="Screenshot that shows the boxes for selecting a subscription and resource group."::: 1. In the **Select a collection** box, select a collection or create a new one (optional).
- 1. Select **Finish** to register the data source.
+ 1. Select **Register** to register the data sources.
## Create and run a scan
purview Register Scan Azure Sql Database Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database-managed-instance.md
Previously updated : 12/01/2020 Last updated : 05/08/2021 # Customer intent: As a data steward or catalog administrator, I need to understand how to scan data into the catalog.
It is required to get the service principal's application ID and secret:
## Register an Azure SQL Database Managed Instance data source
-1. Navigate to your Purview account
+1. Navigate to your Purview account.
-1. Select **Sources** on the left navigation
+1. Select **Sources** on the left navigation.
-1. Select **Register**
+1. Select **Register**.
-1. Select **Azure SQL Database Managed Instance** and then **Continue**
+1. Select **Azure SQL Database Managed Instance** and then **Continue**.
:::image type="content" source="media/register-scan-azure-sql-database-managed-instance/set-up-the-sql-data-source.png" alt-text="Set up the SQL data source"::: 1. Select **From Azure subscription**, select the appropriate subscription from the **Azure subscription** drop-down box and the appropriate server from the **Server name** drop-down box.
-1. Provide the **public endpoint fully qualified domain name** and **port number**. Then select **Finish** to register the data source.
+1. Provide the **public endpoint fully qualified domain name** and **port number**. Then select **Register** to register the data source.
:::image type="content" source="media/register-scan-azure-sql-database-managed-instance/add-azure-sql-database-managed-instance.png" alt-text="Add Azure SQL Database Managed Instance":::
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database.md
Previously updated : 10/02/2020 Last updated : 05/08/2021 # Customer intent: As a data steward or catalog administrator, I need to understand how to scan data into the catalog.
Your database server must allow Azure connections to be enabled. This will allow
To register a new Azure SQL Database in your data catalog, do the following:
-1. Navigate to your Purview account
+1. Navigate to your Purview account.
-1. Select **Sources** on the left navigation
+1. Select **Sources** on the left navigation.
-1. Select **Register**
+1. Select **Register**.
1. On **Register sources**, select **Azure SQL Database**. Select **Continue**.
On the **Register sources (Azure SQL Database)** screen, do the following:
1. Enter a **Name** that the data source will be listed with in the Catalog. 1. Select **From Azure subscription**, select the appropriate subscription from the **Azure subscription** drop-down box and the appropriate server from the **Server name** drop-down box.
-1. **Finish** to register the data source.
+1. Select **Register** to register the data source.
:::image type="content" source="media/register-scan-azure-sql-database/add-azure-sql-database.png" alt-text="register sources options" border="true":::
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-synapse-analytics.md
Previously updated : 10/22/2020 Last updated : 05/08/2021 # Register and scan Dedicated SQL pools (formerly SQL DW)
When authentication method selected is **SQL Authentication**, you need to get y
To register a new Azure Synapse Analytics server in your Data Catalog, do the following:
-1. Navigate to your Purview account
-1. Select **Sources** on the left navigation
-1. Select **Register**
-1. On **Register sources**, select **SQL dedicated pool (formerly SQL DW)**
-1. Select **Continue**
+1. Navigate to your Purview account.
+1. Select **Sources** on the left navigation.
+1. Select **Register**.
+1. On **Register sources**, select **SQL dedicated pool (formerly SQL DW)**.
+1. Select **Continue**.
On the **Register sources (Azure Synapse Analytics)** screen, do the following: 1. Enter a **Name** that the data source will be listed with in the Catalog.
-1. Choose how you want to point to your desired logical SQL Server:
- 1. Select **From Azure subscription**, select the appropriate subscription from the **Azure subscription** drop down box and the appropriate server from the **Server name** drop down box.
- 1. Or, you can select **Enter manually** and enter a **Server name**.
-1. **Finish** to register the data source.
+2. Choose your Azure subscription to filter down Azure Synapse workspaces.
+3. Select an Azure Synapse workspace.
+4. Select a collection or create a new one (optional).
+5. Select **Register** to register the data source.
:::image type="content" source="media/register-scan-azure-synapse-analytics/register-sources.png" alt-text="register sources options" border="true":::
role-based-access-control Conditions Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/conditions-role-assignments-cli.md
Previously updated : 05/06/2021 Last updated : 05/07/2021
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add, edit, list, or delete conditions for your role assignments using Azure CLI.
+ ## Prerequisites For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](conditions-prerequisites.md).
role-based-access-control Conditions Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/conditions-role-assignments-portal.md
Title: Add or edit Azure role assignment conditions using the Azure portal (preview) - Azure RBAC
-description: Learn how to add, edit, list, or delete attribute-based access control (ABAC) conditions in Azure role assignments using the Azure portal and Azure role-based access control (Azure RBAC).
+description: Learn how to add, edit, view, or delete attribute-based access control (ABAC) conditions in Azure role assignments using the Azure portal and Azure role-based access control (Azure RBAC).
Previously updated : 05/06/2021 Last updated : 05/07/2021
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add, edit, view, or delete conditions for your role assignments using the Azure portal.
+ ## Prerequisites For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](conditions-prerequisites.md).
For information about the prerequisites to add or edit role assignment condition
To determine the conditions you need, review the examples in [Example Azure role assignment conditions](../storage/common/storage-auth-abac-examples.md).
-Currently, conditions can be added to built-in or custom role assignments that have [storage blob data actions](conditions-format.md#actions). These include the following built-in roles:
+Currently, conditions can be added to built-in or custom role assignments that have [storage blob data actions](../storage/common/storage-auth-abac-attributes.md). These include the following built-in roles:
- [Storage Blob Data Contributor](built-in-roles.md#storage-blob-data-contributor) - [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner)
There are two ways that you can add a condition. You can add a condition when yo
If you don't see the Condition tab, be sure you selected a role that supports conditions.
+ ![Screenshot of Add role assignment page with Add condition tab for preview experience.](./media/shared/condition.png)
+ The Add role assignment condition page appears. ### Existing role assignment
Once you have the Add role assignment condition page open, you can review the ba
1. In the **Add action** section, click **Add action**.
- The Select an action pane appears. This pane is a filtered list of data actions based on the role assignment that will be the target of your condition.
+ The Select an action pane appears. This pane is a filtered list of data actions based on the role assignment that will be the target of your condition. For more information, see [Azure role assignment condition format and syntax](conditions-format.md#actions).
![Select an action pane for condition with an action selected.](./media/conditions-role-assignments-portal/condition-actions-select.png)
role-based-access-control Conditions Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/conditions-role-assignments-powershell.md
Previously updated : 05/06/2021 Last updated : 05/07/2021
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add, edit, list, or delete conditions for your role assignments using Azure PowerShell.
+ ## Prerequisites For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](conditions-prerequisites.md).
role-based-access-control Conditions Role Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/conditions-role-assignments-rest.md
Previously updated : 05/06/2021 Last updated : 05/07/2021
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add, edit, list, or delete conditions for your role assignments using the REST API.
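Purely as an illustrative sketch (not taken from this article), the following Python snippet shows the general shape of a role assignment `PUT` request that carries a condition. The API version, identifiers, and condition string are placeholder assumptions, and a valid Azure Resource Manager bearer token is required.

```python
import json
import uuid
import requests

# Hypothetical values -- replace with your own scope, role, principal, and token.
scope = "subscriptions/<subscription-id>/resourceGroups/<resource-group>"
role_definition_id = "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleDefinitions/<role-definition-guid>"
principal_id = "<principal-object-id>"
condition = "<your-condition-expression>"   # for example, a storage blob index tag check
token = "<bearer-token>"

# Role assignment names are GUIDs; the api-version shown here is an assumption.
assignment_name = str(uuid.uuid4())
url = (
    f"https://management.azure.com/{scope}/providers/Microsoft.Authorization/"
    f"roleAssignments/{assignment_name}?api-version=2020-04-01-preview"
)
body = {
    "properties": {
        "roleDefinitionId": role_definition_id,
        "principalId": principal_id,
        "condition": condition,
        "conditionVersion": "2.0",
    }
}

response = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
print(response.status_code, json.dumps(response.json(), indent=2))
```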
+ ## Prerequisites For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](conditions-prerequisites.md).
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-portal.md
Previously updated : 05/03/2021 Last updated : 05/07/2021
[!INCLUDE [Azure RBAC definition grant access](../../includes/role-based-access-control/definition-grant.md)] This article describes how to assign roles using the Azure portal.
-If you need to assign administrator roles in Azure Active Directory, see [View and assign administrator roles in Azure Active Directory](../active-directory/roles/manage-roles-portal.md).
+If you need to assign administrator roles in Azure Active Directory, see [Assign Azure AD roles to users](../active-directory/roles/manage-roles-portal.md).
## Prerequisites
Azure RBAC has a new experience for assigning Azure roles in the Azure portal th
You can type in the **Select** box to search the directory for display names, email addresses, and object identifiers.
- ![Screenshot of Add members using Select principal pane for preview experience.](./media/role-assignments-portal/select-principal.png)
+ ![Screenshot of Add members using Select members pane for preview experience.](./media/role-assignments-portal/select-principal.png)
1. Click **Save** to add the users, groups, or service principals to the Members list.
Azure RBAC has a new experience for assigning Azure roles in the Azure portal th
1. Click **Add members**.
-1. In the **Select managed identity** pane, select whether the type is [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) or [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md).
+1. In the **Select managed identities** pane, select whether the type is [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) or [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md).
1. Find and select the managed identities.
- ![Screenshot of Add user-assigned managed identities using Select principal pane for preview experience.](./media/role-assignments-portal/select-managed-identity-user.png)
- If you selected a system-assigned managed identity, you need to select the Azure service instance where the managed identity is located. ![Screenshot of Add system-assigned managed identities using Select principal pane for preview experience.](./media/role-assignments-portal/select-managed-identity-system.png)
+ ![Screenshot of Add user-assigned managed identities using Select principal pane for preview experience.](./media/role-assignments-portal/select-managed-identity-user.png)
+ 1. Click **Save** to add the managed identities to the Members list. 1. In the **Description** box enter an optional description for this role assignment.
Azure RBAC has a new experience for assigning Azure roles in the Azure portal th
## Step 5: (Optional) Add condition (preview)
-If you selected a role that supports conditions, a **Condition** tab will appear and you have the option to add a condition to your role assignment.
+If you selected a role that supports conditions, a **Condition** tab will appear and you have the option to add a condition to your role assignment. A [condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control.
Currently, conditions can be added to built-in or custom role assignments that have [storage blob data actions](conditions-format.md#actions). These include the following built-in roles:
Currently, conditions can be added to built-in or custom role assignments that h
- [Storage Blob Data Owner](built-in-roles.md#storage-blob-data-owner) - [Storage Blob Data Reader](built-in-roles.md#storage-blob-data-reader)
-1. Click **Add condition** if you want to further refine the role assignments based on principal and resource attributes. For more information, see [Add or edit Azure role assignment conditions](conditions-role-assignments-portal.md).
+1. Click **Add condition** if you want to further refine the role assignments based on storage blob attributes. For more information, see [Add or edit Azure role assignment conditions](conditions-role-assignments-portal.md).
- ![Screenshot of Add role assignment page with Add condition tab for preview experience.](./media/role-assignments-portal/condition.png)
+ ![Screenshot of Add role assignment page with Add condition tab for preview experience.](./media/shared/condition.png)
1. Click **Next**.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Azure Sentinel feature availability in
|- [Bring Your Own ML (BYO-ML)](/azure/sentinel/bring-your-own-ml) | Public Preview | Public Preview | | - [Cross-tenant/Cross-workspace incidents view](/azure/sentinel/multiple-workspace-view) |Public Preview | Public Preview | | - [Entity insights](/azure/sentinel/enable-entity-behavior-analytics) | Public Preview | Not Available |
-| - [Fusion](/azure/sentinel/fusion)<br>Advanced multistage attack detections <sup>[1](#footnote1)</sup> | GA | Not Available |
+| - [Fusion](/azure/sentinel/fusion)<br>Advanced multistage attack detections <sup>[1](#footnote1)</sup> | GA | GA |
| - [Hunting](/azure/sentinel/hunting) | GA | GA | |- [Notebooks](/azure/sentinel/notebooks) | GA | GA | |- [SOC incident audit metrics](/azure/sentinel/manage-soc-with-incident-metrics) | GA | GA |
Office 365 GCC is paired with Azure Active Directory (Azure AD) in Azure. Office
## Next steps -- Understand the [shared responsibility](https://docs.microsoft.com/azure/security/fundamentals/shared-responsibility) model and which security tasks are handled by the cloud provider and which tasks are handled by you.-- Understand the [Azure Government Cloud](https://docs.microsoft.com/azure/azure-government/documentation-government-welcome) capabilities and the trustworthy design and security used to support compliance applicable to federal, state, and local government organizations and their partners.-- Understand the [Office 365 Government plan](https://docs.microsoft.com/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/office-365-us-government#about-office-365-government-environments).
+- Understand the [shared responsibility](shared-responsibility.md) model and which security tasks are handled by the cloud provider and which tasks are handled by you.
+- Understand the [Azure Government Cloud](/azure/azure-government/documentation-government-welcome) capabilities and the trustworthy design and security used to support compliance applicable to federal, state, and local government organizations and their partners.
+- Understand the [Office 365 Government plan](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/office-365-us-government#about-office-365-government-environments).
- Understand [compliance in Azure](/azure/compliance/) for legal and regulatory standards.
sentinel Resource Context Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/resource-context-rbac.md
When users have access to Azure Sentinel data via the resources they can access
Enable resource-context RBAC in Azure Monitor. For more information, see [Manage access to log data and workspaces in Azure Monitor](../azure-monitor/logs/manage-access.md). > [!NOTE]
-> If your data is not an Azure resource, such as Syslog, CEF, or AAD data, or data collected by a custom collector, you'll need to manually configure the resource ID that's used to identify the data and enable access.
->
-> For more information, see [Explicitly configure resource-context RBAC](#explicitly-configure-resource-context-rbac).
+> If your data is not an Azure resource, such as Syslog, CEF, or AAD data, or data collected by a custom collector, you'll need to manually configure the resource ID that's used to identify the data and enable access. For more information, see [Explicitly configure resource-context RBAC](#explicitly-configure-resource-context-rbac).
>
+> Additionally, [functions](/azure/azure-monitor/logs/functions) and saved searches are not supported in resource-centric contexts. Therefore, Azure Sentinel features such as parsing and [normalization](normalization.md) are not supported for resource-context RBAC in Azure Sentinel.
+>
+ ## Scenarios for resource-context RBAC The following table highlights the scenarios where resource-context RBAC is most helpful. Note the differences in access requirements between SOC teams and non-SOC teams.
If you are using resource-context RBAC and want the events collected by API to b
## Next steps
-For more information, see [Permissions in Azure Sentinel](roles.md).
+For more information, see [Permissions in Azure Sentinel](roles.md).
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
Title: Synapse Studio notebooks
-description: In this article, you learn how to create and develop Azure Synapse Studio notebooks to do data preparation and visualization.
+ Title: How to use Synapse notebooks
+description: In this article, you learn how to create and develop Synapse notebooks to do data preparation and visualization.
Previously updated : 10/19/2020 Last updated : 05/08/2021
-# Create, develop, and maintain Synapse Studio notebooks in Azure Synapse Analytics
+# Create, develop, and maintain Synapse notebooks in Azure Synapse Analytics
-A Synapse Studio notebook is a web interface for you to create files that contain live code, visualizations, and narrative text. Notebooks are a good place to validate ideas and use quick experiments to get insights from your data. Notebooks are also widely used in data preparation, data visualization, machine learning, and other Big Data scenarios.
+A Synapse notebook is a web interface for you to create files that contain live code, visualizations, and narrative text. Notebooks are a good place to validate ideas and use quick experiments to get insights from your data. Notebooks are also widely used in data preparation, data visualization, machine learning, and other Big Data scenarios.
-With an Azure Synapse Studio notebook, you can:
+With a Synapse notebook, you can:
* Get started with zero setup effort. * Keep data secure with built-in enterprise security features. * Analyze data across raw formats (CSV, txt, JSON, etc.), processed file formats (parquet, Delta Lake, ORC, etc.), and SQL tabular data files against Spark and SQL. * Be productive with enhanced authoring capabilities and built-in data visualization.
-This article describes how to use notebooks in Azure Synapse Studio.
+This article describes how to use notebooks in Synapse Studio.
## Preview of the new notebook experience Synapse team brought the new notebooks component into Synapse Studio to provide consistent notebook experience for Microsoft customers and maximize discoverability, productivity, sharing, and collaboration. The new notebook experience is ready for preview. Check the **Preview Features** button in notebook toolbar to turn it on. The table below captures feature comparison between existing notebook (so called "classical notebook") with the new preview one.
Synapse team brought the new notebooks component into Synapse Studio to provide
## Create a notebook
-There are two ways to create a notebook. You can create a new notebook or import an existing notebook to an Azure Synapse workspace from the **Object Explorer**. Azure Synapse Studio notebooks can recognize standard Jupyter Notebook IPYNB files.
+There are two ways to create a notebook. You can create a new notebook or import an existing notebook to a Synapse workspace from the **Object Explorer**. Synapse notebooks recognize standard Jupyter Notebook IPYNB files.
![create import notebook](./media/apache-spark-development-using-notebooks/synapse-create-import-notebook-2.png) ## Develop notebooks
-Notebooks consist of cells, which are individual blocks of code or text that can be ran independently or as a group.
+Notebooks consist of cells, which are individual blocks of code or text that can be run independently or as a group.
### Add a cell
There are multiple ways to add a new cell to your notebook.
### Set a primary language
-Azure Synapse Studio notebooks support four Apache Spark languages:
+Synapse notebooks support four Apache Spark languages:
* pySpark (Python) * Spark (Scala)
The following image is an example of how you can write a PySpark query using the
### Use temp tables to reference data across languages
-You cannot reference data or variables directly across different languages in a Synapse Studio notebook. In Spark, a temporary table can be referenced across languages. Here is an example of how to read a `Scala` DataFrame in `PySpark` and `SparkSQL` using a Spark temp table as a workaround.
+You cannot reference data or variables directly across different languages in a Synapse notebook. In Spark, a temporary table can be referenced across languages. Here is an example of how to read a `Scala` DataFrame in `PySpark` and `SparkSQL` using a Spark temp table as a workaround.
1. In Cell 1, read a DataFrame from a SQL pool connector using Scala and create a temporary table.
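   As a minimal PySpark-side sketch of this handoff (the view name is illustrative), assume the Scala cell registered a temp view, for example with `scalaDf.createOrReplaceTempView("scala_temp_table")`; a subsequent PySpark cell can then read it:

   ```python
   # Read the temporary view that the Scala cell registered (name is illustrative).
   df = spark.sql("SELECT * FROM scala_temp_table")   # spark session is provided by the notebook
   df.show(10)
   ```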
You cannot reference data or variables directly across different languages in a
### IDE-style IntelliSense
-Azure Synapse Studio notebooks are integrated with the Monaco editor to bring IDE-style IntelliSense to the cell editor. Syntax highlight, error marker, and automatic code completions help you to write code and identify issues quicker.
+Synapse notebooks are integrated with the Monaco editor to bring IDE-style IntelliSense to the cell editor. Syntax highlighting, error markers, and automatic code completion help you write code and identify issues more quickly.
The IntelliSense features are at different levels of maturity for different languages. Use the following table to see what's supported.
The IntelliSense features are at different levels of maturity for different lang
### Code Snippets
-Azure Synapse Studio notebooks provide code snippets that make it easier to enter common used code patterns, such as configuring your Spark session, reading data as a Spark DataFrame, or drawing charts with matplotlib etc.
+Synapse notebooks provide code snippets that make it easier to enter commonly used code patterns, such as configuring your Spark session, reading data as a Spark DataFrame, or drawing charts with matplotlib.
Snippets appear in [IntelliSense](#ide-style-intellisense) mixed with other suggestions. The code snippet contents align with the code cell language. You can see available snippets by typing **Snippet** or any keyword that appears in the snippet title in the code cell editor. For example, by typing **read** you can see the list of snippets to read data from various data sources.
A step-by-step cell execution status is displayed beneath the cell to help you s
### Spark progress indicator
-Azure Synapse Studio notebook is purely Spark based. Code cells are executed on the serverless Apache Spark pool remotely. A Spark job progress indicator is provided with a real-time progress bar appears to help you understand the job execution status.
+A Synapse notebook is purely Spark based. Code cells are executed remotely on the serverless Apache Spark pool. A Spark job progress indicator with a real-time progress bar appears to help you understand the job execution status.
The number of tasks per job or stage helps you identify the level of parallelism of your Spark job. You can also drill deeper into the Spark UI of a specific job (or stage) by selecting the link on the job (or stage) name.
In the notebook properties, you can configure whether to include the cell output
![notebook-properties](./media/apache-spark-development-using-notebooks/synapse-notebook-properties.png) ## Magic commands
-You can use familiar Jupyter magic commands in Azure Synapse Studio notebooks. Review the following list as the current available magic commands. Tell us [your use cases on GitHub](https://github.com/MicrosoftDocs/azure-docs/issues/new) so that we can continue to build out more magic commands to meet your needs.
+You can use familiar Jupyter magic commands in Synapse notebooks. Review the following list of currently available magic commands. Tell us [your use cases on GitHub](https://github.com/MicrosoftDocs/azure-docs/issues/new) so that we can continue to build out more magic commands to meet your needs.
> [!NOTE] > Only the following magic commands are supported in a Synapse pipeline: %%pyspark, %%spark, %%csharp, %%sql.
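For example, here is a minimal sketch (the cell body is illustrative) of using the `%%pyspark` cell magic to run a single cell as PySpark in a notebook whose primary language is something else:

```python
%%pyspark
# Illustrative only: force this cell to run as PySpark regardless of the
# notebook's primary language.
df = spark.range(10)   # small demo DataFrame from the built-in spark session
df.show()
```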
Azure Data Factory looks for the parameters cell and treats this cell as default
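As a minimal, illustrative sketch (variable names and values are hypothetical), a parameters cell is an ordinary code cell that holds default values which a pipeline run can override:

```python
# Parameters cell (designate it as the parameters cell from the cell menu).
# These defaults apply to interactive runs; a pipeline passes Base parameters
# with matching names to override them at execution time.
input_folder = "raw/2021/05/"   # hypothetical input location
sample_fraction = 0.1           # hypothetical sampling rate
run_label = "manual-test"       # hypothetical tag for this run
```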
### Assign parameters values from a pipeline
+Once you've created a notebook with parameters, you can execute it from a pipeline with the Synapse Notebook activity. After you add the activity to your pipeline canvas, you will be able to set the parameter values under the **Base parameters** section on the **Settings** tab.
+Once you've created a notebook with parameters, you can execute it from a pipeline with the Synapse Notebook activity. After you add the activity to your pipeline canvas, you will be able to set the parameters values under **Base parameters** section on the **Settings** tab.
![Assign a parameter](./media/apache-spark-development-using-notebooks/assign-parameter.png)
When assigning parameter values, you can use the [pipeline expression language](
## Shortcut keys
-Similar to Jupyter Notebooks, Azure Synapse Studio notebooks have a modal user interface. The keyboard does different things depending on which mode the notebook cell is in. Synapse Studio notebooks support the following two modes for a given code cell: command mode and edit mode.
+Similar to Jupyter Notebooks, Synapse notebooks have a modal user interface. The keyboard does different things depending on which mode the notebook cell is in. Synapse notebooks support the following two modes for a given code cell: command mode and edit mode.
1. A cell is in command mode when there is no text cursor prompting you to type. When a cell is in Command mode, you can edit the notebook as a whole but not type into individual cells. Enter command mode by pressing `ESC` or using the mouse to select outside of a cell's editor area.
Similar to Jupyter Notebooks, Azure Synapse Studio notebooks have a modal user i
# [Classical Notebook](#tab/classical)
-Using the following keystroke shortcuts, you can more easily navigate and run code in Azure Synapse notebooks.
+Using the following keystroke shortcuts, you can more easily navigate and run code in Synapse notebooks.
-| Action |Synapse Studio notebook Shortcuts |
+| Action |Synapse notebook Shortcuts |
|--|--| |Run the current cell and select below | Shift+Enter | |Run the current cell and insert below | Alt+Enter |
Using the following keystroke shortcuts, you can more easily navigate and run co
# [Preview Notebook](#tab/preview)
-| Action |Synapse Studio notebook Shortcuts |
+| Action |Synapse notebook Shortcuts |
|--|--| |Run the current cell and select below | Shift+Enter | |Run the current cell and insert below | Alt+Enter |
Using the following keystroke shortcuts, you can more easily navigate and run co
### Shortcut keys under edit mode
-Using the following keystroke shortcuts, you can more easily navigate and run code in Azure Synapse notebooks when in Edit mode.
+Using the following keystroke shortcuts, you can more easily navigate and run code in Synapse notebooks when in Edit mode.
-| Action |Synapse Studio notebook shortcuts |
+| Action |Synapse notebook shortcuts |
|--|--| |Move cursor up | Up | |Move cursor down|Down|
Using the following keystroke shortcuts, you can more easily navigate and run co
- [What is Apache Spark in Azure Synapse Analytics](apache-spark-overview.md) - [Use .NET for Apache Spark with Azure Synapse Analytics](spark-dotnet.md) - [.NET for Apache Spark documentation](/dotnet/spark)-- [Azure Synapse Analytics](../index.yml)
+- [Azure Synapse Analytics](../index.yml)
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ddv4-ddsv4-series.md
The new Ddv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB) a
| Standard_D48d_v4 | 48 | 192 | 1800 | 32 | 462000/2904 | 8|24000 | | Standard_D64d_v4 | 64 | 256 | 2400 | 32 | 615000/3872 | 8|30000 |
-<sup>**</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)
+<sup>**</sup> These IOPS values can be achieved by using [Gen2 VMs](generation-2.md)
## Ddsv4-series
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/edv4-edsv4-series.md
Edv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake)
| Standard_E64d_v4 | 64 | 504 | 2400 | 32 | 615000/3872 | 8|30000 |
-<sup>**</sup> These IOPs values can be guaranteed by using [Gen2 VMs](generation-2.md)
+<sup>**</sup> These IOPS values can be achieved by using [Gen2 VMs](generation-2.md)
## Edsv4-series