Updates from: 10/15/2022 01:38:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
Previously updated : 10/04/2022 Last updated : 10/13/2022
In external user scenarios, the authentication methods that can satisfy authenti
|FIDO2 security key | ✅ | | |Windows Hello for Business | ✅ | |
+For more information about how to set authentication strengths for external users, see [Conditional Access: Require an authentication strength for external users](../conditional-access/howto-conditional-access-policy-authentication-strength-external.md).
### User experience for external users
An authentication strength Conditional Access policy works together with [MFA tr
- **Authentication strength is not enforced on the Register security information user action** – If an authentication strength Conditional Access policy targets the **Register security information** user action, the policy doesn't apply.
- **Conditional Access audit log** – When a Conditional Access policy with the authentication strength grant control is created or updated in the Azure AD portal, the audit log includes details about the policy that was updated, but doesn't include details about which authentication strength is referenced by the policy. This issue doesn't exist when a policy is created or updated by using Microsoft Graph APIs (see the sketch after this list).
-<!-- Namrata to update about B2B>
+
+- **Using 'Require one of the selected controls' with the 'require authentication strength' control** - After you select the authentication strength grant control along with additional controls, all the selected controls must be satisfied to gain access to the resource. **Require one of the selected controls** isn't applicable here; the policy defaults to requiring all the selected controls.
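The audit-log behavior noted above differs between the Azure AD portal and Microsoft Graph, so it can help to see what a Graph-based policy creation might look like. The following is an illustrative sketch only, not the documented contract: the `authenticationStrength` property shape, the beta endpoint, and the placeholder IDs are assumptions.

```javascript
// Illustrative sketch (assumptions noted above): create a Conditional Access policy
// that requires an authentication strength by calling Microsoft Graph directly.
async function createAuthStrengthPolicy(accessToken) {
  const policy = {
    displayName: "Require authentication strength for guests",
    state: "enabledForReportingButNotEnforced", // report-only while validating
    conditions: {
      users: { includeUsers: ["<guest-or-test-user-object-id>"] },
      applications: { includeApplications: ["All"] },
    },
    grantControls: {
      operator: "AND",
      // Reference to an authentication strength policy; the property shape is an assumption
      authenticationStrength: { id: "<authentication-strength-policy-id>" },
    },
  };

  const response = await fetch("https://graph.microsoft.com/beta/identity/conditionalAccess/policies", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(policy),
  });
  return response.json();
}
```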
## Limitations
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 09/14/2022 Last updated : 10/10/2022 -+
Admins can use the MFA Server Migration Utility to target single users or groups
## Limitations and requirements -- The MFA Server Migration Utility is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).-- The MFA Server Migration Utility requires a new preview build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You don't have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically.
+- The MFA Server Migration Utility requires a new build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You don't have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically.
- The MFA Server Migration Utility copies the data from the database file onto the user objects in Azure AD. During migration, users can be targeted for Azure AD MFA for testing purposes using [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md). Staged migration lets you test without making any changes to your domain federation settings. Once migrations are complete, you must finalize your migration by making changes to your domain federation settings. - AD FS running Windows Server 2016 or higher is required to provide MFA authentication on any AD FS relying parties, not including Azure AD and Office 365. - Review your AD FS claims rules and make sure none requires MFA to be performed on-premises as part of the authentication process.
Open MFA Server, click **User Portal**:
|- OATH token|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)| |Allow users to select language|Language settings will be automatically applied to a user based on the locale settings in their browser| |Allow users to activate mobile app|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)|
-|- Device limit|Azure AD limits users to 5 cumulative devices (mobile app instances + hardware OATH token + software OATH token) per user|
+|- Device limit|Azure AD limits users to five cumulative devices (mobile app instances + hardware OATH token + software OATH token) per user|
|Use security questions for fallback|Azure AD allows users to choose a fallback method at authentication time should the chosen authentication method fail| |- Questions to answer|Security Questions in Azure AD can only be used for SSPR. See more details for [Azure AD Custom Security Questions](concept-authentication-security-questions.md#custom-security-questions)| |Allow users to associate third-party OATH token|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)|
Once you've successfully migrated user data, you can validate the end-user exper
1. Navigate to the following url: [Enable staged rollout features - Microsoft Azure](https://portal.azure.com/?mfaUIEnabled=true%2F#view/Microsoft_AAD_IAM/StagedRolloutEnablementBladeV2).
-1. Change **Azure multifactor authentication (preview)** to **On**, and then click **Manage groups**.
+1. Change **Azure multifactor authentication** to **On**, and then click **Manage groups**.
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/staged-rollout.png" alt-text="Screenshot of Staged Rollout.":::
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Users can bootstrap Passwordless methods in one of two ways:
- Using existing Azure AD Multi-Factor Authentication methods - Using a Temporary Access Pass (TAP)
-A Temporary Access Pass is a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including Passwordless ones such as Microsoft Authenticator or even Windows Hello.
+A Temporary Access Pass is a time-limited passcode that can be configured for multi-use or single use, allowing users to onboard other authentication methods, including passwordless methods such as Microsoft Authenticator, FIDO2, or Windows Hello for Business.
+ A Temporary Access Pass also makes recovery easier when a user has lost or forgotten their strong authentication factor like a FIDO2 security key or Microsoft Authenticator app, but needs to sign in to register new strong authentication methods. This article shows you how to enable and use a Temporary Access Pass in Azure AD using the Azure portal.
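For admins who prefer to script the issuance described above, the following is a minimal sketch using Microsoft Graph. It assumes a token with the `UserAuthenticationMethod.ReadWrite.All` permission is already available; the user name and lifetime values are placeholders.

```javascript
// Minimal sketch: issue a Temporary Access Pass for a user via Microsoft Graph.
// Assumes `accessToken` carries UserAuthenticationMethod.ReadWrite.All.
async function createTemporaryAccessPass(accessToken, userPrincipalName) {
  const response = await fetch(
    `https://graph.microsoft.com/v1.0/users/${userPrincipalName}/authentication/temporaryAccessPassMethods`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      // A one-time-use pass that expires after 60 minutes
      body: JSON.stringify({ lifetimeInMinutes: 60, isUsableOnce: true }),
    }
  );

  const result = await response.json();
  // The generated passcode is returned once in `temporaryAccessPass`; share it securely with the user
  return result.temporaryAccessPass;
}
```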
active-directory Howto Mfaserver Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md
Previously updated : 11/21/2019 Last updated : 10/10/2022 -+
-# Getting started with the Azure Multi-Factor Authentication Server
+# Getting started with the Azure AD Multi-Factor Authentication Server
<center> ![Getting started with MFA Server on-premises](./media/howto-mfaserver-deploy/server2.png)</center>
-This page covers a new installation of the server and setting it up with on-premises Active Directory. If you already have the MFA server installed and are looking to upgrade, see [Upgrade to the latest Azure Multi-Factor Authentication Server](howto-mfaserver-deploy-upgrade.md). If you're looking for information on installing just the web service, see [Deploying the Azure Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).
+This page covers a new installation of the server and setting it up with on-premises Active Directory. If you already have the MFA server installed and are looking to upgrade, see [Upgrade to the latest Azure AD Multi-Factor Authentication Server](howto-mfaserver-deploy-upgrade.md). If you're looking for information on installing just the web service, see [Deploying the Azure AD Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).
> [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
->
+> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+ > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
->
-> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
## Plan your deployment
-Before you download the Azure Multi-Factor Authentication Server, think about what your load and high availability requirements are. Use this information to decide how and where to deploy.
+Before you download the Azure AD Multi-Factor Authentication Server, think about what your load and high availability requirements are. Use this information to decide how and where to deploy.
-A good guideline for the amount of memory you need is the number of users you expect to authenticate on a regular basis.
+A good guideline for the amount of memory you need is the number of users you expect to authenticate regularly.
| Users | RAM | | -- | |
A good guideline for the amount of memory you need is the number of users you ex
| 100,000-200,001 | 16 GB | | 200,001+ | 32 GB |
-Do you need to set up multiple servers for high availability or load balancing? There are a number of ways to set up this configuration with Azure MFA Server. When you install your first Azure MFA Server, it becomes the master. Any additional servers become subordinate, and automatically synchronize users and configuration with the master. Then, you can configure one primary server and have the rest act as backup, or you can set up load balancing among all the servers.
+Do you need to set up multiple servers for high availability or load balancing? There are many ways to set up this configuration with Azure MFA Server. When you install your first Azure MFA Server, it becomes the master. Any other servers become subordinate, and automatically synchronize users and configuration with the master. Then, you can configure one primary server and have the rest act as backup, or you can set up load balancing among all the servers.
When a master Azure MFA Server goes offline, the subordinate servers can still process two-step verification requests. However, you can't add new users and existing users can't update their settings until the master is back online or a subordinate gets promoted. ### Prepare your environment
-Make sure the server that you're using for Azure Multi-Factor Authentication meets the following requirements:
+Make sure the server that you're using for Azure AD Multi-Factor Authentication meets the following requirements:
-| Azure Multi-Factor Authentication Server Requirements | Description |
+| Azure AD Multi-Factor Authentication Server Requirements | Description |
|: |: | | Hardware |<li>200 MB of hard disk space</li><li>x32 or x64 capable processor</li><li>1 GB or greater RAM</li> | | Software |<li>Windows Server 2016</li><li>Windows Server 2012 R2</li><li>Windows Server 2012</li><li>Windows Server 2008/R2 (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Windows 10</li><li>Windows 8.1, all editions</li><li>Windows 8, all editions</li><li>Windows 7, all editions (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Microsoft .NET 4.0 Framework</li><li>IIS 7.0 or greater if installing the user portal or web service SDK</li> |
Make sure the server that you're using for Azure Multi-Factor Authentication me
There are three web components that make up Azure MFA Server: * Web Service SDK - Enables communication with the other components and is installed on the Azure MFA application server
-* User Portal - An IIS web site that allows users to enroll in Azure Multi-Factor Authentication (MFA) and maintain their accounts.
+* User portal - An IIS web site that allows users to enroll in Azure AD Multi-Factor Authentication (MFA) and maintain their accounts.
* Mobile App Web Service - Enables using a mobile app like the Microsoft Authenticator app for two-step verification.
-All three components can be installed on the same server if the server is internet-facing. If breaking up the components, the Web Service SDK is installed on the Azure MFA application server and the User Portal and Mobile App Web Service are installed on an internet-facing server.
+All three components can be installed on the same server if the server is internet-facing. If breaking up the components, the Web Service SDK is installed on the Azure MFA application server and the User portal and Mobile App Web Service are installed on an internet-facing server.
### Azure Multi-Factor Authentication Server firewall requirements
If you aren't using the Event Confirmation feature, and your users aren't using
## Download the MFA Server
-Follow these steps to download the Azure Multi-Factor Authentication Server from the Azure portal:
+Follow these steps to download the Azure AD Multi-Factor Authentication Server from the Azure portal:
> [!IMPORTANT] > As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers who would like to require multi-factor authentication (MFA) from their users should use cloud-based Azure AD Multi-Factor Authentication.
Now that you have downloaded the server you can install and configure it. Be sur
To ease rollout, allow MFA Server to communicate with your users. MFA Server can send an email to inform them that they have been enrolled for two-step verification.
-The email you send should be determined by how you configure your users for two-step verification. For example, if you are able to import phone numbers from the company directory, the email should include the default phone numbers so that users know what to expect. If you do not import phone numbers, or your users are going to use the mobile app, send them an email that directs them to complete their account enrollment. Include a hyperlink to the Azure Multi-Factor Authentication User Portal in the email.
+The email you send should be determined by how you configure your users for two-step verification. For example, if you are able to import phone numbers from the company directory, the email should include the default phone numbers so that users know what to expect. If you do not import phone numbers, or your users are going to use the mobile app, send them an email that directs them to complete their account enrollment. Include a hyperlink to the Azure AD Multi-Factor Authentication User portal in the email.
The content of the email also varies depending on the method of verification that has been set for the user (phone call, SMS, or mobile app). For example, if the user is required to use a PIN when they authenticate, the email tells them what their initial PIN has been set to. Users are required to change their PIN during their first verification.
Now that the server is installed you want to add users. You can choose to create
4. In the **Add Synchronization Item** box that appears choose the Domain, OU **or** security group, Settings, Method Defaults, and Language Defaults for this synchronization task and click **Add**. 5. Check the box labeled **Enable synchronization with Active Directory** and choose a **Synchronization interval** between one minute and 24 hours.
-## How the Azure Multi-Factor Authentication Server handles user data
+## How the Azure AD Multi-Factor Authentication Server handles user data
When you use the Multi-Factor Authentication (MFA) Server on-premises, a user's data is stored in the on-premises servers. No persistent user data is stored in the cloud. When the user performs a two-step verification, the MFA Server sends data to the Azure MFA cloud service to perform the verification. When these authentication requests are sent to the cloud service, the following fields are sent in the request and logs so that they are available in the customer's authentication/usage reports. Some of the fields are optional so they can be enabled or disabled within the Multi-Factor Authentication Server. The communication from the MFA Server to the MFA cloud service uses SSL/TLS over port 443 outbound. These fields are:
Once you have upgraded to or installed MFA Server version 8.x or higher, it is r
## Next steps -- Set up and configure the [User Portal](howto-mfaserver-deploy-userportal.md) for user self-service.
+- Set up and configure the [User portal](howto-mfaserver-deploy-userportal.md) for user self-service.
- Set up and configure the Azure MFA Server with [Active Directory Federation Service](multi-factor-authentication-get-started-adfs.md), [RADIUS Authentication](howto-mfaserver-dir-radius.md), or [LDAP Authentication](howto-mfaserver-dir-ldap.md).-- Set up and configure [Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS](howto-mfaserver-nps-rdg.md).-- [Deploy the Azure Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).-- [Advanced scenarios with Azure Multi-Factor Authentication and third-party VPNs](howto-mfaserver-nps-vpn.md).
+- Set up and configure [Remote Desktop Gateway and Azure AD Multi-Factor Authentication Server using RADIUS](howto-mfaserver-nps-rdg.md).
+- [Deploy the Azure AD Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).
+- [Advanced scenarios with Azure AD Multi-Factor Authentication and third-party VPNs](howto-mfaserver-nps-vpn.md).
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
sampleApp/
In the next steps you'll create a new folder for the JavaScript SPA, and set up the user interface (UI). > [!TIP]
-> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization, and is primarily associated with a domain, like Microsoft.com. If you wish to learn how applications can work with multiple tenants, refer to the [application model](https://docs.microsoft.com/azure/active-directory/develop/application-model).
+> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization, and is primarily associated with a domain, like Microsoft.com. If you wish to learn how applications can work with multiple tenants, refer to the [application model](/articles/active-directory/develop/application-model.md).
## Create the SPA UI
In the next steps you'll create a new folder for the JavaScript SPA, and set up
<title>Quickstart | MSAL.JS Vanilla JavaScript SPA</title> <!-- msal.js with a fallback to backup CDN -->
- <script type="text/javascript" src="https://alcdn.msauth.net/lib/1.2.1/js/msal.js" integrity="sha384-9TV1245fz+BaI+VvCjMYL0YDMElLBwNS84v3mY57pXNOt6xcUYch2QLImaTahcOP" crossorigin="anonymous"></script>
- <script type="text/javascript">
- if(typeof Msal === 'undefined')document.write(unescape("%3Cscript src='https://alcdn.msftauth.net/lib/1.2.1/js/msal.js' type='text/javascript' integrity='sha384-m/3NDUcz4krpIIiHgpeO0O8uxSghb+lfBTngquAo2Zuy2fEF+YgFeP08PWFo5FiJ' crossorigin='anonymous'%3E%3C/script%3E"));
- </script>
+ <script src="https://alcdn.msauth.net/browser/2.30.0/js/msal-browser.js"
+ integrity="sha384-L8LyrNcolaRZ4U+N06atid1fo+kBo8hdlduw0yx+gXuACcdZjjquuGZTA5uMmUdS"
+ crossorigin="anonymous"></script>
<!-- adding Bootstrap 4 for UI components -->
- <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
+ <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-o4ufwq3oKqc7IoCcR08YtZXmgOljhTggRwxP2CLbSqeXGtitAxwYaUln/05nJjit" crossorigin="anonymous">
</head> <body> <nav class="navbar navbar-expand-lg navbar-dark bg-primary">
The `acquireTokenSilent` method handles token acquisition and renewal without an
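As a reference for the pattern described here, the sketch below shows the common silent-first flow with msal-browser 2.x; `msalInstance` and `loginRequest` are assumed to be configured as elsewhere in the tutorial, and the function name is illustrative.

```javascript
// Silent-first token acquisition: try the cache/refresh token, fall back to a popup.
async function getTokenPopup(request, account) {
  try {
    // No UI is shown when the cached or renewed token can be returned silently
    return await msalInstance.acquireTokenSilent({ ...request, account: account });
  } catch (error) {
    // Interaction is required (for example, consent or an expired session)
    if (error instanceof msal.InteractionRequiredAuthError) {
      return msalInstance.acquireTokenPopup(request);
    }
    throw error;
  }
}
```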
## Call the Microsoft Graph API using the acquired token
-1. In the *JavaScriptSPA* folder create a *.js* file named *graphConfig.js*, which stores the Representational State Transfer ([REST](https://docs.microsoft.com/rest/api/azure/)) endpoints. Add the following code:
+1. In the *JavaScriptSPA* folder create a *.js* file named *graphConfig.js*, which stores the Representational State Transfer ([REST](/rest/api/azure/)) endpoints. Add the following code:
```JavaScript const graphConfig = {
active-directory Tutorial V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-console.md
async function callApi(endpoint, accessToken) {
console.log('request made to web API at: ' + new Date().toString()); try {
- const response = await axios.default.get(endpoint, options);
+ const response = await axios.get(endpoint, options);
return response.data; } catch (error) { console.log(error)
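For context, here's a minimal self-contained version of the helper as it might look after this change; the `options` object with the bearer header follows the surrounding sample and is shown here as an assumption.

```javascript
// Minimal sketch of the web API call helper using axios (CommonJS import shown).
const axios = require('axios');

async function callApi(endpoint, accessToken) {
    // Attach the access token acquired with MSAL as a bearer token
    const options = {
        headers: {
            Authorization: `Bearer ${accessToken}`,
        },
    };

    console.log('request made to web API at: ' + new Date().toString());

    try {
        const response = await axios.get(endpoint, options);
        return response.data;
    } catch (error) {
        console.log(error);
        return error;
    }
}
```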
active-directory Tutorial V2 React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-react.md
In the [Redirect URI: MSAL.js 2.0 with auth code flow](scenario-spa-app-registra
import { msalConfig } from "./authConfig"; ```
-1. Underneath the imports in *src/index.js* create a `PublicClientApplication` instance using the configuration from step 1.
+2. Underneath the imports in *src/index.js* create a `PublicClientApplication` instance using the configuration from step 1.
```javascript const msalInstance = new PublicClientApplication(msalConfig); ```
-1. Find the `<App />` component in *src/index.js* and wrap it in the `MsalProvider` component. Your render function should look like this:
+3. Find the `<App />` component in *src/index.js* and wrap it in the `MsalProvider` component. Your render function should look like this:
```jsx
- ReactDOM.render(
+ root.render(
<React.StrictMode> <MsalProvider instance={msalInstance}> <App /> </MsalProvider>
- </React.StrictMode>,
- document.getElementById("root")
+ </React.StrictMode>
); ``` - ## Sign in users Create a folder in *src* called *components* and create a file inside this folder named *SignInButton.jsx*. Add the code from either of the following sections to invoke login using a pop-up window or a full-frame redirect:
import { useMsal } from "@azure/msal-react";
import { loginRequest } from "../authConfig"; import Button from "react-bootstrap/Button";
-function handleLogin(instance) {
- instance.loginPopup(loginRequest).catch(e => {
- console.error(e);
- });
-}
/** * Renders a button which, when selected, will open a popup for login
function handleLogin(instance) {
export const SignInButton = () => { const { instance } = useMsal();
+ const handleLogin = (loginType) => {
+ if (loginType === "popup") {
+ instance.loginPopup(loginRequest).catch(e => {
+ console.log(e);
+ });
+ }
+ }
return (
- <Button variant="secondary" className="ml-auto" onClick={() => handleLogin(instance)}>Sign in using Popup</Button>
+ <Button variant="secondary" className="ml-auto" onClick={() => handleLogin("popup")}>Sign in using Popup</Button>
); } ```
import { useMsal } from "@azure/msal-react";
import { loginRequest } from "../authConfig"; import Button from "react-bootstrap/Button";
-function handleLogin(instance) {
- instance.loginRedirect(loginRequest).catch(e => {
- console.error(e);
- });
-}
/** * Renders a button which, when selected, will redirect the page to the login prompt
function handleLogin(instance) {
export const SignInButton = () => { const { instance } = useMsal();
+ const handleLogin = (loginType) => {
+ if (loginType === "redirect") {
+ instance.loginRedirect(loginRequest).catch(e => {
+ console.log(e);
+ });
+ }
+ }
return (
- <Button variant="secondary" className="ml-auto" onClick={() => handleLogin(instance)}>Sign in using Redirect</Button>
+ <Button variant="secondary" className="ml-auto" onClick={() => handleLogin("redirect")}>Sign in using Redirect</Button>
); } ```
export const SignInButton = () => {
}; ```
-2. Now open *src/App.js* and add replace the existing content with the following code:
+1. Now open *src/App.js* and replace the existing content with the following code:
```jsx import React from "react";
import React from "react";
import { useMsal } from "@azure/msal-react"; import Button from "react-bootstrap/Button";
-function handleLogout(instance) {
- instance.logoutPopup().catch(e => {
- console.error(e);
- });
-}
- /** * Renders a button which, when selected, will open a popup for logout */ export const SignOutButton = () => { const { instance } = useMsal();
+ const handleLogout = (logoutType) => {
+ if (logoutType === "popup") {
+ instance.logoutPopup({
+ postLogoutRedirectUri: "/",
+ mainWindowRedirectUri: "/" // redirects the top level app after logout
+ });
+ }
+ }
+ return (
- <Button variant="secondary" className="ml-auto" onClick={() => handleLogout(instance)}>Sign out using Popup</Button>
+ <Button variant="secondary" className="ml-auto" onClick={() => handleLogout("popup")}>Sign out using Popup</Button>
); } ```
import React from "react";
import { useMsal } from "@azure/msal-react"; import Button from "react-bootstrap/Button";
-function handleLogout(instance) {
- instance.logoutRedirect().catch(e => {
- console.error(e);
- });
-}
- /** * Renders a button which, when selected, will redirect the page to the logout prompt */ export const SignOutButton = () => { const { instance } = useMsal();
+
+ const handleLogout = (logoutType) => {
+ if (logoutType === "redirect") {
+ instance.logoutRedirect({
+ postLogoutRedirectUri: "/",
+ });
+ }
+ }
return (
- <Button variant="secondary" className="ml-auto" onClick={() => handleLogout(instance)}>Sign out using Redirect</Button>
+ <Button variant="secondary" className="ml-auto" onClick={() => handleLogout("redirect")}>Sign out using Redirect</Button>
); } ```
In order to render certain components only for authenticated or unauthenticated
function ProfileContent() { const { instance, accounts, inProgress } = useMsal(); const [accessToken, setAccessToken] = useState(null);
-
+ const name = accounts[0] && accounts[0].name;
-
+ function RequestAccessToken() { const request = { ...loginRequest, account: accounts[0] };
-
+ // Silently acquires an access token which is then attached to a request for Microsoft Graph data instance.acquireTokenSilent(request).then((response) => { setAccessToken(response.accessToken);
In order to render certain components only for authenticated or unauthenticated
}); }); }
-
+ return ( <> <h5 className="card-title">Welcome {name}</h5>
In order to render certain components only for authenticated or unauthenticated
1. Finally, add your new `ProfileContent` component as a child of the `AuthenticatedTemplate` in your `App` component in *src/App.js*. Your `App` component should look like this:
- ```javascript
+ ```jsx
function App() { return ( <PageLayout>
If you're using Internet Explorer, we recommend that you use the `loginRedirect`
function ProfileContent() { const { instance, accounts } = useMsal(); const [graphData, setGraphData] = useState(null);
-
+ const name = accounts[0] && accounts[0].name;
-
+ function RequestProfileData() { const request = { ...loginRequest, account: accounts[0] };
-
+ // Silently acquires an access token which is then attached to a request for Microsoft Graph data instance.acquireTokenSilent(request).then((response) => { callMsGraph(response.accessToken).then(response => setGraphData(response));
If you're using Internet Explorer, we recommend that you use the `loginRedirect`
}); }); }
-
+ return ( <> <h5 className="card-title">Welcome {name}</h5>
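The React excerpts above call a `callMsGraph` helper; a minimal sketch of such a helper is shown below, assuming a `graphMeEndpoint` that points at the Microsoft Graph `/me` endpoint (the constant name follows the tutorial's conventions but is an assumption here).

```javascript
// Minimal sketch: call Microsoft Graph with the token returned by acquireTokenSilent.
const graphConfig = {
    graphMeEndpoint: "https://graph.microsoft.com/v1.0/me", // assumed endpoint
};

export async function callMsGraph(accessToken) {
    const headers = new Headers();
    headers.append("Authorization", `Bearer ${accessToken}`);

    const response = await fetch(graphConfig.graphMeEndpoint, {
        method: "GET",
        headers: headers,
    });
    return response.json();
}
```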
active-directory How To Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-groups.md
Previously updated : 08/29/2022 Last updated : 10/14/2022
To edit your group settings:
- **Object ID.** You can't change the Object ID, but you can copy it to use in your PowerShell commands for the group. For more info about using PowerShell cmdlets, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-v2-cmdlets.md). ## Add or remove a group from another group
-You can add an existing Security group to another Security group (also known as nested groups), creating a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time. You'll need the **Groups Administrator** or **User Administrator** role to edit group membership.
+You can add an existing Security group to another Security group (also known as nested groups). Depending on the group types, you can add a group as a member of another group, just like a user, which applies settings like roles and access to the nested groups. You'll need the **Groups Administrator** or **User Administrator** role to edit group membership.
We currently don't support: - Adding groups to a group synced with on-premises Active Directory.
Now you can review the "MDM policy - West - Group memberships" page to see the g
For a more detailed view of the group and member relationship, select the parent group name (MDM policy - All org) and take a look at the "MDM policy - West" page details. ### Remove a group from another group
-You can remove an existing Security group from another Security group; however, removing the group also removes any inherited attributes and properties for its members.
+You can remove an existing Security group from another Security group; however, removing the group also removes any inherited settings for its members.
1. On the **Groups - All groups** page, search for and select the group you need to remove as a member of another group.
You can delete an Azure AD group for any number of reasons, but typically it wil
- No longer need the group.
-To delete a group you'll need the **Groups Administrator** or **User Administrator** role.
+To delete a group, you'll need the **Groups Administrator** or **User Administrator** role.
1. Sign in to the [Azure portal](https://portal.azure.com).
active-directory Howto Assign Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-portal.md
After you've configured an Azure resource with a managed identity, you can give
## Use Azure RBAC to assign a managed identity access to another resource
+>[!IMPORTANT]
+> The steps outlined below show how you grant access to a service by using Azure RBAC. Check the specific service documentation for how to grant access - for example, check Azure Data Explorer for instructions. Some Azure services are in the process of adopting Azure RBAC on the data plane.
+ After you've enabled managed identity on an Azure resource, such as an [Azure VM](qs-configure-portal-windows-vm.md) or [Azure virtual machine scale set](qs-configure-portal-windows-vmss.md): 1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with the Azure subscription under which you have configured the managed identity.
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
For more information, see [Manage access to custom security attributes in Azure
## Authentication Administrator
-Users with this role can set or reset any authentication method (including passwords) for non-administrators and some roles. Authentication Administrators can require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in. For a list of the roles that an Authentication Administrator can read or update authentication methods, see [Who can reset passwords](#who-can-reset-passwords).
+Assign the Authentication Administrator role to users who need to do the following:
-Authentication Administrators can update sensitive attributes for some users. For a list of the roles that an Authentication Administrator can update sensitive attributes, see [Who can update sensitive attributes](#who-can-update-sensitive-attributes).
+- Set or reset any authentication method (including passwords) for non-administrators and some roles. For a list of the roles that an Authentication Administrator can read or update authentication methods, see [Who can reset passwords](#who-can-reset-passwords).
+- Require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in.
+- Perform sensitive actions for some users. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Create and manage support tickets in Azure and the Microsoft 365 admin center.
-The [Privileged Authentication Administrator](#privileged-authentication-administrator) role has permission to force re-registration and multifactor authentication for all users.
+Users with this role **cannot** do the following:
-The [Authentication Policy Administrator](#authentication-policy-administrator) role has permissions to set the tenant's authentication method policy that determines which methods each user can register and use.
+- Cannot change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
+- Cannot manage MFA settings in the legacy MFA management portal or Hardware OATH tokens. The same functions can be accomplished using the [Set-MsolUser](/powershell/module/msonline/set-msoluser) cmdlet in the Azure AD PowerShell module.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy | Update sensitive attributes |
-| - | - | - | - | - | - | - |
-| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No | Yes for some users (see above) |
-| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No | Yes for all users |
-| Authentication Policy Administrator | No |No | Yes | Yes | Yes | No |
> [!IMPORTANT] > Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example:
The [Authentication Policy Administrator](#authentication-policy-administrator)
>* Administrators in other services outside of Azure AD like Exchange Online, Office 365 Security & Compliance Center, and human resources systems. >* Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
-> [!IMPORTANT]
-> This role can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens. The same functions can be accomplished using the [Set-MsolUser](/powershell/module/msonline/set-msoluser) commandlet Azure AD PowerShell module.
-
-Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
- > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
Users with this role can't change the credentials or reset MFA for members and o
## Authentication Policy Administrator
-Users with this role can configure the authentication methods policy, tenant-wide MFA settings, and password protection policy. This role grants permission to manage Password Protection settings: smart lockout configurations and updating the custom banned passwords list. Authentication Policy Administrators cannot update sensitive attributes for users.
+Assign the Authentication Policy Administrator role to users who need to do the following:
-The [Authentication Administrator](#authentication-administrator) and [Privileged Authentication Administrator](#privileged-authentication-administrator) roles have permission to manage registered authentication methods on users and can force re-registration and multifactor authentication for all users.
+- Configure the authentication methods policy, tenant-wide MFA settings, and password protection policy that determine which methods each user can register and use.
+- Manage Password Protection settings: smart lockout configurations and updating the custom banned passwords list.
+- Create and manage verifiable credentials.
+- Create and manage Azure support tickets.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy | Update sensitive attributes |
-| - | - | - | - | - | - | - |
-| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No | Yes for some users (see above) |
-| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No | Yes for all users |
-| Authentication Policy Administrator | No | No | Yes | Yes | Yes | No |
+Users with this role **cannot** do the following:
-> [!IMPORTANT]
-> This role can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens.
+- Cannot update sensitive properties. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Cannot delete or restore users. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Cannot manage MFA settings in the legacy MFA management portal or Hardware OATH tokens.
+ > [!div class="mx-tableFixed"] > | Actions | Description |
Users in this role can manage Azure Active Directory B2B guest user invitations
Users with this role can change passwords, invalidate refresh tokens, create and manage support requests with Microsoft for Azure and Microsoft 365 services, and monitor service health. Invalidating a refresh token forces the user to sign in again. Whether a Helpdesk Administrator can reset a user's password and invalidate refresh tokens depends on the role the user is assigned. For a list of the roles that a Helpdesk Administrator can reset passwords for and invalidate refresh tokens, see [Who can reset passwords](#who-can-reset-passwords).
+Users with this role **cannot** do the following:
+
+- Cannot change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
+ > [!IMPORTANT] > Users with this role can change passwords for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the password of a user may mean the ability to assume that user's identity and permissions. For example: >
Users with this role can change passwords, invalidate refresh tokens, create and
>- Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. >- Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
-Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
- Delegating administrative permissions over subsets of users and applying policies to a subset of users is possible with [Administrative Units](administrative-units.md). This role was previously called "Password Administrator" in the [Azure portal](https://portal.azure.com/). The "Helpdesk Administrator" name in Azure AD now matches its name in Azure AD PowerShell and the Microsoft Graph API.
Do not use. This role has been deprecated and will be removed from Azure AD in t
Users with this role have limited ability to manage passwords. This role does not grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that a Password Administrator can reset passwords for, see [Who can reset passwords](#who-can-reset-passwords).
-Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
+Users with this role **cannot** do the following:
+
+- Cannot change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
> [!div class="mx-tableFixed"] > | Actions | Description |
Users with this role can register printers and manage printer status in the Micr
## Privileged Authentication Administrator
-Users with this role can set or reset any authentication method (including passwords) for any user, including Global Administrators. Privileged Authentication Administrators can force users to re-register against existing non-password credential (such as MFA or FIDO) and revoke 'remember MFA on the device', prompting for MFA on the next sign-in of all users. Privileged Authentication Administrators can update sensitive attributes for all users.
+Assign the Privileged Authentication Administrator role to users who need to do the following:
+
+- Set or reset any authentication method (including passwords) for any user, including Global Administrators.
+- Delete or restore any users, including Global Administrators. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Force users to re-register against existing non-password credential (such as MFA or FIDO) and revoke **remember MFA on the device**, prompting for MFA on the next sign-in of all users.
+- Update sensitive properties for all users. For more information, see [Who can perform sensitive actions](#who-can-perform-sensitive-actions).
+- Create and manage support tickets in Azure and the Microsoft 365 admin center.
-The [Authentication Administrator](#authentication-administrator) role has permission to force re-registration and multifactor authentication for standard users and users with some admin roles.
+Users with this role **cannot** do the following:
-The [Authentication Policy Administrator](#authentication-policy-administrator) role has permissions to set the tenant's authentication method policy that determines which methods each user can register and use.
+- Cannot manage per-user MFA in the legacy MFA management portal. The same functions can be accomplished using the [Set-MsolUser](/powershell/module/msonline/set-msoluser) cmdlet in the Azure AD PowerShell module.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy | Update sensitive attributes |
-| - | - | - | - | - | - | - |
-| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No | Yes for some users (see above) |
-| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No | Yes for all users |
-| Authentication Policy Administrator | No | No | Yes | Yes | Yes | No |
> [!IMPORTANT] > Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example:
The [Authentication Policy Administrator](#authentication-policy-administrator)
>* Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. >* Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information. -
-> [!IMPORTANT]
-> This role is not currently capable of managing per-user MFA in the legacy MFA management portal. The same functions can be accomplished using the [Set-MsolUser](/powershell/module/msonline/set-msoluser) commandlet Azure AD PowerShell module.
- > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
Users with this role can access tenant level aggregated data and associated insi
## User Administrator
-Users with this role can create users, and manage all aspects of users with some restrictions (see the table), and can update password expiration policies. Additionally, users with this role can create and manage all groups. This role also includes the ability to create and manage user views, manage support tickets, and monitor service health. User Administrators don't have permission to manage some user properties for users in most administrator roles. Admins with this role do not have permissions to manage MFA or manage shared mailboxes. The roles that are exceptions to this restriction are listed in the following table.
+Assign the User Administrator role to users who need to do the following:
-| User Administrator permission | Notes |
+| Permission | More information |
| | |
-| Create users and groups<br/>Create and manage user views<br/>Manage Office support tickets<br/>Update password expiration policies | |
-| Manage licenses<br/>Manage all user properties except User Principal Name | Applies to all users, including all admins |
-| Delete and restore<br/>Disable and enable<br/>Manage all user properties including User Principal Name<br/>Update (FIDO) device keys | Applies to users who are non-admins or in any of the following roles:<ul><li>Helpdesk Administrator</li><li>User with no role</li><li>User Administrator</li></ul> |
-| Invalidate refresh Tokens<br/>Reset password | For a list of the roles that a User Administrator can reset passwords for and invalidate refresh tokens, see [Who can reset passwords](#who-can-reset-passwords). |
-| Update sensitive attributes | For a list of the roles that a User Administrator can update sensitive attributes for, see [Who can update sensitive attributes](#who-can-update-sensitive-attributes). |
+| Create users | |
+| Update most user properties for all users, including all administrators | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) |
+| Update sensitive properties (including user principal name) for some users | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) |
+| Disable or enable some users | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) |
+| Delete or restore some users | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) |
+| Create and manage user views | |
+| Create and manage all groups | |
+| Assign licenses for all users, including all administrators | |
+| Reset passwords | [Who can reset passwords](#who-can-reset-passwords) |
+| Invalidate refresh tokens | [Who can reset passwords](#who-can-reset-passwords) |
+| Update (FIDO) device keys | |
+| Update password expiration policies | |
+| Create and manage support tickets in Azure and the Microsoft 365 admin center | |
+| Monitor service health | |
+
+Users with this role **cannot** do the following:
+
+- Cannot manage MFA.
+- Cannot change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
+- Cannot manage shared mailboxes.
> [!IMPORTANT] > Users with this role can change passwords for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the password of a user may mean the ability to assume that user's identity and permissions. For example:
Users with this role can create users, and manage all aspects of users with some
>- Administrators in other services outside of Azure AD like Exchange Online, Office Security and Compliance Center, and human resources systems. >- Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.
-Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
- > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
Assign the Windows 365 Administrator role to users who need to do the following
- Create and manage security groups, but not role-assignable groups - View basic properties in the Microsoft 365 admin center - Read usage reports in the Microsoft 365 admin center-- Create and manage support tickets in Azure AD and the Microsoft 365 admin center
+- Create and manage support tickets in Azure and the Microsoft 365 admin center
> [!div class="mx-tableFixed"] > | Actions | Description |
Workplace Device Join | Deprecated | [Deprecated roles documentation](#deprecate
## Who can reset passwords
-In the following table, the columns list the roles that can reset passwords. The rows list the roles for which their password can be reset.
+In the following table, the columns list the roles that can reset passwords and invalidate refresh tokens. The rows list the roles for which their password can be reset.
The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope).
Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
\* A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has 0 Global Administrators. > [!NOTE]
-> The ability to reset a password includes the ability to update the following sensitive attributes required for [self-service password reset](../authentication/concept-sspr-howitworks.md):
+> The ability to reset a password includes the ability to update the following sensitive properties required for [self-service password reset](../authentication/concept-sspr-howitworks.md):
> - businessPhones > - mobilePhone > - otherMails
-## Who can update sensitive attributes
+## Who can perform sensitive actions
-Some administrators can update the following sensitive attributes for some users. All users can read these sensitive attributes.
+Some administrators can perform the following sensitive actions for some users. All users can read the sensitive properties.
-- accountEnabled-- businessPhones-- mobilePhone-- onPremisesImmutableId-- otherMails-- passwordProfile-- userPrincipalName
+| Sensitive action | Sensitive property name |
+| | |
+| Disable or enable users | `accountEnabled` |
+| Update business phone | `businessPhones` |
+| Update mobile phone | `mobilePhone` |
+| Update on-premises immutable ID | `onPremisesImmutableId` |
+| Update other emails | `otherMails` |
+| Update password profile | `passwordProfile` |
+| Update user principal name | `userPrincipalName` |
+| Delete or restore users | Not applicable |
-In the following table, the columns list the roles that can update the sensitive attributes. The rows list the roles for which their sensitive attributes can be updated.
+In the following table, the columns list the roles that can perform sensitive actions. The rows list the roles for which the sensitive action can be performed upon.
The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope).
-Role that sensitive attributes can be updated | Auth Admin | User Admin | Privileged Auth Admin | Global Admin
+Role that sensitive action can be performed upon | Auth Admin | User Admin | Privileged Auth Admin | Global Admin
| | | | Auth Admin | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark: Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
active-directory Mongodb Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mongodb-cloud-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with MongoDB Cloud'
-description: Learn how to configure single sign-on between Azure Active Directory and MongoDB Cloud.
+ Title: 'Tutorial: Azure AD SSO integration with MongoDB Atlas - SSO'
+description: Learn how to configure single sign-on between Azure Active Directory and MongoDB Atlas - SSO.
Previously updated : 05/13/2022 Last updated : 10/14/2022
-# Tutorial: Azure AD SSO integration with MongoDB Cloud
+# Tutorial: Azure AD SSO integration with MongoDB Atlas - SSO
-In this tutorial, you'll learn how to integrate MongoDB Cloud with Azure Active Directory (Azure AD). When you integrate MongoDB Cloud with Azure AD, you can:
+In this tutorial, you'll learn how to integrate MongoDB Atlas - SSO with Azure Active Directory (Azure AD). When you integrate MongoDB Atlas - SSO with Azure AD, you can:
* Control in Azure AD who has access to MongoDB Atlas, the MongoDB community, MongoDB University, and MongoDB Support.
-* Enable your users to be automatically signed in to MongoDB Cloud with their Azure AD accounts.
+* Enable your users to be automatically signed in to MongoDB Atlas - SSO with their Azure AD accounts.
* Assign MongoDB Atlas roles to users based on their Azure AD group memberships. * Manage your accounts in one central location: the Azure portal.
In this tutorial, you'll learn how to integrate MongoDB Cloud with Azure Active
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* MongoDB Cloud single sign-on (SSO) enabled subscription.
+* MongoDB Atlas - SSO single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* MongoDB Cloud supports **SP** and **IDP** initiated SSO.
-* MongoDB Cloud supports **Just In Time** user provisioning.
+* MongoDB Atlas - SSO supports **SP** and **IDP** initiated SSO.
+* MongoDB Atlas - SSO supports **Just In Time** user provisioning.
-## Add MongoDB Cloud from the gallery
+## Add MongoDB Atlas - SSO from the gallery
-To configure the integration of MongoDB Cloud into Azure AD, you need to add MongoDB Cloud from the gallery to your list of managed SaaS apps.
+To configure the integration of MongoDB Atlas - SSO into Azure AD, you need to add MongoDB Atlas - SSO from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **MongoDB Cloud** in the search box.
-1. Select **MongoDB Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **MongoDB Atlas - SSO** in the search box.
+1. Select **MongoDB Atlas - SSO** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-## Configure and test Azure AD SSO for MongoDB Cloud
+## Configure and test Azure AD SSO for MongoDB Atlas - SSO
-Configure and test Azure AD SSO with MongoDB Cloud, by using a test user called **B.Simon**. For SSO to work, you need to establish a linked relationship between an Azure AD user and the related user in MongoDB Cloud.
+Configure and test Azure AD SSO with MongoDB Atlas - SSO, by using a test user called **B.Simon**. For SSO to work, you need to establish a linked relationship between an Azure AD user and the related user in MongoDB Atlas - SSO.
-To configure and test Azure AD SSO with MongoDB Cloud, perform the following steps:
+To configure and test Azure AD SSO with MongoDB Atlas - SSO, perform the following steps:
1. [Configure Azure AD SSO](#configure-azure-ad-sso) to enable your users to use this feature. 1. [Create an Azure AD test user and test group](#create-an-azure-ad-test-user-and-test-group) to test Azure AD single sign-on with B.Simon. 1. [Assign the Azure AD test user or test group](#assign-the-azure-ad-test-user-or-test-group) to enable B.Simon to use Azure AD single sign-on. 1. [Configure MongoDB Atlas SSO](#configure-mongodb-atlas-sso) to configure the single sign-on settings on the application side.
- 1. [Create a MongoDB Cloud test user](#create-a-mongodb-cloud-test-user) to have a counterpart of B.Simon in MongoDB Cloud, linked to the Azure AD representation of the user.
+ 1. [Create a MongoDB Atlas SSO test user](#create-a-mongodb-atlas-sso-test-user) to have a counterpart of B.Simon in MongoDB Atlas - SSO, linked to the Azure AD representation of the user.
1. [Test SSO](#test-sso) to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **MongoDB Cloud** application integration page, find the **Manage** section. Select **single sign-on**.
+1. In the Azure portal, on the **MongoDB Atlas - SSO** application integration page, find the **Manage** section. Select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://cloud.mongodb.com/sso/<Customer_Unique>` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. To get these values, contact the [MongoDB Cloud Client support team](https://support.mongodb.com/). You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. To get these values, contact the [MongoDB Atlas - SSO Client support team](https://support.mongodb.com/). You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. The MongoDB Cloud application expects the SAML assertions to be in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. The MongoDB Atlas - SSO application expects the SAML assertions to be in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![Screenshot of default attributes](common/default-attributes.png)
-1. In addition to the preceding attributes, the MongoDB Cloud application expects a few more attributes to be passed back in the SAML response. These attributes are also pre-populated, but you can review them per your requirements.
+1. In addition to the preceding attributes, the MongoDB Atlas - SSO application expects a few more attributes to be passed back in the SAML response. These attributes are also pre-populated, but you can review them per your requirements.
| Name | Source attribute| | | |
Follow these steps to enable Azure AD SSO in the Azure portal.
![Screenshot of SAML Signing Certificate section, with Download link highlighted](common/metadataxml.png)
-1. In the **Set up MongoDB Cloud** section, copy the appropriate URLs, based on your requirement.
+1. In the **Set up MongoDB Atlas - SSO** section, copy the appropriate URLs, based on your requirement.
![Screenshot of Set up MongoDB Cloud section, with URLs highlighted](common/copy-configuration-urls.png)
If you are using MongoDB Atlas role mappings feature in order to assign roles to
### Assign the Azure AD test user or test group
-In this section, you'll enable B.Simon or Group 1 to use Azure single sign-on by granting access to MongoDB Cloud.
+In this section, you'll enable B.Simon or Group 1 to use Azure single sign-on by granting access to MongoDB Atlas - SSO.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **MongoDB Cloud**.
+1. In the applications list, select **MongoDB Atlas - SSO**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list or, if you are using MongoDB Atlas role mappings, select **Group 1** from the Groups list; then click the **Select** button at the bottom of the screen.
To configure single sign-on on the MongoDB Atlas side, you need the appropriate
To authorize users in MongoDB Atlas based on their Azure AD group membership, you can map the Azure AD group's Object-IDs to MongoDB Atlas Organization/Project roles with the help of MongoDB Atlas role mappings. Follow the instructions in the [MongoDB Atlas documentation](https://docs.atlas.mongodb.com/security/manage-role-mapping/#add-role-mappings-in-your-organization-and-its-projects). If you have a problem, contact the [MongoDB support team](https://support.mongodb.com/).
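If you want to look up a group's Object ID from the command line rather than the portal, a minimal Azure CLI sketch is shown below. The group name `Group 1` is just the example group from this tutorial, and depending on your Azure CLI version the property is exposed as `id` or `objectId`.

```azurecli
# Look up the Object ID of the Azure AD group to map to a MongoDB Atlas Organization/Project role.
az ad group show --group "Group 1" --query id --output tsv
```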
-### Create a MongoDB Cloud test user
+### Create a MongoDB Atlas SSO test user
MongoDB Atlas supports just-in-time user provisioning, which is enabled by default. There is no additional action for you to take. If a user doesn't already exist in MongoDB Atlas, a new one is created after authentication.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the MongoDB Atlas for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the MongoDB Cloud tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the MongoDB Cloud for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the MongoDB Atlas - SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the MongoDB Atlas - SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure MongoDB Cloud you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure MongoDB Atlas - SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Opentext Fax Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opentext-fax-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with OpenText XM Fax and XM SendSecure'
+description: Learn how to configure single sign-on between Azure Active Directory and OpenText XM Fax and XM SendSecure.
++++++++ Last updated : 10/10/2022++++
+# Tutorial: Azure AD SSO integration with OpenText XM Fax and XM SendSecure
+
+In this tutorial, you'll learn how to integrate OpenText XM Fax and XM SendSecure with Azure Active Directory (Azure AD). When you integrate OpenText XM Fax and XM SendSecure with Azure AD, you can:
+
+* Control in Azure AD who has access to OpenText XM Fax and XM SendSecure.
+* Enable your users to be automatically signed-in to OpenText XM Fax and XM SendSecure with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* OpenText XM Fax and XM SendSecure single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* OpenText XM Fax and XM SendSecure supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add OpenText XM Fax and XM SendSecure from the gallery
+
+To configure the integration of OpenText XM Fax and XM SendSecure into Azure AD, you need to add OpenText XM Fax and XM SendSecure from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **OpenText XM Fax and XM SendSecure** in the search box.
+1. Select **OpenText XM Fax and XM SendSecure** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for OpenText XM Fax and XM SendSecure
+
+Configure and test Azure AD SSO with OpenText XM Fax and XM SendSecure using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at OpenText XM Fax and XM SendSecure.
+
+To configure and test Azure AD SSO with OpenText XM Fax and XM SendSecure, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure OpenText XM Fax and XM SendSecure SSO](#configure-opentext-xm-fax-and-xm-sendsecure-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create OpenText XM Fax and XM SendSecure test user](#create-opentext-xm-fax-and-xm-sendsecure-test-user)** - to have a counterpart of B.Simon in OpenText XM Fax and XM SendSecure that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **OpenText XM Fax and XM SendSecure** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type one of the following URLs:
+
+ | **Identifier** |
+ |-|
+ | `https://login.xmedius.com/` |
+ | `https://login.xmedius.eu/` |
+ | `https://login.xmedius.ca/` |
+
+ b. In the **Reply URL** textbox, type one of the following URLs:
+
+ | **Reply URL** |
+ |-|
+ | `https://login.xmedius.com/auth/saml/callback` |
+ | `https://login.xmedius.eu/auth/saml/callback` |
+ | `https://login.xmedius.ca/auth/saml/callback` |
+
+ c. In the **Sign-on URL** text box, type one of the following URLs:
+
+ | **Sign-on URL** |
+ |-|
+ | `https://login.xmedius.com/` |
+ | `https://login.xmedius.eu/` |
+ | `https://login.xmedius.ca/` |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up OpenText XM Fax and XM SendSecure** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
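If you prefer to script this step instead of using the portal, a minimal Azure CLI sketch is shown below. The `contoso.com` domain is only the example used above; substitute a verified domain in your tenant and pick your own password.

```azurecli
# Create the B.Simon test user from the command line (equivalent to the portal steps above).
az ad user create \
  --display-name "B.Simon" \
  --user-principal-name "B.Simon@contoso.com" \
  --password "<choose-a-strong-password>"
```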
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to OpenText XM Fax and XM SendSecure.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **OpenText XM Fax and XM SendSecure**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure OpenText XM Fax and XM SendSecure SSO
+
+To configure single sign-on on the **OpenText XM Fax and XM SendSecure** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [OpenText XM Fax and XM SendSecure support team](mailto:support@opentext.com). They use this information to configure the SAML SSO connection properly on both sides.
+
+### Create OpenText XM Fax and XM SendSecure test user
+
+In this section, you create a user called Britta Simon at OpenText XM Fax and XM SendSecure. Work with [OpenText XM Fax and XM SendSecure support team](mailto:support@opentext.com) to add the users in the OpenText XM Fax and XM SendSecure platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to OpenText XM Fax and XM SendSecure Sign-on URL where you can initiate the login flow.
+
+* Go to OpenText XM Fax and XM SendSecure Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the OpenText XM Fax and XM SendSecure tile in the My Apps, this will redirect to OpenText XM Fax and XM SendSecure Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure OpenText XM Fax and XM SendSecure you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Snowflake Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md
This tutorial demonstrates the steps that you perform in Snowflake and Azure Act
> This connector is currently in public preview. For information about terms of use, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Capabilities supported+ > [!div class="checklist"]
+>
> * Create users in Snowflake > * Remove users in Snowflake when they don't require access anymore > * Keep user attributes synchronized between Azure AD and Snowflake
The scenario outlined in this tutorial assumes that you already have the followi
* A user account in Snowflake with admin permissions ## Step 1: Plan your provisioning deployment+ 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 1. Determine what data to [map between Azure AD and Snowflake](../app-provisioning/customize-application-attributes.md).
The scenario outlined in this tutorial assumes that you already have the followi
Before you configure Snowflake for automatic user provisioning with Azure AD, you need to enable System for Cross-domain Identity Management (SCIM) provisioning on Snowflake.
-1. Sign in to your Snowflake admin console. Enter the following query in the highlighted worksheet, and then select **Run**.
+1. Sign in to Snowflake as an administrator and execute the following from either the Snowflake worksheet interface or SnowSQL.
- ![Screenshot of the Snowflake admin console with query and Run button.](media/Snowflake-provisioning-tutorial/image00.png)
-
``` use role accountadmin;
- create or replace role aad_provisioner;
- grant create user on account to aad_provisioner;
- grant create role on account to aad_provisioner;
+ create role if not exists aad_provisioner;
+ grant create user on account to role aad_provisioner;
+ grant create role on account to role aad_provisioner;
grant role aad_provisioner to role accountadmin;
- create or replace security integration aad_provisioning type=scim scim_client=azure run_as_role='AAD_PROVISIONER';
-
- select SYSTEM$GENERATE_SCIM_ACCESS_TOKEN('AAD_PROVISIONING');
+ create or replace security integration aad_provisioning
+ type = scim
+ scim_client = 'azure'
+ run_as_role = 'AAD_PROVISIONER';
+ select system$generate_scim_access_token('AAD_PROVISIONING');
```
-1. A SCIM access token is generated for your Snowflake tenant. To retrieve it, select the link highlighted in the following screenshot.
+2. Use the ACCOUNTADMIN role.
+
+ ![Screenshot of a worksheet in the Snowflake UI with the SCIM access token called out.](media/Snowflake-provisioning-tutorial/step-2.png)
+
+3. Create the custom role AAD_PROVISIONER. All users and roles in Snowflake created by Azure AD will be owned by the scoped down AAD_PROVISIONER role.
- ![Screenshot of a worksheet in the Snowflake U I with the S C I M access token called out.](media/Snowflake-provisioning-tutorial/image01.png)
+ ![Screenshot showing the custom role.](media/Snowflake-provisioning-tutorial/step-3.png)
-1. Copy the generated token value and select **Done**. This value is entered in the **Secret Token** box on the **Provisioning** tab of your Snowflake application in the Azure portal.
+4. Let the ACCOUNTADMIN role create the security integration using the AAD_PROVISIONER custom role.
- ![Screenshot of the Details section, showing the token copied into the text field and the Done option called out.](media/Snowflake-provisioning-tutorial/image02.png)
+ ![Screenshot showing the security integrations.](media/Snowflake-provisioning-tutorial/step-4.png)
+
+5. Create and copy the authorization token to the clipboard and store it securely for later use. Use this token for each SCIM REST API request and place it in the request header. The access token expires after six months, and a new access token can be generated with this statement.
+
+ ![Screenshot showing the token generation.](media/Snowflake-provisioning-tutorial/step-5.png)
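Before you paste the token into Azure AD, you can optionally confirm that it works against Snowflake's SCIM endpoint. The sketch below makes two assumptions: `<account_identifier>` is your Snowflake account identifier, and `https://<account_identifier>.snowflakecomputing.com/scim/v2` is the SCIM base URL you'll also enter later as the **Tenant URL**.

```bash
# Call the SCIM 2.0 Users endpoint with the generated token; an HTTP 200 response confirms the token works.
curl -s -H "Authorization: Bearer <scim_access_token>" \
  "https://<account_identifier>.snowflakecomputing.com/scim/v2/Users?count=1"
```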
## Step 3: Add Snowflake from the Azure AD application gallery
-Add Snowflake from the Azure AD application gallery to start managing provisioning to Snowflake. If you previously set up Snowflake for single sign-on (SSO), you can use the same application. However, we recommend that you create a separate app when you're initially testing the integration. [Learn more about adding an application from the gallery](../manage-apps/add-application-portal.md).
+Add Snowflake from the Azure AD application gallery to start managing provisioning to Snowflake. If you previously set up Snowflake for single sign-on (SSO), you can use the same application. However, we recommend that you create a separate app when you're initially testing the integration. [Learn more about adding an application from the gallery](../manage-apps/add-application-portal.md).
-## Step 4: Define who will be in scope for provisioning
+## Step 4: Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the [steps to assign users and groups to the application](../manage-apps/assign-user-or-group-access-portal.md). If you choose to scope who will be provisioned based solely on attributes of the user or group, you can [use a scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the [steps to assign users and groups to the application](../manage-apps/assign-user-or-group-access-portal.md). If you choose to scope who will be provisioned based solely on attributes of the user or group, you can [use a scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Keep these tips in mind:
-* When you're assigning users and groups to Snowflake, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* When you're assigning users and groups to Snowflake, you must select a role other than Default Access. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the Default Access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. -
-## Step 5: Configure automatic user provisioning to Snowflake
+## Step 5: Configure automatic user provisioning to Snowflake
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Snowflake. You can base the configuration on user and group assignments in Azure AD.
To configure automatic user provisioning for Snowflake in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.
- ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
+ ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
2. In the list of applications, select **Snowflake**.
- ![Screenshot that shows a list of applications.](common/all-applications.png)
+ ![Screenshot that shows a list of applications.](common/all-applications.png)
3. Select the **Provisioning** tab.
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
+ ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
4. Set **Provisioning Mode** to **Automatic**.
- ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
+ ![Screenshot of the Provisioning Mode drop-down list with the Automatic option called out.](common/provisioning-automatic.png)
-5. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
+5. In the **Admin Credentials** section, enter the SCIM 2.0 base URL and authentication token that you retrieved earlier in the **Tenant URL** and **Secret Token** boxes, respectively.
Select **Test Connection** to ensure that Azure AD can connect to Snowflake. If the connection fails, ensure that your Snowflake account has admin permissions and try again.
- ![Screenshot that shows boxes for tenant U R L and secret token, along with the Test Connection button.](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot that shows boxes for tenant URL and secret token, along with the Test Connection button.](common/provisioning-testconnection-tenanturltoken.png)
6. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box.
- ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
+ ![Screenshot that shows boxes for notification email.](common/provisioning-notification-email.png)
7. Select **Save**.
To configure automatic user provisioning for Snowflake in Azure AD:
13. To enable the Azure AD provisioning service for Snowflake, change **Provisioning Status** to **On** in the **Settings** section.
- ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
+ ![Screenshot that shows Provisioning Status switched on.](common/provisioning-toggle-on.png)
14. Define the users and groups that you want to provision to Snowflake by choosing the desired values in **Scope** in the **Settings** section. If this option is not available, configure the required fields under **Admin Credentials**, select **Save**, and refresh the page.
- ![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
+ ![Screenshot that shows choices for provisioning scope.](common/provisioning-scope.png)
15. When you're ready to provision, select **Save**.
- ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
+ ![Screenshot of the button for saving a provisioning configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization of all users and groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs. Subsequent syncs occur about every 40 minutes, as long as the Azure AD provisioning service is running. ## Step 6: Monitor your deployment+ After you've configured provisioning, use the following resources to monitor your deployment: -- Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.-- Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.-- If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. [Learn more about quarantine states](../app-provisioning/application-provisioning-quarantine-status.md).
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. [Learn more about quarantine states](../app-provisioning/application-provisioning-quarantine-status.md).
## Connector limitations
-Snowflake-generated SCIM tokens expire in 6 months. Be aware that you need to refresh these tokens before they expire, to allow the provisioning syncs to continue working.
+Snowflake-generated SCIM tokens expire in 6 months. Be aware that you need to refresh these tokens before they expire, to allow the provisioning syncs to continue working.
## Troubleshooting tips
-The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allow list of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
+The Azure AD provisioning service currently operates under particular [IP ranges](../app-provisioning/use-scim-to-provision-users-and-groups.md#ip-ranges). If necessary, you can restrict other IP ranges and add these particular IP ranges to the allowlist of your application. That technique will allow traffic flow from the Azure AD provisioning service to your application.
## Change log * 07/21/2020: Enabled soft-delete for all users (via the active attribute).
+* 10/12/2022: Updated Snowflake SCIM Configuration.
## Additional resources
The Azure AD provisioning service currently operates under particular [IP ranges
* [What are application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps+ * [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Teamviewer Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamviewer-provisioning-tutorial.md
# Tutorial: Configure TeamViewer for automatic user provisioning
-This tutorial describes the steps you need to perform in both TeamViewer and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [TeamViewer](https://www.teamviewer.com/buy-now/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both TeamViewer and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [TeamViewer](https://www.teamviewer.com/buy-now/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
Add TeamViewer from the Azure AD application gallery to start managing provision
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. ## Step 5. Configure automatic user provisioning to TeamViewer
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TeamViewer based on user assignments in Azure AD.
### To configure automatic user provisioning for TeamViewer in Azure AD:
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-12. Define the users and/or groups that you would like to provision to TeamViewer by choosing the desired values in **Scope** in the **Settings** section.
+12. Define the users that you would like to provision to TeamViewer by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
This section guides you through the steps to configure the Azure AD provisioning
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment:
active-directory Zoom Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zoom-provisioning-tutorial.md
Title: 'Tutorial: Configure Zoom for automatic user provisioning with Azure Active Directory | Microsoft Docs' description: Learn how to automatically provision and de-provision user accounts from Azure AD to Zoom.
+documentationcenter: ''
-writer: twimmers
-
+writer: Thwimmer
+
+ms.assetid: d9bd44ed-2e9a-4a1b-b33c-cb9e9fe8ff47
+ms.devlang: na
Last updated 06/3/2019
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Zoom to support provisioning with Azure AD
-1. Sign in to your [Zoom Admin Console](https://zoom.us/signin). Navigate to **Advanced > App Marketplace** in the left navigation pane.
+1. Sign in to your [Zoom Admin Console](https://zoom.us/signin). Navigate to **ADMIN > Advanced > App Marketplace** in the left navigation pane.
- ![Zoom Integrations](media/zoom-provisioning-tutorial/zoom01.png)
+ ![Screenshot of Zoom Integrations.](media/zoom-provisioning-tutorial/app-navigations.png)
2. Navigate to **Manage** in the top-right corner of the page.
- ![Screenshot of the Zoom App Marketplace with the Manage option called out.](media/zoom-provisioning-tutorial/zoom02.png)
+ ![Screenshot of the Zoom App Marketplace with the Manage option called out.](media/zoom-provisioning-tutorial/zoom-manage.png)
3. Navigate to your created Azure AD app. ![Screenshot of the Created Apps section with the Azure A D app called out.](media/zoom-provisioning-tutorial/zoom03.png)
+ > [!NOTE]
+ > If you don't already have an Azure AD app created, create a [JWT type Azure AD app](https://marketplace.zoom.us/docs/guides/build/jwt-app) first.
+ 4. Select **App Credentials** in the left navigation pane. ![Screenshot of the left navigation pane with the App Credentials option highlighted.](media/zoom-provisioning-tutorial/zoom04.png)
Once you've configured provisioning, use the following resources to monitor your
## Change log * 05/14/2020 - Support for UPDATE operations added for emails[type eq "work"] attribute.
-* 10/20/2020 - Added support for two new roles "Licensed" and "On-Prem" to replace existing roles "Pro" and "Corp". Support for roles "Pro" and "Corp" will be removed in the future.
+* 10/20/2020 - Added support for two new roles "Licensed" and "on-premises" to replace existing roles "Pro" and "Corp". Support for roles "Pro" and "Corp" will be removed in the future.
## Additional resources
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
To confirm which endpoint you should use, we recommend checking your Azure AD te
### Credential Revocation with Enhanced Privacy
-The Azure AD Verifiable Credential service supports the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. Each Issuer tenant now has an [Identity Hub](https://identity.foundation/identity-hub/spec/) endpoint used by verifiers to check on the status of a credential using a privacy-respecting mechanism. The identity hub endpoint for the tenant is also published in the DID document. This feature replaces the current status endpoint.
+The Azure AD Verifiable Credential service supports the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. Each Issuer tenant now has an Identity Hub endpoint used by verifiers to check on the status of a credential using a privacy-respecting mechanism. The identity hub endpoint for the tenant is also published in the DID document. This feature replaces the current status endpoint.
To uptake this feature follow the next steps:
aks Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-identity.md
The following permissions are needed by the identity creating and operating the
> | `Microsoft.Network/virtualNetworks/subnets/join/action` | Required to configure the Network Security Group for the subnet when using a custom VNET.| > | `Microsoft.Network/publicIPAddresses/join/action` <br/> `Microsoft.Network/publicIPPrefixes/join/action` | Required to configure the outbound public IPs on the Standard Load Balancer. | > | `Microsoft.OperationalInsights/workspaces/sharedkeys/read` <br/> `Microsoft.OperationalInsights/workspaces/read` <br/> `Microsoft.OperationsManagement/solutions/write` <br/> `Microsoft.OperationsManagement/solutions/read` <br/> `Microsoft.ManagedIdentity/userAssignedIdentities/assign/action` | Required to create and update Log Analytics workspaces and Azure monitoring for containers. |
+> | `Microsoft.Network/virtualNetworks/joinLoadBalancer/action` | Required to configure the IP-based Load Balancer Backend Pools. |
### AKS cluster identity permissions
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
This policy can be used in the following policy [sections](./api-management-howt
The `ip-filter` policy filters (allows/denies) calls from specific IP addresses and/or address ranges.
+> [!NOTE]
+> The policy filters the immediate caller's IP address. However, if API Management is hosted behind Application Gateway, the policy considers the IP address of the Application Gateway, not that of the originator of the API request. Presently, IP addresses in the `X-Forwarded-For` header are not considered.
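To verify the behavior from a client, a hedged example is to call an API from an address the filter denies; the gateway rejects the request with a `403 Forbidden` response. The host name, API path, and subscription key below are placeholders for your instance.

```bash
# A request from a denied IP address is rejected by the API Management gateway with 403 Forbidden.
curl -i "https://<apim-instance>.azure-api.net/<api-path>/<operation>" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>"
```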
+ [!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)] ### Policy statement
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
All configuration items must be set up before you create the application gateway
$sku = New-AzApplicationGatewaySku -Name "WAF_v2" -Tier "WAF_v2" -Capacity 2 ```
-1. Configure WAF to be in "Prevention" mode.
+1. Configure the WAF mode.
+
+ > [!TIP]
+ > For a short period during setup and to test your firewall rules, you might want to configure "Detection" mode, which monitors and logs threat alerts but doesn't block traffic. You can then make any updates to firewall rules before transitioning to "Prevention" mode, which blocks intrusions and attacks that the rules detect.
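The PowerShell line that follows creates the WAF configuration in "Prevention" mode. If you begin in "Detection" mode and later want to flip an already deployed gateway to "Prevention" from the command line, a minimal Azure CLI sketch (gateway and resource group names are placeholders) is:

```azurecli
# Switch the WAF of an existing application gateway to Prevention mode
# after the firewall rules have been validated in Detection mode.
az network application-gateway waf-config set \
  --gateway-name <appgw-name> \
  --resource-group <resource-group> \
  --enabled true \
  --firewall-mode Prevention
```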
```powershell $config = New-AzApplicationGatewayWebApplicationFirewallConfiguration -Enabled $true -FirewallMode "Prevention"
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
The self-hosted gateway sends telemetry to [Azure Monitor](api-management-howto-
When [connectivity to Azure](self-hosted-gateway-overview.md#connectivity-to-azure) is temporarily lost, the flow of telemetry to Azure is interrupted and the data is lost for the duration of the outage. Consider [setting up local monitoring](how-to-configure-local-metrics-logs.md) to ensure the ability to observe API traffic and prevent telemetry loss during Azure connectivity outages.
+## HTTP(S) proxy
+
+The self-hosted gateway provides support for HTTP(S) proxy by using the traditional `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables.
+
+Once configured, the self-hosted gateway will automatically use the proxy for all outbound HTTP(S) requests to the backend services.
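For example, on a Kubernetes deployment of the gateway, one way to add these variables to an existing deployment is sketched below; the deployment name and proxy address are placeholders for your environment, and you can equally define them in the gateway's deployment YAML.

```bash
# Add proxy settings to an existing self-hosted gateway deployment.
kubectl set env deployment/<gateway-deployment-name> \
  HTTP_PROXY="http://<proxy-host>:3128" \
  HTTPS_PROXY="http://<proxy-host>:3128" \
  NO_PROXY="localhost,127.0.0.1,.svc.cluster.local"
```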
+
+Starting with version 2.1.5, the self-hosted gateway provides observability related to request proxying:
+
+- [API Inspector](api-management-howto-api-inspector.md) will show additional steps when HTTP(S) proxy is being used and its related interactions.
+- Verbose logs give an indication of the request proxy behavior.
+
+> [!Warning]
+> Ensure that the [infrastructure requirements](self-hosted-gateway-overview.md#fqdn-dependencies) have been met and that the self-hosted gateway can still reach those dependencies, or certain functionality won't work properly.
+ ## High availability The self-hosted gateway is a crucial component in the infrastructure and has to be highly available. However, failure will and can happen.
azure-fluid-relay Quickstart Dice Roll https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/quickstarts/quickstart-dice-roll.md
You can open new tabs with the same URL to create additional instances of the di
To run against the Azure Fluid Relay service, you'll need to update your app's configuration to connect to your Azure service instead of your local server. ### Configure and create an Azure client-
+Install the @fluidframework/azure-client and @fluidframework/test-client-utils packages, and import AzureClient and InsecureTokenProvider.
+```javascript
+import { InsecureTokenProvider } from "@fluidframework/test-client-utils";
+import { AzureClient } from "@fluidframework/azure-client";
+```
To configure the Azure client, replace the local connection `serviceConfig` object in `app.js` with your Azure Fluid Relay service configuration values. These values can be found in the "Access Key" section of the Fluid Relay resource in the Azure portal. Your `serviceConfig` object should look like this with the values replaced
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
description: Understand how to use C# to develop and publish code as class libra
ms.devlang: csharp Previously updated : 05/12/2022 Last updated : 10/12/2022 # Develop C# class library functions using Azure Functions
This directory is what gets deployed to your function app in Azure. The binding
## Methods recognized as functions
-In a class library, a function is a static method with a `FunctionName` and a trigger attribute, as shown in the following example:
+In a class library, a function is a method with a `FunctionName` and a trigger attribute, as shown in the following example:
```csharp public static class SimpleExample
public static class SimpleExample
} ```
-The `FunctionName` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name.
+The `FunctionName` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name. The above example shows a static method being used, but functions aren't required to be static.
The trigger attribute specifies the trigger type and binds input data to a method parameter. The example function is triggered by a queue message, and the queue message is passed to the method in the `myQueueItem` parameter.
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
Use the following CLI commands to install the Azure Monitor agent on Azure virtu
# [Windows](#tab/CLIWindows) ```azurecli
-az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
``` # [Linux](#tab/CLILinux) ```azurecli
-az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
```
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
Title: Configure Container insights agent data collection | Microsoft Docs
description: This article describes how you can configure the Container insights agent to control stdout/stderr and environment variables log collection. Last updated 08/25/2022-+ # Configure agent data collection for Container insights
The following table describes the settings you can configure to control data col
| `[log_collection_settings.stdout] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs will not be collected. This setting is effective only if<br> `log_collection_settings.stdout.enabled`<br> is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. | | `[log_collection_settings.stderr] enabled =` | Boolean | true or false | This controls if stderr container log collection is enabled.<br> When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stderr.exclude_namespaces` setting), stderr logs will be collected from all containers across all pods/nodes in the cluster.<br> If not specified in ConfigMaps, the default value is<br> `enabled = true`. | | `[log_collection_settings.stderr] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs will not be collected.<br> This setting is effective only if<br> `log_collection_settings.stdout.enabled` is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. |
-| `[log_collection_settings.env_var] enabled =` | Boolean | true or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in ConfigMaps.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to **False** either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the **env:** section.<br> If collection of environment variables is globally disabled, then you cannot enable collection for a specific container (that is, the only override that can be applied at the container level is to disable collection when it's already enabled globally.). It's strongly recommended to secure log analytics workspace access with the default [log_collection_settings.env_var] enabled = true. If sensitive data is stored in environment variables, it is mandatory and very critical to secure log analytics workspace. |
+| `[log_collection_settings.env_var] enabled =` | Boolean | true or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in ConfigMaps.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to **False** either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the **env:** section.<br> If collection of environment variables is globally disabled, then you cannot enable collection for a specific container (that is, the only override that can be applied at the container level is to disable collection when it's already enabled globally.). |
| `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | true or false | This setting controls container log enrichment to populate the Name and Image property values<br> for every log record written to the ContainerLog table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in ConfigMap. | | `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | true or false | This setting allows the collection of Kube events of all types.<br> By default the Kube events with type *Normal* are not collected. When this setting is set to `true`, the *Normal* events are no longer filtered and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap |
Perform the following steps to configure and deploy your ConfigMap configuration
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" created`.
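To follow the rolling restart from the command line, one option is to check the DaemonSet rollout status. This is a sketch that assumes the agent DaemonSet and pod label names (`ama-logs`, `dsName=ama-logs-ds`) shown later in this article:

```bash
# Wait for the agent DaemonSet to finish its rolling restart
kubectl rollout status ds/ama-logs -n kube-system

# List the agent pods to confirm they're running again
kubectl get pods -n kube-system -l dsName=ama-logs-ds
```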
## Verify configuration
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n kube-system`. If there are configuration errors from the omsagent pods, the output will show errors similar to the following:
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n kube-system`. If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following:
``` ***************Start Config Processing********************
After you correct the error(s) in ConfigMap, save the yaml file and apply the up
If you have already deployed a ConfigMap on clusters and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used and then apply it using the same command as before, `kubectl apply -f <configmap_yaml_file.yaml>`.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods, not all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" updated`.
+The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor Agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor Agent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following and includes the result: `configmap "container-azm-ms-agentconfig" updated`.
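To double-check that the cluster now holds the updated settings, you could print the ConfigMap back out. A minimal sketch using the ConfigMap name from this article:

```bash
kubectl get configmap container-azm-ms-agentconfig -n kube-system -o yaml
```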
## Verifying schema version
-Supported config schema versions are available as pod annotation (schema-versions) on the omsagent pod. You can see them with the following kubectl command: `kubectl describe pod omsagent-fdf58 -n=kube-system`
+Supported config schema versions are available as pod annotation (schema-versions) on the Azure Monitor Agent pod. You can see them with the following kubectl command: `kubectl describe pod ama-logs-fdf58 -n=kube-system`
The output will be similar to the following, with the annotation schema-versions: ```
- Name: omsagent-fdf58
+ Name: ama-logs-fdf58
Namespace: kube-system Node: aks-agentpool-95673144-0/10.240.0.4 Start Time: Mon, 10 Jun 2019 15:01:03 -0700 Labels: controller-revision-hash=589cc7785d
- dsName=omsagent-ds
+ dsName=ama-logs-ds
pod-template-generation=1 Annotations: agentVersion=1.10.0.1 dockerProviderVersion=5.0.0-0
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
After you've enabled monitoring, it might take about 15 minutes before you can v
Run the following command to verify that the agent is deployed successfully. ```
-kubectl get ds omsagent --namespace=kube-system
+kubectl get ds ama-logs --namespace=kube-system
``` The output should resemble the following, which indicates that it was deployed properly: ```output
-User@aksuser:~$ kubectl get ds omsagent --namespace=kube-system
+User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-omsagent 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
+ama-logs 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
``` If there are Windows Server nodes on the cluster then you can run the following command to verify that the agent is deployed successfully. ```
-kubectl get ds omsagent-win --namespace=kube-system
+kubectl get ds ama-logs-windows --namespace=kube-system
``` The output should resemble the following, which indicates that it was deployed properly: ```output
-User@aksuser:~$ kubectl get ds omsagent-win --namespace=kube-system
+User@aksuser:~$ kubectl get ds ama-logs-windows --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-omsagent-win 2 2 2 2 2 beta.kubernetes.io/os=windows 1d
+ama-logs-windows 2 2 2 2 2 beta.kubernetes.io/os=windows 1d
``` To verify deployment of the solution, run the following command: ```
-kubectl get deployment omsagent-rs -n=kube-system
+kubectl get deployment ama-logs-rs -n=kube-system
``` The output should resemble the following, which indicates that it was deployed properly:
The output should resemble the following, which indicates that it was deployed p
```output User@aksuser:~$ kubectl get deployment ama-logs-rs -n=kube-system NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-omsagent 1 1 1 1 3h
+ama-logs-rs 1 1 1 1 3h
``` ## View configuration with CLI
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Title: Monitor Azure Arc-enabled Kubernetes clusters Last updated 05/24/2022 - description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
To use [managed identity authentication (preview)](container-insights-onboard.md#authentication), add the `configuration-settings` parameter as in the following: ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.useAADAuth=true
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true
```
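To confirm the extension was created with managed identity authentication enabled, you could inspect it with the `az k8s-extension show` command referenced later in this article. Resource names here are placeholders:

```azurecli
az k8s-extension show --name azuremonitor-containers \
  --cluster-name <cluster-name> --resource-group <resource-group> \
  --cluster-type connectedClusters
```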
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
If you want to tweak the default resource requests and limits, you can use the advanced configuration settings: ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.resources.daemonset.limits.cpu=150m omsagent.resources.daemonset.limits.memory=600Mi omsagent.resources.deployment.limits.cpu=1 omsagent.resources.deployment.limits.memory=750Mi
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.resources.daemonset.limits.cpu=150m amalogsagent.resources.daemonset.limits.memory=600Mi amalogsagent.resources.deployment.limits.cpu=1 amalogsagent.resources.deployment.limits.memory=750Mi
```
-Check out the [resource requests and limits section of Helm chart](https://github.com/helm/charts/blob/master/incubator/azuremonitor-containers/values.yaml) for the available configuration settings.
+Check out the [resource requests and limits section of Helm chart](https://github.com/helm/charts/blob/master/incubator/azuremonitor-containers/values.yaml) for the available configuration settings.
### Option 4 - On Azure Stack Edge If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, then a custom mount path `/home/data/docker` needs to be used. ```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.logsettings.custommountpath=/home/data/docker
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.logsettings.custommountpath=/home/data/docker
```
az k8s-extension show --name azuremonitor-containers --cluster-name \<cluster-na
Enable Container insights extension with managed identity authentication option using the workspace returned in the first step. ```cli
-az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster-name\> --resource-group \<resource-group\> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.useAADAuth=true logAnalyticsWorkspaceResourceID=\<workspace-resource-id\>
+az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster-name\> --resource-group \<resource-group\> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true logAnalyticsWorkspaceResourceID=\<workspace-resource-id\>
``` ## [Resource Manager](#tab/migrate-arm)
For issues with enabling monitoring, we have provided a [troubleshooting script]
- By default, the containerized agent collects the stdout/stderr container logs of all the containers running in all the namespaces except kube-system. To configure container log collection specific to a particular namespace or namespaces, review [Container Insights agent configuration](container-insights-agent-config.md) to configure the desired data collection settings in your ConfigMap configuration file. -- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus.md)
+- To scrape and analyze Prometheus metrics from your cluster, review [Configure Prometheus metrics scraping](container-insights-prometheus-integration.md)
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
Title: Configure Hybrid Kubernetes clusters with Container insights | Microsoft Docs description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environment. Previously updated : 08/29/2022- Last updated : 06/30/2020+ # Configure hybrid Kubernetes clusters with Container insights
To first identify the full resource ID of your Log Analytics workspace required
## Install the HELM chart
-In this section you install the containerized agent for Container insights. Before proceeding, you need to identify the workspace ID required for the `omsagent.secret.wsid` parameter, and primary key required for the `omsagent.secret.key` parameter. You can identify this information by performing the following steps, and then run the commands to install the agent using the HELM chart.
+In this section you install the containerized agent for Container insights. Before proceeding, you need to identify the workspace ID required for the `amalogsagent.secret.wsid` parameter, and primary key required for the `amalogsagent.secret.key` parameter. You can identify this information by performing the following steps, and then run the commands to install the agent using the HELM chart.
1. Run the following command to identify the workspace ID:
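The specific commands are elided in this excerpt. Purely as an illustration (not necessarily the article's exact steps), both the workspace ID and the primary key could be retrieved with the Azure CLI:

```azurecli
# Workspace ID (customer ID) for amalogsagent.secret.wsid
az monitor log-analytics workspace show \
  --resource-group <resource-group> --workspace-name <workspace-name> \
  --query customerId -o tsv

# Primary shared key for amalogsagent.secret.key
az monitor log-analytics workspace get-shared-keys \
  --resource-group <resource-group> --workspace-name <workspace-name> \
  --query primarySharedKey -o tsv
```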
In this section you install the containerized agent for Container insights. Befo
>The following commands are applicable only for Helm version 2. Use of the `--name` parameter is not applicable with Helm version 3. >[!NOTE]
->If your Kubernetes cluster communicates through a proxy server, configure the parameter `omsagent.proxy` with the URL of the proxy server. If the cluster does not communicate through a proxy server, then you don't need to specify this parameter. For more information, see [Configure proxy endpoint](#configure-proxy-endpoint) later in this article.
+>If your Kubernetes cluster communicates through a proxy server, configure the parameter `amalogsagent.proxy` with the URL of the proxy server. If the cluster does not communicate through a proxy server, then you don't need to specify this parameter. For more information, see [Configure proxy endpoint](#configure-proxy-endpoint) later in this article.
3. Add the Azure charts repository to your local list by running the following command:
In this section you install the containerized agent for Container insights. Befo
``` $ helm install --name myrelease-1 \
- --set omsagent.secret.wsid=<logAnalyticsWorkspaceId>,omsagent.secret.key=<logAnalyticsWorkspaceKey>,omsagent.env.clusterName=<my_prod_cluster> microsoft/azuremonitor-containers
+ --set amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<my_prod_cluster> microsoft/azuremonitor-containers
``` If the Log Analytics workspace is in Azure China 21Vianet, run the following command: ``` $ helm install --name myrelease-1 \
- --set omsagent.domain=opinsights.azure.cn,omsagent.secret.wsid=<logAnalyticsWorkspaceId>,omsagent.secret.key=<logAnalyticsWorkspaceKey>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
+ --set amalogsagent.domain=opinsights.azure.cn,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
``` If the Log Analytics workspace is in Azure US Government, run the following command: ``` $ helm install --name myrelease-1 \
- --set omsagent.domain=opinsights.azure.us,omsagent.secret.wsid=<logAnalyticsWorkspaceId>,omsagent.secret.key=<logAnalyticsWorkspaceKey>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
+ --set amalogsagent.domain=opinsights.azure.us,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
``` ### Enable the Helm chart using the API Model
After you have successfully deployed the chart, you can review the data for your
## Configure proxy endpoint
-Starting with chart version 2.7.1, chart will support specifying the proxy endpoint with the `omsagent.proxy` chart parameter. This allows it to communicate through your proxy server. Communication between the Container insights agent and Azure Monitor can be an HTTP or HTTPS proxy server, and both anonymous and basic authentication (username/password) are supported.
+Starting with chart version 2.7.1, the chart supports specifying the proxy endpoint with the `amalogsagent.proxy` chart parameter, which allows it to communicate through your proxy server. Communication between the Container insights agent and Azure Monitor can go through an HTTP or HTTPS proxy server, and both anonymous and basic authentication (username/password) are supported.
The proxy configuration value has the following syntax: `[protocol://][user:password@]proxyhost[:port]`
The proxy configuration value has the following syntax: `[protocol://][user:pass
|proxyhost | Address or FQDN of the proxy server | |port | Optional port number for the proxy server |
-For example: `omsagent.proxy=http://user01:password@proxy01.contoso.com:8080`
+For example: `amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080`
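As a sketch of how this fits the install command shown earlier (Helm 2 syntax; all values are placeholders), the proxy value is passed as another `--set` parameter:

```bash
helm install --name myrelease-1 \
  --set amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<my_prod_cluster> \
  microsoft/azuremonitor-containers
```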
If you specify the protocol as **http**, the HTTP requests are created using SSL/TLS secure connection. Your proxy server must support SSL/TLS protocols.
If you encounter an error while attempting to enable monitoring for your hybrid
- The specified Log Analytics workspace is valid - The Log Analytics workspace is configured with the Container insights solution. If not, configure the workspace.-- OmsAgent replicaset pods are running-- OmsAgent daemonset pods are running-- OmsAgent Health service is running
+- Azure Monitor Agent replicaset pods are running
+- Azure Monitor Agent daemonset pods are running
+- Azure Monitor Agent Health service is running
- The Log Analytics workspace ID and key configured on the containerized agent match the workspace that the insight is configured with. - Validate that all the Linux worker nodes have the `kubernetes.io/role=agent` label so the rs pod can be scheduled. If the label doesn't exist, add it. - Validate that `cAdvisor secure port: 10250` or `unsecure port: 10255` is opened on all nodes in the cluster (example commands for some of these checks follow this list).
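A few of these checks can be scripted with kubectl. The following is a sketch only, assuming the agent resources use the `ama-logs` names shown elsewhere in this digest:

```bash
# Agent replicaset (deployment) and daemonset pods
kubectl get deployment ama-logs-rs -n kube-system
kubectl get ds ama-logs -n kube-system

# Linux worker nodes that carry the label required to schedule the rs pod
kubectl get nodes -l kubernetes.io/role=agent
```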
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Title: How to manage the Container insights agent | Microsoft Docs description: This article describes managing the most common maintenance tasks with the containerized Log Analytics agent used by Container insights. Previously updated : 08/29/2022- Last updated : 07/21/2020+ # How to manage the Container insights agent Container insights uses a containerized version of the Log Analytics agent for Linux. After initial deployment, there are routine or optional tasks you may need to perform during its lifecycle. This article details how to manually upgrade the agent and disable collection of environmental variables from a particular container. +
+>[!NOTE]
+>The Container Insights agent name has changed from OMSAgent to Azure Monitor Agent, along with a few other resource names. This doc reflects the new name. Please update your commands, alerts, and scripts that reference the old name. Read more about the name change in [our blog post](https://techcommunity.microsoft.com/t5/azure-monitor-status-archive/name-update-for-agent-and-associated-resources-in-azure-monitor/ba-p/3576810).
+>
+ ## How to upgrade the Container insights agent
-Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). For a [hybrid Kubernetes cluster](container-insights-hybrid-setup.md), the agent is not managed, and you need to manually upgrade the agent.
+Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Red Hat OpenShift version 3.x. For a [hybrid Kubernetes cluster](container-insights-hybrid-setup.md) and Azure Red Hat OpenShift version 4.x, the agent is not managed, and you need to manually upgrade the agent.
-If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+If the agent upgrade fails for a cluster hosted on AKS or Azure Red Hat OpenShift version 3.x, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
### Upgrade agent on AKS cluster
To install the new version of the agent, follow the steps described in the [enab
After you've re-enabled monitoring, it might take about 15 minutes before you can view updated health metrics for the cluster. To verify the agent upgraded successfully, you can either:
-* Run the command: `kubectl get pod <omsagent-pod-name> -n kube-system -o=jsonpath='{.spec.containers[0].image}'`. In the status returned, note the value under **Image** for omsagent in the *Containers* section of the output.
+* Run the command: `kubectl get pod <ama-logs-agent-pod-name> -n kube-system -o=jsonpath='{.spec.containers[0].image}'`. In the status returned, note the value under **Image** for Azure Monitor Agent in the *Containers* section of the output.
* On the **Nodes** tab, select the cluster node and on the **Properties** pane to the right, note the value under **Agent Image Tag**. The version of the agent shown should match the latest version listed on the [Release history](https://github.com/microsoft/docker-provider/tree/ci_feature_prod) page.
If the Log Analytics workspace is in Azure US Government, run the following comm
$ helm upgrade --set omsagent.domain=opinsights.azure.us,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers ```
+### Upgrade agent on Azure Red Hat OpenShift v4
+
+Perform the following steps to upgrade the agent on a Kubernetes cluster running on Azure Red Hat OpenShift version 4.x.
+
+>[!NOTE]
+>Azure Red Hat OpenShift version 4.x only supports running in the Azure commercial cloud.
+>
+
+```console
+curl -o upgrade-monitoring.sh -L https://aka.ms/upgrade-monitoring-bash-script
+export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
+bash upgrade-monitoring.sh --resource-id $azureAroV4ClusterResourceId
+```
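After the script completes, one way to confirm the agent pods are healthy could be to list them in the logging namespace that a later section of this article references for Azure Red Hat OpenShift clusters. Pod names will vary:

```bash
oc get pods -n openshift-azure-logging
```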
## How to disable environment variable collection on a container Container insights collects environmental variables from the containers running in a pod and presents them in the property pane of the selected container in the **Containers** view. You can control this behavior by disabling collection for a specific container either during deployment of the Kubernetes cluster, or after by setting the environment variable *AZMON_COLLECT_ENV*. This feature is available from agent version ciprod11292018 and higher.
To disable collection of environmental variables on a new or existing container,
value: "False" ```
-Run the following command to apply the change to Kubernetes clusters: `kubectl apply -f <path to yaml file>`.
+Run the following command to apply the change to Kubernetes clusters (other than Azure Red Hat OpenShift): `kubectl apply -f <path to yaml file>`. To edit the ConfigMap and apply this change for Azure Red Hat OpenShift clusters, run the command:
+
+```bash
+oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging
+```
+
+This opens your default text editor. After setting the variable, save the file in the editor.
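In addition to the portal check described next, a quick command-line spot check is possible. The pod, container, and namespace names below are placeholders:

```bash
# Expect AZMON_COLLECT_ENV=False for containers where collection was disabled
kubectl exec <pod-name> -c <container-name> -n <namespace> -- printenv AZMON_COLLECT_ENV
```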
To verify the configuration change took effect, select a container in the **Containers** view in Container insights, and in the property panel, expand **Environment Variables**. The section should show only the variable created earlier - **AZMON_COLLECT_ENV=FALSE**. For all other containers, the Environment Variables section should list all the environment variables discovered.
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Title: Create metric alert rules in Container insights (preview)
-description: Describes how to create recommended metric alerts rules for a Kubernetes cluster in Container insights.
+ Title: Metric alerts from Container insights
+description: This article reviews the recommended metric alerts available from Container insights in public preview.
- Previously updated : 09/28/2022 Last updated : 05/24/2022
-# Metric alert rules in Container insights (preview)
+# Recommended metric alerts (preview) from Container insights
-Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides pre-configured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.
+To alert on system resource issues when they're experiencing peak demand and running near capacity, you would create a log alert in Container insights based on performance data stored in Azure Monitor Logs. Container insights now includes pre-configured metric alert rules for your AKS and Azure Arc-enabled Kubernetes clusters, which are in public preview.
-> [!IMPORTANT]
-> Container insights in Azure Monitor now supports alerts based on Prometheus metrics. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts.
-## Types of metric alert rules
-There are two types of metric rules used by Container insights based on either Prometheus metrics or custom metrics. See a list of the specific alert rules for each at [Alert rule details](#alert-rule-details).
+This article reviews the experience and provides guidance on configuring and managing these alert rules.
-| Alert rule type | Description |
-|:|:|
-| [Prometheus rules](#prometheus-alert-rules) | Alert rules that use metrics stored in [Azure Monitor managed service for Prometheus (preview)](../essentials/prometheus-metrics-overview.md). There are two sets of Prometheus alert rules that you can choose to enable.<br><br>- *Community alerts* are hand-picked alert rules from the Prometheus community. Use this set of alert rules if you don't have any other alert rules enabled.<br>-*Recommended alerts* are the equivalent of the custom metric alert rules. Use this set if you're migrating from custom metrics to Prometheus metrics and want to retain identical functionality.
-| [Metric rules](#metrics-alert-rules) | Alert rules that use [custom metrics collected for your Kubernetes cluster](container-insights-custom-metrics.md). Use these alert rules if you're not ready to move to Prometheus metrics yet or if you want to manage your alert rules in the Azure portal. |
+If you're not familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md).
+> [!NOTE]
+> Beginning October 8, 2021, three alerts have been updated to correctly calculate the alert condition: **Container CPU %**, **Container working set memory %**, and **Persistent Volume Usage %**. These new alerts have the same names as their corresponding previously available alerts, but they use new, updated metrics. We recommend that you disable the alerts that use the "Old" metrics, described in this article, and enable the "New" metrics. The "Old" metrics will no longer be available in recommended alerts after they are disabled, but you can manually re-enable them.
-## Prometheus alert rules
-[Prometheus alert rules](../alerts/alerts-types.md#prometheus-alerts-preview) use metric data from your Kubernetes cluster sent to [Azure Monitor manage service for Prometheus](../essentials/prometheus-metrics-overview.md).
+## Prerequisites
-### Prerequisites
-- Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md).
+Before you start, confirm the following:
-### Enable alert rules
+* Custom metrics are only available in a subset of Azure regions. A list of supported regions is documented in [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
-The only method currently available for creating Prometheus alert rules is a Resource Manager template.
+* To support metric alerts and the introduction of additional metrics, the minimum agent version required is **mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod05262020** for AKS and **mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod09252020** for Azure Arc-enabled Kubernetes cluster.
-1. Download the template that includes the set of alert rules that you want to enable. See [Alert rule details](#alert-rule-details) for a listing of the rules for each.
+ To verify your cluster is running the newer version of the agent, you can either:
- - [Community alerts](https://aka.ms/azureprometheus-communityalerts)
- - [Recommended alerts](https://aka.ms/azureprometheus-recommendedalerts)
+ * Run the command: `kubectl describe pod <azure-monitor-agent-pod-name> --namespace=kube-system`. In the status returned, note the value under **Image** for Azure Monitor agent in the *Containers* section of the output.
+ * On the **Nodes** tab, select the cluster node and on the **Properties** pane to the right, note the value under **Agent Image Tag**.
-2. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates) for guidance.
+ The value shown for AKS should be version **ciprod05262020** or later. The value shown for Azure Arc-enabled Kubernetes cluster should be version **ciprod09252020** or later. If your cluster has an older version, see [How to upgrade the Container insights agent](container-insights-manage-agent.md#upgrade-agent-on-aks-cluster) for steps to get the latest version.
-> [!NOTE]
-> While the Prometheus alert could be created in a different resource group to the target resource, you should use the same resource group as your target resource.
+ For more information related to the agent release, see [agent release history](https://github.com/microsoft/docker-provider/tree/ci_feature_prod). To verify metrics are being collected, you can use Azure Monitor metrics explorer and verify from the **Metric namespace** that **insights** is listed. If it is, you can go ahead and start setting up the alerts. If you don't see any metrics collected, the cluster Service Principal or MSI is missing the necessary permissions. To verify the SPN or MSI is a member of the **Monitoring Metrics Publisher** role, follow the steps described in the section [Upgrade per cluster using Azure CLI](container-insights-update-metrics.md#update-one-cluster-by-using-the-azure-cli) to confirm and set role assignment.
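If the role assignment turns out to be missing, one way to add it is with the Azure CLI. This is a sketch only (the identity object ID and cluster resource ID are placeholders); the linked section describes the supported approach:

```azurecli
az role assignment create \
  --assignee <spn-or-msi-object-id> \
  --role "Monitoring Metrics Publisher" \
  --scope <cluster-resource-id>
```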
+
+> [!TIP]
+> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
-### Edit alert rules
+## Alert rules overview
- To edit the query and threshold or configure an action group for your alert rules, edit the appropriate values in the ARM template and redeploy it using any deployment method.
+To alert on what matters, Container insights includes the following metric alerts for your AKS and Azure Arc-enabled Kubernetes clusters:
-### Configure alertable metrics in ConfigMaps
+|Name| Description |Default threshold |
+|-|-||
+|**(New)Average container CPU %** |Calculates average CPU used per container.|When average CPU usage per container is greater than 95%.|
+|**(New)Average container working set memory %** |Calculates average working set memory used per container.|When average working set memory usage per container is greater than 95%. |
+|Average CPU % |Calculates average CPU used per node. |When average node CPU utilization is greater than 80% |
+| Daily Data Cap Breach | When data cap is breached| When the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md) |
+|Average Disk Usage % |Calculates average disk usage for a node.|When disk usage for a node is greater than 80%. |
+|**(New)Average Persistent Volume Usage %** |Calculates average PV usage per pod. |When average PV usage per pod is greater than 80%.|
+|Average Working set memory % |Calculates average Working set memory for a node. |When average Working set memory for a node is greater than 80%. |
+|Restarting container count |Calculates number of restarting containers. | When container restarts are greater than 0. |
+|Failed Pod Counts |Calculates if any pod in failed state.|When a number of pods in failed state are greater than 0. |
+|Node NotReady status |Calculates if any node is in NotReady state.|When a number of nodes in NotReady state are greater than 0. |
+|OOM Killed Containers |Calculates number of OOM killed containers. |When a number of OOM killed containers is greater than 0. |
+|Pods ready % |Calculates the average ready state of pods. |When ready state of pods is less than 80%.|
+|Completed job count |Calculates number of jobs completed more than six hours ago. |When number of stale jobs older than six hours is greater than 0.|
-Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
+There are common properties across all of these alert rules:
-- *cpuExceededPercentage*-- *cpuThresholdViolated*-- *memoryRssExceededPercentage*-- *memoryRssThresholdViolated*-- *memoryWorkingSetExceededPercentage*-- *memoryWorkingSetThresholdViolated*-- *pvUsageExceededPercentage*-- *pvUsageThresholdViolated*
+* All alert rules are metric based.
-> [!TIP]
-> Download the new ConfigMap from [here](https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
+* All alert rules are disabled by default.
+* All alert rules are evaluated once per minute, and they look back at the last 5 minutes of data.
-1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
+* Alert rules do not have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.
- - **Example**. Use the following ConfigMap configuration to modify the *cpuExceededPercentage* threshold to 90%:
+* You can modify the threshold for alert rules by directly editing them. However, refer to the guidance provided in each alert rule before modifying its threshold.
- ```
- [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
- # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
- container_cpu_threshold_percentage = 90.0
- # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
- container_memory_rss_threshold_percentage = 95.0
- # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
- container_memory_working_set_threshold_percentage = 95.0
- ```
+The following alert-based metrics have unique behavior characteristics compared to the other metrics:
- - **Example**. Use the following ConfigMap configuration to modify the *pvUsageExceededPercentage* threshold to 80%:
+* *completedJobsCount* metric is only sent when there are jobs that were completed more than six hours ago.
- ```
- [alertable_metrics_configuration_settings.pv_utilization_thresholds]
- # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
- pv_usage_threshold_percentage = 80.0
- ```
+* *containerRestartCount* metric is only sent when there are containers restarting.
-2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+* *oomKilledContainerCount* metric is only sent when there are OOM killed containers.
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+* *cpuExceededPercentage*, *memoryRssExceededPercentage*, and *memoryWorkingSetExceededPercentage* metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. This means that if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
-The configuration change can take a few minutes to finish before taking effect, and all omsagent pods in the cluster will restart. The restart is a rolling restart for all omsagent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
+* *pvUsageExceededPercentage* metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). *pvUsageThresholdViolated* metric is equal to 0 when the PV usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. This means that if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace is excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.
-## Metrics alert rules
-[Metric alert rules](../alerts/alerts-types.md#metric-alerts) use [custom metric data from your Kubernetes cluster](container-insights-custom-metrics.md).
+## Metrics collected
+The following metrics are enabled and collected, unless otherwise specified, as part of this feature. The metrics in **bold** with label "Old" are the ones replaced by "New" metrics collected for correct alert evaluation.
-### Prerequisites
- - You may need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
- - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
+|Metric namespace |Metric |Description |
+||-||
+|Insights.container/nodes |cpuUsageMillicores |CPU utilization in millicores by host.|
+|Insights.container/nodes |cpuUsagePercentage, cpuUsageAllocatablePercentage (preview) |CPU usage percentage by node and allocatable respectively.|
+|Insights.container/nodes |memoryRssBytes |Memory RSS utilization in bytes by host.|
+|Insights.container/nodes |memoryRssPercentage, memoryRssAllocatablePercentage (preview) |Memory RSS usage percentage by host and allocatable respectively.|
+|Insights.container/nodes |memoryWorkingSetBytes |Memory Working Set utilization in bytes by host.|
+|Insights.container/nodes |memoryWorkingSetPercentage, memoryWorkingSetAllocatablePercentage (preview) |Memory Working Set usage percentage by host and allocatable respectively.|
+|Insights.container/nodes |nodesCount |Count of nodes by status.|
+|Insights.container/nodes |diskUsedPercentage |Percentage of disk used on the node by device.|
+|Insights.container/pods |podCount |Count of pods by controller, namespace, node, and phase.|
+|Insights.container/pods |completedJobsCount |Completed jobs count older than the user-configurable threshold (default is six hours) by controller, Kubernetes namespace. |
+|Insights.container/pods |restartingContainerCount |Count of container restarts by controller, Kubernetes namespace.|
+|Insights.container/pods |oomKilledContainerCount |Count of OOMkilled containers by controller, Kubernetes namespace.|
+|Insights.container/pods |podReadyPercentage |Percentage of pods in ready state by controller, Kubernetes namespace.|
+|Insights.container/containers |**(Old)cpuExceededPercentage** |CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
+|Insights.container/containers |**(New)cpuThresholdViolated** |Metric triggered when CPU utilization percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.<br> Collected |
+|Insights.container/containers |**(Old)memoryRssExceededPercentage** |Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
+|Insights.container/containers |**(New)memoryRssThresholdViolated** |Metric triggered when Memory RSS percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
+|Insights.container/containers |**(Old)memoryWorkingSetExceededPercentage** |Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
+|Insights.container/containers |**(New)memoryWorkingSetThresholdViolated** |Metric triggered when Memory Working Set percentage for containers exceeding user configurable threshold (default is 95.0) by container name, controller name, Kubernetes namespace, pod name.|
+|Insights.container/persistentvolumes |**(Old)pvUsageExceededPercentage** |PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.|
+|Insights.container/persistentvolumes |**(New)pvUsageThresholdViolated** |Metric triggered when PV utilization percentage for persistent volumes exceeding user configurable threshold (default is 60.0) by claim name, Kubernetes namespace, volume name, pod name, and node name.
+## Enable alert rules
-### Enable and configure alert rules
+Follow these steps to enable the metric alerts in Azure Monitor from the Azure portal. To enable using a Resource Manager template, see [Enable with a Resource Manager template](#enable-with-a-resource-manager-template).
-#### [Azure portal](#tab/azure-portal)
+### From the Azure portal
-#### Enable alert rules
+This section walks through enabling Container insights metric alert (preview) from the Azure portal.
-1. From the **Insights** menu for your cluster, select **Recommended alerts**.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
- :::image type="content" source="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" lightbox="media/container-insights-metric-alerts/command-bar-recommended-alerts.png" alt-text="Screenshot showing recommended alerts option in Container insights.":::
+2. Access to the Container insights metrics alert (preview) feature is available directly from an AKS cluster by selecting **Insights** from the left pane in the Azure portal.
+3. From the command bar, select **Recommended alerts**.
-2. Toggle the **Status** for each alert rule to enable. The alert rule is created and the rule name updates to include a link to the new alert resource.
+ ![Screenshot showing the Recommended alerts option in Container insights.](./media/container-insights-metric-alerts/command-bar-recommended-alerts.png)
- :::image type="content" source="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" lightbox="media/container-insights-metric-alerts/recommended-alerts-pane-enable.png" alt-text="Screenshot showing list of recommended alerts and option for enabling each.":::
+4. The **Recommended alerts** property pane automatically displays on the right side of the page. By default, all alert rules in the list are disabled. After selecting **Enable**, the alert rule is created and the rule name updates to include a link to the alert resource.
-3. Alert rules aren't associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** to open the **Action Groups** page, specify an existing or create an action group by selecting **Create action group**.
+ ![Screenshot showing the Recommended alerts properties pane.](./media/container-insights-metric-alerts/recommended-alerts-pane.png)
- :::image type="content" source="media/container-insights-metric-alerts/select-action-group.png" lightbox="media/container-insights-metric-alerts/select-action-group.png" alt-text="Screenshot showing selection of an action group.":::
+ After selecting the **Enable/Disable** toggle to enable the alert, an alert rule is created and the rule name updates to include a link to the actual alert resource.
-#### Edit alert rules
+ ![Screenshot showing the option to enable an alert rule.](./media/container-insights-metric-alerts/recommended-alerts-pane-enable.png)
-To edit the threshold for a rule or configure an [action group](../alerts/action-groups.md) for your AKS cluster.
+5. Alert rules are not associated with an [action group](../alerts/action-groups.md) to notify users that an alert has been triggered. Select **No action group assigned** and on the **Action Groups** page, specify an existing or create an action group by selecting **Add** or **Create**.
-1. From Container insights for your cluster, select **Recommended alerts**.
-2. Click the **Rule Name** to open the alert rule.
-3. See [Create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=metric) for details on the alert rule settings.
+ ![Screenshot showing the option to select an action group.](./media/container-insights-metric-alerts/select-action-group.png)
-#### Disable alert rules
-1. From Container insights for your cluster, select **Recommended alerts**.
-2. Change the status for the alert rule to **Disabled**.
+### Enable with a Resource Manager template
-### [Resource Manager](#tab/resource-manager)
-For custom metrics, a separate Resource Manager template is provided for each alert rule.
+You can use an Azure Resource Manager template and parameters file to create the included metric alerts in Azure Monitor.
-#### Enable alert rules
+The basic steps are as follows:
1. Download one or all of the available templates that describe how to create the alert from [GitHub](https://github.com/microsoft/Docker-Provider/tree/ci_dev/alerts/recommended_alerts_ARM).+ 2. Create and use a [parameters file](../../azure-resource-manager/templates/parameter-files.md) as a JSON to set the values required to create the alert rule.
-3. Deploy the template using any standard methods for installing Resource Manager templates. See [Resource Manager template samples for Azure Monitor](../resource-manager-samples.md) for guidance.
-#### Disable alert rules
-To disable custom alert rules, use the same Resource Manager template to create the rule, but change the `isEnabled` value in the parameters file to `false`.
+3. Deploy the template from the Azure portal, PowerShell, or Azure CLI.
-
+#### Deploy through Azure portal
+1. Download the Azure Resource Manager template and parameter file, and save them to a local folder, to create the alert rule using the following commands:
-## Alert rule details
-The following sections provide details on the alert rules provided by Container insights.
-
-### Community alert rules
-These are hand-picked alerts from Prometheus community. Source code for these mixin alerts can be found in [GitHub](https://aka.ms/azureprometheus-mixins).
--- KubeJobNotCompleted-- KubeJobFailed-- KubePodCrashLooping-- KubePodNotReady-- KubeDeploymentReplicasMismatch-- KubeStatefulSetReplicasMismatch-- KubeHpaReplicasMismatch-- KubeHpaMaxedOut-- KubeQuotaAlmostFull-- KubeMemoryQuotaOvercommit-- KubeCPUQuotaOvercommit-- KubeVersionMismatch-- KubeNodeNotReady-- KubeNodeReadinessFlapping-- KubeletTooManyPods-- KubeNodeUnreachable
-### Recommended alert rules
-The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics.
-
-| Prometheus alert name | Custom metric alert name | Description | Default threshold |
-|:|:|:|:|
-| Average container CPU % | Average container CPU % | Calculates average CPU used per container. | 95% |
-| Average container working set memory % | Average container working set memory % | Calculates average working set memory used per container. | 95% |
-| Average CPU % | Average CPU % | Calculates average CPU used per node. | 80% |
-| Average Disk Usage % | Average Disk Usage % | Calculates average disk usage for a node. | 80% |
-| Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average PV usage per pod. | 80% |
-| Average Working set memory % | Average Working set memory % | Calculates average Working set memory for a node. | 80% |
-| Restarting container count | Restarting container count | Calculates number of restarting containers. | 0 |
-| Failed Pod Counts | Failed Pod Counts | Calculates number of restarting containers. | 0 |
-| Node NotReady status | Node NotReady status | Calculates if any node is in NotReady state. | 0 |
-| OOM Killed Containers | OOM Killed Containers | Calculates number of OOM killed containers. | 0 |
-| Pods ready % | Pods ready % | Calculates the average ready state of pods. | 80% |
-| Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 |
+2. To deploy a customized template through the portal, select **Create a resource** from the [Azure portal](https://portal.azure.com).
-> [!NOTE]
-> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule is not included with the Prometheus alert rules.
->
-> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) using the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
+3. Search for **template**, and then select **Template deployment**.
+4. Select **Create**.
-Common properties across all of these alert rules include:
+5. You'll see several options for creating a template. Select **Build your own template in editor**.
-- All alert rules are evaluated once per minute and they look back at last 5 minutes of data.-- All alert rules are disabled by default.-- Alerts rules don't have an action group assigned to them by default. You can add an [action group](../alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group while editing the alert rule.-- You can modify the threshold for alert rules by directly editing the template and redeploying it. Refer to the guidance provided in each alert rule before modifying its threshold.
+6. On the **Edit template page**, select **Load file** and then select the template file.
-The following metrics have unique behavior characteristics:
+7. On the **Edit template** page, select **Save**.
-**Prometheus and custom metrics**
-- `completedJobsCount` metric is only sent when there are jobs that are completed greater than six hours ago.-- `containerRestartCount` metric is only sent when there are containers restarting.-- `oomKilledContainerCount` metric is only sent when there are OOM killed containers.-- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). cpuThresholdViolated, memoryRssThresholdViolated, and memoryWorkingSetThresholdViolated metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.-- `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). `pvUsageThresholdViolated` metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.-- `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold (the default threshold is 60%). *pvUsageThresholdViolated* metric is equal to 0 when the PV usage percentage is below the threshold and is equal 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.
+8. On the **Custom deployment** page, specify the following values, and then select **Purchase** to deploy the template and create the alert rule.
-
-**Prometheus only**
-- If you want to collect `pvUsageExceededPercentage` and analyze it from [metrics explorer](../essentials/metrics-getting-started.md), you should configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for persistent volume utilization thresholds can be overridden in the ConfigMaps file under the section `alertable_metrics_configuration_settings.pv_utilization_thresholds`. See [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file. Collection of persistent volume metrics with claims in the *kube-system* namespace are excluded by default. To enable collection in this namespace, use the section `[metric_collection_settings.collect_kube_system_pv_metrics]` in the ConfigMap file. See [Metric collection settings](./container-insights-agent-config.md#metric-collection-settings) for details.-- `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory Rss, and Memory Working set values exceed the configured threshold (the default threshold is 95%). *cpuThresholdViolated*, *memoryRssThresholdViolated*, and *memoryWorkingSetThresholdViolated* metrics are equal to 0 is the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. Meaning, if you want to collect these metrics and analyze them from [Metrics explorer](../essentials/metrics-getting-started.md), we recommend you configure the threshold to a value lower than your alerting threshold. The configuration related to the collection settings for their container resource utilization thresholds can be overridden in the ConfigMaps file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]`. See the section [Configure alertable metrics ConfigMaps](#configure-alertable-metrics-in-configmaps) for details related to configuring your ConfigMap configuration file.
+ * Resource group
+ * Location
+ * Alert Name
+ * Cluster Resource ID
+#### Deploy with Azure PowerShell or CLI
+1. Download the Azure Resource Manager template and parameter file, and save them to a local folder, to create the alert rule using the following commands:
-## View alerts
-View fired alerts for your cluster from [**Alerts** in the **Monitor menu** in the Azure portal] with other fired alerts in your subscription. You can also select **View in alerts** from the **Recommended alerts** pane to view alerts from custom metrics.
+2. Create the metric alert by using the template and parameter file with either Azure PowerShell or the Azure CLI.
-> [!NOTE]
-> Prometheus alerts will not currently be displayed when you select **Alerts** from your AKs cluster since the alert rule doesn't use the cluster as its target.
+ Using Azure PowerShell
+
+ ```powershell
+ Connect-AzAccount
+
+ Select-AzSubscription -SubscriptionName <yourSubscriptionName>
+ New-AzResourceGroupDeployment -Name CIMetricAlertDeployment -ResourceGroupName ResourceGroupofTargetResource `
+ -TemplateFile templateFilename.json -TemplateParameterFile templateParameterFilename.parameters.json
+ ```
+
+ Using Azure CLI
+
+ ```azurecli
+ az login
+
+ az deployment group create \
+ --name AlertDeployment \
+ --resource-group ResourceGroupofTargetResource \
+ --template-file templateFileName.json \
+ --parameters @templateParameterFilename.parameters.json
+ ```
+
+ >[!NOTE]
+ >While the metric alert could be created in a different resource group to the target resource, we recommend using the same resource group as your target resource.
+
+## Edit alert rules
+You can view and manage Container insights alert rules to edit their thresholds or configure an [action group](../alerts/action-groups.md) for your AKS cluster. While you can perform these actions from the Azure portal and the Azure CLI, you can also do them directly from your AKS cluster in Container insights by using the following steps (a CLI sketch follows the steps).
+
+1. From the command bar, select **Recommended alerts**.
+
+2. To modify the threshold, on the **Recommended alerts** pane, select the enabled alert. In the **Edit rule** pane, select the **Alert criteria** you want to edit.
+
+ * To modify the alert rule threshold, select the **Condition**.
+ * To specify an existing or create an action group, select **Add** or **Create** under **Action group**
+
+To view alerts created for the enabled rules, in the **Recommended alerts** pane select **View in alerts**. You are redirected to the alert menu for the AKS cluster, where you can see all the alerts currently created for your cluster.
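
If you'd rather check the rules from the command line, here's a minimal Azure CLI sketch; it assumes the recommended metric alert rules were deployed to the cluster's resource group, and the placeholder names are illustrative.

```azurecli
# List the metric alert rules in the resource group that holds the cluster
# (or the resource group you chose when deploying the alert template).
az monitor metrics alert list --resource-group <yourResourceGroup> --output table

# Show the full definition of a single rule, including its criteria and action groups.
az monitor metrics alert show --name <yourAlertRuleName> --resource-group <yourResourceGroup>
```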
+
+## Configure alertable metrics in ConfigMaps
+
+Perform the following steps to configure your ConfigMap configuration file to override the default utilization thresholds. These steps are applicable only for the following alertable metrics:
+
+* *cpuExceededPercentage*
+* *cpuThresholdViolated*
+* *memoryRssExceededPercentage*
+* *memoryRssThresholdViolated*
+* *memoryWorkingSetExceededPercentage*
+* *memoryWorkingSetThresholdViolated*
+* *pvUsageExceededPercentage*
+* *pvUsageThresholdViolated*
+
+1. Edit the ConfigMap YAML file under the section `[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]` or `[alertable_metrics_configuration_settings.pv_utilization_thresholds]`.
+
+ - To modify the *cpuExceededPercentage* threshold to 90% and begin collection of this metric when that threshold is met and exceeded, configure the ConfigMap file using the following example:
+
+ ```
+ [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
+ # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage
+ container_cpu_threshold_percentage = 90.0
+ # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage
+ container_memory_rss_threshold_percentage = 95.0
+ # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage
+ container_memory_working_set_threshold_percentage = 95.0
+ ```
+
+ - To modify the *pvUsageExceededPercentage* threshold to 80% and begin collection of this metric when that threshold is met and exceeded, configure the ConfigMap file using the following example:
+
+ ```
+ [alertable_metrics_configuration_settings.pv_utilization_thresholds]
+ # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage
+ pv_usage_threshold_percentage = 80.0
+ ```
+
+2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+
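To confirm that the ConfigMap was created or updated in the cluster, one option is to inspect it with kubectl. This is a minimal sketch that assumes the default ConfigMap name used in this article and the *kube-system* namespace:

```
# Confirm the ConfigMap exists in the kube-system namespace and inspect its data.
kubectl describe configmap container-azm-ms-agentconfig --namespace=kube-system
```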
+The configuration change can take a few minutes to finish before taking effect, and all Azure Monitor agent pods in the cluster will restart. The restart is a rolling restart for all Azure Monitor agent pods; they don't all restart at the same time. When the restarts are finished, a message is displayed that's similar to the following example and includes the result: `configmap "container-azm-ms-agentconfig" created`.
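
If you want to watch the rolling restart complete rather than wait for the message, you can track the agent DaemonSets with kubectl. This is a sketch that assumes the default DaemonSet names (`ama-logs`, and `ama-logs-windows` on clusters with Windows Server nodes):

```
# Watch the Linux agent DaemonSet until the rolling restart finishes.
kubectl rollout status ds/ama-logs --namespace=kube-system

# If the cluster has Windows Server nodes, watch the Windows agent DaemonSet as well.
kubectl rollout status ds/ama-logs-windows --namespace=kube-system
```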
## Next steps
-- [Read about the different alert rule types in Azure Monitor](../alerts/alerts-types.md).
-- [Read about alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
+- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.
+
+- To learn more about Azure Monitor and how to monitor other aspects of your Kubernetes cluster, see [View Kubernetes cluster performance](container-insights-analyze.md).
azure-monitor Container Insights Prometheus Monitoring Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-monitoring-addon.md
Perform the following steps to configure your ConfigMap configuration file for y
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-The configuration change can take a few minutes to finish before taking effect. You must restart all omsagent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
+The configuration change can take a few minutes to finish before taking effect. You must restart all Azure Monitor Agent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
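
One way to perform the manual restart (a sketch, assuming the default `ama-logs` DaemonSet and `ama-logs-rs` deployment in the *kube-system* namespace) is to trigger a rollout restart:

```
# Restart the Azure Monitor Agent DaemonSet and ReplicaSet pods so the new ConfigMap is picked up.
kubectl rollout restart ds/ama-logs --namespace=kube-system
kubectl rollout restart deployment/ama-logs-rs --namespace=kube-system
```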
## Verify configuration
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs omsagent-fdf58 -n=kube-system`.
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n=kube-system`.
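
The pod name suffix (`fdf58` in the example) differs on every cluster, so list the agent pods first and substitute the name you find; for example:

```
# Find the agent pod names, then review the logs of one of them.
kubectl get pods --namespace=kube-system | grep ama-logs
kubectl logs <ama-logs-pod-name> --namespace=kube-system
```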
-If there are configuration errors from the omsagent pods, the output will show errors similar to the following example:
+If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
``` ***************Start Config Processing********************
Errors related to applying configuration changes are also available for review.
``` - From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.-- For Azure Red Hat OpenShift v3.x and v4.x, check the omsagent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
+- For Azure Red Hat OpenShift v3.x and v4.x, check the Azure Monitor Agent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
-Errors prevent omsagent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml`.
+Errors prevent Azure Monitor Agent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
Use the following steps to diagnose the problem if you can't view status inform
1. Check the status of the agent by running the command:
- `kubectl get ds omsagent --namespace=kube-system`
+ `kubectl get ds ama-logs --namespace=kube-system`
The output should resemble the following example, which indicates that it was deployed properly: ```
- User@aksuser:~$ kubectl get ds omsagent --namespace=kube-system
+ User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- omsagent 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
+ ama-logs 2 2 2 2 2 beta.kubernetes.io/os=linux 1d
``` 2. If you have Windows Server nodes, then check the status of the agent by running the command:
Use the following steps to diagnose the problem if you can't view status inform
The output should resemble the following example, which indicates that it was deployed properly: ```
- User@aksuser:~$ kubectl get ds omsagent-win --namespace=kube-system
+ User@aksuser:~$ kubectl get ds ama-logs-windows --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
- omsagent-win 2 2 2 2 2 beta.kubernetes.io/os=windows 1d
+ ama-logs-windows 2 2 2 2 2 beta.kubernetes.io/os=windows 1d
``` 3. Check the deployment status with agent version *06072018* or later using the command:
- `kubectl get deployment omsagent-rs -n=kube-system`
+ `kubectl get deployment ama-logs-rs -n=kube-system`
The output should resemble the following example, which indicates that it was deployed properly: ``` User@aksuser:~$ kubectl get deployment omsagent-rs -n=kube-system NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
- omsagent 1 1 1 1 3h
+ ama-logs 1 1 1 1 3h
``` 4. Check the status of the pod to verify that it's running using the command: `kubectl get pods --namespace=kube-system`
Use the following steps to diagnose the problem if you can't view status inform
aks-ssh-139866255-5n7k5 1/1 Running 0 8d azure-vote-back-4149398501-7skz0 1/1 Running 0 22d azure-vote-front-3826909965-30n62 1/1 Running 0 22d
- omsagent-484hw 1/1 Running 0 1d
- omsagent-fkq7g 1/1 Running 0 1d
- omsagent-win-6drwq 1/1 Running 0 1d
+ ama-logs-484hw 1/1 Running 0 1d
+ ama-logs-fkq7g 1/1 Running 0 1d
+ ama-logs-windows-6drwq 1/1 Running 0 1d
``` ## Container insights agent ReplicaSet Pods aren't scheduled on non-Azure Kubernetes cluster
To view the non-Azure Kubernetes cluster in Container insights, Read access is r
2. Verify that the **Monitoring Metrics Publisher** role assignment exists using the following CLI command: ``` azurecli
- az role assignment list --assignee "SP/UserassignedMSI for omsagent" --scope "/subscriptions/<subid>/resourcegroups/<RG>/providers/Microsoft.ContainerService/managedClusters/<clustername>" --role "Monitoring Metrics Publisher"
+ az role assignment list --assignee "SP/UserassignedMSI for Azure Monitor Agent" --scope "/subscriptions/<subid>/resourcegroups/<RG>/providers/Microsoft.ContainerService/managedClusters/<clustername>" --role "Monitoring Metrics Publisher"
```
- For clusters with MSI, the user assigned client ID for omsagent changes every time monitoring is enabled and disabled, so the role assignment should exist on the current msi client ID.
+ For clusters with MSI, the user assigned client ID for Azure Monitor Agent changes every time monitoring is enabled and disabled, so the role assignment should exist on the current msi client ID.
3. For clusters with Azure Active Directory pod identity enabled and using MSI:
- - Verify the required label **kubernetes.azure.com/managedby: aks** is present on the omsagent pods using the following command:
+ - Verify the required label **kubernetes.azure.com/managedby: aks** is present on the Azure Monitor Agent pods using the following command:
- `kubectl get pods --show-labels -n kube-system | grep omsagent`
+ `kubectl get pods --show-labels -n kube-system | grep ama-logs`
- Verify that exceptions are enabled when pod identity is enabled using one of the supported methods at https://github.com/Azure/aad-pod-identity#1-deploy-aad-pod-identity.
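
    To review the configured exceptions from the command line, the following sketch assumes the aad-pod-identity CRDs are installed with their default names; adjust if your deployment differs:

    ```
    # List AzurePodIdentityException resources across all namespaces (aad-pod-identity CRD).
    kubectl get azurepodidentityexceptions --all-namespaces
    ```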
The error _manifests contain a resource that already exists_ indicates that reso
`helm del azmon-containers-release-1` ### For AKS clusters
-1. Run the following commands and look for omsagent addon profile to verify whether the AKS monitoring addon is enabled:
+1. Run the following commands and look for the Azure Monitor Agent addon profile to verify whether the AKS monitoring addon is enabled:
``` az account set -s <clusterSubscriptionId> az aks show -g <clusterResourceGroup> -n <clusterName> ```
-2. If the output includes an omsagent addon profile config with a log analytics workspace resource ID, this indicates that AKS Monitoring addon is enabled and needs to be disabled:
+2. If the output includes an Azure Monitor Agent addon profile config with a Log Analytics workspace resource ID, the AKS monitoring addon is enabled and needs to be disabled:
`az aks disable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName>`
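
To narrow the `az aks show` output to just the addon profiles before deciding whether to disable the addon, a `--query` filter can be used. This is a sketch; the monitoring addon profile key in the output may appear as `omsagent`, depending on how the addon was enabled:

```azurecli
az account set -s <clusterSubscriptionId>

# Show only the addon profiles; look for the monitoring addon entry and its Log Analytics workspace resource ID.
az aks show -g <clusterResourceGroup> -n <clusterName> --query addonProfiles -o json
```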
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
To help you determine an appropriate daily cap for your workspace, see [Azure M
## Workspaces with Microsoft Defender for Cloud
-For workspaces with [Microsoft Defender for Cloud](../../security-center/index.yml), the daily cap doesn't stop the collection of the following data types except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017. :
+Some security-related data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml) or Microsoft Sentinel are collected despite any daily cap. The following data types will not be capped, except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017:
- WindowsEvent - SecurityAlert
Add `Update` and `UpdateSummary` data types to the `where Datatype` line when th
- See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.-- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine to source of any higher than expected usage and opportunities to reduce your amount of data collected.
+- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher than expected usage and opportunities to reduce the amount of data you collect.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
There are some important best practices to follow for optimal performance of NFS
- Create Azure NetApp Files volumes using **Standard** network features to enable optimized connectivity from Azure VMware Solution private cloud via ExpressRoute FastPath connectivity. - For optimized performance, choose **UltraPerformance** gateway and enable [ExpressRoute FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md). - Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For best performance, it's recommended to use the Ultra tier.-- Create multiple datastores of 4-TB size for better performance. The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+- Create multiple datastores of 4-TB size for better performance. The default limit is 64 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within same [Availability Zone](../availability-zones/az-overview.md#availability-zones). ## Attach an Azure NetApp Files volume to your private cloud
azure-vmware Plan Private Cloud Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/plan-private-cloud-deployment.md
Before requesting a host quota, make sure you've identified the Azure subscripti
After the support team receives your request for a host quota, it takes up to five business days to confirm your request and allocate your hosts. -- [EA customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-ea-customers)
+- [EA customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-ea-and-mca-customers)
- [CSP customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-csp-customers)
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
You'll need an Azure account in an Azure subscription that adheres to one of the
- A subscription under an [Azure Enterprise Agreement (EA)](../cost-management-billing/manage/ea-portal-agreements.md) with Microsoft. - A Cloud Solution Provider (CSP) managed subscription under an existing CSP Azure offers contract or an Azure plan.-- A [Microsoft Customer Agreement](../cost-management-billing/understand/mca-overview.md) with Microsoft.
+- A [Microsoft Customer Agreement (MCA)](../cost-management-billing/understand/mca-overview.md) with Microsoft.
-## Request host quota for EA customers
+## Request host quota for EA and MCA customers
1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information: - **Issue type:** Technical
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
You can use the [Azure Backup service](./backup-overview.md) to back up Azure fi
## Supported regions
-Azure file shares backup is available in all regions, **except** for Germany Central (Sovereign), Germany Northeast (Sovereign), China East, China East 2, China North, China North 2, France South and US Gov Iowa.
+Azure file shares backup is available in all regions, **except** for Germany Central (Sovereign), Germany Northeast (Sovereign), China East, China East 2, China North, China North 2, China North 3, France South, and US Gov Iowa.
## Supported storage accounts
backup Backup Azure Enhanced Soft Delete About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md
In this article, you'll learn about:
>[!div class="checklist"] >- What's soft delete? >- What's enhanced soft delete?
+>- Supported regions
+>- Supported scenarios
>- States of soft delete setting >- Soft delete retention >- Soft deleted items reregistration >- Pricing
->- Supported scenarios
## What's soft delete?
The key benefits of enhanced soft delete are:
- **Soft delete and reregistration of backup containers**: You can now unregister the backup containers (which you can soft delete) if you've deleted all backup items in the container. You can now register such soft deleted containers to other vaults. This is applicable for applicable workloads only, including SQL in Azure VM backup, SAP HANA in Azure VM backup and backup of on-premises servers. - **Soft delete across workloads**: Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, Disk and VM snapshot backups.
+## Supported regions
+
+Enhanced soft delete is currently available in the following regions: West Central US, Australia East, and North Europe.
+
+## Supported scenarios
+
+- Enhanced soft delete is supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults.
+- All existing Recovery Services vaults in the preview regions are upgraded with an option to use enhanced soft delete.
+ ## States of soft delete settings The following table lists the soft delete properties for vaults:
If a backup item/container is in soft deleted state, you can register it to a va
## Pricing
-Soft deleted data involves no retention cost for the default duration of *14* days. For soft deleted data retention more than the default period, it incurs regular backup charges.
+There's no retention cost for the default soft delete duration of *14* days; after that period, regular backup charges apply. If you configure soft delete retention of more than *14* days, the no-charge default period applies to the *last 14 days* of the continuous retention configured for soft delete, and then backups are permanently deleted.
-For example, you've deleted backups for one of the instances in the vault that has soft delete retention of *60* days. If you want to recover the soft deleted data after *50* days of deletion, the pricing is:
+For example, you've deleted backups for one of the instances in the vault that has soft delete retention of *60* days. If you want to recover the soft deleted data after *52* days of deletion, the pricing is:
-- Standard rates (similar rates apply when the instance is in *stop protection with retain data* state) are applicable for the first *36* days (*50* days of data retained in soft deleted state minus *14* days of default soft delete retention).
+- Standard rates (similar rates apply when the instance is in *stop protection with retain data* state) are applicable for the first *46* days (*60* days of soft delete retention configured minus *14* days of default soft delete retention).
- No charges for the last *6* days of soft delete retention.
-## Supported scenarios
--- Enhanced soft delete is currently available in the following regions: West Central US, Australia East, North Europe.-- It's supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults.-- All existing Recovery Services vaults in the preview regions are upgraded with an option to use enhanced soft delete.- ## Next steps [Configure and manage enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-configure-manage.md).
backup Backup Azure Immutable Vault Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-concept.md
Immutable vault can help you protect your backup data by blocking any operations that could lead to loss of recovery points. Further, you can lock the Immutable vault setting to make it irreversible to prevent any malicious actors from disabling immutability and deleting backups.
+In this article, you'll learn about:
+
+> [!div class="checklist"]
+>
+> - Before you start
+> - How does immutability work?
+> - Making immutability irreversible
+> - Restricted operations
++ ## Before you start - Immutable vault is currently in preview and is available in the following regions: East US, West US, West US 2, West Central US, North Europe, Brazil South, Japan East.-- Immutable vault is currently supported for Recovery Services vaults only.
+- Immutable vault is supported for Recovery Services vaults and Backup vaults.
- Enabling Immutable vault blocks you from performing specific operations on the vault and its protected items. See the [restricted operations](#restricted-operations). - Enabling immutability for the vault is a reversible operation. However, you can choose to make it irreversible to prevent any malicious actors from disabling it (after disabling it, they can perform destructive operations). Learn about [making Immutable vault irreversible](#making-immutability-irreversible). - Immutable vault applies to all the data in the vault. Therefore, all instances that are protected in the vault have immutability applied to them.
The immutability of a vault is a reversible setting that allows you to disable t
Immutable vault prevents you from performing the following operations on the vault that could lead to loss of data:
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
+ | Operation type | Description | | | |
-| **Stop protection with delete data** | A protected item can't have its recovery points deleted before their respective expiry date. However, you can still stop the protection of the instances while retaining data forever or until their expiry. |
+| **Stop protection with delete data** | A protected item can't have its recovery points deleted before their respective expiry date. However, you can still stop protection of the instances while retaining data forever or until their expiry. |
| **Modify backup policy to reduce retention** | Any actions that reduce the retention period in a backup policy are disallowed on Immutable vault. However, you can make policy changes that result in the increase of retention. You can also make changes to the schedule of a backup policy. | | **Change backup policy to reduce retention** | Any attempt to replace a backup policy associated with a backup item with another policy with retention lower than the existing one is blocked. However, you can replace a policy with the one that has higher retention. |
+# [Backup vault](#tab/backup-vault)
+
+| Operation type | Description |
+| | |
+| **Stop protection with delete data** | A protected item can't have its recovery points deleted before their respective expiry date. However, you can still stop protection of the instances while retaining data forever or until their expiry. |
+++ ## Next steps - Learn [how to manage operations of Azure Backup vault immutability (preview)](backup-azure-immutable-vault-how-to-manage.md).
backup Backup Azure Immutable Vault How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-how-to-manage.md
In this article, you'll learn how to:
## Enable Immutable vault
-You can enable immutability for a vault through its properties, follow these steps:
+You can enable immutability for a vault through its properties.
-1. Go to the Recovery Services vault for which you want to enable immutability.
+**Choose a vault**
-1. In the vault, go to **Properties** -> **Immutable vault**, and then select **Settings**.
+# [Recovery Services vault](#tab/recovery-services-vault)
+
+Follow these steps:
+
+1. Go to the **Recovery Services vault** for which you want to enable immutability.
+
+1. In the vault, go to **Properties** > **Immutable vault**, and then select **Settings**.
:::image type="content" source="./media/backup-azure-immutable-vault/enable-immutable-vault-settings.png" alt-text="Screenshot showing how to open the Immutable vault settings.":::
-1. On **Immutable vault**, select the checkbox for **Enable vault immutability** to enable immutability for the vault.
+1. On **Immutable vault**, select the **Enable vault immutability** checkbox to enable immutability for the vault.
At this point, immutability of the vault is reversible, and it can be disabled, if needed. 1. Once you enable immutability, the option to lock the immutability for the vault appears.
- This makes immutability setting for the vault irreversible. While this helps secure the backup data in the vault, we recommend you make a well-informed decision when opting to lock. You can also test and validate how the current settings of the vault, backup policies, and so on, meet your requirements and can lock the immutability setting later.
+   Once you enable this lock, the immutability setting for the vault becomes irreversible. While this helps secure the backup data in the vault, we recommend that you make a well-informed decision when opting to lock. You can also test and validate how the current settings of the vault, backup policies, and so on, meet your requirements, and lock the immutability setting later.
1. Select **Apply** to save the changes. :::image type="content" source="./media/backup-azure-immutable-vault/backup-azure-enable-immutability.png" alt-text="Screenshot showing how to enable the Immutable vault settings.":::
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to the **Backup vault** for which you want to enable immutability.
+
+1. In the vault, go to **Properties** > **Immutable vault**, and then select **Settings**.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/enable-immutable-vault-settings-backup-vault.png" alt-text="Screenshot showing how to open the Immutable vault settings for a Backup vault.":::
+
+1. On **Immutable vault**, select the **Enable vault immutability** checkbox to enable immutability for the vault.
+
+ At this point, immutability of the vault is reversible, and it can be disabled, if needed.
+
+1. Once you enable immutability, the option to lock the immutability for the vault appears.
+
+   Once you enable this lock, the immutability setting for the vault becomes irreversible. While this helps secure the backup data in the vault, we recommend that you make a well-informed decision when opting to lock. You can also test and validate how the current settings of the vault, backup policies, and so on, meet your requirements, and lock the immutability setting later.
+
+1. Select **Apply** to save the changes.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/backup-azure-enable-immutability.png" alt-text="Screenshot showing how to enable the Immutable vault settings for a Backup vault.":::
+++ ## Perform operations on Immutable vault As per the [Restricted operations](backup-azure-immutable-vault-concept.md#restricted-operations), certain operations are restricted on Immutable vault. However, other operations on the vault or the items it contains remain unaffected. ### Perform restricted operations
-[Restricted operations](backup-azure-immutable-vault-concept.md#restricted-operations) are disallowed on the vault. Consider the following example when trying to modify a policy to reduce its retention in a vault with immutability enabled.
+[Restricted operations](backup-azure-immutable-vault-concept.md#restricted-operations) are disallowed on the vault. Consider the following example when trying to modify a policy to reduce its retention in a vault with immutability enabled. This example shows an operation on a Recovery Services vault; however, a similar experience applies to other operations and to operations on Backup vaults.
Consider a policy with a daily backup point retention of *35 days* and weekly backup point retention of *two weeks*, as shown in the following screenshot.
This time, the operation successfully passes as no recovery points can be delete
## Disable immutability
-You can disable immutability only for vaults that have immutability enabled, but not locked. To disable immutability for such vaults, follow these steps:
+You can disable immutability only for vaults that have immutability enabled, but not locked.
+
+**Choose a vault**
+
+# [Recovery Services vault](#tab/recovery-services-vault)
-1. Go to the Recovery Services vault for which you want to disable immutability.
+Follow these steps:
-1. In the vault, go to **Properties** -> **Immutable vault**, and then select **Settings**.
+1. Go to the **Recovery Services** vault for which you want to disable immutability.
+
+1. In the vault, go to **Properties** > **Immutable vault**, and then select **Settings**.
:::image type="content" source="./media/backup-azure-immutable-vault/disable-immutable-vault-settings.png" alt-text="Screenshot showing how to open the Immutable vault settings to disable.":::
-1. In the **Immutable vault** blade, clear the checkbox for **Enable vault Immutability**.
+1. In the **Immutable vault** blade, clear the **Enable vault Immutability** checkbox.
1. Select **Apply** to save the changes. :::image type="content" source="./media/backup-azure-immutable-vault/backup-azure-disable-immutability.png" alt-text="Screenshot showing how to disable the Immutable vault settings.":::
+# [Backup vault](#tab/backup-vault)
+
+Follow these steps:
+
+1. Go to the **Backup vault** for which you want to disable immutability.
+
+1. In the vault, go to **Properties** > **Immutable vault**, and then select **Settings**.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/disable-immutable-vault-settings-backup-vault.png" alt-text="Screenshot showing how to open the Immutable vault settings to disable for a Backup vault.":::
+
+1. In the **Immutable vault** blade, clear the **Enable vault Immutability** checkbox.
+
+1. Select **Apply** to save the changes.
+
+ :::image type="content" source="./media/backup-azure-immutable-vault/backup-azure-disable-immutability.png" alt-text="Screenshot showing how to disable the Immutable vault settings for a Backup vault.":::
+++ ## Next steps - Learn [about Immutable vault for Azure Backup (preview)](backup-azure-immutable-vault-concept.md).
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 08/16/2022 Last updated : 10/14/2022 +++ # Support matrix for Azure Backup
The following table describes the features of Recovery Services vaults:
**Move vaults** | You can [move vaults](./backup-azure-move-recovery-services-vault.md) across subscriptions or between resource groups in the same subscription. However, moving vaults across regions isn't supported. **Move data between vaults** | Moving backed-up data between vaults isn't supported. **Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.
-**Zone-redundant storage (ZRS)** | Supported in preview in UK South, South East Asia, Australia East, North Europe, Central US, East US 2, Brazil South, South Central US, Korea Central, Norway East, France Central, West Europe, East Asia, Sweden Central, Canada Central, India Central, South Africa North, West US 2, Japan East, East US, US Gov Virginia and West US 3.
**Private Endpoints** | See [this section](./private-endpoints.md#before-you-start) for requirements to create private endpoints for a recovery service vault. ## On-premises backup support
Azure Backup supports encryption for in-transit and at-rest data.
### Network traffic to Azure -- Backup traffic from servers to the Recovery Services vault is encrypted by using Advanced Encryption Standard 256.
+- The backup traffic from servers to the Recovery Services vault is encrypted by using Advanced Encryption Standard 256.
- Backup data is sent over a secure HTTPS link. ### Data security - Backup data is stored in the Recovery Services vault in encrypted form.-- When data is backed up from on-premises servers with the MARS agent, data is encrypted with a passphrase before upload to Azure Backup and decrypted only after it's downloaded from Azure Backup.
+- When data is backed up from on-premises servers with the MARS agent, data is encrypted with a passphrase before upload to the Azure Backup service and decrypted only after it's downloaded from Azure Backup.
- When you're backing up Azure VMs, you need to set up encryption *within* the virtual machine. - Azure Backup supports Azure Disk Encryption, which uses BitLocker on Windows virtual machines and **dm-crypt** on Linux virtual machines. - On the back end, Azure Backup uses [Azure Storage Service Encryption](../storage/common/storage-service-encryption.md), which protects data at rest.
The resource health check functions in following conditions:
| **Supported Regions** | East US, East US 2, Central US, South Central US, North Central US, West Central US, West US, West US 2, West US 3, Canada East, Canada Central, North Europe, West Europe, UK West, UK South, France Central, France South, Sweden Central, Sweden South, East Asia, South East Asia, Japan East, Japan West, Korea Central, Korea South, Australia East, Australia Central, Australia Central 2, Australia South East, South Africa North, South Africa West, UAE North, UAE Central, Brazil South East, Brazil South, Switzerland North, Switzerland West, Norway East, Norway West, Germany North, Germany West Central, West India, Central India, South India, Jio India West, Jio India Central. | | **For unsupported regions** | The resource health status is shown as "Unknown". |
+## Zone-redundant storage support
+
+Azure Backup now supports zone-redundant storage (ZRS).
+
+### Supported regions
+
+- Azure Backup currently supports ZRS for all workloads, except Azure Disk, in the following regions: UK South, South East Asia, Australia East, North Europe, Central US, East US 2, Brazil South, South Central US, Korea Central, Norway East, France Central, West Europe, East Asia, Sweden Central, Canada Central, India Central, South Africa North, West US 2, Japan East, East US, US Gov Virginia, Switzerland North, Qatar, UAE North, and West US 3.
+
+- ZRS support for Azure Disk is generally available in the following regions: UK South, Southeast Asia, Australia East, North Europe, Central US, South Central US, West Europe, West US 2, Japan East, East US, US Gov Virginia, Qatar, and West US 3.
+
+### Supported scenarios
+
+Here's the list of scenarios supported even if zone gets unavailable in the supported regions:
+
+- Create/List/Update Policy
+- List backup jobs
+- List of protected items
+- Update vault config
+- Create vault
+- Get vault credential file
+
+### Supported operations
+
+The following table lists the workload-specific operations that remain supported even if a zone becomes unavailable in the supported regions:
+
+| Protected workload | Supported Operations |
+| | |
+| **IaaS VM** | - Backups are successful if the protected VM is in an active zone. <br><br> - Original location recovery (OLR) is successful if the protected VM is in an active zone. <br><br> - Alternate location restores (ALR) to an active zone are successful. |
+| **SQL/SAP HANA database in Azure VM** | - Backups are successful if the protected workload is in an active zone. <br><br> - Original location recovery (OLR) is successful if the protected workload is in an active zone. <br><br> - Alternate location restores (ALR) to an active zone are successful. |
+| **Azure Files** | Backups, OLR, and ALR are successful if the protected file share is in a ZRS account. |
+| **Blob** | Recovery is successful if the protected storage account is in ZRS. |
+| **Disk** | - Backups are successful if the protected disk is in an active zone. <br><br> - Restore to an active zone is successful. |
+| **MARS** | Backups and restores are successful. |
## Next steps
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/data-formats.md
Previously updated : 05/13/2022 Last updated : 10/14/2022
If you're [importing a project](../how-to/create-project.md#import-project) into
```json {
- "projectFileVersion": "2022-05-01",
+ "projectFileVersion": "2022-10-01-preview",
"stringIndexType": "Utf16CodeUnit", "metadata": { "projectKind": "Conversation", "projectName": "{PROJECT-NAME}", "multilingual": true, "description": "DESCRIPTION",
- "language": "{LANGUAGE-CODE}"
+ "language": "{LANGUAGE-CODE}",
+ "settings": {
+ "confidenceThreshold": 0
+ }
}, "assets": { "projectKind": "Conversation",
If you're [importing a project](../how-to/create-project.md#import-project) into
"entities": [ { "category": "entity1",
- "compositionSetting": "requireExactOverlap",
+ "compositionSetting": "{COMPOSITION-SETTING}",
"list": { "sublists": [ {
If you're [importing a project](../how-to/create-project.md#import-project) into
}, "prebuilts": [ {
- "category": "PREBUILT1"
+ "category": "{PREBUILT-COMPONENTS}"
}
+ ],
+ "regex": {
+ "expressions": [
+ {
+ "regexKey": "regex1",
+ "language": "{LANGUAGE-CODE}",
+ "regexPattern": "{REGEX-PATTERN}"
+ }
+ ]
+ },
+ "requiredComponents": [
+ "{REQUIRED-COMPONENTS}"
] } ],
If you're [importing a project](../how-to/create-project.md#import-project) into
|Key |Placeholder |Value | Example | |||-|--| | `api-version` | `{API-VERSION}` | The version of the API you're calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data) released. | `2022-05-01` |
-|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md)|`0.7`|
+|`confidenceThreshold`|`{CONFIDENCE-THRESHOLD}`|This is the threshold score below which the intent will be predicted as [none intent](none-intent.md). Values are from `0` to `1`|`0.7`|
| `projectName` | `{PROJECT-NAME}` | The name of your project. This value is case-sensitive. | `EmailApp` |
-| `multilingual` | `true`| A boolean value that enables you to have documents in multiple languages in your dataset and when your model is deployed you can query the model in any supported language (not necessarily included in your training documents. See [Language support](../language-support.md#multi-lingual-option) for more information about supported language codes. | `true`|
-|`sublists`|`[]`|Array containing a sublists|`[]`|
+| `multilingual` | `true`| A boolean value that enables you to have utterances in multiple languages in your dataset. When your model is deployed, you can query the model in any supported language (not necessarily included in your training documents). See [Language support](../language-support.md#multi-lingual-option) for more information about supported language codes. | `true`|
+|`sublists`|`[]`|Array containing sublists. Each sublist is a key and its associated values.|`[]`|
+|`compositionSetting`|`{COMPOSITION-SETTING}`|Rule that defines how to manage multiple components in your entity. Options are `combineComponents` or `separateComponents`. |`combineComponents`|
|`synonyms`|`[]`|Array containing all the synonyms|synonym|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. If your project is a multilingual project, choose the [language code](../language-support.md) of the majority of the utterances. |`en-us`|
-| `intents` | `[]` | Array containing all the intents you have in the project. These are the intent types that will be extracted from your utterances.| `[]` |
-| `entities` | `[]` | Array containing all the entities in your project. These are the entities that will be extracted from your utterances.| `[]` |
+| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances, synonyms, and regular expressions used in your project. If your project is a multilingual project, choose the [language code](../language-support.md) of the majority of the utterances. |`en-us`|
+| `intents` | `[]` | Array containing all the intents you have in the project. These are the intents that will be classified from your utterances.| `[]` |
+| `entities` | `[]` | Array containing all the entities in your project. These are the entities that will be extracted from your utterances. Every entity can have additional optional components defined with them: list, prebuilt, or regex. | `[]` |
| `dataset` | `{DATASET}` | The test set to which this utterance will go to when split before training. Learn more about data splitting [here](../how-to/train-model.md#data-splitting) . Possible values for this field are `Train` and `Test`. |`Train`| | `category` | ` ` | The type of entity associated with the span of text specified. | `Entity1`| | `offset` | ` ` | The inclusive character position of the start of the entity. |`5`| | `length` | ` ` | The character length of the entity. |`5`|
-| `language` | `{LANGUAGE-CODE}` | A string specifying the language code for the utterances used in your project. If your project is a multilingual project, choose the [language code](../language-support.md) of the majority of the utterances. |`en-us`|
+| `listKey`| ` ` | A normalized value for the list of synonyms to map back to in prediction. | `Microsoft` |
+| `values`| `{VALUES-FOR-LIST}` | A list of comma separated strings that will be matched exactly for extraction and map to the list key. | `"msft", "microsoft", "MS"` |
+| `regexKey`| `{REGEX-PATTERN}` | A name that identifies the regular expression and is used to map back to it in prediction. | `ProductPattern1` |
+| `regexPattern`| `{REGEX-PATTERN}` | A regular expression. | `^pre` |
+| `prebuilts`| `{PREBUILT-COMPONENTS}` | The prebuilt components that can extract common types. You can find the list of prebuilts you can add [here](../prebuilt-component-reference.md). | `Quantity.Number` |
+| `requiredComponents` | `{REQUIRED-COMPONENTS}` | A setting that specifies a requirement that a specific component be present to return the entity. You can learn more [here](./entity-components.md#required-components). The possible values are `learned`, `regex`, `list`, or `prebuilts` |`"learned", "prebuilt"`|
+++ ## Utterance file format
cognitive-services Migrate From Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis.md
CLU offers the following advantages over LUIS: -- Improved accuracy with state-of-the-art machine learning models for better intent classification and entity extraction. -- Multilingual support for model learning and training.
+- Improved accuracy with state-of-the-art machine learning models for better intent classification and entity extraction. LUIS required more examples to generalize certain concepts in intents and entities, while CLU's more advanced machine learning reduces the burden on customers by requiring significantly less data.
+- Multilingual support for model learning and training. Train projects in one language and immediately predict intents and entities across 96 languages.
- Ease of integration with different CLU and [custom question answering](../../question-answering/overview.md) projects using [orchestration workflow](../../orchestration-workflow/overview.md). - The ability to add testing data within the experience using Language Studio and APIs for model performance evaluation prior to deployment.
The following table presents a side-by-side comparison between the features of L
|LUIS features | CLU features | Post migration | |::|:-:|:--:|
-|Machine-learned and Structured ML entities| Learned [entity components](#how-are-entities-different-in-clu) |Machine-learned entities without subentities will be transferred as CLU entities. Structured ML entities will only transfer leaf nodes (lowest level subentities without their own subentities) as entities in CLU. The name of the entity in CLU will be the name of the subentity concatenated with the parent. For example, _Order.Size_|
-|List and prebuilt entities| List and prebuilt [entity components](#how-are-entities-different-in-clu) | List and prebuilt entities will be transferred as entities in CLU with a populated entity component based on the entity type.|
-|Regex and `Pattern.Any` entities| Not currently available | `Pattern.Any` entities will be removed. Regex entities will be removed.|
+|Machine-learned and Structured ML entities| Learned [entity components](#how-are-entities-different-in-clu) |Machine-learned entities without subentities will be transferred as CLU entities. Structured ML entities will only transfer leaf nodes (lowest level subentities that do not have their own subentities) as entities in CLU. The name of the entity in CLU will be the name of the subentity concatenated with the parent. For example, _Order.Size_|
+|List, regex, and prebuilt entities| List, regex, and prebuilt [entity components](#how-are-entities-different-in-clu) | List, regex, and prebuilt entities will be transferred as entities in CLU with a populated entity component based on the entity type.|
+|`Pattern.Any` entities| Not currently available | `Pattern.Any` entities will be removed.|
|Single culture for each application|[Multilingual models](#how-is-conversational-language-understanding-multilingual) enable multiple languages for each project. |The primary language of your project will be set as your LUIS application culture. Your project can be trained to extend to different languages.| |Entity roles |[Roles](#how-are-entity-roles-transferred-to-clu) are no longer needed. | Entity roles will be transferred as entities.| |Settings for: normalize punctuation, normalize diacritics, normalize word form, use all training data |[Settings](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Settings will not be transferred. |
The following table presents a side-by-side comparison between the features of L
|Entity features| Entity components| List or prebuilt entities added as features to an entity will be transferred as added components to that entity. [Entity features](#how-do-entity-features-get-transferred-in-clu) will not be transferred for intents. | |Intents and utterances| Intents and utterances |All intents and utterances will be transferred. Utterances will be labeled with their transferred entities. | |Application GUIDs |Project names| A project will be created for each migrating application with the application name. Any special characters in the application names will be removed in CLU.|
-|Versioning| Can only be stored [locally](#how-do-i-manage-versions-in-clu). | A project will be created for the selected application version. |
-|Evaluation using batch testing |Evaluation using testing sets | [Uploading your testing dataset](../how-to/tag-utterances.md#how-to-label-your-utterances) will be required.|
+|Versioning| Every time you train, a model is created and acts as a version of your [project](#how-do-i-manage-versions-in-clu). | A project will be created for the selected application version. |
+|Evaluation using batch testing |Evaluation using testing sets | [Adding your testing dataset](../how-to/tag-utterances.md#how-to-label-your-utterances) will be required.|
|Role-Based Access Control (RBAC) for LUIS resources |Role-Based Access Control (RBAC) available for Language resources |Language resource RBAC must be [manually added after migration](../../concepts/role-based-access-control.md). |
-|Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu) | Training will be required after application migration. |
+|Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu-how-is-standard-training-different-from-advanced-training) | Training will be required after application migration. |
|Two publishing slots and version publishing |Ten deployment slots with custom naming | Deployment will be required after the applicationΓÇÖs migration and training. | |LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](/rest/api/language/conversational-analysis-authoring). | For more information, see the [quickstart article](../quickstart.md?pivots=rest-api) for information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. | |LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](/rest/api/language/conversation-analysis-runtime). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
Follow these steps to begin migration using the [LUIS Portal](https://www.luis.a
> [!NOTE] > Special characters are not supported by conversational language understanding. Any special characters in your selected LUIS application names will be removed in your new migrated applications. - :::image type="content" source="../media/backwards-compatibility/select-applications.svg" alt-text="A screenshot showing the application selection window." lightbox="../media/backwards-compatibility/select-applications.svg"::: 1. Review your Language resource and LUIS applications selections. Click **Finish** to migrate your applications.
CLU supports the model JSON version 7.0.0. If the JSON format is older, it would
### How are entities different in CLU?
-In CLU, a single entity can have multiple entity components, which are different methods for extraction. Those components are then combined together using rules you can define. The available components are: learned (equivalent to ML entities in LUIS), list, and prebuilt.
+In CLU, a single entity can have multiple entity components, which are different methods for extraction. Those components are then combined together using rules you can define. The available components are:
+- Learned: Equivalent to ML entities in LUIS, labels are used to train a machine-learned model to predict an entity based on the content and context of the provided labels.
+- List: Just like list entities in LUIS, list components exact match a set of synonyms and maps them back to a normalized value called a **list key**.
+- Prebuilt: Prebuilt components allow you to define an entity with the prebuilt extractors for common types available in both LUIS and CLU.
+- Regex: Regex components use regular expressions to capture custom defined patterns, exactly like regex entities in LUIS.
+
+Entities in LUIS will be transferred over as entities of the same name in CLU with the equivalent components transferred.
-After migrating, your structured machine-learned leaf nodes and bottom-level subentities will be transferred to the new CLU model while all the parent entities and higher-level entities will be ignored. The name of the entity will be the bottom-level entityΓÇÖs name concatenated with its parent entity.
+After migrating, your structured machine-learned leaf nodes and bottom-level subentities will be transferred to the new CLU model while all the parent entities and higher-level entities will be ignored. The name of the entity will be the bottom-level entity's name concatenated with its parent entity.
#### Example:
Migrated LUIS entity in CLU:
* Pizza Order.Topping * Pizza Order.Size
-
+
+You also can't label two different entities in CLU for the same span of characters. Learned components in CLU are mutually exclusive and don't provide overlapping predictions (this applies to learned components only). When you migrate your LUIS application, overlapping entity labels are resolved by preserving the longest label and ignoring any others.
+ For more information on entity components, see [Entity components](../concepts/entity-components.md).

### How are entity roles transferred to CLU?

Your roles will be transferred as distinct entities along with their labeled utterances. Each role's entity type will determine which entity component will be populated. For example, a list entity role will be transferred as an entity with the same name as the role, with a populated list component.
+### How do entity features get transferred in CLU?
+
+Entities used as features for intents will not be transferred. Entities used as features for other entities will populate the relevant component of the entity. For example, if a list entity named _SizeList_ was used as a feature to a machine-learned entity named _Size_, then the _Size_ entity will be transferred to CLU with the list values from _SizeList_ added to its list component. The same is applied for prebuilt and regex entities.
+
+### How are entity confidence scores different in CLU?
+
+Any extracted entity has a 100% confidence score and therefore entity confidence scores should not be used to make decisions between entities.
+
### How is conversational language understanding multilingual?

Conversational language understanding projects accept utterances in different languages. Furthermore, you can train your model in one language and extend it to predict in other languages.
Runtime utterance (French): *Comment ça va?*
Predicted intent: Greeting
-### How are entity confidence scores different in CLU?
-
-Any extracted entity has a 100% confidence score and therefore entity confidence scores should not be used to make decisions between entities.
-
### How is the accuracy of CLU better than LUIS?

CLU uses state-of-the-art models to enhance the machine learning performance of intent classification and entity extraction models. These models are insensitive to minor variations, removing the need for the following settings: _Normalize punctuation_, _normalize diacritics_, _normalize word form_, and _use all training data_.
-Additionally, the new models do not support phrase list features as they no longer require supplementary information from the user to provide semantically similar words for better accuracy. Patterns were also used to provide improved intent classification using rule-based matching techniques that are not necessary in the new model paradigm.
+Additionally, the new models do not support phrase list features as they no longer require supplementary information from the user to provide semantically similar words for better accuracy. Patterns were also used to provide improved intent classification using rule-based matching techniques that are not necessary in the new model paradigm. The question below explains this in more detail.
+
+### What do I do if the features I am using in LUIS are no longer present?
+
+Several features that were present in LUIS are no longer available in CLU. These include feature engineering, patterns and pattern.any entities, and structured entities. If you had dependencies on these features in LUIS, use the following guidance:
+
+- **Patterns**: Patterns were added in LUIS to assist intent classification by defining regular expression template utterances. This included the ability to define pattern-only intents (without utterance examples). CLU is capable of generalizing by leveraging the state-of-the-art models. You can provide a few utterances that match a specific pattern to the intent in CLU, and it will likely classify the different patterns as the top intent without the need for the pattern template utterance. This simplifies the requirement to formulate these patterns, which was limited in LUIS, and provides a better intent classification experience.
+
+- **Phrase list features**: The ability to associate features was mainly used to assist the classification of intents by highlighting the key elements/features to use. This is no longer required since the deep models used in CLU already possess the ability to identify the elements that are inherent in the language. In turn, removing these features will have no effect on the classification ability of the model.
+
+- **Structured entities**: The ability to define structured entities was mainly to enable multilevel parsing of utterances. With the different possibilities of the sub-entities, LUIS needed all the different combinations of entities to be defined and presented to the model as examples. In CLU, these structured entities are no longer supported, since overlapping learned components are not supported. There are a few possible approaches to handling these structured extractions:
 - **Non-ambiguous extractions**: In most cases, the detection of the leaf entities is enough to understand the required items within a full span. For example, a structured entity such as _Trip_ that fully spans a source and destination (_London to New York_ or _Home to work_) can be identified with the individual spans predicted for source and destination. Their presence as individual predictions would inform you of the _Trip_ entity.
 - **Ambiguous extractions**: Use this approach when the boundaries of different sub-entities are not very clear. To illustrate, take the example "I want to order a pepperoni pizza and an extra cheese vegetarian pizza". While the different pizza types and the topping modifications can be extracted, extracting them without context leaves a degree of ambiguity about where the extra cheese is added. In this case, the extent of the span is context based and requires ML to determine it. For ambiguous extractions, you can use one of the following approaches:
+
+1. Combine sub-entities into different entity components within the same entity.
+
+#### Example:
+
+LUIS Implementation:
+
+* Pizza Order (entity)
+ * Size (subentity)
+ * Quantity (subentity)
+
+CLU Implementation:
+
+* Pizza Order (entity)
+ * Size (list entity component: small, medium, large)
+ * Quantity (prebuilt entity component: number)
+
+In CLU, you would label the entire span for _Pizza Order_ inclusive of the size and quantity, which would return the pizza order with a list key for size, and a number value for quantity in the same entity object.
+
+2. For more complex problems where entities contain several levels of depth, you can create a project for each level of depth in the entity structure. This gives you the option to:
+- Pass the utterance to each project.
+- Combine the analyses of each project in the stage that follows CLU (a rough sketch of this combination step is shown below, after the sample link).
+
+For a detailed example on this concept, check out the pizza sample projects available on [GitHub](https://aka.ms/clu-pizza).
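For a rough idea of what that combination step could look like, the following minimal sketch merges the prediction objects returned by two CLU deployments for the same utterance. The helper name and the field names (`topIntent`, `intents`, `entities`, `confidenceScore`) are assumptions drawn from the CLU prediction JSON, not a prescribed implementation.

```python
# Minimal sketch: combine predictions from two CLU projects for the same utterance.
# Assumes each argument is the "prediction" object returned by the CLU runtime;
# the helper and variable names are illustrative only.
def merge_predictions(pred_a: dict, pred_b: dict) -> dict:
    """Keep the higher-confidence top intent and concatenate the entity lists."""
    def top_confidence(pred: dict) -> float:
        top = pred["topIntent"]
        return next(i["confidenceScore"] for i in pred["intents"] if i["category"] == top)

    winner = pred_a if top_confidence(pred_a) >= top_confidence(pred_b) else pred_b
    return {
        "topIntent": winner["topIntent"],
        "entities": pred_a["entities"] + pred_b["entities"],
    }

# Hypothetical usage: merged = merge_predictions(pizza_order_prediction, toppings_prediction)
```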
### How do I manage versions in CLU?
-Although CLU does not offer versioning, you can export your CLU projects using [Language Studio](https://language.cognitive.azure.com/home) or [programmatically](../how-to/fail-over.md#export-your-primary-project-assets) and store different versions of the assets locally.
+CLU saves the data assets used to train your model. You can export a model's assets or load them back into the project at any point. So models act as different versions of your project.
+
+You can export your CLU projects using [Language Studio](https://language.cognitive.azure.com/home) or [programmatically](../how-to/fail-over.md#export-your-primary-project-assets) and store different versions of the assets locally.
### Why is CLU classification different from LUIS? How does None classification work?
If you are using the LUIS [programmatic](https://westus.dev.cognitive.microsoft.
You can use the [.NET](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0-beta.3/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/) or [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md) CLU runtime SDK to replace the LUIS runtime SDK. There is currently no authoring SDK available for CLU.
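As a hedged sketch of what a Python runtime call can look like, the snippet below uses the `azure-ai-language-conversations` package. The endpoint, key, project, and deployment names are placeholders, and the exact request shape can differ between preview versions of the SDK, so confirm it against the linked samples.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

# Placeholder endpoint and key for your Language resource.
client = ConversationAnalysisClient(
    "https://<your-language-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

# Send one utterance to a deployed CLU project (project and deployment names are placeholders).
result = client.analyze_conversation(
    task={
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {"id": "1", "participantId": "1", "text": "Book a flight to Seattle"}
        },
        "parameters": {"projectName": "<project-name>", "deploymentName": "<deployment-name>"},
    }
)

prediction = result["result"]["prediction"]
print(prediction["topIntent"], prediction["entities"])
```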
-### How are the training times different in CLU?
+### How are the training times different in CLU? How is standard training different from advanced training?
-CLU offers standard training, which trains and learns in English and is comparable to the training time of LUIS. It also offers advanced training, which takes a considerably longer duration as it extends the training to all other [supported languages](../language-support.md).
+CLU offers standard training, which trains and learns in English and is comparable to the training time of LUIS. It also offers advanced training, which takes a considerably longer duration as it extends the training to all other [supported languages](../language-support.md). The train API will continue to be an asynchronous process, and you will need to assess the change in the DevOps process you employ for your solution.
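For illustration only, here's a rough sketch of driving that asynchronous training job from a script and polling it until it finishes. The route, `api-version`, and request body below are assumptions based on the CLU authoring REST APIs referenced earlier in this article, so verify them against that reference before relying on them.

```python
import time
import requests

# Placeholders: your Language resource endpoint, key, and CLU project name.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}
PROJECT = "<project-name>"

# Submit the training job (route, api-version, and payload are assumptions; check the authoring reference).
train_url = f"{ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT}/:train?api-version=2022-05-01"
body = {"modelLabel": "<model-name>", "trainingMode": "standard"}  # or "advanced"
response = requests.post(train_url, headers=HEADERS, json=body)
response.raise_for_status()

# The service accepts the job and returns an operation-location header to poll.
job_url = response.headers["operation-location"]
while True:
    job = requests.get(job_url, headers=HEADERS).json()
    if job.get("status") in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(10)
print(job.get("status"))
```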
-### How can I link subentities to parent entities from my LUIS application in CLU?
+### How has the experience changed in CLU compared to LUIS? How is the development lifecycle different?
-One way to implement the concept of subentities in CLU is to combine the subentities into different entity components within the same entity.
+In LUIS you would Build-Train-Test-Publish, whereas in CLU you Build-Train-Evaluate-Deploy-Test.
-#### Example:
+1. **Build**: In CLU, you can define your intents, entities, and utterances before you train. CLU additionally offers you the ability to specify _test data_ as you build your application to be used for model evaluation. Evaluation assesses how well your model is performing on your test data and provides you with precision, recall, and F1 metrics.
+2. **Train**: You create a model with a name each time you train. You can overwrite an already trained model. You can specify either _standard_ or _advanced_ training, and determine if you would like to use your test data for evaluation, or a percentage of your training data to be left out from training and used as testing data. After training is complete, you can evaluate how well your model performs on the test data.
+3. **Deploy**: After training is complete and you have a model with a name, it can be deployed for predictions. A deployment is also named and has an assigned model. You could have multiple deployments for the same model. A deployment can be overwritten with a different model, or you can swap models with other deployments in the project.
+4. **Test**: Once deployment is complete, you can use it for predictions through the deployment endpoint. You can also test it in the studio in the Test deployment page.
-LUIS Implementation:
+This process is in contrast to LUIS, where the application ID was attached to everything, and you deployed a version of the application in either the staging or production slots.
-* Pizza Order (entity)
- * Size (subentity)
- * Quantity (subentity)
-
-CLU Implementation:
+This will influence the DevOps processes you use.
-* Pizza Order (entity)
- * Size (list entity component: small, medium, large)
- * Quantity (prebuilt entity component: number)
-
-In CLU, you would label the entire span for _Pizza Order_ inclusive of the size and quantity, which would return the pizza order with a list key for size, and a number value for quantity in the same entity object.
-
-For more complex problems where entities contain several levels of depth, you can create a project for each couple of levels of depth in the entity structure. This gives you the option to:
-1. Pass the utterance to each project.
-1. Combine the analyses of each project in the stage proceeding CLU.
-
-For a detailed example on this concept, check out the pizza bot sample available on [GitHub](https://github.com/Azure-Samples/cognitive-service-language-samples/tree/main/CoreBotWithCLU).
-
-### How do entity features get transferred in CLU?
+### Does CLU have container support?
-Entities used as features for intents will not be transferred. Entities used as features for other entities will populate the relevant component of the entity. For example, if a list entity named _SizeList_ was used as a feature to a machine-learned entity named _Size_, then the _Size_ entity will be transferred to CLU with the list values from _SizeList_ added to its list component.
+No, you cannot export CLU to containers.
### How will my LUIS applications be named in CLU after migration?
If you have any questions that were unanswered in this article, consider leaving
## Next steps

* [Quickstart: create a CLU project](../quickstart.md)
* [CLU language support](../language-support.md)
-* [CLU FAQ](../faq.md)
+* [CLU FAQ](../faq.md)
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/service-limits.md
Use this article to learn about the data and service limits when using custom NE
## Regional availability
-Custom named entity recognition is only available in some Azure regions. To use custom named entity recognition, you must choose a Language resource in one of following regions:
-
-* West US 2
-* East US
-* East US 2
-* West US 3
-* South Central US
-* West Europe
-* North Europe
-* UK south
-* Australia East
+Custom named entity recognition is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
## API limits
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/service-limits.md
See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan
## Regional availability
-Custom text classification is only available in some Azure regions. To use custom text classification, you must choose a Language resource in one of following regions:
-
-* West US 2
-* East US
-* East US 2
-* West US 3
-* South Central US
-* West Europe
-* North Europe
-* UK south
-* Australia East
-
+Custom text classification is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md).
+
+| Region | Authoring | Prediction |
+|--|--|-|
+| Australia East | ✓ | ✓ |
+| Brazil South | | ✓ |
+| Canada Central | | ✓ |
+| Central India | ✓ | ✓ |
+| Central US | | ✓ |
+| East Asia | | ✓ |
+| East US | ✓ | ✓ |
+| East US 2 | ✓ | ✓ |
+| France Central | | ✓ |
+| Japan East | | ✓ |
+| Japan West | | ✓ |
+| Jio India West | | ✓ |
+| Korea Central | | ✓ |
+| North Central US | | ✓ |
+| North Europe | ✓ | ✓ |
+| Norway East | | ✓ |
+| South Africa North | | ✓ |
+| South Central US | ✓ | ✓ |
+| Southeast Asia | | ✓ |
+| Sweden Central | | ✓ |
+| Switzerland North | ✓ | ✓ |
+| UAE North | | ✓ |
+| UK South | ✓ | ✓ |
+| West Central US | | ✓ |
## API limits
cognitive-services Entity Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md
This category contains the following entity:
Names of people. Returned as both PII and PHI.
- To get this entity category, add `Person` to the `pii-categories` parameter. `Person` will be returned in the API response if detected.
+ To get this entity category, add `Person` to the `piiCategories` parameter. `Person` will be returned in the API response if detected.
:::column-end:::
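For example, here is a minimal sketch of restricting detection to the `Person` category by passing `piiCategories` in the request parameters. The endpoint, key, and `api-version` value are placeholders/assumptions, so confirm them against the PII detection reference for your resource.

```python
import requests

# Placeholders for your Language resource endpoint and key.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# Ask the PII detection task to return only the Person category.
body = {
    "kind": "PiiEntityRecognition",
    "parameters": {"piiCategories": ["Person"]},
    "analysisInput": {
        "documents": [{"id": "1", "language": "en", "text": "Contact Casey Jensen at the front desk."}]
    },
}
response = requests.post(
    f"{ENDPOINT}/language/:analyze-text?api-version=2022-05-01",
    headers=HEADERS,
    json=body,
)
print(response.json())
```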
This category contains the following entity:
Job types or roles held by a person.
- To get this entity category, add `PersonType` to the `pii-categories` parameter. `PersonType` will be returned in the API response if detected.
+ To get this entity category, add `PersonType` to the `piiCategories` parameter. `PersonType` will be returned in the API response if detected.
:::column-end:::
This category contains the following entity:
Phone numbers (US and EU phone numbers only). Returned as both PII and PHI.
- To get this entity category, add `PhoneNumber` to the `pii-categories` parameter. `PhoneNumber` will be returned in the API response if detected.
+ To get this entity category, add `PhoneNumber` to the `piiCategories` parameter. `PhoneNumber` will be returned in the API response if detected.
:::column-end:::
This category contains the following entity:
Companies, political groups, musical bands, sport clubs, government bodies, and public organizations. Nationalities and religions are not included in this entity type. Returned as both PII and PHI.
- To get this entity category, add `Organization` to the `pii-categories` parameter. `Organization` will be returned in the API response if detected.
+ To get this entity category, add `Organization` to the `piiCategories` parameter. `Organization` will be returned in the API response if detected.
:::column-end:::
The entity in this category can have the following subcategories.
Medical companies and groups.
- To get this entity category, add `OrganizationMedical` to the `pii-categories` parameter. `OrganizationMedical` will be returned in the API response if detected.
+ To get this entity category, add `OrganizationMedical` to the `piiCategories` parameter. `OrganizationMedical` will be returned in the API response if detected.
:::column-end:::
The entity in this category can have the following subcategories.
Stock exchange groups.
- To get this entity category, add `OrganizationStockExchange` to the `pii-categories` parameter. `OrganizationStockExchange` will be returned in the API response if detected.
+ To get this entity category, add `OrganizationStockExchange` to the `piiCategories` parameter. `OrganizationStockExchange` will be returned in the API response if detected.
:::column-end:::
The entity in this category can have the following subcategories.
Sports-related organizations.
- To get this entity category, add `OrganizationSports` to the `pii-categories` parameter. `OrganizationSports` will be returned in the API response if detected.
+ To get this entity category, add `OrganizationSports` to the `piiCategories` parameter. `OrganizationSports` will be returned in the API response if detected.
:::column-end:::
This category contains the following entity:
Full mailing address. Returned as both PII and PHI.
- To get this entity category, add `Address` to the `pii-categories` parameter. `Address` will be returned in the API response if detected.
+ To get this entity category, add `Address` to the `piiCategories` parameter. `Address` will be returned in the API response if detected.
:::column-end:::
This category contains the following entity:
Email addresses. Returned as both PII and PHI.
- To get this entity category, add `Email` to the `pii-categories` parameter. `Email` will be returned in the API response if detected.
+ To get this entity category, add `Email` to the `piiCategories` parameter. `Email` will be returned in the API response if detected.
:::column-end::: :::column span="":::
This category contains the following entity:
URLs to websites. Returned as both PII and PHI.
- To get this entity category, add `URL` to the `pii-categories` parameter. `URL` will be returned in the API response if detected.
+ To get this entity category, add `URL` to the `piiCategories` parameter. `URL` will be returned in the API response if detected.
:::column-end:::
This category contains the following entity:
Network IP addresses. Returned as both PII and PHI.
- To get this entity category, add `IP` to the `pii-categories` parameter. `IP` will be returned in the API response if detected.
+ To get this entity category, add `IP` to the `piiCategories` parameter. `IP` will be returned in the API response if detected.
:::column-end:::
This category contains the following entities:
Dates and times of day.
- To get this entity category, add `DateTime` to the `pii-categories` parameter. `DateTime` will be returned in the API response if detected.
+ To get this entity category, add `DateTime` to the `piiCategories` parameter. `DateTime` will be returned in the API response if detected.
:::column-end::: :::column span="":::
The entity in this category can have the following subcategories.
Calendar dates. Returned as both PII and PHI.
- To get this entity category, add `Date` to the `pii-categories` parameter. `Date` will be returned in the API response if detected.
+ To get this entity category, add `Date` to the `piiCategories` parameter. `Date` will be returned in the API response if detected.
:::column-end::: :::column span="2":::
This category contains the following entities:
Numbers and numeric quantities.
- To get this entity category, add `Quantity` to the `pii-categories` parameter. `Quantity` will be returned in the API response if detected.
+ To get this entity category, add `Quantity` to the `piiCategories` parameter. `Quantity` will be returned in the API response if detected.
:::column-end::: :::column span="2":::
The entity in this category can have the following subcategories.
Ages.
- To get this entity category, add `Age` to the `pii-categories` parameter. `Age` will be returned in the API response if detected.
+ To get this entity category, add `Age` to the `piiCategories` parameter. `Age` will be returned in the API response if detected.
:::column-end::: :::column span="2":::
These entity categories include identifiable Azure information like authenticati
Authorization key for an Azure Cosmos DB server.
- To get this entity category, add `AzureDocumentDBAuthKey` to the `pii-categories` parameter. `AzureDocumentDBAuthKey` will be returned in the API response if detected.
+ To get this entity category, add `AzureDocumentDBAuthKey` to the `piiCategories` parameter. `AzureDocumentDBAuthKey` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Connection string for an Azure infrastructure as a service (IaaS) database, and SQL connection string.
- To get this entity category, add `AzureIAASDatabaseConnectionAndSQLString` to the `pii-categories` parameter. `AzureIAASDatabaseConnectionAndSQLString` will be returned in the API response if detected.
+ To get this entity category, add `AzureIAASDatabaseConnectionAndSQLString` to the `piiCategories` parameter. `AzureIAASDatabaseConnectionAndSQLString` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Connection string for Azure IoT.
- To get this entity category, add `AzureIoTConnectionString` to the `pii-categories` parameter. `AzureIoTConnectionString` will be returned in the API response if detected.
+ To get this entity category, add `AzureIoTConnectionString` to the `piiCategories` parameter. `AzureIoTConnectionString` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Password for Azure publish settings.
- To get this entity category, add `AzurePublishSettingPassword` to the `pii-categories` parameter. `AzurePublishSettingPassword` will be returned in the API response if detected.
+ To get this entity category, add `AzurePublishSettingPassword` to the `piiCategories` parameter. `AzurePublishSettingPassword` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Connection string for a Redis cache.
- To get this entity category, add `AzureRedisCacheString` to the `pii-categories` parameter. `AzureRedisCacheString` will be returned in the API response if detected.
+ To get this entity category, add `AzureRedisCacheString` to the `piiCategories` parameter. `AzureRedisCacheString` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Connection string for Azure software as a service (SaaS).
- To get this entity category, add `AzureSAS` to the `pii-categories` parameter. `AzureSAS` will be returned in the API response if detected.
+ To get this entity category, add `AzureSAS` to the `piiCategories` parameter. `AzureSAS` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Connection string for an Azure service bus.
- To get this entity category, add `AzureServiceBusString` to the `pii-categories` parameter. `AzureServiceBusString` will be returned in the API response if detected.
+ To get this entity category, add `AzureServiceBusString` to the `piiCategories` parameter. `AzureServiceBusString` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Account key for an Azure storage account.
- To get this entity category, add `AzureStorageAccountKey` to the `pii-categories` parameter. `AzureStorageAccountKey` will be returned in the API response if detected.
+ To get this entity category, add `AzureStorageAccountKey` to the `piiCategories` parameter. `AzureStorageAccountKey` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Generic account key for an Azure storage account.
- To get this entity category, add `AzureStorageAccountGeneric` to the `pii-categories` parameter. `AzureStorageAccountGeneric` will be returned in the API response if detected.
+ To get this entity category, add `AzureStorageAccountGeneric` to the `piiCategories` parameter. `AzureStorageAccountGeneric` will be returned in the API response if detected.
:::column-end::: :::column span="":::
These entity categories include identifiable Azure information like authenticati
Connection string for a computer running SQL Server.
- To get this entity category, add `SQLServerConnectionString` to the `pii-categories` parameter. `SQLServerConnectionString` will be returned in the API response if detected.
+ To get this entity category, add `SQLServerConnectionString` to the `piiCategories` parameter. `SQLServerConnectionString` will be returned in the API response if detected.
:::column-end::: :::column span="":::
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* [Conversational language understanding](./conversational-language-understanding/overview.md)
* [Orchestration workflow](./orchestration-workflow/overview.md)
* [Custom text classification](./custom-text-classification/overview.md)
- * [Custom named entity recognition](./custom-named-entity-recognition/overview.md).
+ * [Custom named entity recognition](./custom-named-entity-recognition/overview.md)
* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an additional ability to influence entity predictions.
* [Entity resolution](./named-entity-recognition/concepts/entity-resolutions.md) in named entity recognition
+* New region support for:
+ * [Conversational language understanding](./conversational-language-understanding/service-limits.md#regional-availability)
+ * [Orchestration workflow](./orchestration-workflow/service-limits.md#regional-availability)
+ * [Custom text classification](./custom-text-classification/service-limits.md#regional-availability)
+ * [Custom named entity recognition](./custom-named-entity-recognition/service-limits.md#regional-availability)
## September 2022
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
This article demonstrates how to deploy an existing container to Azure Container
- An Azure account with an active subscription.
- If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
- Install the [Azure CLI](/cli/azure/install-azure-cli).
-- Access to a public or private container registry.
+- Access to a public or private container registry, such as the [Azure Container Registry](/azure/container-registry/).
[!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)]
The example shown in this article demonstrates how to use a custom container ima
# [Bash](#tab/bash)
-For details on how to provide values for any of these parameters to the `create` command, run `az containerapp create --help`.
+For details on how to provide values for any of these parameters to the `create` command, run `az containerapp create --help` or [visit the online reference](/cli/azure/containerapp#az-containerapp-create). To generate credentials for an Azure Container Registry, use [az acr credential show](/cli/azure/acr/credential#az-acr-credential-show).
```bash
CONTAINER_IMAGE_NAME=<CONTAINER_IMAGE_NAME>
container-apps Storage Mounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md
When using Azure Files, you must use the Azure CLI with a YAML definition to cre
```azure-cli
az containerapp update --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> \
- --yaml my-app.yaml
+ --yaml app.yaml
```

::: zone-end
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Azure Cosmos DB analytical store is a fully isolated column store for enabling l
Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional applications without having to deal with schema or index management. In contrast to this, Azure Cosmos DB analytical store is schematized to optimize for analytical query performance. This article describes analytical storage in detail.
-> [!NOTE]
-> Synapse Link for Gremlin API is now in preview. You can enable Synapse Link in your new or existing graphs using Azure CLI. For more information on how to configure it, click [here](configure-synapse-link.md).
---

## Challenges with large-scale analytics on operational data

The multi-model operational data in an Azure Cosmos DB container is internally stored in an indexed row-based "transactional store". Row store format is designed to allow fast transactional reads and writes with order-of-milliseconds response times, and operational queries. If your dataset grows large, complex analytical queries can be expensive in terms of provisioned throughput on the data stored in this format. High consumption of provisioned throughput, in turn, impacts the performance of transactional workloads that are used by your real-time applications and services.
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/support.md
keytool -importcert -alias bc2025ca -file bc2025.crt
echo "deb https://downloads.apache.org/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list curl https://downloads.apache.org/cassandra/KEYS | sudo apt-key add - sudo apt-get update
-sudo apt-get install cassandra
+sudo apt-get install cassandra=3.11.13
```

**Connect with Unix/Linux/Mac:**
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
[Azure Synapse Link for Azure Cosmos DB](synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data in Azure Cosmos DB. Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
-> [!NOTE]
-> Synapse Link for API for Gremlin is now in preview. You can enable Synapse Link in your new or existing Graphs using Azure CLI.
-
-Azure Synapse Link is available for Azure Cosmos DB for NoSQL or MongoDB accounts. Use the following steps to run analytical queries with the Azure Synapse Link for Azure Cosmos DB:
+Azure Synapse Link is available for Azure Cosmos DB SQL API accounts and for Azure Cosmos DB API for MongoDB accounts. It is in preview for the Gremlin API, with activation via CLI commands. Use the following steps to run analytical queries with the Azure Synapse Link for Azure Cosmos DB:
* [Enable Azure Synapse Link for your Azure Cosmos DB accounts](#enable-synapse-link) * [Enable Azure Synapse Link for your containers](#update-analytical-ttl)
Please note the following details when enabling Azure Synapse Link on your exist
* You won't be able to query the analytical store of an existing container while Synapse Link is being enabled on that container. Your OLTP workload isn't impacted and you can keep on reading data normally. Data ingested after the start of the initial sync will be merged into the analytical store by the regular analytical store auto-sync process.

> [!NOTE]
-> Currently you can't enable Synapse Link for MongoDB API containers.
+> Currently you can't enable Synapse Link on your existing MongoDB API containers. Synapse Link can be enabled on newly created MongoDB containers.
### Azure portal
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Azure Synapse Link for Azure Cosmos DB is a cloud-native hybrid transactional an
Using [Azure Cosmos DB analytical store](analytical-store-introduction.md), a fully isolated column store, Azure Synapse Link enables no Extract-Transform-Load (ETL) analytics in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) against your operational data at scale. Business analysts, data engineers, and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real time business intelligence, analytics, and machine learning pipelines. You can achieve this without impacting the performance of your transactional workloads on Azure Cosmos DB.
-> [!NOTE]
-> Synapse Link for Gremlin API is now in preview. You can enable Synapse Link in your new or existing graphs using Azure CLI. For more information on how to configure it, check the [configuration documentation](configure-synapse-link.md).
- The following image shows the Azure Synapse Link integration with Azure Cosmos DB and Azure Synapse Analytics: :::image type="content" source="./media/synapse-link/synapse-analytics-cosmos-db-architecture.png" alt-text="Architecture diagram for Azure Synapse Analytics integration with Azure Cosmos DB" border="false":::
Synapse Link isn't recommended if you're looking for traditional data warehouse
## Limitations
-* Azure Synapse Link for Azure Cosmos DB is not supported for API for Gremlin, Cassandra, and Table. It is supported for API for NoSQL and MongoDB.
+* Azure Synapse Link for Azure Cosmos DB is not supported for the Cassandra and Table APIs. It is supported for the API for NoSQL and MongoDB, and it is in preview for the Gremlin API.
* Accessing the Azure Cosmos DB analytics store with Azure Synapse Dedicated SQL Pool currently isn't supported.
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Budget integration with action groups works for action groups that have enabled
## View budgets in the Azure mobile app
-You can view budgets for your subscriptions and resource groups from the **Cost Management** card in Azure mobile app.
+You can view budgets for your subscriptions and resource groups from the **Cost Management** card in the [Azure app](https://azure.microsoft.com/get-started/azure-portal/mobile-app/).
1. Navigate to any subscription or resource group.
1. Find the **Cost Management** card and tap **More**.
1. Budgets load below the **Current cost** card. They're sorted by descending order of usage.
+> [!NOTE]
+> Currently, the Azure mobile app only supports the subscription and resource group scopes for budgets.
+ :::image type="content" source="./media/tutorial-acm-create-budgets/azure-app-budgets.png" alt-text="Screenshot showing budgets in the Azure app." lightbox="./media/tutorial-acm-create-budgets/azure-app-budgets.png" ::: ## Create and edit budgets with PowerShell
cost-management-billing Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/discount-application.md
Previously updated : 10/12/2022 Last updated : 10/14/2022 # How saving plan discount is applied
The benefit is first applied to the product that has the greatest savings plan d
A savings plan discount only applies to resources associated with Enterprise Agreement, Microsoft Partner Agreement, and Microsoft Customer Agreements. Resources that run in a subscription with other offer types don't receive the discount.
+## Benefit allocation window
+
+With an Azure savings plan, you get significant and flexible discounts off your pay-as-you-go rates in exchange for a one or three-year spend commitment. When you use an Azure resource, usage details are periodically reported to the Azure billing system. The billing system is tasked with quickly applying your savings plan in the most beneficial manner possible. The plan benefits are applied to usage that has the largest discount percentage first. For the application to be most effective, the billing system needs visibility to your usage in a timely manner.
+
+The Azure savings plan benefit application operates under a best fit benefit model. When your benefit application is evaluated for a given hour, the billing system incorporates usage arriving up to 48 hours after the given hour. During the sliding 48-hour window, you may see changes to charges, including the possibility of savings plan utilization that's greater than 100%. This situation happens because the system is constantly working to provide the best possible benefit application. Keep the 48-hour window in mind when you inspect your usage.
+
## When the savings plan term expires

At the end of the savings plan term, the billing discount expires, and the resources are billed at the pay-as-you-go price. By default, the savings plans aren't set to renew automatically. You can choose to enable automatic renewal of a savings plan by selecting the option in the renewal settings.
cost-management-billing Troubleshoot Savings Plan Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/troubleshoot-savings-plan-utilization.md
+
+ Title: Troubleshoot Azure savings plan utilization
+
+description: This article helps you understand why Azure savings plans can temporarily have utilization greater than 100% in usage reporting UIs and APIs.
+++++ Last updated : 10/14/2022+++
+# Troubleshoot Azure savings plan utilization
+
+This article helps you understand why Azure savings plans can temporarily have high utilization.
+
+## Why is my savings plan utilization greater than 100%?
+
+Azure savings plans can temporarily have utilization greater than 100%, as shown in the Azure portal and from APIs.
+
+Azure saving plan benefits are flexible and cover usage across various products and regions. Under an Azure savings plan, Azure applies plan benefits to your usage that has the largest percentage discount off its pay-as-you-go rate first, until we reach your hourly commitment.
+
+The Azure usage and billing systems determine your hourly cost by examining your usage for each hour. Usage is reported to the Azure billing systems. It's sent by all services that you used for the previous hour. However, usage isn't always sent instantly, which makes it difficult to determine which resources should receive the benefit. To compensate, Azure temporarily applies the maximum benefit to all usage received. Azure then does extra processing to quickly reconcile utilization to 100%.
+
+Periods of such high utilization are most likely to occur immediately after a usage hour.
+
+## Next steps
+
+- Learn more about [Azure saving plans](index.yml).
cost-management-billing Utilization Cost Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/utilization-cost-reports.md
Previously updated : 10/12/2022 Last updated : 10/14/2022
Here's a comparison of the two data sets:
| | | |
| Savings plan purchases | Available in the view.<br><br>To get the data, filter on ChargeType = `Purchase`.<br><br>Refer to `BenefitID` or `BenefitName` to know which savings plan the charge is for. | Not applicable to the view.<br><br>Purchase costs aren't provided in amortized data. |
| `EffectivePrice` | The value is zero for usage that gets savings plan discount. | The value is per-hour prorated cost of the savings plan for usage that has the savings plan discount. |
-| Unused benefit (provides the number of hours the savings plan wasn't used in a day and the monetary value of the waste) | Not applicable in the view. | Available in the view.<br><br>To get the data, filter on ChargeType = `UnusedBenefit`.<br><br>Refer to `BenefitID` or `BenefitName` to know which savings plan was underutilized. Indicates how much of the savings plan was wasted for the day. |
+| Unused Savings Plan (provides the number of hours the savings plan wasn't used in a day and the monetary value of the waste) | Not applicable in the view. | Available in the view.<br><br>To get the data, filter on ChargeType = `UnusedSavingsPlan`.<br><br>Refer to `BenefitID` or `BenefitName` to know which savings plan was underutilized. Indicates how much of the savings plan was wasted for the day. |
| UnitPrice (price of the resource from your price sheet) | Available | Available |

## Get Azure consumption and savings plan cost data using API
Information in the following table about metric and filter can help solve for co
| **Usage that got savings plan discount** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, look for records with ChargeType = 'Usage' and PricingModel = 'SavingsPlan'. |
| **Usage that didn't get savings plan discount** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, filter for usage records with PricingModel = 'OnDemand'. |
| **Amortized charges (usage and purchases)** | Request for an AmortizedCost report. |
-| **Unused savings plan report** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'UnusedBenefit' and PricingModel ='SavingsPlan'. |
-| **Savings plan purchases** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Purchase' and PricingModel = 'SavingsPlan'. |
-| **Refunds** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Refund'. |
+| **Unused savings plan report** | Request for an AmortizedCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'UnusedSavingsPlan' and PricingModel ='SavingsPlan'. |
+| **Savings plan purchases** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Purchase' and PricingModel = 'SavingsPlan'. |
+| **Refunds** | Request for an ActualCost report.<br><br> Once you've ingested all of the usage, filter for usage records with ChargeType = 'Refund'. |
## Download the cost CSV file with new data
Savings plan purchase costs are available in Actual Cost data. Filter for Charge
### Get underutilized savings plan quantity and costs
-Get amortized cost data and filter for `ChargeType` = `UnusedBenefit` and `PricingModel` = `SavingsPlan`. You get the daily unused savings plan quantity and the cost. You can filter the data for a savings plan or savings plan order using `BenefitId` and `ProductOrderId` fields, respectively. If a savings plan was 100% utilized, the record has a quantity of 0.
+Get amortized cost data and filter for `ChargeType` = `UnusedSavingsPlan` and `PricingModel` = `SavingsPlan`. You get the daily unused savings plan quantity and the cost. You can filter the data for a savings plan or savings plan order using `BenefitId` and `ProductOrderId` fields, respectively. If a savings plan was 100% utilized, the record has a quantity of 0.
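For example, a minimal sketch of pulling the daily unused amounts out of an exported amortized cost CSV; the file name is a placeholder and column names can vary slightly across cost details schema versions.

```python
import pandas as pd

# Placeholder file name for an exported amortized cost report.
df = pd.read_csv("amortized-cost-details.csv")

# Daily unused savings plan quantity and cost, summarized per savings plan.
unused = df[(df["ChargeType"] == "UnusedSavingsPlan") & (df["PricingModel"] == "SavingsPlan")]
print(unused.groupby("BenefitId")[["Quantity", "Cost"]].sum())
```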
### Amortized savings plan costs
Get the Amortized costs data and filter the data for a savings plan instance. Th
2. Get the savings plan costs. Sum the _Cost_ values to get the monetary value of what you paid for the savings plan. It includes the used and unused costs of the savings plan. 3. Subtract savings plan costs from estimated pay-as-you-go costs to get the estimated savings.
-Keep in mind that if you have an underutilized savings plan, the _UnusedBenefit_ entry for _ChargeType_ becomes a factor to consider. When you have a fully utilized savings plan, you receive the maximum savings possible. Any _UnusedBenefit_ quantity reduces savings.
+Keep in mind that if you have an underutilized savings plan, the _UnusedSavingsPlan_ entry for _ChargeType_ becomes a factor to consider. When you have a fully utilized savings plan, you receive the maximum savings possible. Any _UnusedSavingsPlan_ quantity reduces savings.
## Purchase and amortization costs in cost analysis
Savings plan costs are available in [cost analysis](https://aka.ms/costanalysi
:::image type="content" source="./media/utilization-cost-reports/portal-cost-analysis-amortized-view.png" alt-text="Example showing where to select amortized cost in cost analysis." lightbox="./media/utilization-cost-reports/portal-cost-analysis-amortized-view.png" :::
-Group by _Charge Type_ to see a breakdown of usage, purchases, and refunds; or by _Pricing Model_ for a breakdown of savings plan and on-demand costs. You can also group by _Benefit_ and use the _BenefitId_ and _BenefitName_ associated with your Savings Plan to identify the costs related to specific savings plan purchases. The only savings plan costs that you see when looking at actual cost are purchases. Costs aren't allocated to the individual resources that used the benefit when looking at amortized cost. You'll also see a new _**UnusedBenefit**_ plan charge type when looking at amortized cost.
+Group by _Charge Type_ to see a breakdown of usage, purchases, and refunds; or by _Pricing Model_ for a breakdown of savings plan and on-demand costs. You can also group by _Benefit_ and use the _BenefitId_ and _BenefitName_ associated with your Savings Plan to identify the costs related to specific savings plan purchases. The only savings plan costs that you see when looking at actual cost are purchases. Costs aren't allocated to the individual resources that used the benefit when looking at amortized cost. You'll also see a new _**UnusedSavingsPlan**_ plan charge type when looking at amortized cost.
## Next steps
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
-+ Previously updated : 07/26/2022 Last updated : 10/14/2022 # Change data capture in Azure Data Factory and Azure Synapse Analytics
To learn more, see [Azure Data Factory overview](introduction.md) or [Azure Syna
## Overview
-When you perform data integration and ETL processes in the cloud, your jobs can perform much better and be more effective when you only read the source data that has changed since the last time the pipeline ran, rather than always querying an entire dataset on each run. Executing pipelines that only read the latest changed data is available in many of ADF's source connectors by simply enabling a checkbox property inside the source transformation. Support for full-fidelity CDC, which includes row markers for upserts, deletes, and updates, as well as rules for resetting the ADF-managed checkpoint are available in several ADF connectors. To easily capture changes and deltas, ADF supports patterns and templates for managing incremental pipelines with user-controlled checkpoints as well, which you'll find in the table below.
+When you perform data integration and ETL processes in the cloud, your jobs can perform much better and be more effective when you only read the source data that has changed since the last time the pipeline ran, rather than always querying an entire dataset on each run. Executing pipelines that only read the latest changed data is available in many of ADF's source connectors by simply enabling a checkbox property inside the source transformation. Support for full-fidelity CDC, which includes row markers for inserts, upserts, deletes, and updates, as well as rules for resetting the ADF-managed checkpoint are available in several ADF connectors. To easily capture changes and deltas, ADF supports patterns and templates for managing incremental pipelines with user-controlled checkpoints as well, which you'll find in the table below.
## CDC Connector support
When you perform data integration and ETL processes in the cloud, your jobs can
| [ADLS Gen1](load-azure-data-lake-store.md) | &nbsp; | ✓ | &nbsp; |
| [ADLS Gen2](load-azure-data-lake-storage-gen2.md) | &nbsp; | ✓ | &nbsp; |
| [Azure Blob Storage](connector-azure-blob-storage.md) | &nbsp; | ✓ | &nbsp; |
-| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md) | &nbsp; | ✓ | &nbsp; |
+| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md) | ✓ | ✓ | &nbsp; |
| [Azure Database for MySQL](connector-azure-database-for-mysql.md) | &nbsp; | ✓ | &nbsp; |
| [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md) | &nbsp; | ✓ | &nbsp; |
| [Azure SQL Database](connector-azure-sql-database.md) | ✓ | ✓ | [✓](tutorial-incremental-copy-portal.md) |
data-factory Continuous Integration Delivery Resource Manager Custom Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md
Previously updated : 04/20/2022 Last updated : 09/28/2022
Here's an example of what a Resource Manager parameter configuration might look
} }, "Microsoft.DataFactory/factories/datasets": {
- "properties": {
- "typeProperties": {
- "*": "="
+ "*": {
+ "properties": {
+ "typeProperties": {
+ "folderPath": "=",
+ "fileName": "="
+ }
} } },
Below is the current default parameterization template. If you need to add only
"parameters": { "*": "=" }
- },
- "pipelineReference.referenceName"
+ }
], "pipeline": { "parameters": {
The following example shows how to add a single value to the default parameteriz
"parameters": { "*": "=" }
- },
- "pipelineReference.referenceName"
+ }
], "pipeline": { "parameters": {
data-factory Enable Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-customer-managed-key.md
Previously updated : 08/05/2022 Last updated : 10/14/2022
Make sure Azure Key Vault and Azure Data Factory are in the same Azure Active Di
### Generate or upload customer-managed key to Azure Key Vault
-You can either create your own keys and store them in a key vault. Or you can use the Azure Key Vault APIs to generate keys. Only 2048-bit RSA keys are supported with Data Factory encryption. For more information, see [About keys, secrets, and certificates](../key-vault/general/about-keys-secrets-certificates.md).
+You can create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. Only RSA keys are supported with Data Factory encryption. RSA-HSM keys are also supported. For more information, see [About keys, secrets, and certificates](../key-vault/general/about-keys-secrets-certificates.md).
:::image type="content" source="media/enable-customer-managed-key/03-create-key.png" alt-text="Screenshot showing how to generate Customer-Managed Key.":::
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
Previously updated : 05/31/2022 Last updated : 10/13/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
databox-online Azure Stack Edge Pro R Deploy Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-activate.md
Previously updated : 02/23/2021 Last updated : 10/13/2022 # Customer intent: As an IT admin, I need to understand how to activate Azure Stack Edge Pro R device so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Pro R Deploy Configure Certificates Vpn Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-configure-certificates-vpn-encryption.md
Previously updated : 10/19/2020 Last updated : 10/13/2022 # Customer intent: As an IT admin, I need to understand how to configure certificates for Azure Stack Edge Pro R so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Pro R Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy.md
Previously updated : 02/23/2022 Last updated : 10/14/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro R so I can use it to transfer data to Azure.
In this tutorial, you learn about:
>
> * Prerequisites
> * Configure network
-> * Enable compute network
+> * Configure advanced networking
> * Configure web proxy
+> * Validate network settings
## Prerequisites
Follow these steps to configure the network for your device.
Once the device network is configured, the page updates as shown below.
- ![Local web UI "Network settings" page 2](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/network-2a.png)<!--change-->
+ ![Local web UI "Network settings" page 2](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/network-2a.png)
>[!NOTE]
> We recommend that you do not switch the local IP address of the network interface from static to DHCP, unless you have another IP address to connect to the device. If you're using one network interface and you switch to DHCP, there would be no way to determine the DHCP address. If you want to change to a DHCP address, wait until after the device has activated with the service, and then change it. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.
- After you have configured and applied the network settings, select **Next: Compute** to configure compute network.
+ After you have configured and applied the network settings, select **Next: Advanced networking** to configure compute network.
## Enable compute network

Follow these steps to enable compute and configure compute network.
-1. In the **Compute** page, select a network interface that you want to enable for compute.
+1. In the **Advanced networking** page, select a network interface that you want to enable for compute.
![Compute page in local UI 2](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/compute-network-2.png)
-1. In the **Network settings** dialog, select **Enable**. When you enable compute, a virtual switch is created on your device on that network interface. The virtual switch is used for the compute infrastructure on the device.
-
-1. Assign **Kubernetes node IPs**. These static IP addresses are for the compute VM.
+## Configure virtual switches
- For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) are provided for the compute VM using the start and end IP addresses. Given Azure Stack Edge is a 1-node device, a minimum of 2 contiguous IPv4 addresses are provided. These IP addresses must be in the same network where you enabled compute and the virtual switch was created.
+Follow these steps to add or delete virtual switches and virtual networks.
- > [!IMPORTANT]
- > Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
+1. In the local UI, go to **Advanced networking** page.
+1. In the **Virtual switch** section, you'll add or delete virtual switches. Select **Add virtual switch** to create a new switch.
+
+ ![Add virtual switch page in local UI 2](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/add-virtual-switch-1.png)
+
+1. In the **Network settings** blade, if using a new switch, provide the following:
+
+ 1. Provide a name for your virtual switch.
+ 1. Choose the network interface on which the virtual switch should be created.
+ 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
+ 1. Select **Apply**. You can see that the specified virtual switch is created.
+
+ ![Screenshot of **Advanced networking** page with virtual switch added and enabled for compute in local UI for one node.](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/add-virtual-switch-1.png)
+
+1. You can create more than one switch by following the steps described earlier.
+
+1. To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
+
+You can now create virtual networks and associate them with the virtual switches you created.
+
+## Configure virtual networks
+
+You can add or delete virtual networks associated with your virtual switches. To add a virtual network, follow these steps:
+
+1. In the local UI on the **Advanced networking** page, under the **Virtual network** section, select **Add virtual network**.
+
+1. In the **Add virtual network** blade, input the following information:
+
+ 1. Select a virtual switch for which you want to create a virtual network.
+ 1. Provide a **Name** for your virtual network.
+ 1. Enter a **VLAN ID** as a unique number in the 1-4094 range. The VLAN ID that you provide should be included in your trunk configuration. For more information on trunk configuration for your switch, refer to the instructions from your physical switch manufacturer.
+ 1. Specify the **Subnet mask** and **Gateway** for your virtual LAN network as per the physical network configuration.
+ 1. Select **Apply**. A virtual network is created on the specified virtual switch.
+
+ <!--![Screenshot of how to add virtual network in **Advanced networking** page in local UI for one node.()-->
+
+1. To delete a virtual network, under the **Virtual network** section, select **Delete virtual network** and select the virtual network you want to delete.
+
+1. Select **Next: Kubernetes >** to configure your compute IPs for Kubernetes.
+
+## Configure compute IPs
+
+Follow these steps to configure compute IPs for your Kubernetes workloads.
+
+1. In the local UI, go to the **Kubernetes** page.
+
+1. From the dropdown select a virtual switch that you will use for Kubernetes compute traffic. <!--By default, all switches are configured for management. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.-->
+
+1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
+
+ For an *n*-node device, provide a contiguous range of at least *n+1* IPv4 addresses (specified as start and end IP addresses) for the Kubernetes VMs. For a 1-node device, provide a minimum of two free, contiguous IPv4 addresses.
+
+ > [!IMPORTANT]
+ > * Kubernetes on Azure Stack Edge uses the 172.27.0.0/16 subnet for pods and the 172.28.0.0/16 subnet for services. Make sure that these are not in use in your network (a small overlap check is sketched after these steps). If these subnets are already in use in your network, you can change them by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see Change Kubernetes pod and service subnets. <!--Target URL not available.-->
+ > * DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable the IoT role. This ensures that static IPs are assigned to the Kubernetes node VMs.
+ > * If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
+
+1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster; specify a static IP range based on the number of services you expose.
+
+ > [!IMPORTANT]
+ > We strongly recommend that you specify a minimum of one IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
-1. Assign **Kubernetes external service IPs**. These are also the load balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
-
- > [!IMPORTANT]
- > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Pro R Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
-
1. Select **Apply**.
- ![Compute page in local UI 3](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/compute-network-3.png)
+ ![Screenshot of "Advanced networking" page in local UI with fully configured Add virtual switch blade for one node.](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/compute-virtual-switch-1.png)
-1. The configuration is takes a couple minutes to apply and you may need to refresh the browser. You can see that the specified port is enabled for compute.
-
- ![Compute page in local UI 4](./media/azure-stack-edge-pro-r-deploy-configure-network-compute-web-proxy/compute-network-4.png)
+1. The configuration takes a couple of minutes to apply, and you may need to refresh the browser.
- Select **Next: Web proxy** to configure web proxy.
+1. Select **Next: Web proxy** to configure web proxy.
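As called out in the subnet note in the steps above, the Kubernetes pod and service subnets (172.27.0.0/16 and 172.28.0.0/16) must not already be in use on your network. The following sketch is not part of the original article; it uses Python's standard `ipaddress` module to check a candidate local network range against those reserved subnets, and the local CIDR values below are made-up examples.

```python
# Minimal sketch: check whether a local network range overlaps the subnets
# reserved by Kubernetes on Azure Stack Edge (pods: 172.27.0.0/16,
# services: 172.28.0.0/16). The local ranges below are made-up examples.
import ipaddress

RESERVED = [ipaddress.ip_network("172.27.0.0/16"),   # Kubernetes pod subnet
            ipaddress.ip_network("172.28.0.0/16")]   # Kubernetes service subnet

def conflicts(local_cidr: str) -> list[str]:
    """Return the reserved subnets that overlap the given local CIDR."""
    local = ipaddress.ip_network(local_cidr, strict=False)
    return [str(r) for r in RESERVED if local.overlaps(r)]

print(conflicts("172.27.10.0/24"))  # ['172.27.0.0/16'] -> change the Kubernetes subnets
print(conflicts("10.128.0.0/16"))   # [] -> no conflict
```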
-
## Configure web proxy This is an optional configuration. > [!IMPORTANT]
-> Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL. Proxies that try to intercept and read all the traffic (then re-sign everything with their own certification) aren't compatible since the proxy's certificate is not trusted. Typically transparent proxies work well with Azure Stack Edge Pro R. Non-transparent web proxies are not supported.
-
+> Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL. Proxies that try to intercept and read all the traffic (then re-sign everything with their own certification) aren't compatible since the proxy's certificate is not trusted. Typically, transparent proxies work well with Azure Stack Edge Pro R. Non-transparent web proxies are not supported.
1. On the **Web proxy settings** page, take the following steps:
In this tutorial, you learned about:
> [!div class="checklist"] > * Prerequisites > * Configure network
-> * Enable compute network
+> * Configure advanced networking
> * Configure web proxy
+> * Validate network settings
To learn how to set up your Azure Stack Edge Pro R device, see:
databox-online Azure Stack Edge Pro R Deploy Set Up Device Update Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-set-up-device-update-time.md
Previously updated : 10/18/2020 Last updated : 10/14/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to configure device related settings:
5. After the settings are applied, select **Next: Update server**.
- ![Local web UI "Device" page 3](./media/azure-stack-edge-pro-r-deploy-set-up-device-update-time/device-4.png)
+ ![Local web UI "Device" page 3](./media/azure-stack-edge-pro-r-deploy-set-up-device-update-time/device-6.png)
## Configure update
Follow these steps to configure device related settings:
- You can get the updates directly from the **Microsoft Update server**.
- ![Local web UI "Update Server" page](./media/azure-stack-edge-pro-r-deploy-set-up-device-update-time/update-2.png)
+ ![Local web UI "Update Server" page](./media/azure-stack-edge-pro-r-deploy-set-up-device-update-time/device-7.png)
You can also choose to deploy updates from the **Windows Server Update services** (WSUS). Provide the path to the WSUS server.
deployment-environments Configure Catalog Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-catalog-item.md
Provide a new catalog item to your development team as follows:
name: WebApp version: 1.0.0 description: Deploys an Azure Web App without a data store
- engine:
- type: ARM
- templatePath: azuredeploy.json
+ runner: ARM
+ templatePath: azuredeploy.json
``` >[!NOTE]
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
Last updated 02/20/2020
This article describes some common issues and errors that Azure Database Migration Service users can come across. The article also includes information about how to resolve these issues and errors.
-> [!NOTE]
-> Bias-free communication
->
-> Microsoft supports a diverse and inclusionary environment. This article contains references to the word _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes this as an exclusionary word. The word is used in this article for consistency because it's currently the word that appears in the software. When the software is updated to remove the word, this article will be updated to be in alignment.
->
- ## Migration activity in queued state When you create new activities in an Azure Database Migration Service project, the activities remain in a queued state.
event-hubs Event Hubs For Kafka Ecosystem Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-for-kafka-ecosystem-overview.md
Title: Use event hub from Apache Kafka app - Azure Event Hubs | Microsoft Docs description: This article provides information on Apache Kafka support by Azure Event Hubs. Previously updated : 06/27/2022 Last updated : 10/14/2022 # Use Azure Event Hubs from Apache Kafka applications
The Event Hubs for Apache Kafka feature is one of three protocols concurrently a
Additionally, Event Hubs features such as [Capture](event-hubs-capture-overview.md), which enables extremely cost efficient long-term archival via Azure Blob Storage and Azure Data Lake Storage, and [Geo Disaster-Recovery](event-hubs-geo-dr.md) also work with the Event Hubs for Kafka feature.
+## Idempotency
+
+Event Hubs for Apache Kafka supports both idempotent producers and idempotent consumers.
+
+One of the core tenets of Azure Event Hubs is the concept of **at-least-once** delivery. This approach ensures that events will always be delivered. It also means that events can be received more than once, even repeatedly, by consumers such as a function. For this reason, it's important that the consumer supports the [idempotent consumer](https://microservices.io/patterns/communication-style/idempotent-consumer.html) pattern.
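Because the same event can be delivered more than once, consumers should deduplicate by some unique event identifier. The sketch below is not from the original article; it shows the general idempotent-consumer idea with a plain Python function and an in-memory set of processed event IDs. A real consumer would read events from Event Hubs (for example with the `azure-eventhub` SDK) and persist the processed IDs durably alongside the handler's side effects.

```python
# Minimal sketch of the idempotent-consumer pattern: skip events whose unique
# ID has already been processed. Uses an in-memory set for brevity; production
# code would persist processed IDs (and the handler's side effects) atomically.
processed_ids: set[str] = set()

def handle_event(event_id: str, body: dict) -> None:
    if event_id in processed_ids:
        # Duplicate delivery (expected with at-least-once semantics): ignore.
        return
    # ... apply the side effect exactly once here ...
    print(f"processing {event_id}: {body}")
    processed_ids.add(event_id)

# Simulated at-least-once delivery: the second delivery of "evt-1" is a no-op.
handle_event("evt-1", {"value": 42})
handle_event("evt-1", {"value": 42})
handle_event("evt-2", {"value": 7})
```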
+ ## Apache Kafka feature differences The goal of Event Hubs for Apache Kafka is to provide access to Azure Event Hubs capabilities to applications that are locked into the Apache Kafka API and would otherwise have to be backed by an Apache Kafka cluster.
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
initiative definition.
|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
-### PCI DSS requirement 1.3.4
- ## Requirement 10 ### PCI DSS requirement 10.5.4
hdinsight Hdinsight Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-managed-identities.md
Managed identities are used in Azure HDInsight in multiple scenarios. See the re
* [Enterprise Security Package](domain-joined/apache-domain-joined-configure-using-azure-adds.md#create-and-authorize-a-managed-identity) * [Customer-managed key disk encryption](disk-encryption.md)
-HDInsight will automatically renew the certificates for the managed identities you use for these scenarios. However, there is a limitation when multiple different managed identities are used for long running clusters, the certificate renewal may not work as expected for all of the managed identities. Due to this limitation, if you are planning to use long running clusters (e.g. more than 60 days), we recommend to use the same managed identity for all of the above scenarios.
+HDInsight will automatically renew the certificates for the managed identities you use for these scenarios. However, there's a limitation: when multiple different managed identities are used for long-running clusters, certificate renewal may not work as expected for all of the managed identities. Due to this limitation, we recommend using the same managed identity for all of the above scenarios.
If you have already created a long-running cluster with multiple different managed identities and are running into one of these issues: * In ESP clusters, cluster services start failing, or scale-up and other operations start failing with authentication errors.
hdinsight Kafka Mirrormaker 2 0 Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/kafka-mirrormaker-2-0-guide.md
The summary of the broker setup process is as follows:
**MirrorCheckpointConnector:**
- 1. Consumes offset-syncsr.
- 1. Emits checkpoints to enable failover points.
+ 1. Consumes offset-syncs.
+ 1. Emits checkpoints to enable failover points.
**MirrorHeartBeatConnector:**
The summary of the broker setup process is as follows:
1. The connect-mirror-maker.sh script bundled with the Kafka library implements a distributed MM2 cluster, which manages the Connect workers internally based on a config file. Internally, the MirrorMaker driver creates and handles pairs of each connector: MirrorSourceConnector, MirrorSinkConnector, MirrorCheckpointConnector, and MirrorHeartbeatConnector. 1. Start MirrorMaker 2.0.
-
-```
-./bin/connect-mirror-maker.sh ./config/mirror-maker.properties
-```
+
+ ```
+ ./bin/connect-mirror-maker.sh ./config/mirror-maker.properties
+ ```
> [!NOTE] > For Kerberos-enabled clusters, the JAAS configuration must be exported to KAFKA_OPTS or specified in the MM2 config file.
The summary of the broker setup process is as follows:
``` export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-jaas.conf>" ```+ ### Sample MirrorMaker 2.0 Configuration file ```
- # specify any number of cluster aliases
- clusters = source, destination
+# specify any number of cluster aliases
+clusters = source, destination
- # connection information for each cluster
- # This is a comma separated host:port pairs for each cluster
- # for example. "A_host1:9092, A_host2:9092, A_host3:9092" and you can see the exact host name on Ambari > Hosts
- source.bootstrap.servers = wn0-src-kafka.bx.internal.cloudapp.net:9092,wn1-src-kafka.bx.internal.cloudapp.net:9092,wn2-src-kafka.bx.internal.cloudapp.net:9092
- destination.bootstrap.servers = wn0-dest-kafka.bx.internal.cloudapp.net:9092,wn1-dest-kafka.bx.internal.cloudapp.net:9092,wn2-dest-kafka.bx.internal.cloudapp.net:9092
+# connection information for each cluster
+# This is a comma separated host:port pairs for each cluster
+# for example. "A_host1:9092, A_host2:9092, A_host3:9092" and you can see the exact host name on Ambari > Hosts
+source.bootstrap.servers = wn0-src-kafka.bx.internal.cloudapp.net:9092,wn1-src-kafka.bx.internal.cloudapp.net:9092,wn2-src-kafka.bx.internal.cloudapp.net:9092
+destination.bootstrap.servers = wn0-dest-kafka.bx.internal.cloudapp.net:9092,wn1-dest-kafka.bx.internal.cloudapp.net:9092,wn2-dest-kafka.bx.internal.cloudapp.net:9092
- # enable and configure individual replication flows
- source->destination.enabled = true
+# enable and configure individual replication flows
+source->destination.enabled = true
- # regex which defines which topics gets replicated. For eg "foo-.*"
- source->destination.topics = toa.evehicles-latest-dev
- groups=.*
- topics.blacklist="*.internal,__.*"
+# regex which defines which topics get replicated. For example, "foo-.*"
+source->destination.topics = toa.evehicles-latest-dev
+groups=.*
+topics.blacklist="*.internal,__.*"
- # Setting replication factor of newly created remote topics
- replication.factor=3
+# Setting replication factor of newly created remote topics
+replication.factor=3
- checkpoints.topic.replication.factor=1
- heartbeats.topic.replication.factor=1
- offset-syncs.topic.replication.factor=1
+checkpoints.topic.replication.factor=1
+heartbeats.topic.replication.factor=1
+offset-syncs.topic.replication.factor=1
- offset.storage.replication.factor=1
- status.storage.replication.factor=1
- config.storage.replication.factor=1
+offset.storage.replication.factor=1
+status.storage.replication.factor=1
+config.storage.replication.factor=1
``` ### SSL configuration
destination.ssl.keystore.location=/path/to/kafka.server.keystore.jks
destination.sasl.mechanism=GSSAPI ``` - ### Global configurations |Property |Default value |Description |
destination.sasl.mechanism=GSSAPI
Custom Replication Policy can be created by implementing the interface below. ```
- /** Defines which topics are "remote topics", e.g. "us-west.topic1". */
- public interface ReplicationPolicy {
-
+/** Defines which topics are "remote topics", e.g. "us-west.topic1". */
+public interface ReplicationPolicy {
+ /** How to rename remote topics; generally should be like us-west.topic1. */ String formatRemoteTopic(String sourceClusterAlias, String topic);
healthcare-apis Api Versioning Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md
Previously updated : 06/11/2022 Last updated : 10/13/2022
Currently the supported versions are:
* v1.0-prerelease * v1
-The OpenApi Doc for the supported versions can be found at the following url:
+The OpenAPI doc for the supported versions can be found at the following URL:
`<service_url>/v<version>/api.yaml`
healthcare-apis Dicom Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-change-feed-overview.md
deleted | This instance has been deleted and is no longer available in the serv
### Read Change Feed **Route**: /changefeed?offset={int}&limit={int}&includemetadata={**true**|false}
-```
+```json
[ { "Sequence": 1,
includemetadata | bool | Whether or not to include the metadata (default: true)
**Route**: /changefeed/latest?includemetadata={**true**|false}
-```
+```json
{ "Sequence": 2, "StudyInstanceUid": "{uid}",
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Previously updated : 06/10/2022 Last updated : 10/13/2022 # DICOM Conformance Statement
-The **DICOM service within Azure Health Data Services** supports a subset of the DICOMweb&trade; Standard. This support includes:
-
-* [Store (STOW-RS)](#store-stow-rs)
-* [Retrieve (WADO-RS)](#retrieve-wado-rs)
-* [Search (QIDO-RS)](#search-qido-rs)
+The Medical Imaging Server for DICOM supports a subset of the DICOMweb&trade; Standard. Support includes:
+
+* [Studies Service](#studies-service)
+ * [Store (STOW-RS)](#store-stow-rs)
+ * [Retrieve (WADO-RS)](#retrieve-wado-rs)
+ * [Search (QIDO-RS)](#search-qido-rs)
+ * [Delete](#delete)
+* [Worklist Service (UPS Push and Pull SOPs)](#worklist-service-ups-rs)
+ * [Create Workitem](#create-workitem)
+ * [Retrieve Workitem](#retrieve-workitem)
+ * [Update Workitem](#update-workitem)
+ * [Change Workitem State](#change-workitem-state)
+ * [Request Cancellation](#request-cancellation)
+ * [Search Workitems](#search-workitems)
Additionally, the following non-standard API(s) are supported: -- [Delete](#delete)
+* [Change Feed](dicom-change-feed-overview.md)
+* [Extended Query Tags](dicom-extended-query-tags-overview.md)
-Our service makes use of REST API versioning.
+The service uses REST API versioning. The version of the REST API must be explicitly specified as part of the base URL, as in the following example:
-> [!NOTE]
-> The version of the REST API must be explicitly specified in the request URL as in the following example:
->
-> `https://<service_url>/v<version>/studies`
+`https://<service_url>/v<version>/studies`
+
+For more information on how to specify the version when making requests, see the [API Versioning Documentation](api-versioning-dicom-service.md).
+
+You'll find example requests for supported transactions in the [Postman collection](https://github.com/microsoft/dicom-server/blob/main/docs/resources/Conformance-as-Postman.postman_collection.json).
+
+## Preamble Sanitization
+
+The service ignores the 128-byte File Preamble, and replaces its contents with null characters. This ensures that no files passed through the service are vulnerable to the [malicious preamble vulnerability](https://dicom.nema.org/medical/dicom/current/output/chtml/part10/sect_7.5.html). However, this also means that [preambles used to encode dual format content](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6489422/) such as TIFF can't be used with the service.
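As a quick illustration (not part of the original statement), a client can inspect the first 132 bytes of a DICOM Part 10 file: the 128-byte preamble followed by the `DICM` magic bytes. After the service's sanitization, the preamble bytes read back as nulls. The file path below is a placeholder.

```python
# Minimal sketch: read the 128-byte preamble and the "DICM" marker from a
# DICOM Part 10 file. The file path is a placeholder.
def inspect_preamble(path: str) -> None:
    with open(path, "rb") as f:
        preamble = f.read(128)          # 128-byte File Preamble
        magic = f.read(4)               # should be b"DICM" for Part 10 files
    print("valid Part 10 header:", magic == b"DICM")
    print("preamble is all nulls:", preamble == b"\x00" * 128)

inspect_preamble("example.dcm")  # placeholder path
```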
+
+## Studies Service
-For information on how to specify the version when making requests visit the [API Versioning for DICOM service documentation](api-versioning-dicom-service.md).
+The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We've added the non-standard Delete transaction to enable a full resource lifecycle.
-## Store (STOW-RS)
+### Store (STOW-RS)
This transaction uses the POST method to store representations of studies, series, and instances contained in the request payload.
The following `Content-Type` header(s) are supported:
The following DICOM elements are required to be present in every DICOM file attempting to be stored:
-* StudyInstanceUID
-* SeriesInstanceUID
-* SOPInstanceUID
-* SOPClassUID
-* PatientID
+* `StudyInstanceUID`
+* `SeriesInstanceUID`
+* `SOPInstanceUID`
+* `SOPClassUID`
+* `PatientID`
> [!NOTE]
-> All identifiers must be between 1 and 64 characters long, and only contain alpha numeric characters or the following special characters: '.', '-'.
+> All identifiers must be between 1 and 64 characters long, and only contain alphanumeric characters or the following special characters: `.`, `-`.
-Each file stored must have a unique combination of StudyInstanceUID, SeriesInstanceUID, and SopInstanceUID. The warning code `45070` will be returned if a file with the same identifiers already exists.
+Each file stored must have a unique combination of `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID`. The warning code `45070` will be returned if a file with the same identifiers already exists.
-DICOM File Size Limit: there's a size limit of 2 GB for a DICOM file by default.
+Only transfer syntaxes with explicit Value Representations are accepted.
-### Store response status codes
+#### Store response status codes
-| Code | Description |
-| : | :- |
-| 200 (OK) | All the SOP instances in the request have been stored. |
-| 202 (Accepted) | Some instances in the request have been stored but others have failed. |
-| 204 (No Content) | No content was provided in the store transaction request. |
-| 400 (Bad Request) | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format. |
-| 401 (Unauthorized) | The client isn't authenticated. |
-| 403 (Forbidden) | The user isn't authorized. |
-| 406 (Not Acceptable) | The specified `Accept` header isn't supported. |
-| 409 (Conflict) | None of the instances in the store transaction request have been stored. |
-| 415 (Unsupported Media Type) | The provided `Content-Type` isn't supported. |
-| 503 (Service Unavailable) | The service is unavailable or busy. Please try again later. |
+| Code | Description |
+| :-- | :- |
+| `200 (OK)` | All the SOP instances in the request have been stored. |
+| `202 (Accepted)` | Some instances in the request have been stored but others have failed. |
+| `204 (No Content)` | No content was provided in the store transaction request. |
+| `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `406 (Not Acceptable)` | The specified `Accept` header isn't supported. |
+| `409 (Conflict)` | None of the instances in the store transaction request have been stored. |
+| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Please try again later. |
### Store response payload
The response payload will populate a DICOM dataset with the following elements:
| Tag | Name | Description | | :-- | :-- | :- |
-| (0008, 1190) | RetrieveURL | The Retrieve URL of the study if the StudyInstanceUID was provided in the store request and at least one instance is successfully stored. |
-| (0008, 1198) | FailedSOPSequence | The sequence of instances that failed to store. |
-| (0008, 1199) | ReferencedSOPSequence | The sequence of stored instances. |
+| (0008, 1190) | `RetrieveURL` | The Retrieve URL of the study if the StudyInstanceUID was provided in the store request and at least one instance is successfully stored. |
+| (0008, 1198) | `FailedSOPSequence` | The sequence of instances that failed to store. |
+| (0008, 1199) | `ReferencedSOPSequence` | The sequence of stored instances. |
Each dataset in the `FailedSOPSequence` will have the following elements (if the DICOM file attempting to be stored could be read): | Tag | Name | Description | | :-- | :-- | :- |
-| (0008, 1150) | ReferencedSOPClassUID | The SOP class unique identifier of the instance that failed to store. |
-| (0008, 1155) | ReferencedSOPInstanceUID | The SOP instance unique identifier of the instance that failed to store. |
-| (0008, 1197) | FailureReason | The reason code why this instance failed to store. |
+| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that failed to store. |
+| (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that failed to store. |
+| (0008, 1197) | `FailureReason` | The reason code why this instance failed to store. |
Each dataset in the `ReferencedSOPSequence` will have the following elements: | Tag | Name | Description | | :-- | :-- | :- |
-| (0008, 1150) | ReferencedSOPClassUID | The SOP class unique identifier of the instance that failed to store. |
-| (0008, 1155) | ReferencedSOPInstanceUID | The SOP instance unique identifier of the instance that failed to store. |
-| (0008, 1190) | RetrieveURL | The retrieve URL of this instance on the DICOM server. |
+| (0008, 1150) | `ReferencedSOPClassUID` | The SOP class unique identifier of the instance that failed to store. |
+| (0008, 1155) | `ReferencedSOPInstanceUID` | The SOP instance unique identifier of the instance that failed to store. |
+| (0008, 1190) | `RetrieveURL` | The retrieve URL of this instance on the DICOM server. |
-Below is an example response with `Accept` header `application/dicom+json`:
+An example response with `Accept` header `application/dicom+json`:
```json {
Below is an example response with `Accept` header `application/dicom+json`:
} ```
-### Store failure reason codes
+#### Store failure reason codes
| Code | Description | | :- | :- |
-| 272 | The store transaction didn't store the instance because of a general failure in processing the operation. |
-| 43264 | The DICOM instance failed the validation. |
-| 43265 | The provided instance StudyInstanceUID didn't match the specified StudyInstanceUID in the store request. |
-| 45070 | A DICOM instance with the same StudyInstanceUID, SeriesInstanceUID, and SopInstanceUID has already been stored. If you wish to update the contents, delete this instance first. |
-| 45071 | A DICOM instance is being created by another process, or the previous attempt to create has failed and the cleanup process hasn't had chance to clean up yet. Delete the instance first before attempting to create again. |
+| `272` | The store transaction didn't store the instance because of a general failure in processing the operation. |
+| `43264` | The DICOM instance failed the validation. |
+| `43265` | The provided instance `StudyInstanceUID` didn't match the specified `StudyInstanceUID` in the store request. |
+| `45070` | A DICOM instance with the same `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` has already been stored. If you wish to update the contents, delete this instance first. |
+| `45071` | A DICOM instance is being created by another process, or the previous attempt to create has failed and the cleanup process hasn't had chance to clean up yet. Delete the instance first before attempting to create again. |
-## Retrieve (WADO-RS)
+### Retrieve (WADO-RS)
This Retrieve Transaction offers support for retrieving stored studies, series, instances, and frames by reference.
-| Method | Path | Description |
-| :-- | :- | :- |
-| GET | ../studies/{study} | Retrieves all instances within a study. |
-| GET | ../studies/{study}/metadata | Retrieves the metadata for all instances within a study. |
-| GET | ../studies/{study}/series/{series} | Retrieves all instances within a series. |
-| GET | ../studies/{study}/series/{series}/metadata | Retrieves the metadata for all instances within a series. |
-| GET | ../studies/{study}/series/{series}/instances/{instance} | Retrieves a single instance. |
-| GET | ../studies/{study}/series/{series}/instances/{instance}/metadata | Retrieves the metadata for a single instance. |
+| Method | Path | Description |
+| :-- | :| :- |
+| GET | ../studies/{study} | Retrieves all instances within a study. |
+| GET | ../studies/{study}/metadata | Retrieves the metadata for all instances within a study. |
+| GET | ../studies/{study}/series/{series} | Retrieves all instances within a series. |
+| GET | ../studies/{study}/series/{series}/metadata | Retrieves the metadata for all instances within a series. |
+| GET | ../studies/{study}/series/{series}/instances/{instance} | Retrieves a single instance. |
+| GET | ../studies/{study}/series/{series}/instances/{instance}/metadata | Retrieves the metadata for a single instance. |
| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frames} | Retrieves one or many frames from a single instance. To specify more than one frame, use a comma to separate each frame to return. For example, /studies/1/series/2/instance/3/frames/4,5,6 |
-### Retrieve instances within study or series
+#### Retrieve instances within study or series
The following `Accept` header(s) are supported for retrieving instances within a study or a series: - * `multipart/related; type="application/dicom"; transfer-syntax=*` * `multipart/related; type="application/dicom";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default) * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`
-### Retrieve an instance
+#### Retrieve an Instance
The following `Accept` header(s) are supported for retrieving a specific instance: * `application/dicom; transfer-syntax=*` * `multipart/related; type="application/dicom"; transfer-syntax=*`
-* `application/dicom;` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default)
-* `multipart/related; type="application/dicom"` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default)
+* `application/dicom;` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default)
+* `multipart/related; type="application/dicom"` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default)
* `application/dicom; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `application/dicom; transfer-syntax=1.2.840.10008.1.2.4.90` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`
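For illustration only (not from the original statement), a single instance can be retrieved over HTTP with the single-part `application/dicom` Accept header. The sketch below assumes the `requests` package; the base URL, UIDs, and bearer token are placeholders.

```python
# Minimal sketch: retrieve one instance as application/dicom and save it.
# The base URL, UIDs, and bearer token are placeholders.
import requests

BASE = "https://<service_url>/v1"
STUDY, SERIES, INSTANCE = "<study-uid>", "<series-uid>", "<instance-uid>"
TOKEN = "<access-token>"

resp = requests.get(
    f"{BASE}/studies/{STUDY}/series/{SERIES}/instances/{INSTANCE}",
    headers={
        "Accept": "application/dicom; transfer-syntax=*",  # keep the original transfer syntax
        "Authorization": f"Bearer {TOKEN}",
    },
)
resp.raise_for_status()
with open("instance.dcm", "wb") as f:
    f.write(resp.content)
```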
-### Retrieve frames
+#### Retrieve Frames
The following `Accept` headers are supported for retrieving frames: * `multipart/related; type="application/octet-stream"; transfer-syntax=*`
-* `multipart/related; type="application/octet-stream";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default)
+* `multipart/related; type="application/octet-stream";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default)
* `multipart/related; type="application/octet-stream"; transfer-syntax=1.2.840.10008.1.2.1`
-* `multipart/related; type="image/jp2";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.4.90 is used as default)
+* `multipart/related; type="image/jp2";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.4.90` is used as default)
* `multipart/related; type="image/jp2";transfer-syntax=1.2.840.10008.1.2.4.90`
-### Retrieve transfer syntax
+#### Retrieve transfer syntax
When the requested transfer syntax is different from that of the original file, the original file is transcoded to the requested transfer syntax. The original file needs to be in one of the formats below for transcoding to succeed; otherwise, transcoding may fail:
An unsupported `transfer-syntax` will result in `406 Not Acceptable`.
### Retrieve metadata (for study, series, or instance)
-The following `Accept` header(s) are supported for retrieving metadata for a study, a series, or an instance:
+The following `Accept` header is supported for retrieving metadata for a study, a series, or an instance:
* `application/dicom+json`
-Retrieving metadata won't return attributes with the following value representations:
+Retrieving metadata will not return attributes with the following value representations:
| VR Name | Description | | : | : |
Retrieving metadata won't return attributes with the following value representat
Cache validation is supported using the `ETag` mechanism. In the response to a metadata request, the ETag is returned as one of the headers. This ETag can be cached and added as the `If-None-Match` header in later requests for the same metadata. Two types of responses are possible if the data exists:
-* Data hasn't changed since the last request: HTTP 304 (Not Modified) response will be sent with no response body.
-* Data has changed since the last request: HTTP 200 (OK) response will be sent with updated ETag. Required data will also be returned as part of the body.
+* Data hasn't changed since the last request: `HTTP 304 (Not Modified)` response will be sent with no response body.
+* Data has changed since the last request: `HTTP 200 (OK)` response will be sent with updated ETag. Required data will also be returned as part of the body.
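As an illustration of this cache-validation flow (not from the original statement), the sketch below fetches study metadata, caches the returned `ETag`, and replays the request with `If-None-Match`; a `304` response means the cached copy is still valid. The base URL, study UID, and token are placeholders, and the `requests` package is assumed.

```python
# Minimal sketch of ETag-based cache validation for the metadata endpoint.
# Base URL, study UID, and token are placeholders.
import requests

BASE = "https://<service_url>/v1"
STUDY = "<study-uid>"
HEADERS = {"Accept": "application/dicom+json", "Authorization": "Bearer <access-token>"}

# First request: returns the metadata plus an ETag header to cache.
first = requests.get(f"{BASE}/studies/{STUDY}/metadata", headers=HEADERS)
first.raise_for_status()
etag = first.headers.get("ETag")

# Later request: send the cached ETag; 304 means nothing changed.
later = requests.get(
    f"{BASE}/studies/{STUDY}/metadata",
    headers={**HEADERS, "If-None-Match": etag},
)
if later.status_code == 304:
    print("metadata unchanged; use the cached copy")
else:
    print("metadata updated; new ETag:", later.headers.get("ETag"))
```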
### Retrieve response status codes | Code | Description | | : | :- |
-| 200 (OK) | All requested data has been retrieved. |
-| 304 (Not Modified) | The requested data hasn't been modified since the last request. Content isn't added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. |
-| 400 (Bad Request) | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. |
-| 401 (Unauthorized) | The client isn't authenticated. |
-| 403 (Forbidden) | The user isn't authorized. |
-| 404 (Not Found) | The specified DICOM resource couldn't be found. |
-| 406 (Not Acceptable) | The specified `Accept` header isn't supported. |
-| 503 (Service Unavailable) | The service is unavailable or busy. Please try again later. |
+| `200 (OK)` | All requested data has been retrieved. |
+| `304 (Not Modified)` | The requested data hasn't been modified since the last request. Content isn't added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. |
+| `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | The specified DICOM resource couldn't be found. |
+| `406 (Not Acceptable)` | The specified `Accept` header isn't supported. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Please try again later. |
-## Search (QIDO-RS)
+### Search (QIDO-RS)
Query based on ID for DICOM Objects (QIDO) enables you to search for studies, series, and instances by attributes.
-| Method | Path | Description |
-| :-- | :- | :-- |
-| *Search for Studies* |
-| GET | ../studies?... | Search for studies |
-| *Search for Series* |
-| GET | ../series?... | Search for series |
-| GET |../studies/{study}/series?... | Search for series in a study |
-| *Search for Instances* |
-| GET |../instances?... | Search for instances |
-| GET |../studies/{study}/instances?... | Search for instances in a study |
+| Method | Path | Description |
+| :-- | :-- | : |
+| *Search for Studies* |
+| GET | ../studies?... | Search for studies |
+| *Search for Series* |
+| GET | ../series?... | Search for series |
+| GET |../studies/{study}/series?... | Search for series in a study |
+| *Search for Instances* |
+| GET |../instances?... | Search for instances |
+| GET |../studies/{study}/instances?... | Search for instances in a study |
| GET |../studies/{study}/series/{series}/instances?... | Search for instances in a series | The following `Accept` header(s) are supported for searching: -- `application/dicom+json`
+* `application/dicom+json`
### Supported search parameters The following parameters for each query are supported:
-| Key | Support Value(s) | Allowed Count | Description |
-| : | :- | : | :- |
-| `{attributeID}=` | {value} | 0...N | Search for attribute/ value matching in query. |
-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both, public and private tags are supported.<br/>When `all` is provided. Refer to [Search Response](#search-response) for more information about which attributes will be returned for each query type.<br/>If a mixture of {attributeID} and 'all' is provided, the server will default to using 'all'. |
-| `limit=` | {value} | 0..1 | Integer value to limit the number of values returned in the response.<br/>Value can be between the range 1 >= x <= 200. Defaulted to 100. |
-| `offset=` | {value} | 0..1 | Skip {value} results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response will be returned. |
-| `fuzzymatching=` | `true` \| `false` | 0..1 | If true fuzzy matching is applied to PatientName attribute. It will do a prefix word match of any name part inside PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" will all match. However, "ohn" won't match. |
+| Key | Support Value(s) | Allowed Count | Description |
+| :-- | :-- | : | :- |
+| `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes will be returned for each query type.<br/>If a mixture of `{attributeID}` and `all` is provided, the server will default to using `all`. |
+| `limit=` | `{value}` | 0..1 | Integer value to limit the number of values returned in the response.<br/>The value must be between 1 and 200 inclusive. The default is 100. |
+| `offset=` | `{value}` | 0..1 | Skip `{value}` results.<br/>If an offset larger than the number of search query results is provided, a 204 (no content) response will be returned. |
+| `fuzzymatching=` | `true` / `false` | 0..1 | If true, fuzzy matching is applied to the PatientName attribute. It does a prefix word match of any name part inside the PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" will all match. However, "ohn" won't match. |
#### Searchable attributes
We support searching the following attributes and search types.
| Attribute Keyword | Study | Series | Instance | | :- | :: | :-: | :: |
-| StudyInstanceUID | X | X | X |
-| PatientName | X | X | X |
-| PatientID | X | X | X |
-| AccessionNumber | X | X | X |
-| ReferringPhysicianName | X | X | X |
-| StudyDate | X | X | X |
-| StudyDescription | X | X | X |
-| SeriesInstanceUID | | X | X |
-| Modality | | X | X |
-| PerformedProcedureStepStartDate | | X | X |
-| SOPInstanceUID | | | X |
+| `StudyInstanceUID` | X | X | X |
+| `PatientName` | X | X | X |
+| `PatientID` | X | X | X |
+| `PatientBirthDate` | X | X | X |
+| `AccessionNumber` | X | X | X |
+| `ReferringPhysicianName` | X | X | X |
+| `StudyDate` | X | X | X |
+| `StudyDescription` | X | X | X |
+| `SeriesInstanceUID` | | X | X |
+| `Modality` | | X | X |
+| `PerformedProcedureStepStartDate` | | X | X |
+| `SOPInstanceUID` | | | X |
#### Search matching
We support the following matching types.
| Search Type | Supported Attribute | Example | | :- | : | : |
-| Range Query | StudyDate | {attributeID}={value1}-{value2}. For date/ time values, we supported an inclusive range on the tag. This will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. |
-| Exact Match | All supported attributes | {attributeID}={value1} |
-| Fuzzy Match | PatientName | Matches any component of the patient name that starts with the value. |
+| Range Query | `StudyDate`/`PatientBirthDate` | `{attributeID}={value1}-{value2}`. For date/ time values, we support an inclusive range on the tag. This will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
+| Exact Match | All supported attributes | `{attributeID}={value1}` |
+| Fuzzy Match | `PatientName`, `ReferringPhysicianName` | Matches any component of the name which starts with the value. |
#### Attribute ID
-Tags can be encoded in many ways for the query parameter. We've partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+Tags can be encoded in a number of ways for the query parameter. We've partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
| Value | Example | | : | : |
-| {group}{element} | 0020000D |
-| {dicomKeyword} | StudyInstanceUID |
+| `{group}{element}` | `0020000D` |
+| `{dicomKeyword}` | `StudyInstanceUID` |
-Example query searching for instances: **../instances?Modality=CT&00280011=512&includefield=00280010&limit=5&offset=0**
+Example query searching for instances:
+
+ `../instances?Modality=CT&00280011=512&includefield=00280010&limit=5&offset=0`
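The same example query can be issued over HTTP. The sketch below is not part of the original statement; it is a hypothetical client call using the `requests` package with a placeholder service URL and token, and the `application/dicom+json` Accept header.

```python
# Minimal sketch: QIDO-RS search for CT instances with 512 columns, also
# returning the Rows attribute, limited to 5 results. URL and token are placeholders.
import requests

BASE = "https://<service_url>/v1"

resp = requests.get(
    f"{BASE}/instances",
    params={
        "Modality": "CT",
        "00280011": "512",           # Columns
        "includefield": "00280010",  # Rows
        "limit": "5",
        "offset": "0",
    },
    headers={"Accept": "application/dicom+json", "Authorization": "Bearer <access-token>"},
)
if resp.status_code == 204:
    print("no matches")
else:
    resp.raise_for_status()
    print(len(resp.json()), "matching instances")
```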
### Search response
-The response will be an array of DICOM datasets. Depending on the resource, by *default* the following attributes are returned:
+The response will be an array of DICOM datasets. Depending on the resource, by *default* the following attributes are returned:
#### Default study tags | Tag | Attribute Name | | :-- | :- |
-| (0008, 0005) | SpecificCharacterSet |
-| (0008, 0020) | StudyDate |
-| (0008, 0030) | StudyTime |
-| (0008, 0050) | AccessionNumber |
-| (0008, 0056) | InstanceAvailability |
-| (0008, 0090) | ReferringPhysicianName |
-| (0008, 0201) | TimezoneOffsetFromUTC |
-| (0010, 0010) | PatientName |
-| (0010, 0020) | PatientID |
-| (0010, 0030) | PatientBirthDate |
-| (0010, 0040) | PatientSex |
-| (0020, 0010) | StudyID |
-| (0020, 000D) | StudyInstanceUID |
+| (0008, 0005) | `SpecificCharacterSet` |
+| (0008, 0020) | `StudyDate` |
+| (0008, 0030) | `StudyTime` |
+| (0008, 0050) | `AccessionNumber` |
+| (0008, 0056) | `InstanceAvailability` |
+| (0008, 0090) | `ReferringPhysicianName` |
+| (0008, 0201) | `TimezoneOffsetFromUTC` |
+| (0010, 0010) | `PatientName` |
+| (0010, 0020) | `PatientID` |
+| (0010, 0030) | `PatientBirthDate` |
+| (0010, 0040) | `PatientSex` |
+| (0020, 0010) | `StudyID` |
+| (0020, 000D) | `StudyInstanceUID` |
#### Default series tags | Tag | Attribute Name | | :-- | :- |
-| (0008, 0005) | SpecificCharacterSet |
-| (0008, 0060) | Modality |
-| (0008, 0201) | TimezoneOffsetFromUTC |
-| (0008, 103E) | SeriesDescription |
-| (0020, 000E) | SeriesInstanceUID |
-| (0040, 0244) | PerformedProcedureStepStartDate |
-| (0040, 0245) | PerformedProcedureStepStartTime |
-| (0040, 0275) | RequestAttributesSequence |
+| (0008, 0005) | `SpecificCharacterSet` |
+| (0008, 0060) | `Modality` |
+| (0008, 0201) | `TimezoneOffsetFromUTC` |
+| (0008, 103E) | `SeriesDescription` |
+| (0020, 000E) | `SeriesInstanceUID` |
+| (0040, 0244) | `PerformedProcedureStepStartDate` |
+| (0040, 0245) | `PerformedProcedureStepStartTime` |
+| (0040, 0275) | `RequestAttributesSequence` |
#### Default instance tags | Tag | Attribute Name | | :-- | :- |
-| (0008, 0005) | SpecificCharacterSet |
-| (0008, 0016) | SOPClassUID |
-| (0008, 0018) | SOPInstanceUID |
-| (0008, 0056) | InstanceAvailability |
-| (0008, 0201) | TimezoneOffsetFromUTC |
-| (0020, 0013) | InstanceNumber |
-| (0028, 0010) | Rows |
-| (0028, 0011) | Columns |
-| (0028, 0100) | BitsAllocated |
-| (0028, 0008) | NumberOfFrames |
+| (0008, 0005) | `SpecificCharacterSet` |
+| (0008, 0016) | `SOPClassUID` |
+| (0008, 0018) | `SOPInstanceUID` |
+| (0008, 0056) | `InstanceAvailability` |
+| (0008, 0201) | `TimezoneOffsetFromUTC` |
+| (0020, 0013) | `InstanceNumber` |
+| (0028, 0010) | `Rows` |
+| (0028, 0011) | `Columns` |
+| (0028, 0100) | `BitsAllocated` |
+| (0028, 0008) | `NumberOfFrames` |
-If includefield=all, the below attributes are included along with default attributes. Along with the default attributes, this is the full list of attributes supported at each resource level.
+If `includefield=all`, the attributes below are included in addition to the default attributes. Together with the default attributes, this is the full list of attributes supported at each resource level.
-#### Other study tags
+#### Additional Study tags
| Tag | Attribute Name | | :-- | :- |
-| (0008, 1030) | Study Description |
-| (0008, 0063) | AnatomicRegionsInStudyCodeSequence |
-| (0008, 1032) | ProcedureCodeSequence |
-| (0008, 1060) | NameOfPhysiciansReadingStudy |
-| (0008, 1080) | AdmittingDiagnosesDescription |
-| (0008, 1110) | ReferencedStudySequence |
-| (0010, 1010) | PatientAge |
-| (0010, 1020) | PatientSize |
-| (0010, 1030) | PatientWeight |
-| (0010, 2180) | Occupation |
-| (0010, 21B0) | AdditionalPatientHistory |
-
-#### Other series tags
+| (0008, 1030) | `Study Description` |
+| (0008, 0063) | `AnatomicRegionsInStudyCodeSequence` |
+| (0008, 1032) | `ProcedureCodeSequence` |
+| (0008, 1060) | `NameOfPhysiciansReadingStudy` |
+| (0008, 1080) | `AdmittingDiagnosesDescription` |
+| (0008, 1110) | `ReferencedStudySequence` |
+| (0010, 1010) | `PatientAge` |
+| (0010, 1020) | `PatientSize` |
+| (0010, 1030) | `PatientWeight` |
+| (0010, 2180) | `Occupation` |
+| (0010, 21B0) | `AdditionalPatientHistory` |
+
+#### Additional Series tags
| Tag | Attribute Name | | :-- | :- |
-| (0020, 0011) | SeriesNumber |
-| (0020, 0060) | Laterality |
-| (0008, 0021) | SeriesDate |
-| (0008, 0031) | SeriesTime |
+| (0020, 0011) | `SeriesNumber` |
+| (0020, 0060) | `Laterality` |
+| (0008, 0021) | `SeriesDate` |
+| (0008, 0031) | `SeriesTime` |
-The following attributes are returned:
+Along with those, the attributes below are returned:
* All the match query parameters and UIDs in the resource URL.
-* IncludeField attributes supported at that resource level.
-* If the target resource is All Series, then Study level attributes are also returned.
-* If the target resource is All Instances, then Study and Series level attributes are also returned.
-* If the target resource is Study's Instances, then Series level attributes are also returned.
+* `IncludeField` attributes supported at that resource level.
+* If the target resource is `All Series`, then `Study` level attributes are also returned.
+* If the target resource is `All Instances`, then `Study` and `Series` level attributes are also returned.
+* If the target resource is `Study's Instances`, then `Series` level attributes are also returned.
### Search response codes
The query API returns one of the following status codes in the response:
| Code | Description | | : | :- |
-| 200 (OK) | The response payload contains all the matching resources. |
-| 204 (No Content) | The search completed successfully but returned no results. |
-| 400 (Bad Request) | The server was unable to perform the query because the query component was invalid. Response body contains details of the failure. |
-| 401 (Unauthorized) | The client isn't authenticated. |
-| 403 (Forbidden) | The user isn't authorized. |
-| 503 (Service Unavailable) | The service is unavailable or busy. Please try again later. |
-
-### Extra notes
-
-* Querying using the `TimezoneOffsetFromUTC` (`00080201`) isn't supported.
-* The query API won't return 413 (request entity too large). If the requested query response limit is outside of the acceptable range, a bad request will be returned. Anything requested within the acceptable range will be resolved.
-* When target resource is study/series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest will win, and you can search only on the latest data.
-* Paged results are optimized to return the matched *newest* instance first. This may result in duplicate records in subsequent pages if newer data matching the query was added.
+| `200 (OK)` | The response payload contains all the matching resources. |
+| `204 (No Content)` | The search completed successfully but returned no results. |
+| `400 (Bad Request)` | The server was unable to perform the query because the query component was invalid. Response body contains details of the failure. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Please try again later. |
+
+### Additional notes
+
+* Querying using the `TimezoneOffsetFromUTC (00080201)` isn't supported.
+* The query API won't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request will be returned. Anything requested within the acceptable range will be resolved.
+* When the target resource is Study/Series, there's a potential for inconsistent study/series-level metadata across multiple instances. For example, two instances could have a different patientName. In this case, the latest will win, and you can search only on the latest data.
+* Paged results are optimized to return the newest matched instance first; this may result in duplicate records in subsequent pages if newer data matching the query was added.
* Matching is case-insensitive and accent-insensitive for PN VR types. * Matching is case-insensitive and accent-sensitive for other string VR types.
+* Only the first value of a single-valued data element that incorrectly has multiple values will be indexed.
-## Delete
+### Delete
This transaction isn't part of the official DICOMweb&trade; Standard. It uses the DELETE method to remove representations of studies, series, and instances from the store.
-| Method | Path | Description |
-| :-- | : | :- |
-| DELETE | ../studies/{study} | Delete all instances for a specific study. |
-| DELETE | ../studies/{study}/series/{series} | Delete all instances for a specific series within a study. |
+| Method | Path | Description |
+| :-- | : | :- |
+| DELETE | ../studies/{study} | Delete all instances for a specific study. |
+| DELETE | ../studies/{study}/series/{series} | Delete all instances for a specific series within a study. |
| DELETE | ../studies/{study}/series/{series}/instances/{instance} | Delete a specific instance within a series. |
-Parameters `study`, `series`, and `instance` correspond to the DICOM attributes StudyInstanceUID, SeriesInstanceUID, and SopInstanceUID respectively.
+Parameters `study`, `series`, and `instance` correspond to the DICOM attributes `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` respectively.
There are no restrictions on the request's `Accept` header, `Content-Type` header or body content. > [!NOTE]
-> After a Delete transaction, the deleted instances won't be recoverable.
+> After a Delete transaction, the deleted instances will not be recoverable.
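+
+For example, here's a minimal sketch of deleting a single instance with `curl`. The service URL, UIDs, and access token are placeholders, not values from this article:
+
+```bash
+# Delete one SOP instance from a series; all identifiers below are illustrative
+SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
+TOKEN="<access-token>"
+STUDY_UID="1.2.3.4.5"
+SERIES_UID="1.2.3.4.5.6"
+INSTANCE_UID="1.2.3.4.5.6.7"
+
+curl --request DELETE \
+  --header "Authorization: Bearer $TOKEN" \
+  "$SERVICE_URL/studies/$STUDY_UID/series/$SERIES_UID/instances/$INSTANCE_UID"
+# A 204 (No Content) status indicates the instance was deleted
+```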
### Response Status Codes | Code | Description | | : | :- |
-| 204 (No Content) | When all the SOP instances have been deleted. |
-| 400 (Bad Request) | The request was badly formatted. |
-| 401 (Unauthorized) | The client isn't authenticated. |
-| 403 (Forbidden) | The user isn't authorized. |
-| 404 (Not Found) | When the specified series wasn't found within a study, or the specified instance wasn't found within the series. |
-| 503 (Service Unavailable) | The service is unavailable or busy. Please try again later. |
+| `204 (No Content)` | When all the SOP instances have been deleted. |
+| `400 (Bad Request)` | The request was badly formatted. |
+| `401 (Unauthorized)` | The client isn't authenticated. |
+| `403 (Forbidden)` | The user isn't authorized. |
+| `404 (Not Found)` | When the specified series wasn't found within a study or the specified instance wasn't found within the series. |
+| `503 (Service Unavailable)` | The service is unavailable or busy. Please try again later. |
### Delete response payload The response body will be empty. The status code is the only useful information returned.
+## Worklist Service (UPS-RS)
+
+The DICOM service supports the Push and Pull SOPs of the [Worklist Service (UPS-RS)](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_11). This service provides access to one Worklist containing Workitems, each of which represents a Unified Procedure Step (UPS).
+
+Throughout, the variable `{workitem}` in a URI template stands for a Workitem UID.
+
+Available UPS-RS endpoints include:
+
+|Verb| Path | Description |
+|: |: |: |
+|POST| {s}/workitems{?AffectedSOPInstanceUID}| Create a work item|
+|POST| {s}/workitems/{instance}{?transaction}| Update a work item
+|GET| {s}/workitems{?query*} | Search for work items
+|GET| {s}/workitems/{instance}| Retrieve a work item
+|PUT| {s}/workitems/{instance}/state| Change work item state
+|POST| {s}/workitems/{instance}/cancelrequest | Cancel work item|
+|POST |{s}/workitems/{instance}/subscribers/{AETitle}{?deletionlock} | Create subscription|
+|POST| {s}/workitems/1.2.840.10008.5.1.4.34.5/ | Suspend subscription|
+|DELETE | {s}/workitems/{instance}/subscribers/{AETitle} | Delete subscription
+|GET | {s}/subscribers/{AETitle}| Open subscription channel |
+
+### Create Workitem
+
+This transaction uses the POST method to create a new Workitem.
+
+|Method| Path |Description|
+|:|:|:|
+| POST |../workitems| Create a Workitem.|
+| POST |../workitems?{workitem}| Create a Workitem with the specified UID.|
+
+If not specified in the URI, the payload dataset must contain the Workitem in the `SOPInstanceUID` attribute.
+
+The `Accept` and `Content-Type` headers are required in the request, and must both have the value `application/dicom+json`.
+
+There are a number of requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
+
+Notes on dataset attributes:
+
+* **SOP Instance UID:** Although the reference table above says that SOP Instance UID shouldn't be present, this guidance is specific to the DIMSE protocol and is handled differently in DICOMweb&trade;. SOP Instance UID should be present in the dataset if not in the URI.
+* **Conditional requirement codes:** All the conditional requirement codes including 1C and 2C are treated as optional.
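+
+As a minimal sketch of the request shape, the example below creates a Workitem with the UID supplied in the URI. The service URL, UID, token, and the local `workitem.json` file are placeholders, and a real payload must contain every attribute required by the table referenced above:
+
+```bash
+SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
+WORKITEM_UID="1.2.3.4.5.100"   # illustrative UID
+
+# workitem.json holds the DICOM JSON dataset for the new Workitem
+curl --request POST \
+  --header "Authorization: Bearer <access-token>" \
+  --header "Accept: application/dicom+json" \
+  --header "Content-Type: application/dicom+json" \
+  --data @workitem.json \
+  "$SERVICE_URL/workitems?$WORKITEM_UID"
+```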
+
+#### Create Response Status Codes
+
+|Code |Description|
+|:|:|
+|`201 (Created)`| The target Workitem was successfully created.|
+|`400 (Bad Request)`| There was a problem with the request. For example, the request payload didn't satisfy the requirements above.|
+|`401 (Unauthorized)`| The client isn't authenticated.
+|`403 (Forbidden)` | The user isn't authorized. |
+|`409 (Conflict)` |The Workitem already exists.
+|`415 (Unsupported Media Type)`| The provided `Content-Type` isn't supported.
+|`503 (Service Unavailable)`| The service is unavailable or busy. Please try again later.|
+
+#### Create Response Payload
+
+A success response will have no payload. The `Location` and `Content-Location` response headers will contain a URI reference to the created Workitem.
+
+A failure response payload will contain a message describing the failure.
+
+### Request Cancellation
+
+This transaction enables the user to request cancellation of a non-owned Workitem.
+
+There are [four valid Workitem states](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.1.1-1):
+
+* `SCHEDULED`
+* `IN PROGRESS`
+* `CANCELED`
+* `COMPLETED`
+
+This transaction will only succeed against Workitems in the `SCHEDULED` state. Any user can claim ownership of a Workitem by setting its Transaction UID and changing its state to `IN PROGRESS`. From then on, a user can only modify the Workitem by providing the correct Transaction UID. While UPS defines Watch and Event SOP classes that allow cancellation requests and other events to be forwarded, this DICOM service does not implement these classes, and so cancellation requests on workitems that are `IN PROGRESS` will return failure. An owned Workitem can be canceled via the [Change Workitem State](#change-workitem-state) transaction.
+
+|Method |Path| Description|
+|:|:|:|
+|POST |../workitems/{workitem}/cancelrequest| Request the cancellation of a scheduled Workitem|
+
+The `Content-Type` header is required, and must have the value `application/dicom+json`.
+
+The request payload may include Action Information as [defined in the DICOM Standard](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.2-1).
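+
+A minimal sketch of a cancellation request with `curl` follows. The service URL, UID, and token are placeholders, and the payload assumes the Reason For Cancellation attribute (0074,1238) from the Action Information table linked above:
+
+```bash
+SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
+WORKITEM_UID="1.2.3.4.5.100"   # illustrative UID
+
+# The payload below is optional Action Information in DICOM JSON form
+curl --request POST \
+  --header "Authorization: Bearer <access-token>" \
+  --header "Content-Type: application/dicom+json" \
+  --data '{"00741238":{"vr":"LT","Value":["No longer required"]}}' \
+  "$SERVICE_URL/workitems/$WORKITEM_UID/cancelrequest"
+```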
+
+#### Request Cancellation Response Status Codes
+
+|Code |Description|
+|:|:|
+|`202 (Accepted)`| The request was accepted by the server, but the Target Workitem state hasn't necessarily changed yet.|
+|`400 (Bad Request)`| There was a problem with the syntax of the request.|
+|`401 (Unauthorized)`| The client isn't authenticated.|
+|`403 (Forbidden)` | The user isn't authorized. |
+|`404 (Not Found)`| The Target Workitem wasn't found.
+|`409 (Conflict)`| The request is inconsistent with the current state of the Target Workitem. For example, the Target Workitem is in the **IN PROGRESS** or **COMPLETED** state.|
+|`415 (Unsupported Media Type)` |The provided `Content-Type` isn't supported.|
+
+#### Request Cancellation Response Payload
+
+A success response will have no payload, and a failure response payload will contain a message describing the failure. If the Workitem Instance is already in a canceled state, the response will include the following HTTP Warning header: `299: The UPS is already in the requested state of CANCELED.`
+
+### Retrieve Workitem
+
+This transaction retrieves a Workitem. It corresponds to the UPS DIMSE N-GET operation.
+
+Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.5
+
+If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) Attribute. This is necessary to preserve this Attribute's role as an access lock.
+
+|Method |Path |Description
+|:|:|:|
+|GET| ../workitems/{workitem}| Request to retrieve a Workitem
+
+The `Accept` header is required and must have the value `application/dicom+json`.
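+
+A minimal sketch of retrieving a Workitem with `curl` (the service URL, UID, and token are placeholders):
+
+```bash
+SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
+WORKITEM_UID="1.2.3.4.5.100"   # illustrative UID
+
+curl --request GET \
+  --header "Authorization: Bearer <access-token>" \
+  --header "Accept: application/dicom+json" \
+  "$SERVICE_URL/workitems/$WORKITEM_UID"
+```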
+
+#### Retrieve Workitem Response Status Codes
+
+|Code |Description|
+|: |:
+|`200 (OK)`| Workitem Instance was successfully retrieved.|
+|`400 (Bad Request)`| There was a problem with the request.|
+|`401 (Unauthorized)`| The client isn't authenticated.|
+|`403 (Forbidden)` | The user isn't authorized. |
+|`404 (Not Found)`| The Target Workitem wasn't found.|
+
+#### Retrieve Workitem Response Payload
+
+* A success response has a single part payload containing the requested Workitem in the Selected Media Type.
+* The returned Workitem shall not contain the Transaction UID (0008, 1195) attribute of the Workitem, since that should only be known to the Owner.
+
+### Update Workitem
+
+This transaction modifies attributes of an existing Workitem. It corresponds to the UPS DIMSE N-SET operation.
+
+Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.6
+
+To update a Workitem currently in the **SCHEDULED** state, the `Transaction UID` attribute shall not be present. For a Workitem in the **IN PROGRESS** state, the request must include the current Transaction UID as a query parameter. If the Workitem is already in the **COMPLETED** or **CANCELED** states, the response will be `400 (Bad Request)`.
+
+|Method |Path |Description
+|:|:|:|
+|POST| ../workitems/{workitem}?{transaction-uid}| Update Workitem Transaction|
+
+The `Content-Type` header is required, and must have the value `application/dicom+json`.
+
+The request payload contains a dataset with the changes to be applied to the target Workitem. When modifying a sequence, the request must include all Items in the sequence, not just the Items to be modified. When multiple Attributes need updating as a group, do this as multiple Attributes in a single request, not as multiple requests.
+
+There are a number of requirements related to DICOM data attributes in the context of a specific transaction. Attributes may be required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be found [in this table](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3).
+
+Notes on dataset attributes:
+
+* **Conditional requirement codes:** All the conditional requirement codes including 1C and 2C are treated as optional.
+
+* The request can't set the value of the Procedure Step State (0074,1000) attribute. Procedure Step State is managed using the Change State transaction, or the Request Cancellation transaction.
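+
+A minimal sketch of updating a Workitem that's still in the **SCHEDULED** state (so no Transaction UID is required) is shown below. The service URL, UID, token, and the updated attribute value are placeholders:
+
+```bash
+SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
+WORKITEM_UID="1.2.3.4.5.100"   # illustrative UID
+
+# 00741204 is Procedure Step Label; the new value is illustrative
+curl --request POST \
+  --header "Authorization: Bearer <access-token>" \
+  --header "Content-Type: application/dicom+json" \
+  --data '{"00741204":{"vr":"LO","Value":["Updated label"]}}' \
+  "$SERVICE_URL/workitems/$WORKITEM_UID"
+```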
+
+#### Update Workitem Transaction Response Status Codes
+
+|Code |Description|
+|:|:|
+|`200 (OK)`| The Target Workitem was updated.|
+|`400 (Bad Request)`| There was a problem with the request. For example: (1) the Target Workitem was in the `COMPLETED` or `CANCELED` state. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect. (4) the dataset didn't conform to the requirements.|
+|`401 (Unauthorized)`| The client isn't authenticated.|
+| `403 (Forbidden)` | The user isn't authorized. |
+|`404 (Not Found)`| The Target Workitem wasn't found.|
+|`409 (Conflict)` |The request is inconsistent with the current state of the Target Workitem.|
+|`415 (Unsupported Media Type)`| The provided `Content-Type` isn't supported.|
+
+#### Update Workitem Transaction Response Payload
+
+The origin server shall support header fields as required in [Table 11.6.3-2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#table_11.6.3-2).
+
+A success response shall have either no payload or a payload containing a Status Report document.
+
+A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
+
+### Change Workitem State
+
+This transaction is used to change the state of a Workitem. It corresponds to the UPS DIMSE N-ACTION operation "Change UPS State". State changes are used to claim ownership, complete, or cancel a Workitem.
+
+Refer to: https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7
+
+If the Workitem exists on the origin server, the Workitem shall be returned in an Acceptable Media Type. The returned Workitem shall not contain the Transaction UID (0008,1195) attribute. This is necessary to preserve this attribute's role as an access lock, as described earlier.
+
+|Method| Path| Description|
+|:|:|:|
+|PUT| ../workitems/{workitem}/state|Change Workitem State |
+
+The `Accept` header is required, and must have the value `application/dicom+json`.
+
+The request payload shall contain the Change UPS State Data Elements. These data elements are:
+
+* **Transaction UID (0008, 1195)** The request payload shall include a Transaction UID. The user agent creates the Transaction UID when requesting a transition to the `IN PROGRESS` state for a given Workitem. The user agent provides that Transaction UID in subsequent transactions with that Workitem.
+* **Procedure Step State (0074, 1000)** The legal values correspond to the requested state transition. They are: `IN PROGRESS`, `COMPLETED`, or `CANCELED`.
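+
+A minimal sketch of claiming ownership of a Workitem by moving it to `IN PROGRESS` follows. The service URL, UIDs, and token are placeholders:
+
+```bash
+SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
+WORKITEM_UID="1.2.3.4.5.100"      # illustrative UID
+TRANSACTION_UID="1.2.3.4.5.200"   # generated by the user agent and reused in later requests
+
+# 00081195 is Transaction UID, 00741000 is Procedure Step State
+curl --request PUT \
+  --header "Authorization: Bearer <access-token>" \
+  --header "Accept: application/dicom+json" \
+  --header "Content-Type: application/dicom+json" \
+  --data "{\"00081195\":{\"vr\":\"UI\",\"Value\":[\"$TRANSACTION_UID\"]},\"00741000\":{\"vr\":\"CS\",\"Value\":[\"IN PROGRESS\"]}}" \
+  "$SERVICE_URL/workitems/$WORKITEM_UID/state"
+```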
+
+#### Change Workitem State Response Status Codes
+
+|Code| Description|
+|:|:|
+|`200 (OK)`| The Workitem state was changed successfully.|
+|`400 (Bad Request)` |The request can't be performed for one of the following reasons: (1) the request is invalid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect.|
+|`401 (Unauthorized)` |The client isn't authenticated.|
+|`403 (Forbidden)` | The user isn't authorized. |
+|`404 (Not Found)`| The Target Workitem wasn't found.|
+|`409 (Conflict)`| The request is inconsistent with the current state of the Target Workitem.|
+
+#### Change Workitem State Response Payload
+
+* Responses will include the header fields specified in [section 11.7.3.2](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#sect_11.7.3.2).
+* A success response shall have no payload.
+* A failure response payload may contain a Status Report describing any failures, warnings, or other useful information.
+
+### Search Workitems
+
+This transaction enables you to search for Workitems by attributes.
+
+|Method |Path| Description|
+|:|:|:|
+|GET| ../workitems?| Search for Workitems|
+
+The following `Accept` header(s) are supported for searching:
+
+`application/dicom+json`
+
+#### Supported Search Parameters
+
+The following parameters for each query are supported:
+
+|Key |Supported Values| Allowed Count |Description|
+|: |: |: |:|
+|`{attributeID}=`| `{value}` |0...N |Search for attribute/value matching in query.|
+|`includefield=` |`{attributeID}`, `all`| 0...N |The additional attributes to return in the response. Only top-level attributes can be specified to be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes will be returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server will default to using `all`.|
+|`limit=`| `{value}`| 0...1| Integer value to limit the number of values returned in the response. The value can be in the range `1 <= x <= 200`, and defaults to `100`.|
+|`offset=`| `{value}`| 0...1| Skip `{value}` results. If an offset is provided larger than the number of search query results, a `204 (No Content)` response will be returned.|
+|`fuzzymatching=` |`true` / `false`| 0...1 |If true, fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It will do a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` will all match. However `ohn` will **not** match.|
+
+##### Searchable Attributes
+
+We support searching on these attributes:
+
+|Attribute Keyword|
+|:|
+|`PatientName`|
+|`PatientID`|
+|`ReferencedRequestSequence.AccessionNumber`|
+|`ReferencedRequestSequence.RequestedProcedureID`|
+|`ScheduledProcedureStepStartDateTime`|
+|`ScheduledStationNameCodeSequence.CodeValue`|
+|`ScheduledStationClassCodeSequence.CodeValue`|
+|`ScheduledStationGeographicLocationCodeSequence.CodeValue`|
+|`ProcedureStepState`|
+|`StudyInstanceUID`|
+
+##### Search Matching
+
+We support these matching types:
+
+|Search Type |Supported Attribute| Example|
+|:|:|:|
+|Range Query| `ScheduledProcedureStepStartDateTime`| `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This will be mapped to `attributeID >= {value1}` AND `attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid.
+|Exact Match |All supported attributes| `{attributeID}={value1}`
+|Fuzzy Match| `PatientName` |Matches any component of the name that starts with the value.
+
+> [!NOTE]
+> While we don't support full sequence matching, we do support exact match on the attributes listed above that are contained in a sequence.
+
+##### Attribute ID
+
+Tags can be encoded in a number of ways for the query parameter. We've partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+
+|Value |Example|
+|:|:|
+|`{group}{element}` |`00100010`|
+|`{dicomKeyword}` |`PatientName`|
+
+Example query:
+
+`../workitems?PatientID=K123&0040A370.00080050=1423JS&includefield=00404005&limit=5&offset=0`
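+
+For instance, a minimal sketch of running the query above with `curl` (the service URL and token are placeholders):
+
+```bash
+SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
+
+curl --request GET \
+  --header "Authorization: Bearer <access-token>" \
+  --header "Accept: application/dicom+json" \
+  "$SERVICE_URL/workitems?PatientID=K123&0040A370.00080050=1423JS&includefield=00404005&limit=5&offset=0"
+```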
+
+#### Search Response
+
+The response will be an array of `0...N` DICOM datasets with the following attributes returned:
+
+* All attributes in [DICOM PS3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1 or 2
+* All attributes in [DICOM PS3.4 Table CC.2.5-3](https://dicom.nema.org/medical/dicom/current/output/html/part04.html#table_CC.2.5-3) with a Return Key Type of 1C for which the conditional requirements are met
+* All other Workitem attributes passed as match parameters
+* All other Workitem attributes passed as includefield parameter values
+
+#### Search Response Codes
+
+The query API will return one of the following status codes in the response:
+
+|Code |Description|
+|:|:|
+|`200 (OK)`| The response payload contains all the matching resources.|
+|`206 (Partial Content)` | The response payload contains only some of the search results, and the rest can be requested through the appropriate request.|
+|`204 (No Content)`| The search completed successfully, but returned no results.|
+|`400 (Bad Request)`| There was a problem with the request. For example, invalid Query Parameter syntax. The response body contains details of the failure.|
+|`401 (Unauthorized)`| The client isn't authenticated.|
+|`403 (Forbidden)` | The user isn't authorized. |
+|`503 (Service Unavailable)` | The service is unavailable or busy. Please try again later.|
+
+#### Additional Notes
+
+* The query API won't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request will be returned. Anything requested within the acceptable range will be resolved.
+* Paged results are optimized to return the matched newest instance first. This may result in duplicate records in subsequent pages if newer data matching the query was added.
+* Matching is case insensitive and accent insensitive for PN VR types.
+* Matching is case insensitive and accent sensitive for other string VR types.
+* If a Workitem is canceled at the same time it's being queried, the query will most likely exclude the Workitem that's being updated, and the response code will be `206 (Partial Content)`.
+ ### Next Steps
-For information about the DICOM service, see
+For more information, see
>[!div class="nextstepaction"] >[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis Dicomweb Standard Apis With Dicom Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md
Previously updated : 03/22/2022 Last updated : 10/13/2022
This tutorial provides an overview of how to use DICOMweb&trade; Standard APIs w
The DICOM service supports a subset of DICOMweb&trade; Standard that includes:
-* Store (STOW-RS)
-* Retrieve (WADO-RS)
-* Search (QIDO-RS)
+* [Store (STOW-RS)](dicom-services-conformance-statement.md#store-stow-rs)
+* [Retrieve (WADO-RS)](dicom-services-conformance-statement.md#retrieve-wado-rs)
+* [Search (QIDO-RS)](dicom-services-conformance-statement.md#search-qido-rs)
+* [Delete](dicom-services-conformance-statement.md#delete)
Additionally, the following non-standard API(s) are supported:
-* Delete
-* Change Feed
+* [Change Feed](dicom-change-feed-overview.md)
+* [Extended Query Tags](dicom-extended-query-tags-overview.md)
To learn more about our support of DICOM Web Standard APIs, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md) reference document.
Once deployment is complete, you can use the Azure portal to navigate to the new
## Overview of various methods to use with DICOM service
-Because DICOM service is exposed as a REST API, you can access it using any modern development language. For language-agnostic information on working with the service, see [DICOM Conformance Statement](dicom-services-conformance-statement.md).
+Because DICOM service is exposed as a REST API, you can access it using any modern development language. For language-agnostic information on working with the service, see [DICOM Services Conformance Statement](dicom-services-conformance-statement.md).
To see language-specific examples, refer to the examples below. You can view Postman collection examples in several languages including:
To see language-specific examples, refer to the examples below. You can view Pos
* PowerShell * Python * Ruby
-* Swift.
+* Swift
### C#
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Azure Health Data Services enables you to:
**Linked services**
-Azure Health Data Services now supports multiple health data standards for the exchange of structured data. A single collection of Azure Health Data Services enables you to deploy multiple instances of different service types (FHIR, DICOM, and MedTech) that seamlessly work with one another. Services deployed within a workspace also share a compliance boundary and common configuration settings. The product scales automatically to meet the varying demands of your workloads, so you spend less time managing infrastructure and more time generating insights from health data.
+Azure Health Data Services supports multiple health data standards for the exchange of structured data. A single collection of Azure Health Data Services enables you to deploy multiple instances of different service types (FHIR, DICOM, and MedTech) that seamlessly work with one another. Services deployed within a workspace also share a compliance boundary and common configuration settings. The product scales automatically to meet the varying demands of your workloads, so you spend less time managing infrastructure and more time generating insights from health data.
-**Introducing DICOM service**
+**DICOM service**
-Azure Health Data Services now includes support for DICOM service. DICOM enables the secure exchange of image data and its associated metadata. DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. For more information about the DICOM service, see [Overview of DICOM](./dicom/dicom-services-overview.md).
+Azure Health Data Services includes support for the DICOM service. DICOM enables the secure exchange of image data and its associated metadata. DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. For more information about the DICOM service, see [Overview of DICOM](./dicom/dicom-services-overview.md).
-**Incremental changes to the FHIR service**
+**MedTech service**
+
+Azure Health Data Services includes support for the MedTech service. The MedTech service enables you to ingest high-frequency IoT device data into the FHIR Service in a scalable, secure, and compliant manner. For more information about MedTech, see [Overview of MedTech](../healthcare-apis/iot/iot-connector-overview.md).
+
+**FHIR service**
+
+Azure Health Data Services includes support for the FHIR service. The FHIR service enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. For more information about the FHIR service, see [Overview of FHIR](../healthcare-apis/fhir/overview.md).
For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in Azure API for FHIR. * **Support for transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](https://www.hl7.org/) and refer to batch/transaction interactions. * [Chained Search Improvements](./././fhir/overview-of-search.md#chained--reverse-chained-searching): Chained Search & Reserve Chained Search are no longer limited by 100 items per sub query.
-* The $convert-data operation can now transform JSON objects to FHIR R4.
+* The `$convert-data` operation can now transform JSON objects to FHIR R4.
* Events: Trigger new workflows when resources are created, updated, or deleted in a FHIR service.
To start working with the Azure Health Data Services, follow the 5-minute quick
> [!div class="nextstepaction"] > [Workspace overview](workspace-overview.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Deploy 06 New Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-06-new-deploy.md
# Part 3: Manual Deployment and Post-deployment of MedTech service
-When you are satisfied with your configuration and it has been successfully validated, you can complete the deployment and post-deployment process.
+When you're satisfied with your configuration and it has been successfully validated, you can complete the deployment and post-deployment process.
## Create your manual deployment
For more information about authorizing access to Event Hubs resources, see [Auth
### Grant access to the FHIR service
-The process for granting your MedTech service system-assigned managed identity access to your FHIR service requires the same 13 steps that you used to grant access to your device message event hub. The only exception will be a change to step 6. Your MedTech service system-assigned managed identity will require you to select the **View** button directly across from **FHIR Data Writer** access instead of the button across from **Azure Event Hubs Data Receiver**.
+The process for granting your MedTech service system-assigned managed identity access to your FHIR service requires the same 13 steps that you used to grant access to your device message event hub. There are two exceptions. The first is that, instead of navigating to the Access Control (IAM) menu from within your event hub (as outlined in steps 1-4), you should navigate to the equivalent Access Control (IAM) menu from within your FHIR service. The second exception is that, in step 6, your MedTech service system-assigned managed identity will require you to select the **View** button directly across from **FHIR Data Writer** access instead of the button across from **Azure Event Hubs Data Receiver**.
The **FHIR Data Writer** role provides read and write access to your FHIR service, which your MedTech service uses to access or persist data. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesnΓÇÖt know who's making the request, it will deny the request as unauthorized.
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/workspace-overview.md
Additionally, workspaces can be created using Azure Resource Manager deployment
You can use PowerShell, CLI, Terraform scripts, or the .NET SDK to deploy Azure Health Data Services. To create a service instance in the workspace, select **Create** (FHIR service, DICOM service, or MedTech service), and then enter the account details for that service instance that is being created. - ## FHIR service FHIR service includes FHIR APIs and endpoints in Azure for data access and storage in FHIR data
Deploy a DICOM service to bring medical imaging data into the cloud from any DIC
## MedTech service
-The IoT Connector service enables you to ingest high-frequency IoT device data into the FHIR Service in a scalable, secure, and compliant manner. For more information, see [the MedTech service documentation page](./iot/index.yml).
+The MedTech service enables you to ingest high-frequency IoT device data into the FHIR Service in a scalable, secure, and compliant manner. For more information, see [Overview of MedTech](../healthcare-apis/iot/iot-connector-overview.md).
## Workspace configuration settings
iot-develop Quickstart Devkit Renesas Rx65n Cloud Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit.md
ms.devlang: c Previously updated : 06/04/2021- Last updated : 10/14/2022+ # Quickstart: Connect a Renesas RX65N Cloud Kit to IoT Central
To connect the Renesas RX65N to Azure, you'll modify a configuration file for Wi
|Constant name|Value| |-|--|
- |`WIFI_SSID` |{*Your Wi-Fi ssid*}|
+ |`WIFI_SSID` |{*Your Wi-Fi SSID*}|
|`WIFI_PASSWORD` |{*Your Wi-Fi password*}| |`WIFI_MODE` |{*One of the enumerated Wi-Fi mode values in the file*}|
To connect the Renesas RX65N to Azure, you'll modify a configuration file for Wi
:::image type="content" source="media/quickstart-devkit-renesas-rx65n-cloud-kit/rfp-auth.png" alt-text="Screenshot of Renesas Flash Programmer, Authentication":::
-6. Select the *Browse...* button and locate the *rx65n_azure_iot.hex* file created in the previous section.
+6. Select the *Connect Settings* tab, select the *Speed* dropdown, and set the speed to 1,000,000 bps.
+ > [!IMPORTANT]
+ > If there are errors when you try to flash the board, you might need to reduce the speed in this setting to 750,000 bps or lower.
-7. Press *Start* to begin flashing. This process will take approximately 10 seconds.
+
+7. Select the *Operation* tab, then select the *Browse...* button and locate the *rx65n_azure_iot.hex* file created in the previous section.
+
+8. Press *Start* to begin flashing. This process takes less than a minute.
### Confirm device connection details
You can use the **Termite** app to monitor communication and confirm that your d
```output Starting Azure thread-
+
Initializing WiFi
- Connecting to SSID 'iot'
- SUCCESS: WiFi connected to iot
-
+ MAC address:
+ Firmware version 0.14
+ SUCCESS: WiFi initialized
+
+ Connecting WiFi
+ Connecting to SSID
+ Attempt 1...
+ SUCCESS: WiFi connected
+
Initializing DHCP
- IP address: 192.168.0.21
- Gateway: 192.168.0.1
+ IP address: 192.168.0.31
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized-
+
Initializing DNS client
- DNS address: 75.75.76.76
+ DNS address: 192.168.0.1
SUCCESS: DNS client initialized-
- Initializing SNTP client
- SNTP server 0.pool.ntp.org
- SNTP IP address: 45.79.214.107
- SNTP time update: May 21, 2021 20:24:10.76 UTC
+
+ Initializing SNTP time sync
+ SNTP server 0.pool.ntp.org
+ SNTP server 1.pool.ntp.org
+ SNTP time update: Oct 14, 2022 15:23:15.578 UTC
SUCCESS: SNTP initialized-
+
Initializing Azure IoT DPS client
- DPS endpoint: global.azure-devices-provisioning.net
- DPS ID scope: ***
- Registration ID: mydevice
+ DPS endpoint: global.azure-devices-provisioning.net
+ DPS ID scope:
+ Registration ID: mydevice
SUCCESS: Azure IoT DPS client initialized-
+
Initializing Azure IoT Hub client
- Hub hostname: ***.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgrx65ncloud;1
- Connected to IoT Hub
- SUCCESS: Azure IoT Hub client initialized
+ Hub hostname:
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgrx65ncloud;1
+ SUCCESS: Connected to IoT Hub
+
+ Receive properties: {"desired":{"$version":1},"reported":{"$version":1}}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=3{"deviceInformation":{"__t":"c","manufacturer":"Renesas","model":"RX65N Cloud Kit","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"RX65N","processorManufacturer":"Renesas","totalStorage":2048,"totalMemory":640}}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=5{"ledState":false}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=7{"telemetryInterval":{"ac":200,"av":1,"value":10}}
+
+ Starting Main loop
+ Telemetry message sent: {"humidity":29.37,"temperature":25.83,"pressure":92818.25,"gasResistance":151671.25}.
+ Telemetry message sent: {"accelerometerX":-887,"accelerometerY":236,"accelerometerZ":8272}.
+ Telemetry message sent: {"gyroscopeX":9,"gyroscopeY":1,"gyroscopeZ":4}.
``` Keep Termite open to monitor device output in the following steps.
To call a method in IoT Central portal:
:::image type="content" source="media/quickstart-devkit-renesas-rx65n-cloud-kit/iot-central-invoke-method.png" alt-text="Screenshot of calling a direct method on a device in IoT Central":::
-1. In the **State** dropdown, select **False**, and then select **Run**.. The LED light should turn off.
+1. In the **State** dropdown, select **False**, and then select **Run**. The LED light should turn off.
## View device information
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md
This tutorial also assumes that the device update takes place in a secure enviro
This tutorial is oriented toward a Windows-based workstation. However, you can perform the procedures on Linux. For a Linux example, see [Tutorial: Provision for geolatency](how-to-provision-multitenant.md).
+>[!NOTE]
+> If you've previously completed [Quickstart: Provision a simulated symmetric key device](quick-create-simulated-device-symm-key.md) and still have your Azure resources and development environment set up, you can proceed to [Create a symmetric key enrollment group](#create-a-symmetric-key-enrollment-group) in this tutorial.
+ ## Prerequisites * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
If you plan to continue working on and exploring the device client sample, don't
## Next steps
-Provision an X.509 certificate device:
+Provision multiple symmetric key devices using an enrollment group:
> [!div class="nextstepaction"]
-> [Quickstart - Provision an X.509 device using the Azure IoT C SDK](quick-create-simulated-device-x509.md)
+> [Tutorial: Provision devices using symmetric key enrollment groups](how-to-legacy-device-symm-key.md)
lab-services How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-determine-your-quota-usage.md
+
+ Title: How to determine your quota usage
+description: Learn how to determine where the cores for your subscription are used and if you have any spare capacity against your quota.
++ Last updated : 10/11/2022 +
+
+
+# Determine usage and quota
+
+Keeping track of how your quota of VM cores is being used across your subscriptions can be difficult. You may want to know what your current usage is, how much you have left, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the Usage + Quotas page.
+
+## Determine your usage and quota
+
+1. In the [Azure portal](https://portal.azure.com), go to the subscription you want to examine.
+
+1. On the Subscription page, under Settings, select **Usage + quotas**.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/subscription-overview.png" alt-text="Screenshot showing the Subscription overview left menu, with Usage and quotas highlighted.":::
+
+1. To view Usage + quotas information about Azure Lab Services, select **Azure Lab Services**.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/select-azure-lab-services.png" alt-text="Screenshot showing the Usage and quotas page, Compute drop-down, with Azure Lab Services highlighted.":::
+
+ >[!Tip]
+ >If you donΓÇÖt see any data on the Usage + quotas page, or youΓÇÖre not able to select Azure Lab Services from the list, your subscription may not be registered with Azure Lab Services.
+ >Follow the steps in the Register your subscription with Azure Lab Services section below to resolve the issue.
+
+1. In this example, you can see the **Quota name**, the **Region**, the **Subscription** the quota is assigned to, and the **Current Usage**.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/example-subscription.png" alt-text="Screenshot showing the Usage and quotas page, with column headings highlighted.":::
+
+1. You can also see that the usage is grouped by level: regular, low, and no usage. Within the usage levels, the items are grouped by size (Small, Medium, and Large cores) and by labs and lab plans.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/example-subscription-groups.png" alt-text="Screenshot showing the Usage and quotas page, with VM size groups highlighted.":::
+
+1. To view quota and usage information for specific regions, select **Region:** and select the regions to display, and then select **Apply**.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/select-regions.png" alt-text="Screenshot showing the Usage and quotas page, with Regions drop down highlighted.":::
+
+1. To view only the items that are using part of your quota, select **Usage:**, and then select **Only items with usage**.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/select-items-with-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Usage drop down and Only show items with usage option highlighted.":::
+
+1. To view items that are using above a certain amount of your quota, select **Usage:**, and then select **Select custom usage**.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/select-custom-usage-before.png" alt-text="Screenshot showing the Usage and quotas page, with Usage drop down and Select custom usage option highlighted.":::
+
+1. You can then set a custom usage threshold, so only the items using above the specified percentage of the quota are displayed.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/select-custom-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Select custom usage option and configuration settings highlighted.":::
+
+1. Select **Apply**.
+
+ Each subscription has its own Usage + quotas page, which covers all the various services in the subscription, not just Azure Lab Services. Although you can request a quota increase from the Usage + quotas page, you'll have more relevant information at hand if you make your request from your lab plan page.
+
+## Register your subscription with Azure Lab Services
+
+In most cases, Azure Lab Services will register your subscription when you perform certain actions, like creating a lab plan. In some cases, you must register your subscription with Azure Lab Services manually before you can view your usage and quota information on the Usage + Quotas page.
+
+### To register your subscription:
+
+1. In the [Azure portal](https://portal.azure.com), go to the subscription you want to examine.
+
+1. On the Subscription page, under Settings, select **Usage + quotas**.
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/subscription-overview.png" alt-text="Screenshot showing the Subscription overview left menu, with Usage and quotas highlighted.":::
+
+1. If you arenΓÇÖt registered with Azure Lab Services, youΓÇÖll see the message *ΓÇ£One or more resource providers selected are not registered for this subscription. To access your quotas, register the resource provider.ΓÇ¥*
+
+ :::image type="content" source="media/how-to-determine-your-core-usage/register-to-service.png" alt-text="Screenshot showing the register with service message and link to register the resource provider highlighted.":::
+
+1. Select the link and follow the instructions to register your account with Azure Lab Services.
+
+## Next steps
+
+- Learn more about [Capacity limits in Azure Lab Services](./capacity-limits.md).
+- Learn how to [Request a core limit increase](./how-to-request-capacity-increase.md).
+
+
lab-services How To Request Capacity Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-request-capacity-increase.md
description: Learn how to request a core limit (quota) increase to expand capaci
Previously updated : 08/26/2022 Last updated : 10/06/2022
-<!-- As a lab administrator, I want more cores available for my subscription so that I can support more students. -->
+<!-- As a lab administrator, I want more VM cores available for my subscription so that I can support more students. -->
# Request a core limit increase
-If you reach the cores limit for your subscription, you can request a limit increase to continue using Azure Lab Services. The request process allows the Azure Lab Services team to ensure your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
+If you reach the cores limit for your subscription, you can request a core limit increase (sometimes called an increase in capacity, or a quota increase) to continue using Azure Lab Services. The request process allows the Azure Lab Services team to ensure your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
For information about creating support requests in general, see how to create a [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). ## Prepare to submit a request
-Before you begin your request for a capacity increase, you should make sure that you have all the information you need available and verify that you have the appropriate permissions. Review this article, and gather information like the number and size of cores you want to add, the regions you can use, and the location of resources like your existing labs and virtual networks.
+Before you begin your request for a core limit increase, you should make sure that you have all the information you need available and verify that you have the appropriate permissions. Gather information like the number and size of cores you want to add, the regions you can use, and the location of resources like your existing labs and virtual networks.
### Permissions To create a support request, you must be assigned to one of the following roles at the subscription level:
Azure Lab Services resources can exist in many regions. You can choose to deploy
### Determine the total number of cores in your request
-Your capacity can be divided amongst virtual machines (VMs) of different sizes. You must calculate the total number of cores for each size. You must include the number of cores you already have, and the number of cores you want to add to determine the total number of cores. You must then map the total number of cores you want to the SKU size groups listed below.
+Your cores can be divided amongst virtual machines (VMs) of different sizes. You must calculate the total number of cores for each size. You must include the number of cores you already have, and the number of cores you want to add to determine the total number of cores. You must then map the total number of cores you want to the SKU size groups listed below.
**Size groups**
To determine the total number of cores for your request, you must:
3. Map to SKU group and sum all cores under each group 4. Enter the resulting total number of cores for each group in your request
-As an example, suppose you have existing VMs and want to request more as shown in the following table:
+As an example, suppose you have existing VMs, and want to request more as shown in the following table:
| Size | Existing VMs | Additional VMs required | Total VMs | |--|--|--|--|
Calculate the total number of cores for each size group.
:::image type="content" source="./media/how-to-request-capacity-increase/total-cores-grouped.png" alt-text="Screenshot showing the total number of cores in each group."::: -
-Remember that the total number of cores = existing cores + desired cores.
-
-### Locate and copy lab plan resource ID
-Complete this step if you want to extend a lab plan in the updated version of Lab Services (August 2022).
-
-To add extra capacity to an existing subscription, you must specify a lab plan resource ID when you make the request. Although a lab plan is needed to make a capacity request, the actual capacity is assigned to your subscription, so you can use it where you need it. Capacity is not tied to individual lab plans. This means that you can delete all your lab plans and still have the same capacity assigned to your subscription.
-
-Use the following steps to locate and copy the lab plan resource ID so that you can paste it into your support request.
-1. In the [Azure portal](https://portal.azure.com), navigate to the lab plan to which you want to add cores.
-
-1. Under Settings, select Properties, and then copy the **Resource ID**.
- :::image type="content" source="./media/how-to-request-capacity-increase/resource-id.png" alt-text="Screenshot showing the lab plan properties with resource ID highlighted.":::
-
-1. Paste the resource ID into a document for safekeeping; you'll need it to complete the support request.
+Remember that the total number of cores is the sum of your existing cores and the new cores you're requesting.
## Start a new support request
-You can follow these steps to request a limit increase:
+You can follow these steps to request a core limit increase:
-1. In the Azure portal, in Support & Troubleshooting, select **Help + support**
- :::image type="content" source="./media/how-to-request-capacity-increase/support-troubleshooting.png" alt-text="Screenshot of the Azure portal showing Support & troubleshooting with Help + support highlighted.":::
-1. On the Help + support page, select **Create support request**.
- :::image type="content" source="./media/how-to-request-capacity-increase/create-support-request.png" alt-text="Screenshot of the Help + support page with Create support request highlighted.":::
+1. In the Azure portal, navigate to your lab plan or lab account, and then select **Create support request**.
+ :::image type="content" source="./media/how-to-request-capacity-increase/request-from-lab-plan.png" alt-text="Screenshot of the Help + support page with Create support request highlighted.":::
1. On the New support request page, use the following information to complete the **Problem description**, and then select **Next**.
You can follow these steps to request a limit increase:
:::image type="content" source="./media/how-to-request-capacity-increase/enter-details-link.png" alt-text="Screenshot of the Additional details page with Enter details highlighted."::: ## Make core limit increase request
-When you request core limit increase (sometimes called an increase in capacity), you must supply some information to help the Azure Lab Services team evaluate and action your request as quickly as possible. The more information you can supply and the earlier you supply it, the more quickly the Azure Lab Services team will be able to process your request.
+When you request core limit increase, you must supply some information to help the Azure Lab Services team evaluate and action your request as quickly as possible. The more information you can supply and the earlier you supply it, the more quickly the Azure Lab Services team will be able to process your request.
You need to specify different information depending on the version of Azure Lab Services you're using. The information required for the lab accounts used in original version of Lab Services (May 2019) and the lab plans used in the updated version of Lab Services (August 2022) is detailed on the tabs below. Use the appropriate tab to guide you as you complete the **Quota details** for your lab account or lab plan.
-#### [**Lab Accounts (Classic) - May 2019 version**](#tab/LabAccounts/)
+#### [**Lab Account (Classic) - May 2019 version**](#tab/LabAccounts/)
|Name |Value | |||
You need to specify different information depending on the version of Azure Lab
#### [**Lab Plans - August 2022 version**](#tab/Labplans/) |Name |Value | ||| |**Deployment Model**|Select **Lab Plan**|
+ |**What is the ideal date to have this by?**|It's best to request a core limit increase well before you need the new cores. Use this option to specify when the new cores are required.|
|**Region**|Enter the preferred location or region where you want the extra cores.|
- |**Alternate region**|If you're flexible with the location of your cores, you can select alternate regions.|
+ |**Alternate regions**|If you're flexible with the location of your cores, you can select alternate regions.|
|**If you plan to use the new capacity with advanced networking, what region does your virtual network reside in?**|If your lab plan uses advanced networking, you must specify the region your virtual network resides in.|
- |**Virtual Machine Size**|Select the virtual machine size that you require for the new cores.|
- |**Requested total core limit**|Enter the total number of cores you require; your existing cores + the number you're requesting.|
- |**What is the minimum number of cores you can start with?**|Your new cores may be made available gradually. Enter the minimum number of cores you require.|
- |**What's the ideal date to have this by? (MM/DD/YYY)**|Enter the date on which you want the extra cores to be available.|
- |**Is this for an existing lab or to create a new lab?**|Select **Existing lab** or **New lab**. </br> If you're adding cores to an existing lab, enter the lab's resource ID.|
+ |**Virtual machine size**|Select the virtual machine size(s) that you require for the new cores.|
+ |**Requested total core limit**|Enter the total number of cores you require; your existing cores + the number you're requesting. See [Determine the total number of cores in your request](#determine-the-total-number-of-cores-in-your-request) to learn how to calculate the total number of cores for each size group.|
+ |**Is this for an existing lab or to create a new lab?**|Select **Existing lab** or **New lab**.|
+ |**What is the lab account name?**|Only applies if you're adding cores to an existing lab. Select the lab account name.|
|**What is the month-by-month usage plan for the requested cores?**|Enter the rate at which you want to add the extra cores, on a monthly basis.| |**Additional details**|Answer the questions in the additional details box. The more information you can provide here, the easier it will be for the Azure Lab Services team to process your request. |
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Learn more at [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlf
## Training MLflow projects (preview) - You can submit training jobs to Azure Machine Learning by using [MLflow projects](https://www.mlflow.org/docs/latest/projects.html) (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your jobs to the cloud via [Azure Machine Learning compute](./how-to-create-attach-compute-cluster.md).
-Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
+Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning](how-to-train-mlflow-projects.md).
## MLflow SDK, Azure Machine Learning v2, and Azure Machine Learning studio capabilities
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
It's a shared responsibility between you and Microsoft to ensure that your envir
### Compute instance
-* Compute instances get latest VM images at time of provisioning.
-* Microsoft doesn't provide active OS patching for compute instance. To obtain the latest VM image, delete and recreate the compute instance.
-* You could use set up scripts to install extra scanning software. Azure Defender agents are currently not supported.
-* To query resource age, you could use the following log analytics query:
-
- ```kusto
- AmlComputeClusterEvent
- | where ClusterType == "DSI" and EventType =="CreateOperationCompleted" and split(_ResourceId, "/")[-1]=="<wsname>"
- | project ClusterName, TimeCreated=TimeGenerated
- | summarize Last_Time_Created=arg_max(TimeCreated, *) by ClusterName
- | join kind = leftouter (AmlComputeClusterEvent
- | where ClusterType == "DSI" and EventType =="DeleteOperationCompleted"
- | project ClusterName, TimeGenerated
- | summarize Last_Time_Deleted=arg_max(TimeGenerated, *) by ClusterName)
- on ClusterName
- | where (Last_Time_Created>Last_Time_Deleted or isnull(Last_Time_Deleted)) and Last_Time_Created < ago(30days)
- | project ClusterName, Last_Time_Created, Last_Time_Deleted
- ```
+Compute instances get the latest VM images at the time of provisioning. Microsoft releases new VM images on a monthly basis. Once a compute instance is deployed, it does not get actively updated. To keep current with the latest software updates and security patches, you could:
+
+1. Recreate a compute instance to get the latest OS image (recommended)
+
+ * Data and customizations such as installed packages that are stored on the instanceΓÇÖs OS and temporary disks will be lost.
+ * [Store notebooks under "User files"](/azure/machine-learning/concept-compute-instance#accessing-files) to persist them when recreating your instance.
+ * [Mount data using datasets and datastores](/azure/machine-learning/v1/concept-azure-machine-learning-architecture#datasets-and-datastores) to persist files when recreating your instance.
+ * See [Compute Instance release notes](azure-machine-learning-ci-image-release-notes.md) for details on image releases.
+
+1. Alternatively, regularly update OS and python packages.
+
+ * Use Linux package management tools to update the package list with the latest versions.
+
+ ```bash
+ sudo apt-get update
+ ```
+
+ * Use Linux package management tools to upgrade packages to the latest versions. Note that package conflicts might occur using this approach.
+
+ ```bash
+ sudo apt-get upgrade
+ ```
+
+ * Use Python package management tools to upgrade packages and check for updates.
+
+ ```bash
+ pip list --outdated
+ ```
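+
+    For example, to apply an update to a single package (the package name below is a placeholder):
+
+    ```bash
+    pip install --upgrade <package-name>
+    ```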
+
+You may install and run additional scanning software on the compute instance to scan for security issues.
+
+* [Trivy](https://github.com/aquasecurity/trivy) may be used to discover OS and Python package-level vulnerabilities.
+* [ClamAV](https://www.clamav.net/) may be used to discover malware and comes pre-installed on the compute instance.
+* Defender for Server agent installation is currently not supported.
+* Consider using [customization scripts](/azure/machine-learning/how-to-customize-compute-instance) for automation, as illustrated in the sketch after this list.
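For illustration only, here is a minimal sketch of how the scanners above might be invoked from an automated job or customization script. It assumes Trivy has already been installed on the instance, and the scanned paths are placeholders you would adjust for your environment.

```python
import subprocess

def run_scanner(command):
    """Run a scanner and return its exit code and combined output."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr

# Trivy scan of the conda environments directory for HIGH/CRITICAL Python package
# vulnerabilities (the path is an assumption; adjust it for your environment).
trivy_code, trivy_report = run_scanner(
    ["trivy", "fs", "--severity", "HIGH,CRITICAL", "/anaconda/envs"]
)

# ClamAV recursive malware scan of a home directory (ClamAV comes pre-installed).
clam_code, clam_report = run_scanner(["clamscan", "-r", "--infected", "/home/azureuser"])

print(trivy_report)
print(clam_report)
```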
### Compute clusters
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Last updated 03/15/2022
#Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
-# Set up AutoML to train a natural language processing model (preview)
+# Set up AutoML to train a natural language processing model
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"] > * [v1](./v1/how-to-auto-train-nlp-models-v1.md) > * [v2 (current version)](how-to-auto-train-nlp-models.md) In this article, you learn how to train natural language processing (NLP) models with [automated ML](concept-automated-ml.md) in Azure Machine Learning. You can create NLP models with automated ML via the Azure Machine Learning Python SDK v2 or the Azure Machine Learning CLI v2.
For CLI v2 AutoML jobs you configure your experiment in a YAML file like the fol
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-For AutoML jobs via the SDK, you configure the job with the specific NLP task function. The following example demonstrates the configuration for `text_classification`.
+For AutoML jobs via the SDK, you configure the job with the specific NLP task function. The following example demonstrates the configuration for `text_classification`.
```Python # general job parameters compute_name = "gpu-cluster"
You can also run your NLP experiments with distributed training on an Azure ML c
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] + This is handled automatically by automated ML when the parameters `max_concurrent_iterations = number_of_vms` and `enable_distributed_dnn_training = True` are provided in your `AutoMLConfig` during experiment setup. Doing so, schedules distributed training of the NLP models and automatically scales to every GPU on your virtual machine or cluster of virtual machines. The max number of virtual machines allowed is 32. The training is scheduled with number of virtual machines that is in powers of two. ```python
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZUR
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] + With the `MLClient` created earlier, you can run this `CommandJob` in the workspace. ```python
https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/au
## Model sweeping and hyperparameter tuning (preview) + AutoML NLP allows you to provide a list of models and combinations of hyperparameters, via the hyperparameter search space in the config. Hyperdrive generates several child runs, each of which is a fine-tuning run for a given NLP model and set of hyperparameter values that were chosen and swept over based on the provided search space. ## Supported model algorithms
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
> * [v1](./v1/how-to-configure-auto-train-v1.md) > * [v2 (current version)](how-to-configure-auto-train.md) - In this guide, learn how to set up an automated machine learning, AutoML, training job with the [Azure Machine Learning Python SDK v2](/python/api/overview/azure/ml/intro). Automated ML picks an algorithm and hyperparameters for you and generates a model ready for deployment. This guide provides details of the various options that you can use to configure automated ML experiments. If you prefer a no-code experience, you can also [Set up no-code AutoML training in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md).
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
-# Train ML models with MLflow Projects and Azure Machine Learning (preview)
-
+# Train ML models with MLflow Projects and Azure Machine Learning
In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support. You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud via [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Local deployment is deploying a model to a local Docker environment. Local deplo
Local deployment supports creation, update, and deletion of a local endpoint. It also allows you to invoke and get logs from the endpoint.
-# [Azure CLI](#tab/cli)
+## [Azure CLI](#tab/cli)
To use local deployment, add `--local` to the appropriate CLI command:
To use local deployment, add `--local` to the appropriate CLI command:
az ml online-deployment create --endpoint-name <endpoint-name> -n <deployment-name> -f <spec_file.yaml> --local ```
-# [Python SDK](#tab/python)
+## [Python SDK](#tab/python)
To use local deployment, add `local=True` parameter in the command:
Below is a list of common deployment errors that are reported as part of the dep
### ERROR: ImageBuildFailure
-This error is returned when the environment (docker image) is being built. You can check the build log for more information on the failure(s). The build log is located in the default storage for your Azure Machine Learning workspace. The exact location is returned as part of the error. For example, 'The build log is available in the workspace blob store "storage-account-name" under the path "/azureml/ImageLogs/your-image-id/build.log"'. In this case, "azureml" is the name of the blob container in the storage account.
+This error is returned when the environment (docker image) is being built. You can check the build log for more information on the failure(s). The build log is located in the default storage for your Azure Machine Learning workspace. The exact location may be returned as part of the error. For example, "The build log is available in the workspace blob store '[storage-account-name]' under the path '/azureml/ImageLogs/your-image-id/build.log'". In this case, "azureml" is the name of the blob container in the storage account.
+
+Below is a list of common image build failure scenarios:
+
+* [Azure Container Registry (ACR) authorization failure](#container-registry-authorization-failure)
+* [Generic or unknown failure](#generic-image-build-failure)
+
+#### Container registry authorization failure
-If no obvious error is found in the build log, and the last line is `Installing pip dependencies: ...working...`, then the error may be caused by a dependency. Pinning version dependencies in your conda file could fix this problem.
+If the error message mentions `"container registry authorization failure"`, that means the container registry could not be accessed with the current credentials.
+This can be caused by a workspace resource's keys being out of sync; it can take some time for them to synchronize automatically.
+However, you can [manually call for a synchronization of keys](https://learn.microsoft.com/cli/azure/ml/workspace#az-ml-workspace-sync-keys) which may resolve the authorization failure.
-We also recommend using a [local deployment](#deploy-locally) to test and debug your models locally before deploying in the cloud.
+Container registries that are behind a virtual network may also encounter this error if set up incorrectly. You must verify that the virtual network has been set up properly.
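As a sketch only, the key synchronization mentioned above can also be triggered from the Python SDK v2. The workspace identifiers are placeholders, and the availability of the `begin_sync_keys` operation on your installed `azure-ai-ml` version is an assumption.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Connect to the workspace (subscription, resource group, and workspace name are placeholders).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Ask the service to resynchronize the workspace resource keys (assumed SDK operation),
# then wait for the long-running operation to complete.
poller = ml_client.workspaces.begin_sync_keys(name="<workspace-name>")
poller.wait()
```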
+
+#### Generic image build failure
+
+As stated above, you can check the build log for more information on the failure.
+If no obvious error is found in the build log and the last line is `Installing pip dependencies: ...working...`, then the error may be caused by a dependency. Pinning version dependencies in your conda file can fix this problem.
+
+We also recommend [deploying locally](#deploy-locally) to test and debug your models locally before deploying to the cloud.
### ERROR: OutOfQuota
If your container could not start, this means scoring could not happen. It might
To get the exact reason for an error, run:
-# [Azure CLI](#tab/cli)
+#### [Azure CLI](#tab/cli)
```azurecli az ml online-deployment get-logs -e <endpoint-name> -n <deployment-name> -l 100 ```
-# [Python SDK](#tab/python)
+#### [Python SDK](#tab/python)
```python ml_client.online_deployments.get_logs(
Make sure the model is registered to the same workspace as the deployment. Use t
- For example:
- # [Azure CLI](#tab/cli)
+ #### [Azure CLI](#tab/cli)
```azurecli az ml model show --name <model-name> --version <version> ```
- # [Python SDK](#tab/python)
+ #### [Python SDK](#tab/python)
```python ml_client.models.get(name="<model-name>", version=<version>)
You can also check if the blobs are present in the workspace storage account.
- If the blob is present, you can use this command to obtain the logs from the storage initializer:
- # [Azure CLI](#tab/cli)
+ #### [Azure CLI](#tab/cli)
```azurecli az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> --container storage-initializer ```
- # [Python SDK](#tab/python)
+ #### [Python SDK](#tab/python)
```python ml_client.online_deployments.get_logs(
When you access online endpoints with REST requests, the returned status codes a
| | | | | 200 | OK | Your model executed successfully, within your latency bound. | | 401 | Unauthorized | You don't have permission to do the requested action, such as score, or your token is expired. |
-| 404 | Not found | Your URL isn't correct. |
+| 404 | Not found | The endpoint doesn't have any valid deployment with positive weight. |
| 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config.| | 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
-| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow maximum `max_concurrent_requests_per_instance` * `instance_count` / `request_process_time (in seconds)` requests per second. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. Apart from enable auto-scaling, you could also increase the number of instances by using the below [code](#how-to-calculate-instance-count). |
+| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow a maximum of 2 * `max_concurrent_requests_per_instance` * `instance_count` / `request_process_time (in seconds)` requests per second. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`, respectively. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff), as illustrated in the sketch after this table. Doing so can give the system time to adjust. Apart from enabling auto-scaling, you could also increase the number of instances by using the [code](#how-to-calculate-instance-count) below. |
| 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints.| | 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
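To illustrate the exponential backoff suggestion in the 429 row above, here is a minimal client-side sketch. The scoring URI format and the key handling are placeholders for your own endpoint, and the retry parameters are arbitrary.

```python
import random
import time
import urllib.error
import urllib.request

# Placeholder scoring endpoint and key; replace with your own values.
SCORING_URI = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
API_KEY = "<endpoint-key>"

def invoke_with_backoff(body: bytes, max_retries: int = 5) -> bytes:
    """Retry a scoring request with exponential backoff when the endpoint returns 429."""
    for attempt in range(max_retries):
        request = urllib.request.Request(
            SCORING_URI,
            data=body,
            headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"},
        )
        try:
            with urllib.request.urlopen(request) as response:
                return response.read()
        except urllib.error.HTTPError as error:
            if error.code != 429:
                raise
            # Back off exponentially (1 s, 2 s, 4 s, ...) with a little jitter.
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("Endpoint is still throttling after all retries")
```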
To increase the number of instances, you could calculate the required replicas f
```python
from math import ceil
# target requests per second
-target_qps = 20
+target_rps = 20
# time to process the request (in seconds)
request_process_time = 10
# Maximum concurrent requests per instance
max_concurrent_requests_per_instance = 1
# The target CPU usage of the model container. 70% in this example
target_utilization = .7
-concurrent_requests = target_qps * request_process_time / target_utilization
+concurrent_requests = target_rps * request_process_time / target_utilization
# Number of instances required
instance_count = ceil(concurrent_requests / max_concurrent_requests_per_instance)
```
machine-learning Reference Yaml Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/schedule.schema.json. - [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)] ## YAML syntax
machine-learning Concept Mlflow V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-mlflow-v1.md
With MLflow Tracking, you can connect Azure Machine Learning as the back end of
+ [Track Azure Databricks training runs](../how-to-use-mlflow-azure-databricks.md).
-## Train MLflow projects (preview)
+## Train MLflow projects
+You can use MLflow Tracking to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning back-end support. You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud via [Azure Machine Learning compute](../how-to-create-attach-compute-cluster.md).
-You can use MLflow Tracking to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning back-end support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud via [Azure Machine Learning compute](../how-to-create-attach-compute-cluster.md).
-
-Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning (preview)](../how-to-train-mlflow-projects.md).
+Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning](../how-to-train-mlflow-projects.md).
## Deploy MLflow experiments
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
This quickstart demonstrates how to use the Azure CLI commands to create a clust
## Connect to your cluster
-Azure Managed Instance for Apache Cassandra does not create nodes with public IP addresses. To connect to your newly created Cassandra cluster, you must create another resource inside the virtual network. This resource can be an application, or a virtual machine with Apache's open-source query tool [CQLSH](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html) installed. You can use a [Resource Manager template](https://azure.microsoft.com/resources/templates/vm-simple-linux/) to deploy an Ubuntu virtual machine. After it's deployed, use SSH to connect to the machine and install CQLSH as shown in the following commands:
+Azure Managed Instance for Apache Cassandra does not create nodes with public IP addresses. To connect to your newly created Cassandra cluster, you must create another resource inside the virtual network. This resource can be an application, or a virtual machine with Apache's open-source query tool [CQLSH](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html) installed. You can use a [Resource Manager template](https://azure.microsoft.com/resources/templates/vm-simple-linux/) to deploy an Ubuntu virtual machine.
+
+### Connecting from CQLSH
+
+After the virtual machine is deployed, use SSH to connect to the machine and install CQLSH as shown in the following commands:
```bash # Install default-jre and default-jdk
initial_admin_password="Password provided when creating the cluster"
cqlsh $host 9042 -u cassandra -p $initial_admin_password --ssl ```
+### Connecting from an application
+
+As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL to be enabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started) and [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started). For Java, we highly recommend enabling [speculative execution policy](https://docs.datastax.com/en/developer/java-driver/4.10/manual/core/speculative_execution/). You can find a demo illustrating how this works and how to enable the policy [here](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-speculative-execution).
+
+Disabling certificate verification is recommended because certificate verification will not work unless you either store and validate against locally held copies of our certificates, or map the IP addresses of your cluster nodes to the appropriate domain. If you have an internal policy which mandates that you do SSL certificate verification for any application, you can facilitate this by either:
+
+- Storing our certificates locally and verifying against them. Our certificates are signed with Digicert - see [here](/azure/active-directory/fundamentals/certificate-authorities). You would need to ensure that you keep this up-to-date.
+- Adding entries like `10.0.1.5 host1.managedcassandra.cosmos.azure.com` in your hosts file for each node. If taking this approach, you would also need to add new entries whenever scaling up nodes.
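For example, a minimal Python sketch using the DataStax `cassandra-driver` package might look like the following. The contact point, username, and password are placeholders, and certificate verification is disabled as discussed above while TLS stays enabled.

```python
import ssl

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Enable TLS but skip certificate verification, as recommended above.
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE

auth_provider = PlainTextAuthProvider(
    username="cassandra",
    password="<initial_admin_password>",
)

# The contact point is a placeholder for one of your cluster node IP addresses.
cluster = Cluster(
    ["<node-ip>"], port=9042, ssl_context=ssl_context, auth_provider=auth_provider
)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
```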
++ ## Troubleshooting If you encounter an error when applying permissions to your Virtual Network using Azure CLI, such as *Cannot find user or service principal in graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*, you can apply the same permission manually from the Azure portal. Learn how to do this [here](add-service-principal.md).
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
If you encounter an error when applying permissions to your Virtual Network usin
## Connecting to your cluster
-Azure Managed Instance for Apache Cassandra does not create nodes with public IP addresses, so to connect to your newly created Cassandra cluster, you will need to create another resource inside the VNet. This could be an application, or a Virtual Machine with Apache's open-source query tool [CQLSH](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html) installed. You can use a [template](https://azure.microsoft.com/resources/templates/vm-simple-linux/) to deploy an Ubuntu Virtual Machine. When deployed, use SSH to connect to the machine, and install CQLSH using the below commands:
+Azure Managed Instance for Apache Cassandra does not create nodes with public IP addresses, so to connect to your newly created Cassandra cluster, you will need to create another resource inside the VNet. This could be an application, or a Virtual Machine with Apache's open-source query tool [CQLSH](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html) installed. You can use a [template](https://azure.microsoft.com/resources/templates/vm-simple-linux/) to deploy an Ubuntu Virtual Machine.
++
+### Connecting from CQLSH
+
+After the virtual machine is deployed, use SSH to connect to the machine, and install CQLSH using the below commands:
```bash # Install default-jre and default-jdk
host=("<IP>")
initial_admin_password="Password provided when creating the cluster" cqlsh $host 9042 -u cassandra -p $initial_admin_password --ssl ```
+### Connecting from an application
+
+As with CQLSH, connecting from an application using one of the supported [Apache Cassandra client drivers](https://cassandra.apache.org/doc/latest/cassandra/getting_started/drivers.html) requires SSL to be enabled. See samples for connecting to Azure Managed Instance for Apache Cassandra using [Java](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-getting-started) and [.NET](https://github.com/Azure-Samples/azure-cassandra-mi-dotnet-core-getting-started). For Java, we highly recommend enabling [speculative execution policy](https://docs.datastax.com/en/developer/java-driver/4.10/manual/core/speculative_execution/). You can find a demo illustrating how this works and how to enable the policy [here](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-speculative-execution).
+
+Disabling certificate verification is recommended because certificate verification will not work unless you either store and validate against locally held copies of our certificates, or map the IP addresses of your cluster nodes to the appropriate domain. If you have an internal policy which mandates that you do SSL certificate verification for any application, you can facilitate this by either:
+
+- Storing our certificates locally and verifying against them. Our certificates are signed with Digicert - see [here](/azure/active-directory/fundamentals/certificate-authorities). You would need to ensure that you keep this up-to-date.
+- Adding entries like `10.0.1.5 host1.managedcassandra.cosmos.azure.com` in your hosts file for each node. If taking this approach, you would also need to add new entries whenever scaling up nodes.
++ ## Clean up resources
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-monitoring.md
These metrics are available for Azure Database for MySQL:
|Total connections|total_connections|Count|The number of client connections to your Azure Database for MySQL Flexible server. Total Connections is sum of connections by clients using TCP/IP protocol over a selected period.| |Aborted Connections|aborted_connections|Count|Total number of failed attempts to connect to your MySQL server, for example, failed connection due to bad credentials. For more information on aborted connections, you can refer to this [documentation](https://dev.mysql.com/doc/refman/5.7/en/communication-errors.html).| |Queries|queries|Count|Total number of queries executed per minute on your server. Total count of queries per minute on your server from your database workload and Azure MySQL processes.|
+|Slow_queries|slow_queries|Count|The total count of slow queries on your server in the selected time range.|
+## Enhanced metrics
++
+### DML statistics
+
+|Metric display name|Metric|Unit|Description|
+|||||
+|Com_select|Com_select|Count|The total count of select statements that have been executed on your server in the selected time range.|
+|Com_update|Com_update|Count|The total count of update statements that have been executed on your server in the selected time range.|
+|Com_insert|Com_insert|Count|The total count of insert statements that have been executed on your server in the selected time range.|
+|Com_delete|Com_delete|Count|The total count of delete statements that have been executed on your server in the selected time range.|
++
+### DDL statistics
+
+|Metric display name|Metric|Unit|Description|
+|||||
+|Com_create_db|Com_create_db|Count|The total count of create database statements that have been executed on your server in the selected time range.|
+|Com_drop_db|Com_drop_db|Count|The total count of drop database statements that have been executed on your server in the selected time range.|
+|Com_create_table|Com_create_table|Count|The total count of create table statements that have been executed on your server in the selected time range.|
+|Com_drop_table|Com_drop_table|Count|The total count of drop table statements that have been executed on your server in the selected time range.|
+|Com_Alter|Com_Alter|Count|The total count of alter table statements that have been executed on your server in the selected time range.|
++
+### Innodb metrics
+
+|Metric display name|Metric|Unit|Description|
+|||||
+|Innodb_buffer_pool_reads|Innodb_buffer_pool_reads|Count|The total count of logical reads that the InnoDB engine couldn't satisfy from the InnoDB buffer pool and had to fetch from disk.|
+|Innodb_buffer_pool_read_requests|Innodb_buffer_pool_read_requests|Count|The total count of logical read requests to read from the Innodb Buffer pool.|
+|Innodb_buffer_pool_pages_free|Innodb_buffer_pool_pages_free|Count|The total count of free pages in InnoDB buffer pool.|
+|Innodb_buffer_pool_pages_data|Innodb_buffer_pool_pages_data|Count|The total count of pages in the InnoDB buffer pool containing data. The number includes both dirty and clean pages.|
+|Innodb_buffer_pool_pages_dirty|Innodb_buffer_pool_pages_dirty|Count|The total count of dirty pages in the InnoDB buffer pool.|
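The enhanced metrics above can also be read programmatically. As a minimal sketch (not part of this article), the following uses the `azure-monitor-query` package with a placeholder resource ID for a flexible server; the metric names come from the tables above.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder resource ID of an Azure Database for MySQL flexible server.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DBforMySQL/flexibleServers/<server-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    resource_id,
    metric_names=["Com_select", "Com_insert", "Innodb_buffer_pool_reads"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total)
```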
+ ## Server logs In Azure Database for MySQL Server - Flexible Server, users can configure and download server logs to assist with troubleshooting efforts. With this feature enabled, a flexible server starts capturing events of the selected log type and writes them to a file. You can then use the Azure portal and Azure CLI to download the files to work with them.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
- **Autoscale IOPS in Azure Database for MySQL - Flexible Server**
- You can now scale IOPS on demand without having to pre-provision a certain amount of IOPS. With this feature, you can now enjoy worry free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPs up or down automatically depending on workload needs. With this feature you pay only for the IO you use and no longer need to provision and pay for resources they aren't fully using, saving both time and money. In addition, mission-critical Tier-1 applications can achieve consistent performance by making additional IO available to the workload at any time. Auto scale IO eliminates the administration required to provide the best performance at the least cost for Azure Database for MySQL customers. [Learn more](./concepts-service-tiers-storage.md)
+ You can now scale IOPS on demand without having to pre-provision a certain amount of IOPS. With this feature, you can enjoy worry-free IO management in Azure Database for MySQL - Flexible Server because the server scales IOPS up or down automatically depending on workload needs. You pay only for the IO you use and no longer need to provision and pay for resources that aren't fully used, saving both time and money. In addition, mission-critical Tier-1 applications can achieve consistent performance by making additional IO available to the workload at any time. Autoscale IO eliminates the administration required to provide the best performance at the least cost for Azure Database for MySQL customers. [Learn more](./concepts-service-tiers-storage.md)
+
+- **Perform Major version upgrade with minimal efforts for Azure Database for MySQL - Flexible Server (Preview)**
+
+ The major version upgrade feature allows you to perform in-place upgrades of existing instances of Azure Database for MySQL - Flexible Server from MySQL 5.7 to MySQL 8.0 with the click of a button, without any data movement or the need to make any application connection string changes. Take advantage of this functionality to efficiently perform major version upgrades on your instances of Azure Database for MySQL - Flexible Server and leverage the latest that MySQL 8.0 has to offer. Learn [more](./how-to-upgrade.md).
+
+- **MySQL extension for Azure Data Studio (Preview)**
+
+ When you’re working with multiple databases across data platforms and cloud deployment models, being able to perform the most common tasks on all your databases using a single tool enhances your productivity several fold. With the MySQL extension for Azure Data Studio, you can now connect to and modify MySQL databases along with your other databases, taking advantage of the modern editor experience and capabilities in Azure Data Studio, such as IntelliSense, code snippets, source control integration, native Jupyter Notebooks, an integrated terminal, and more. Use this new tooling with any MySQL server hosted on-premises, on virtual machines, on managed MySQL in other clouds, and on Azure Database for MySQL – Flexible Server. Learn [more](/sql/azure-data-studio/quickstart-mysql).
+
+- **Known issues**
+
+ - Changing the compute size is not currently permitted after the major version upgrade of your Azure Database for MySQL - Flexible Server. It's recommended to change the compute size of your Azure Database for MySQL - Flexible Server before the major version upgrade from version 5.7 to version 8.0.
+ ## September 2022
This release of Azure Database for MySQL - Flexible Server includes the followin
- **Support for Availability zone placement during server creation released**
- Customers can now specify their preferred Availability zone at the time of server creation. This functionality allows customers to collocate their applications hosted on Azure VM, virtual machine scale set, or AKS and database in the same Availability zones to minimize database latency and improve performance. [Learn more](quickstart-create-server-portal.md#create-an-azure-database-for-mysql-flexible-server).
+ Customers can now specify their preferred Availability zone at the time of server creation. This functionality allows customers to collocate their applications hosted on Azure VM, Virtual Machine Scale Set, or AKS and database in the same Availability zones to minimize database latency and improve performance. [Learn more](quickstart-create-server-portal.md#create-an-azure-database-for-mysql-flexible-server).
- **Performance fixes for issues when running flexible server in virtual network with private access**
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
# Tutorial: Monitor network communication between two virtual machine scale sets using the Azure portal > [!NOTE]
-> This tutorial cover Connection Monitor. Try the new and improved [Connection Monitor](connection-monitor-overview.md) to experience enhanced connectivity monitoring
+> This tutorial covers Connection Monitor. Try the new and improved [Connection Monitor](connection-monitor-overview.md) to experience enhanced connectivity monitoring.
> [!IMPORTANT] > Starting 1 July 2021, you will not be able to add new connection monitors in Connection Monitor (classic) but you can continue to use existing connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate from Connection Monitor (classic) to the new Connection Monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before 29 February 2024.
Sign in to the [Azure portal](https://portal.azure.com).
## Create a virtual machine scale set
-Create a virtual machine scale set
+Create a virtual machine scale set.
## Create a load balancer
First, create a public Standard Load Balancer by using the portal. The name and
## Create virtual machine scale set
-You can deploy a scale set with a Windows Server image or Linux image such as RHEL, CentOS, Ubuntu, or SLES.
+You can deploy a scale set with a Windows Server image or Linux images such as RHEL, CentOS, Ubuntu, or SLES.
1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**. Select **Create** on the **Virtual machine scale sets** page, which will open the **Create a virtual machine scale set** page.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from resource group list.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from the resource group list.
1. Type *myScaleSet* as the name for your scale set. 1. In **Region**, select a region that is close to your area. 1. Under **Orchestration**, ensure the *Uniform* option is selected for **Orchestration mode**. 1. Select a marketplace image for **Image**. In this example, we have chosen *Ubuntu Server 18.04 LTS*. 1. Enter your desired username, and select which authentication type you prefer.
- - A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
+ - A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lowercase character, one uppercase character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
- If you select a Linux OS disk image, you can instead choose **SSH public key**. Only provide your public key, such as *~/.ssh/id_rsa.pub*. You can use the Azure Cloud Shell from the portal to [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
Complete the steps in [create a VM](./connection-monitor.md#create-the-first-vm)
|Step|Setting|Value| ||||
-| 1 | Select a version of **Ubuntu Server** | |
+| 1 | Select a version of the **Ubuntu Server** | |
| 3 | Name | myVm2 | | 3 | Authentication type | Paste your SSH public key or select **Password**, and enter a password. | | 3 | Resource group | Select **Use existing** and select **myResourceGroup**. |
In the Azure portal, to create a test group in a connection monitor, you specify
* **Disable traceroute**: This check box applies when the protocol is TCP or ICMP. Select this box to stop sources from discovering topology and hop-by-hop RTT. * **Destination port**: You can provide a destination port of your choice. * **Listen on port**: This check box applies when the protocol is TCP. Select this check box to open the chosen TCP port if it's not already open.
- * **Test Frequency**: In this list, specify how frequently sources will ping destinations on the protocol and port that you specified. You can choose 30 seconds, 1 minute, 5 minutes, 15 minutes, or 30 minutes. Select **custom** to enter another frequency that's between 30 seconds and 30 minutes. Sources will test connectivity to destinations based on the value that you choose. For example, if you select 30 seconds, sources will check connectivity to the destination at least once in every 30-second period.
+ * **Test Frequency**: In this list, specify how frequently sources will ping destinations on the protocol and port that you specified. You can choose 30 seconds, 1 minute, 5 minutes, 15 minutes, or 30 minutes. Select **custom** to enter another frequency that's between 30 seconds and 30 minutes. Sources will test connectivity to destinations based on the value that you choose. For example, if you select 30 seconds, sources will check connectivity to the destination at least once in every 30-second period.
* **Success Threshold**: You can set thresholds on the following network parameters: * **Checks failed**: Set the percentage of checks that can fail when sources check connectivity to destinations by using the criteria that you specified. For the TCP or ICMP protocol, the percentage of failed checks can be equated to the percentage of packet loss. For HTTP protocol, this value represents the percentage of HTTP requests that received no response. * **Round trip time**: Set the RTT, in milliseconds, for how long sources can take to connect to the destination over the test configuration.
In the Azure portal, to create a test group in a connection monitor, you specify
* **Test Groups**: You can add one or more Test Groups to a Connection Monitor. These test groups can consist of multiple Azure or Non-Azure endpoints. * For selected Azure VMs or Azure virtual machine scale sets and Non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for Non-Azure endpoints will be automatically enabled once the creation of Connection Monitor begins.
- * In case the virtual machine scale set selected is set for manual upgradation, the user will have to upgrade the scale set post Network Watcher extension installation in order to continue setting up the Connection Monitor with virtual machine scale set as endpoints. In-case the virtual machine scale set is set to auto upgradation, the user need not worry about any upgradation after Network Watcher extension installation.
- * In the scenario mentioned above, user can consent to auto upgradation of virtual machine scale set with auto enablement of Network Watcher extension during the creation of Connection Monitor for virtual machine scale sets with manual upgradation. This would eliminate the need for the user to manually upgrade the virtual machine scale set after installing the Network Watcher extension.
+ * If the selected virtual machine scale set is set to manual upgrade, you must upgrade the scale set after the Network Watcher extension is installed in order to continue setting up the Connection Monitor with the virtual machine scale set as endpoints. If the scale set is set to automatic upgrade, no further upgrade is needed after the Network Watcher extension is installed.
+ * In the manual upgrade scenario above, you can consent to automatic upgrade of the virtual machine scale set when the Network Watcher extension is auto-enabled during the creation of the Connection Monitor. This eliminates the need to manually upgrade the virtual machine scale set after the extension is installed.
:::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test groups and consent for auto-upgradation of VMSS in Connection Monitor.":::
In the Azure portal, to create alerts for a connection monitor, you specify valu
:::image type="content" source="./media/connection-monitor-2-preview/unified-enablement-create.png" alt-text="Screenshot that shows the Create alert tab in Connection Monitor.":::
-Once all the steps are completed, the process will proceed with unified enablement of monitoring extensions for all endpoints without monitoring agents enabled, followed by creation of Connection Monitor.
+Once all the steps are completed, the process will proceed with the unified enablement of monitoring extensions for all endpoints without monitoring agents enabled, followed by creation of Connection Monitor.
Once the creation process is successful, it will take about 5 minutes for the connection monitor to show up on the dashboard. ## Virtual machine scale set coverage Currently, Connection Monitor provides default coverage for the scale set instances selected as endpoints. This means that only a default percentage of all the added scale set instances is randomly selected to monitor connectivity from the scale set to the endpoint.
-As a best practice, to avoid loss of data due to downscaling of instances, it is advised to select ALL instances in a scale set while creating a test group instead of selecting particular few for monitoring your endpoints.
+As a best practice, to avoid loss of data due to downscaling of instances, it is advised to select ALL instances in a scale set while creating a test group instead of selecting a particular few for monitoring your endpoints.
## Scale limits
When no longer needed, delete the resource group and all of the resources it con
## Next steps
-In this tutorial, you learned how to monitor a connection between a virtual machine scale set and a VM. You learned that a network security group rule prevented communication to a VM. To learn about all of the different responses connection monitor can return, see [response types](network-watcher-connectivity-overview.md#response). You can also monitor a connection between a VM, a fully qualified domain name, a uniform resource identifier, or an IP address.
+In this tutorial, you learned how to monitor a connection between a virtual machine scale set and a VM. You learned that a network security group rule prevented communication to a VM. To learn about all of the different responses the connection monitor can return, see [response types](network-watcher-connectivity-overview.md#response). You can also monitor a connection between a VM, a fully qualified domain name, a uniform resource identifier, or an IP address.
* Learn [how to analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts). * Learn [how to diagnose problems in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network).
network-watcher Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor.md
Complete the steps in [Create the first VM](#create-the-first-vm) again, with th
|Step|Setting|Value| ||||
-| 1 | Select a version of the **Ubuntu Server** | |
+| 1 | Select a version of **Ubuntu Server** | |
| 3 | Name | myVm2 | | 3 | Authentication type | Paste your SSH public key or select **Password** and enter a password. | | 3 | Resource group | Select **Use existing** and select **myResourceGroup**. |
Create a connection monitor to monitor communication over TCP port 22 from *myVm
Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. A generated alert can automatically run one or more actions, such as to notify someone or start another process. When setting an alert rule, the resource that you target determines the list of available metrics that you can use to generate alerts.
-1. In the Azure portal, select the **Monitor** service, and then select **Alerts** > **New alert rule**.
+1. In Azure portal, select the **Monitor** service, and then select **Alerts** > **New alert rule**.
2. Click **Select target**, and then select the resources that you want to target. Select the **Subscription**, and set the **Resource type** to filter down to the Connection Monitor that you want to use. ![alert screen with target selected](./media/connection-monitor/set-alert-rule.png)
network-watcher Diagnose Communication Problem Between Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md
# Tutorial: Diagnose a communication problem between networks using the Azure portal
-A virtual network gateway connects an Azure virtual network to an on-premises, or other virtual network. In this tutorial, you learn how to:
+A virtual network gateway connects an Azure virtual network to an on-premises or other virtual network. In this tutorial, you learn how to:
> [!div class="checklist"] > * Diagnose a problem with a virtual network gateway with Network Watcher's VPN diagnostics capability
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
Azure automatically creates routes to default destinations. You may create custo
When you ran the test using 13.107.21.200 in [Use next hop](#use-next-hop), the route with the address prefix 0.0.0.0/0 was used to route traffic to the address since no other route includes the address. By default, all addresses not specified within the address prefix of another route are routed to the internet.
- When you ran the test using 172.31.0.100, however, the result informed you that there was no next hop type. As you can see in the previous picture, though there is a default route to the 172.16.0.0/12 prefix, which includes the 172.31.0.100 address, the **NEXT HOP TYPE** is **None**. Azure creates a default route to 172.16.0.0/12 but doesn't specify a next hop type until there is a reason to. If, for example, you added the 172.16.0.0/12 address range to the address space of the virtual network, Azure changes the **NEXT HOP TYPE** to **Virtual network** for the route. A check would then show the **Virtual network** as the **NEXT HOP TYPE**.
+ However, when you ran the test using 172.31.0.100, the result informed you that there was no next hop type. As you can see in the previous picture, though there is a default route to the 172.16.0.0/12 prefix, which includes the 172.31.0.100 address, the **NEXT HOP TYPE** is **None**. Azure creates a default route to 172.16.0.0/12 but doesn't specify a next hop type until there is a reason to. If, for example, you added the 172.16.0.0/12 address range to the address space of the virtual network, Azure changes the **NEXT HOP TYPE** to **Virtual network** for the route. A check would then show **Virtual network** as the **NEXT HOP TYPE**.
## Clean up resources
network-watcher Diagnose Vm Network Traffic Filtering Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md
When you ran the `Test-AzNetworkWatcherIPFlow` command to test inbound communica
The **DenyAllInBound** rule is applied because, as shown in the output, no other higher priority rule exists in the output from the `Get-AzEffectiveNetworkSecurityGroup` command that allows port 80 inbound to the VM from 172.131.0.100. To allow the inbound communication, you could add a security rule with a higher priority that allows port 80 inbound from 172.131.0.100.
-The checks in this quickstart tested Azure configuration. If the checks return expected results and you still have network problems, ensure that you don't have a firewall between your VM and the endpoint you're communicating with and that the operating system in your VM doesn't have a firewall that is allowing or denying communication.
+The checks in this quickstart tested Azure configuration. If the checks return the expected results and you still have network problems, ensure that you don't have a firewall between your VM and the endpoint you're communicating with and that the operating system in your VM doesn't have a firewall that is allowing or denying communication.
## Clean up resources
network-watcher Network Watcher Nsg Flow Logging Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-portal.md
In this tutorial, you learn how to:
> * Enable a traffic flow log for an NSG, using Network Watcher's NSG flow log capability > * Download logged data > * View logged data+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
In this tutorial, you learn how to:
6. Select **Create**.
-The virtual machine takes a few minutes to create. Don't continue with remaining steps until the VM has finished creating. While the portal creates the virtual machine, it also creates a network security group with the name **myVM-nsg**, and associates it to the network interface for the VM.
+The virtual machine takes a few minutes to create. Don't continue with the remaining steps until the VM has finished creating. While the portal creates the virtual machine, it also creates a network security group with the name **myVM-nsg** and associates it with the network interface for the VM.
## Enable Network Watcher
NSG flow log data is written to an Azure Storage account. Complete the following
5. Select **Create**.
-The storage account may take around minute to create. Don't continue with remaining steps until the storage account is created. In all cases, the storage account must be in the same region as the NSG.
+The storage account may take around a minute to create. Don't continue with the remaining steps until the storage account is created. In all cases, the storage account must be in the same region as the NSG.
1. In the search box at the top of the portal, enter **Network Watcher**. Select **Network Watcher** in the search results.
The storage account may take around minute to create. Don't continue with remain
## View flow log
-The following example json displays data that you'll see in the PT1H.json file for each flow logged:
+The following example JSON displays data that you'll see in the PT1H.json file for each flow logged:
### Version 1 flow log event ```json
The following example json displays data that you'll see in the PT1H.json file f
} ```
-The value for **mac** in the previous output is the MAC address of the network interface that was created when the VM was created. The comma-separated information for **flowTuples**, is as follows:
+The value for **mac** in the previous output is the MAC address of the network interface that was created when the VM was created. The comma-separated information for **flowTuples** is as follows:
| Example data | What data represents | Explanation | | | | |
The value for **mac** in the previous output is the MAC address of the network i
| O | Direction | Whether the traffic was inbound (I) or outbound (O). | | A | Action | Whether the traffic was allowed (A) or denied (D). | C | Flow State **Version 2 Only** | Captures the state of the flow. Possible states are **B**: Begin, when a flow is created. Statistics aren't provided. **C**: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals. **E**: End, when a flow is ended. Statistics are provided. |
-| 30 | Packets sent - Source to destination **Version 2 Only** | The total number of TCP or UDP packets sent from source to destination since last update. |
-| 16978 | Bytes sent - Source to destination **Version 2 Only** | The total number of TCP or UDP packet bytes sent from source to destination since last update. Packet bytes include the packet header and payload. |
-| 24 | Packets sent - Destination to source **Version 2 Only** | The total number of TCP or UDP packets sent from destination to source since last update. |
-| 14008| Bytes sent - Destination to source **Version 2 Only** | The total number of TCP and UDP packet bytes sent from destination to source since last update. Packet bytes include packet header and payload.|
+| 30 | Packets sent - Source to destination **Version 2 Only** | The total number of TCP or UDP packets sent from source to destination since the last update. |
+| 16978 | Bytes sent - Source to destination **Version 2 Only** | The total number of TCP or UDP packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload. |
+| 24 | Packets sent - Destination to source **Version 2 Only** | The total number of TCP or UDP packets sent from destination to source since the last update. |
+| 14008| Bytes sent - Destination to source **Version 2 Only** | The total number of TCP and UDP packet bytes sent from destination to source since the last update. Packet bytes include packet header and payload.|
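As an illustration of the comma-separated format described above, here is a minimal sketch that parses a single version 2 flow tuple. The leading timestamp, IP addresses, and ports in the sample string are hypothetical; the trailing fields reuse the example values from the table.

```python
# Hypothetical version 2 flow tuple; field order follows the table above.
tuple_str = "1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A,C,30,16978,24,14008"
fields = tuple_str.split(",")

flow = {
    "timestamp": int(fields[0]),
    "source_ip": fields[1],
    "destination_ip": fields[2],
    "source_port": fields[3],
    "destination_port": fields[4],
    "protocol": fields[5],            # T = TCP, U = UDP
    "direction": fields[6],           # I = inbound, O = outbound
    "action": fields[7],              # A = allowed, D = denied
    "flow_state": fields[8],          # B, C, or E (version 2 only)
    "packets_src_to_dst": int(fields[9]),
    "bytes_src_to_dst": int(fields[10]),
    "packets_dst_to_src": int(fields[11]),
    "bytes_dst_to_src": int(fields[12]),
}
print(flow)
```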
## Next steps
In this tutorial, you learned how to:
* Enable NSG flow logging for an NSG * Download and view data logged in a file.
-The raw data in the json file can be difficult to interpret. To visualize Flow Logs data, you can use [Azure Traffic Analytics](traffic-analytics.md) and [Microsoft Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md).
+The raw data in the JSON file can be difficult to interpret. To visualize Flow Logs data, you can use [Azure Traffic Analytics](traffic-analytics.md) and [Microsoft Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md).
For alternate methods of enabling NSG Flow Logs, see [PowerShell](network-watcher-nsg-flow-logging-powershell.md), [Azure CLI](network-watcher-nsg-flow-logging-cli.md), [REST API](network-watcher-nsg-flow-logging-rest.md), and [Resource Manager templates](network-watcher-nsg-flow-logging-azure-resource-manager.md).
network-watcher Quickstart Configure Network Security Group Flow Logs From Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-arm-template.md
If there were issues with the deployment, see [Troubleshoot common Azure deploym
## Clean up resources
-You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in the complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
+You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
You also can disable an NSG flow log in the Azure portal:
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
If there were issues with the deployment, see [Troubleshoot common Azure deploym
## Clean up resources
-You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in the complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
+You can delete Azure resources by using complete deployment mode. To delete a flow logs resource, specify a deployment in complete mode without including the resource you want to delete. Read more about [complete deployment mode](../azure-resource-manager/templates/deployment-modes.md#complete-mode).
You also can disable an NSG flow log in the Azure portal:
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
Only Azure AD administrator users can create/enable users for Azure AD-based aut
## Prerequisites
-The below three steps are mandatory to use Azure Active Directory Authentication with Azure Database for PostgreSQL Flexible Server and must be run by tenant administrator or a user with tenant admin rights and this is one time activity per tenant.
+The following three steps are mandatory to use Azure Active Directory authentication with Azure Database for PostgreSQL Flexible Server. They must be run by a tenant administrator or a user with tenant admin rights, and this is a one-time activity per tenant.
Install AzureAD PowerShell: AzureAD Module
private-5g-core Statement Of Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/statement-of-compliance.md
All packet core network functions are compliant with Release 15 of the 3GPP spec
- TS 33.401: 3GPP System Architecture Evolution (SAE); Security architecture. - TS 36.413: Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1 Application Protocol (S1AP).
+### 5G handover procedures
+
+- TS 23.502: Procedures for the 5G System (5GS):
+ - 4.9.1.2: Xn based inter NG-RAN handover.
+ - 4.9.1.3: Inter NG-RAN node N2 based handover.
+
+### 4G handover procedures
+
+- TS 23.401: General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access:
+ - 5.5.1.1 X2-based handover.
+ ### Policy and charging control (PCC) framework - TS 23.503: Policy and charging control framework for the 5G System (5GS); Stage 2.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Storage account (Microsoft.Storage/storageAccounts) / File (file, file_secondary) | privatelink.file.core.windows.net | file.core.windows.net | | Storage account (Microsoft.Storage/storageAccounts) / Web (web, web_secondary) | privatelink.web.core.windows.net | web.core.windows.net | | Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) / Data Lake File System Gen2 (dfs, dfs_secondary) | privatelink.dfs.core.windows.net | dfs.core.windows.net |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Sql | privatelink.documents.azure.com | documents.azure.com |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / MongoDB | privatelink.mongo.cosmos.azure.com | mongo.cosmos.azure.com |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
+| Azure Cosmos DB (Microsoft.DocumentDb/databaseAccounts) / Sql | privatelink.documents.azure.com | documents.azure.com |
+| Azure Cosmos DB (Microsoft.DocumentDb/databaseAccounts) / MongoDB | privatelink.mongo.cosmos.azure.com | mongo.cosmos.azure.com |
+| Azure Cosmos DB (Microsoft.DocumentDb/databaseAccounts) / Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com |
+| Azure Cosmos DB (Microsoft.DocumentDb/databaseAccounts) / Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com |
+| Azure Cosmos DB (Microsoft.DocumentDb/databaseAccounts) / Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
| Azure Batch (Microsoft.Batch/batchAccounts) / batchAccount | privatelink.batch.azure.com | {region}.batch.azure.com | | Azure Batch (Microsoft.Batch/batchAccounts) / nodeManagement | privatelink.batch.azure.com | {region}.service.batch.azure.com | | Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) / postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com |
search Cognitive Search Debug Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-debug-session.md
A cached copy of the enriched document and skillset is loaded into the visual ed
If the enrichment pipeline does not have any errors, a debug session can be used to incrementally enrich a document, test and validate each change before committing the changes.
+## Managing the Debug Session state
+
+After a debug session has been created and run, you can run it again by selecting the **Start** button. You can cancel it while it's still executing by selecting the **Cancel** button, and delete it by selecting the **Delete** button.
++ ## AI Enrichments tab > Skill Graph The visual editor is organized into tabs and panes. This section introduces the components of the visual editor.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
A Debug Session works with all generally available [indexer data sources](search
The debug session begins by executing the indexer and skillset on the selected document. The document's content and metadata created will be visible and available in the session.
+A debug session can be canceled while it's executing using the **Cancel** button.
+ ## Start with errors and warnings Indexer execution history in the portal gives you the full error and warning list for all documents. In a debug session, the errors and warnings will be limited to one document. You'll work through this list, make your changes, and then return to the list to verify whether issues are resolved.
search Index Add Scoring Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-scoring-profiles.md
Title: Add scoring profiles to boost search scores
+ Title: Add scoring profiles
description: Boost search relevance scores for Azure Cognitive Search results by adding scoring profiles to a search index.
- Previously updated : 06/24/2022+ Last updated : 10/14/2022
-# Add scoring profiles to a search index
+# Add scoring profiles to boost search scores
-For full text search queries, the search engine computes a search score for each matching document, which allows results to be ranked from high to low. Azure Cognitive Search uses a default scoring algorithm to compute an initial score, but you can customize the calculation through a *scoring profile*.
+In this article, you'll learn how to define a scoring profile for boosting search scores based on criteria.
-Scoring profiles are embedded in index definitions and include properties for boosting the score of matches, where additional criteria found in the profile provides the boosting logic. For example, you might want to boost matches based on their revenue potential, promote newer items, or perhaps boost items that have been in inventory too long.
+Criteria can be a weighted field, such as when a match found in a "tags" field is more relevant than a match found in "descriptions". Criteria can also be a function, such as the `distance` function that favors results that are within a specified distance of the current location.
-Unfamiliar with relevance concepts? The following video segment fast-forwards to how scoring profiles work in Azure Cognitive Search, but the video also covers basic concepts. You might also want to review [Relevance and scoring in Azure Cognitive Search](index-similarity-and-scoring.md) for more background.
+Scoring profiles are defined in a search index and invoked on query requests. You can create multiple profiles and then modify query logic to choose which one is used.
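As a non-authoritative illustration of choosing a profile at query time from application code (the article's own examples use the REST API), here's a short sketch with the `azure-search-documents` client library for Java. The service endpoint and key are placeholders; the index and profile names come from the extended example later in this article.

```java
import com.azure.core.credential.AzureKeyCredential;
import com.azure.core.util.Context;
import com.azure.search.documents.SearchClient;
import com.azure.search.documents.SearchClientBuilder;
import com.azure.search.documents.SearchDocument;
import com.azure.search.documents.models.SearchOptions;
import com.azure.search.documents.models.SearchResult;

public class ScoringProfileQuery {
    public static void main(String[] args) {
        // Placeholder endpoint and query key for your search service.
        SearchClient client = new SearchClientBuilder()
                .endpoint("https://<service-name>.search.windows.net")
                .credential(new AzureKeyCredential("<query-api-key>"))
                .indexName("musicstoreindex")
                .buildClient();

        // Apply the 'boostGenre' profile to this query only; other queries against
        // the same index can use a different profile, or none at all.
        SearchOptions options = new SearchOptions().setScoringProfile("boostGenre");

        for (SearchResult result : client.search("rock", options, Context.NONE)) {
            SearchDocument doc = result.getDocument(SearchDocument.class);
            System.out.println(result.getScore() + " - " + doc.get("albumTitle"));
        }
    }
}
```

Profiles that include functions with parameters (such as the `geo` profile's `distance` function) would also pass scoring parameter values alongside the profile name, just as the REST example later in this article does.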
-> [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=463&end=970]
+> [!NOTE]
+> Unfamiliar with relevance concepts? The following video segment fast-forwards to how scoring profiles work in Azure Cognitive Search. You can also visit [Relevance and scoring in Azure Cognitive Search](index-similarity-and-scoring.md) for more background.
+>
+> > [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=463&end=970]
+>
-## What is a scoring profile?
+## Scoring profile definition
-A scoring profile is part of the index definition and is composed of weighted fields, functions, and parameters. The purpose of a scoring profile is to boost or amplify matching documents based on criteria you provide.
+A scoring profile is part of the index definition and is composed of weighted fields, functions, and parameters.
The following definition shows a simple profile named 'geo'. This example boosts results that have the search term in the hotelName field. It also uses the `distance` function to favor results that are within 10 kilometers of the current location. If someone searches on the term 'inn', and 'inn' happens to be part of the hotel name, documents that include hotels with 'inn' within a 10 KM radius of the current location will appear higher in the search results.
The following definition shows a simple profile named 'geo'. This example boosts
] ```
-To use this scoring profile, your query is formulated to specify scoringProfile parameter in the request.
+Parameters are specified when the profile is invoked. To use this scoring profile, formulate your query to include the `scoringProfile` parameter in the request.
```http POST /indexes/hotels/docs&api-version=2020-06-30
See the [Extended example](#bkmk_ex) to review a more detailed example of a scor
Scores are computed for full text search queries for the purpose of ranking the most relevant matches and returning them at the top of the response. The overall score for each document is an aggregation of the individual scores for each field, where the individual score of each field is computed based on the term frequency and document frequency of the searched terms within that field (known as [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) or term frequency-inverse document frequency).
-> [!TIP]
-> You can use the [featuresMode](index-similarity-and-scoring.md#featuresmode-parameter-preview) parameter to request additional scoring details with the search results (including the field level scores).
+You can use the [featuresMode (preview)](index-similarity-and-scoring.md#featuresmode-parameter-preview) parameter to request additional scoring details with the search results (including the field level scores).
## When to add scoring logic
You should create one or more scoring profiles when the default ranking behavior
Relevancy-based ordering in a search page is also implemented through scoring profiles. Consider search results pages you've used in the past that let you sort by price, date, rating, or relevance. In Azure Cognitive Search, scoring profiles can be used to drive the 'relevance' option. The definition of relevance is user-defined, predicated on business objectives and the type of search experience you want to deliver.
-<a name="bkmk_ex"></a>
-
-## Extended example
-
-The following example shows the schema of an index with two scoring profiles (`boostGenre`, `newAndHighlyRated`). Any query against this index that includes either profile as a query parameter will use the profile to score the result set.
-
-The `boostGenre` profile uses weighted text fields, boosting matches found in albumTitle, genre, and artistName fields. The fields are boosted 1.5, 5, and 2 respectively. Why is genre boosted so much higher than the others? If search is conducted over data that is somewhat homogenous (as is the case with 'genre' in the musicstoreindex), you might need a larger variance in the relative weights. For example, in the musicstoreindex, 'rock' appears as both a genre and in identically phrased genre descriptions. If you want genre to outweigh genre description, the genre field will need a much higher relative weight.
-
-```json
-{
- "name": "musicstoreindex",
- "fields": [
- { "name": "key", "type": "Edm.String", "key": true },
- { "name": "albumTitle", "type": "Edm.String" },
- { "name": "albumUrl", "type": "Edm.String", "filterable": false },
- { "name": "genre", "type": "Edm.String" },
- { "name": "genreDescription", "type": "Edm.String", "filterable": false },
- { "name": "artistName", "type": "Edm.String" },
- { "name": "orderableOnline", "type": "Edm.Boolean" },
- { "name": "rating", "type": "Edm.Int32" },
- { "name": "tags", "type": "Collection(Edm.String)" },
- { "name": "price", "type": "Edm.Double", "filterable": false },
- { "name": "margin", "type": "Edm.Int32", "retrievable": false },
- { "name": "inventory", "type": "Edm.Int32" },
- { "name": "lastUpdated", "type": "Edm.DateTimeOffset" }
- ],
- "scoringProfiles": [
- {
- "name": "boostGenre",
- "text": {
- "weights": {
- "albumTitle": 1.5,
- "genre": 5,
- "artistName": 2
- }
- }
- },
- {
- "name": "newAndHighlyRated",
- "functions": [
- {
- "type": "freshness",
- "fieldName": "lastUpdated",
- "boost": 10,
- "interpolation": "quadratic",
- "freshness": {
- "boostingDuration": "P365D"
- }
- },
- {
- "type": "magnitude",
- "fieldName": "rating",
- "boost": 10,
- "interpolation": "linear",
- "magnitude": {
- "boostingRangeStart": 1,
- "boostingRangeEnd": 5,
- "constantBoostBeyondRange": false
- }
- }
- ]
- }
- ],
- "suggesters": [
- {
- "name": "sg",
- "searchMode": "analyzingInfixMatching",
- "sourceFields": [ "albumTitle", "artistName" ]
- }
- ]
-}
-```
- ## Steps for adding a scoring profile To implement custom scoring behavior, add a scoring profile to the schema that defines the index. You can have up to 100 scoring profiles within an index (see [Service Limits](search-limits-quotas-capacity.md)), but you can only specify one profile at time in any given query.
The following table provides several examples.
For more examples, see [XML Schema: Datatypes (W3.org web site)](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration).
+<a name="bkmk_ex"></a>
+
+## Extended example
+
+The following example shows the schema of an index with two scoring profiles (`boostGenre`, `newAndHighlyRated`). Any query against this index that includes either profile as a query parameter will use the profile to score the result set.
+
+The `boostGenre` profile uses weighted text fields, boosting matches found in albumTitle, genre, and artistName fields. The fields are boosted 1.5, 5, and 2 respectively. Why is genre boosted so much higher than the others? If search is conducted over data that is somewhat homogenous (as is the case with 'genre' in the musicstoreindex), you might need a larger variance in the relative weights. For example, in the musicstoreindex, 'rock' appears as both a genre and in identically phrased genre descriptions. If you want genre to outweigh genre description, the genre field will need a much higher relative weight.
+
+```json
+{
+ "name": "musicstoreindex",
+ "fields": [
+ { "name": "key", "type": "Edm.String", "key": true },
+ { "name": "albumTitle", "type": "Edm.String" },
+ { "name": "albumUrl", "type": "Edm.String", "filterable": false },
+ { "name": "genre", "type": "Edm.String" },
+ { "name": "genreDescription", "type": "Edm.String", "filterable": false },
+ { "name": "artistName", "type": "Edm.String" },
+ { "name": "orderableOnline", "type": "Edm.Boolean" },
+ { "name": "rating", "type": "Edm.Int32" },
+ { "name": "tags", "type": "Collection(Edm.String)" },
+ { "name": "price", "type": "Edm.Double", "filterable": false },
+ { "name": "margin", "type": "Edm.Int32", "retrievable": false },
+ { "name": "inventory", "type": "Edm.Int32" },
+ { "name": "lastUpdated", "type": "Edm.DateTimeOffset" }
+ ],
+ "scoringProfiles": [
+ {
+ "name": "boostGenre",
+ "text": {
+ "weights": {
+ "albumTitle": 1.5,
+ "genre": 5,
+ "artistName": 2
+ }
+ }
+ },
+ {
+ "name": "newAndHighlyRated",
+ "functions": [
+ {
+ "type": "freshness",
+ "fieldName": "lastUpdated",
+ "boost": 10,
+ "interpolation": "quadratic",
+ "freshness": {
+ "boostingDuration": "P365D"
+ }
+ },
+ {
+ "type": "magnitude",
+ "fieldName": "rating",
+ "boost": 10,
+ "interpolation": "linear",
+ "magnitude": {
+ "boostingRangeStart": 1,
+ "boostingRangeEnd": 5,
+ "constantBoostBeyondRange": false
+ }
+ }
+ ]
+ }
+ ],
+ "suggesters": [
+ {
+ "name": "sg",
+ "searchMode": "analyzingInfixMatching",
+ "sourceFields": [ "albumTitle", "artistName" ]
+ }
+ ]
+}
+```
+ ## See also + [Relevance and scoring in Azure Cognitive Search](index-similarity-and-scoring.md)
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
Title: Configure scoring algorithm
+ Title: Configure relevance scoring
description: Enable Okapi BM25 ranking to upgrade the search ranking and relevance behavior on older Azure Search services.
Previously updated : 06/22/2022 Last updated : 10/14/2022
-# Configure the scoring algorithm in Azure Cognitive Search
+# Configure relevance scoring
-Depending on the age of your search service, Azure Cognitive Search supports two [scoring algorithms](index-similarity-and-scoring.md) for assigning relevance to results in a full text search query:
+In this article, you'll learn how to configure the similarity scoring algorithm used by Azure Cognitive Search. The BM25 scoring model has defaults for weighting term frequency and document length. You can customize these properties if the defaults aren't suited to your content.
+
+Configuration changes are scoped to individual indexes, which means you can adjust relevance scoring based on the characteristics of each index.
+
+## Default scoring algorithm
+
+Depending on the age of your search service, Azure Cognitive Search supports two [similarity scoring algorithms](index-similarity-and-scoring.md) for assigning relevance to results in a full text search query:
+ An *Okapi BM25* algorithm, used in all search services created after July 15, 2020 + A *classic similarity* algorithm, used by all search services created before July 15, 2020
-BM25 ranking is the default because it tends to produce search rankings that align better with user expectations. It includes [parameters](#set-bm25-parameters) for tuning results based on factors such as document size. For search services created after July 2020, BM25 is the sole scoring algorithm. If you try to set "similarity" to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
+BM25 ranking is the default because it tends to produce search rankings that align better with user expectations. It includes [parameters](#set-bm25-parameters) for tuning results based on factors such as document size. For search services created after July 2020, BM25 is the only scoring algorithm. If you try to set "similarity" to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
For older services, classic similarity remains the default algorithm. Older services can [upgrade to BM25](#enable-bm25-scoring-on-older-services) on a per-index basis. When switching from classic to BM25, you can expect to see some differences in how search results are ordered. ## Set BM25 parameters
-BM25 similarity adds two parameters to control the relevance score calculation. To set "similarity" parameters, issue a [Create or Update Index](/rest/api/searchservice/create-index) request as illustrated by the following example.
+BM25 similarity adds two parameters to control the relevance score calculation.
-```http
-PUT [service-name].search.windows.net/indexes/[index-name]?api-version=2020-06-30&allowIndexDowntime=true
-{
- "similarity": {
- "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
- "b" : 0.5,
- "k1" : 1.3
+1. Formulate a [Create or Update Index](/rest/api/searchservice/create-index) request as illustrated by the following example.
+
+ ```http
+ PUT [service-name].search.windows.net/indexes/[index-name]?api-version=2020-06-30&allowIndexDowntime=true
+ {
+ "similarity": {
+ "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
+ "b" : 0.75,
+ "k1" : 1.2
+ }
}
-}
-```
+ ```
+
+1. Set "b" and "k1" to custom values. See the property descriptions in the next section for details.
+
+1. If the index is live, append the "allowIndexDowntime=true" URI parameter on the request.
+
+ Because Cognitive Search won't allow updates to a live index, you'll need to take the index offline so that the parameters can be added. Indexing and query requests will fail while the index is offline. The duration of the outage is the amount of time it takes to update the index, usually no more than several seconds. When the update is complete, the index comes back automatically.
-Because Cognitive Search won't allow updates to a live index, you'll need to take the index offline so that the parameters can be added. Indexing and query requests will fail while the index is offline. The duration of the outage is the amount of time it takes to update the index, usually no more than several seconds. When the update is complete, the index comes back automatically. To take the index offline, append the "allowIndexDowntime=true" URI parameter on the request that sets the "similarity" property.
+1. Send the request.
-### BM25 property reference
+### BM25 property descriptions
| Property | Type | Description | |-||-|
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Previously updated : 06/22/2022 Last updated : 10/14/2022 # Relevance and scoring in Azure Cognitive Search
-This article describes relevance and the scoring algorithms used to compute search scores in Azure Cognitive Search. A relevance score applies to matches returned in [full text search](search-lucene-query-architecture.md), where the most relevant matches appear first. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries are not scored or ranked for relevance.
+This article explains relevance and the scoring algorithms used to compute search scores in Azure Cognitive Search. A relevance score is computed for each match found in a [full text search](search-lucene-query-architecture.md), where the strongest matches are assigned higher search scores.
+
+Relevance applies to full text search only. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries are not scored or ranked for relevance.
In Azure Cognitive Search, you can tune search relevance and boost search scores through these mechanisms:
As long as the same `sessionId` is used, a best-effort attempt will be made to t
## Scoring profiles
-You can customize the way different fields are ranked by defining a *scoring profile*. Scoring profiles give you greater control over the ranking of items in search results. For example, you might want to boost items based on their revenue potential, promote newer items, or perhaps boost items that have been in inventory too long.
+You can customize the way different fields are ranked by defining a *scoring profile*. Scoring profiles provide criteria for boosting the search score of a match based on content characteristics. For example, you might want to boost matches based on their revenue potential, promote newer items, or perhaps boost items that have been in inventory too long.
A scoring profile is part of the index definition, composed of weighted fields, functions, and parameters. For more information about defining one, see [Scoring Profiles](index-add-scoring-profiles.md).
search Resource Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-tools.md
The following tools are built by engineers at Microsoft, but aren't part of the
| Tool name | Description | Source code | |--| |-|
-| [Azure Cognitive Search Lab readme](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md) | Connects to your search service with a Web UI that exercises the full REST API, including the ability to edit a live search index. | [https://github.com/Azure-Samples/azure-search-lab](https://github.com/Azure-Samples/azure-search-lab) |
+| [Azure Cognitive Search Lab readme](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md) | Connects to your search service with a Web UI that exercises the full REST API, including the ability to edit a live search index. | [https://github.com/Azure-Samples/azure-search-lab](https://github.com/Azure-Samples/azure-search-lab) |
| [Knowledge Mining Accelerator readme](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) |
-| [Back up and Restore readme](https://github.com/liamc) | Download a search index to your local device and then upload the index to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
-| [Performance testing readme](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md | This pipeline helps to load test Azure Cognitive Search, it leverages Apache JMeter as an open source load and performance testing tool and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) |
+| [Back up and Restore readme](https://github.com/liamc) | Download a populated search index to your local device and then upload the index and its content to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
+| [Performance testing readme](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md) | This solution helps you load test Azure Cognitive Search. It uses Apache JMeter as an open source load and performance testing tool and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) |
search Search Howto Connecting Azure Sql Database To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md
Previously updated : 07/25/2022 Last updated : 10/13/2022 # Index data from Azure SQL
Other approaches for creating an Azure SQL indexer include Azure SDKs or [Import
The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
-1. [Create or update a data source](/rest/api/searchservice/create-data-source) to set its definition:
+1. [Create data source](/rest/api/searchservice/create-data-source) or [Update data source](/rest/api/searchservice/update-data-source) to set its definition:
```http POST https://myservice.search.windows.net/datasources?api-version=2020-06-30
search Search Synapseml Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synapseml-cognitive-services.md
Title: Use Search with SynapseML
+ Title: 'Tutorial: Index at scale (Spark)'
-description: Add full text search to big data on Apache Spark that's been loaded and transformed through the open-source library, SynapseML. In this walkthrough, you'll load invoice files into data frames, apply machine learning through SynapseML, then send it into a generated search index.
+description: Search big data from Apache Spark that's been transformed by SynapseML. You'll load invoices into data frames, apply machine learning, and then send output to a generated search index.
- Previously updated : 08/23/2022+ Last updated : 10/13/2022
-# Add search to AI-enriched data from Apache Spark using SynapseML
+# Tutorial: Index large data from Apache Spark using SynapseML and Cognitive Search
-In this Azure Cognitive Search article, learn how to add data exploration and full text search to a SynapseML solution.
-
-[SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/) is an open source library that supports massively parallel machine learning over big data. In SynapseML, one of the ways in which machine learning is exposed is through *transformers* that perform specialized tasks. Transformers tap into a wide range of AI capabilities. In this article, we'll focus on just those that call Cognitive Services and Cognitive Search.
-
-In this walkthrough, you'll set up a workbook that includes the following actions:
+In this Azure Cognitive Search tutorial, learn how to index and query large data loaded from a Spark cluster. You'll set up a Jupyter Notebook that performs the following actions:
> [!div class="checklist"] > + Load various forms (invoices) into a data frame in an Apache Spark session > + Analyze them to determine their features > + Assemble the resulting output into a tabular data structure
-> + Write the output to a search index in Azure Cognitive Search
-> + Explore and search over the content you created
+> + Write the output to a search index hosted in Azure Cognitive Search
+> + Explore and query over the content you created
+
+This tutorial takes a dependency on [SynapseML](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/), an open source library that supports massively parallel machine learning over big data. In SynapseML, search indexing and machine learning are exposed through *transformers* that perform specialized tasks. Transformers tap into a wide range of AI capabilities. In this exercise, you'll use the **AzureSearchWriter** transformer, which calls Cognitive Search for indexing, and other transformers that call Cognitive Services for analysis and AI enrichment.
-Although Azure Cognitive Search has native [AI enrichment](cognitive-search-concept-intro.md), this walkthrough shows you how to access AI capabilities outside of Cognitive Search. By using SynapseML instead of indexers or skills, you're not subject to data limits or other constraints associated with those objects.
+Although Azure Cognitive Search has native [AI enrichment](cognitive-search-concept-intro.md), this tutorial shows you how to access AI capabilities outside of Cognitive Search. By using SynapseML instead of indexers or skills, you're not subject to data limits or other constraints associated with those objects.
> [!TIP]
-> Watch a short video of this demo at [https://www.youtube.com/watch?v=iXnBLwp7f88](https://www.youtube.com/watch?v=iXnBLwp7f88). The video expands on this walkthrough with more steps and visuals.
+> Watch a short video of this demo at [https://www.youtube.com/watch?v=iXnBLwp7f88](https://www.youtube.com/watch?v=iXnBLwp7f88). The video expands on this tutorial with more steps and visuals.
## Prerequisites
You'll need the `synapseml` library and several Azure resources. If possible, us
+ [Azure Cognitive Services](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#create-a-new-azure-cognitive-services-resource) (any tier) <sup>3</sup> + [Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal?tabs=azure-portal) (any tier) <sup>4</sup>
-<sup>1</sup> This article includes instructions for loading the package.
+<sup>1</sup> This tutorial includes instructions for loading the package.
-<sup>2</sup> You can use the free tier for this walkthrough but [choose a higher tier](search-sku-tier.md) if data volumes are large. You'll need the [API key](search-security-api-keys.md#find-existing-keys) for this resource.
+<sup>2</sup> You can use the free tier but [choose a higher tier](search-sku-tier.md) if data volumes are large. You'll need the [API key](search-security-api-keys.md#find-existing-keys) for this resource.
-<sup>3</sup> This walkthrough uses Azure Forms Recognizer and Azure Translator. In the instructions below, you'll provide a [Cognitive Services multi-service key](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#get-the-keys-for-your-resource) and the region, and it will work for both services.
+<sup>3</sup> This tutorial uses Azure Forms Recognizer and Azure Translator. In the instructions below, you'll provide a [Cognitive Services multi-service key](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#get-the-keys-for-your-resource) and the region, and it will work for both services.
-<sup>4</sup> In this walkthrough, Azure Databricks provides the computing platform. You could also use Azure Synapse Analytics or any other computing platform supported by `synapseml`. The Azure Databricks article listed in the prerequisites includes multiple steps. For this walkthrough, follow only the instructions in "Create a workspace".
+<sup>4</sup> In this tutorial, Azure Databricks provides the computing platform. You could also use Azure Synapse Analytics or any other computing platform supported by `synapseml`. The Azure Databricks article listed in the prerequisites includes multiple steps. For this tutorial, follow only the instructions in "Create a workspace".
> [!NOTE]
-> All of the above Azure resources support security features in the Microsoft Identity platform. For simplicity, this walkthrough assumes key-based authentication, using endpoints and keys copied from the portal pages of each service. If you implement this workflow in a production environment, or share the solution with others, remember to replace hard-coded keys with integrated security or encrypted keys.
+> All of the above Azure resources support security features in the Microsoft Identity platform. For simplicity, this tutorial assumes key-based authentication, using endpoints and keys copied from the portal pages of each service. If you implement this workflow in a production environment, or share the solution with others, remember to replace hard-coded keys with integrated security or encrypted keys.
-## Create a Spark cluster and notebook
+## 1 - Create a Spark cluster and notebook
In this section, you'll create a cluster, install the `synapseml` library, and create a notebook to run the code.
In this section, you'll create a cluster, install the `synapseml` library, and c
:::image type="content" source="media/search-synapseml-cognitive-services/create-seven-cells.png" alt-text="Screenshot of the notebook with placeholder cells." border="true":::
-## Set up dependencies
+## 2 - Set up dependencies
Paste the following code into the first cell of your notebook. Replace the placeholders with endpoints and access keys for each resource. No other modifications are required, so run the code when you're ready.
-This code imports packages and sets up access to the Azure resources used in this workflow.
+This code imports multiple packages and sets up access to the Azure resources used in this workflow.
```python import os
search_key = "placeholder-search-service-api-key"
search_index = "placeholder-search-index-name" ```
-## Load data into Spark
+## 3 - Load data into Spark
Paste the following code into the second cell. No modifications are required, so run the code when you're ready.
df2 = (spark.read.format("binaryFile")
display(df2) ```
-## Add form recognition
+## 4 - Add form recognition
Paste the following code into the third cell. No modifications are required, so run the code when you're ready.
The output from this step should look similar to the next screenshot. Notice how
:::image type="content" source="media/search-synapseml-cognitive-services/analyze-forms-output.png" alt-text="Screenshot of the AnalyzeInvoices output." border="true":::
-## Restructure form recognition output
+## 5 - Restructure form recognition output
Paste the following code into the fourth cell and run it. No modifications are required.
Notice how this transformation recasts the nested fields into a table, which ena
:::image type="content" source="media/search-synapseml-cognitive-services/form-ontology-learner-output.png" alt-text="Screenshot of the FormOntologyLearner output." border="true":::
-## Add translations
+## 6 - Add translations
Paste the following code into the fifth cell. No modifications are required, so run the code when you're ready.
display(translated_df)
> > :::image type="content" source="media/search-synapseml-cognitive-services/translated-strings.png" alt-text="Screenshot of table output, showing the Translations column." border="true":::
-## Add a search index with AzureSearchWriter
+## 7 - Add a search index with AzureSearchWriter
Paste the following code in the sixth cell and then run it. No modifications are required.
You can check the search service pages in Azure portal to explore the index defi
<!-- > [!NOTE] > If you can't use default search index, you can provide an external custom definition in JSON, passing its URI as a string in the "indexJson" property. Generate the default index first so that you know which fields to specify, and then follow with customized properties if you need specific analyzers, for example. -->
-## Query the index
+## 8 - Query the index
Paste the following code into the seventh cell and then run it. No modifications are required, except that you might want to vary the syntax or try more examples to further explore your content:
You can find and manage resources in the portal, using the **All resources** or
## Next steps
-In this walkthrough, you learned about the [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch) transformer in SynapseML, which is a new way of creating and loading search indexes in Azure Cognitive Search. The transformer takes structured JSON as an input. The FormOntologyLearner can provide the necessary structure for output produced by the Forms Recognizer transformers in SynapseML.
+In this tutorial, you learned about the [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#azuresearch) transformer in SynapseML, which is a new way of creating and loading search indexes in Azure Cognitive Search. The transformer takes structured JSON as an input. The FormOntologyLearner can provide the necessary structure for output produced by the Forms Recognizer transformers in SynapseML.
As a next step, review the other SynapseML tutorials that produce transformed content you might want to explore through Azure Cognitive Search:
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md
A [semantic answer](semantic-answers.md) will also be returned if you specified
:::image type="content" source="media/semantic-search-overview/semantic-vector-representation.png" alt-text="Vector representation for context" border="true":::
-1. The @search.rerankerScore is assigned to each document based on the semantic relevance of the caption.
+1. The @search.rerankerScore is assigned to each document based on the semantic relevance of the caption. Scores range from 4 to 0 (high to low), where a higher score indicates a stronger match.
1. After all documents are scored, they're listed in descending order by score and included in the query response payload. The payload includes answers, plain text and highlighted captions, and any fields that you marked as retrievable or specified in a select clause.
service-bus-messaging Enable Partitions Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-premium.md
Service Bus partitions enable queues and topics, or messaging entities, to be pa
> Some limitations may be encountered during public preview, which will be resolved before going into GA. > - It is currently not possible to use JMS on partitioned entities. > - Metrics are currently only available on an aggregated namespace level, not for individual partitions.
-> - This feature is rolling out during Ignite 2022, and will initially be available in East US and South Central US, with more regions following later.
+> - This feature is rolling out during Ignite 2022, and will initially be available in East US and North Europe, with more regions following later.
## Use Azure portal When creating a **namespace** in the Azure portal, set the **Partitioning** to **Enabled** and choose the number of partitions, as shown in the following image.
service-bus-messaging Service Bus Dead Letter Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dead-letter-queues.md
If you enable dead-lettering on filter evaluation exceptions, any errors that oc
## Application-level dead-lettering In addition to the system-provided dead-lettering features, applications can use the DLQ to explicitly reject unacceptable messages. They can include messages that can't be properly processed because of any sort of system issue, messages that hold malformed payloads, or messages that fail authentication when some message-level security scheme is used.
+You can do this by calling the [QueueClient.DeadLetterAsync(Guid lockToken, string deadLetterReason, string deadLetterErrorDescription)](/dotnet/api/microsoft.servicebus.messaging.queueclient.deadletterasync#microsoft-servicebus-messaging-queueclient-deadletterasync(system-guid-system-string-system-string)) method.
+
+It's recommended to include the exception type in the DeadLetterReason and the exception's StackTrace in the DeadLetterDescription, because this makes it easier to troubleshoot the cause of the problem that resulted in messages being dead-lettered. Be aware that this may cause some messages to exceed [the 256-KB quota limit for the Standard tier of Azure Service Bus](/azure/service-bus-messaging/service-bus-quotas), which is another reason to use the Premium tier for production environments.
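The article links the .NET `QueueClient` API. As a rough, non-authoritative sketch of the same pattern using the Azure Service Bus client library for Java (`azure-messaging-servicebus`), where the connection string and queue name are placeholders:

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.models.DeadLetterOptions;
import java.util.Arrays;

public class DeadLetterExample {
    public static void main(String[] args) {
        // Placeholder connection string and queue name.
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                .connectionString("<namespace-connection-string>")
                .receiver()
                .queueName("<queue-name>")
                .buildClient();

        for (ServiceBusReceivedMessage message : receiver.receiveMessages(10)) {
            try {
                processMessage(message); // Hypothetical application-specific processing.
                receiver.complete(message);
            } catch (Exception e) {
                // Record the exception type and stack trace so the dead-lettered
                // message carries enough context for troubleshooting.
                DeadLetterOptions options = new DeadLetterOptions()
                        .setDeadLetterReason(e.getClass().getName())
                        .setDeadLetterErrorDescription(Arrays.toString(e.getStackTrace()));
                receiver.deadLetter(message, options);
            }
        }
        receiver.close();
    }

    private static void processMessage(ServiceBusReceivedMessage message) {
        // Placeholder validation: treat an empty body as an unacceptable message.
        if (message.getBody().toBytes().length == 0) {
            throw new IllegalArgumentException("Message payload is empty or malformed.");
        }
    }
}
```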
+ ## Dead-lettering in ForwardTo or SendVia scenarios Messages will be sent to the transfer dead-letter queue under the following conditions:
site-recovery Vmware Azure Architecture Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture-modernized.md
If you're using a URL-based firewall proxy to control outbound connectivity, all
>A high recovery point retention period may increase storage costs, because more recovery points need to be saved.
-2. Traffic replicates to Azure storage public endpoints over the internet. Alternately, you can use Azure ExpressRoute with [Microsoft peering](../expressroute/expressroute-circuit-peerings.md#microsoftpeering). Replicating traffic over a site-to-site virtual private network (VPN) from an on-premises site to Azure isn't supported.
+2. Traffic replicates to Azure storage public endpoints over the internet. Alternatively, you can use Azure ExpressRoute with [Microsoft peering](../expressroute/expressroute-circuit-peerings.md#microsoftpeering). Replicating traffic over a site-to-site virtual private network (VPN) from an on-premises site to Azure is only supported when using [private endpoints](../private-link/private-endpoint-overview.md).
3. Initial replication operation ensures that entire data on the machine at the time of enable replication is sent to Azure. After initial replication finishes, replication of delta changes to Azure begins. Tracked changes for a machine are sent to the process server. 4. Communication happens as follows:
static-web-apps Custom Domain External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/custom-domain-external.md
Title: Set up a custom domain with external providers in Azure Static Web Apps
-description: Use an external provider to manage your custom domain in Azure Static Web Apps
+description: Use an external provider to manage your custom domain in Azure Static Web Apps.
Previously updated : 02/10/2022 Last updated : 10/13/2022 # Set up a custom domain in Azure Static Web Apps
-By default, Azure Static Web Apps provides an auto-generated domain name for your website, but you can point a custom domain to your site. Free SSL/TLS certificates are automatically created for the auto-generated domain name and any custom domains you may add.
+By default, Azure Static Web Apps provides an auto-generated domain name for your website, but you can point a custom domain to your site. Free SSL/TLS certificates automatically get created for the auto-generated domain name and any custom domains that you might add. This article shows how to configure your domain name with the `www` subdomain, using an external provider.
-## Preparation
+> [!NOTE]
+> Static Web Apps doesn't support setting up a custom domain with a private DNS server hosted on-premises. Consider using an [Azure Private DNS zone](../dns/private-dns-privatednszone.md) instead.
+## Prerequisites
-Before you begin, consider how you want to support your apex domain. Domain names without a subdomain are known as apex, root domains. For example, the domain `www.example.com` is the `www` subdomain joined with the `example.com` apex domain.
+- Consider how you want to support your apex domain. Domain names without a subdomain are known as apex root domains. For example, the domain `www.example.com` is the `www` subdomain joined with the `example.com` apex domain.
-Setting up an apex domain is a common scenario to configure once your domain name is set up. Creating an apex domain is achieved by configuring an `ALIAS` or `ANAME` record or through `CNAME` flattening. Some domain registrars like GoDaddy and Google don't support these DNS records. If your domain registrar doesn't support the all the DNS records you need, consider using [Azure DNS to configure your domain](custom-domain-azure-dns.md).
+- You create an apex domain by configuring an `ALIAS` or `ANAME` record, or by using `CNAME` flattening. Some domain registrars, like GoDaddy and Google, don't support these DNS records. If your domain registrar doesn't support all the DNS records you need, consider using [Azure DNS to configure your domain](custom-domain-azure-dns.md).
> [!NOTE]
-> If your domain registrar doesn't support specialized DNS records and you don't want to use Azure DNS, you can forward your apex domain to the `www` subdomain. Refer to [Set up an apex domain in Azure Static Web Apps](apex-domain-external.md) for details.
-
-This guide demonstrates how to configure your domain name with the `www` subdomain.
+> If your domain registrar doesn't support specialized DNS records and you don't want to use Azure DNS, you can forward your apex domain to the `www` subdomain. For more information, see [Set up an apex domain in Azure Static Web Apps](apex-domain-external.md).
-## Walkthrough video
+## Watch the video
> [!VIDEO https://learn.microsoft.com/Shows/5-Things/Configuring-a-custom-domain-with-Azure-Static-Web-Apps/player?format=ny]
-## Get static web app URL
+## Get your static web app URL
1. Go to the [Azure portal](https://portal.azure.com).
This guide demonstrates how to configure your domain name with the `www` subdoma
## Create a CNAME record on your domain registrar account
-Domain registrars are the services that allow you to purchase and manage domain names. Common providers include GoDaddy, Namecheap, Google, Tucows, and the like.
+Domain registrars are the services you can use to purchase and manage domain names. Common providers include GoDaddy, Namecheap, Google, Tucows, and the like.
1. Open a new browser tab and sign in to your domain registrar account.
Domain registrars are the services that allow you to purchase and manage domain
1. Return to your static web app in the Azure portal.
-1. Under *Settings*, select **Custom domains**.
-
-2. Select **+ Add**.
+1. Under *Settings*, select **Custom domains** > **+ Add**. Select **Custom domain on other DNS**.
-3. In the *Enter domain* tab, enter your domain name prefixed with **www**.
+1. Select **+ Add**.
- For instance, if your domain name is `example.com`, enter `www.example.com` into this box.
+1. In the *Enter domain* tab, enter your domain name prefixed with **www**, and then select **Next**.
-4. Select **Next**.
+ For instance, if your domain name is `example.com`, enter `www.example.com`.
+ :::image type="content" source="media/custom-domain/add-domain.png" alt-text="Screenshot showing sequence of steps in add custom domain form.":::
-5. In the *Validate + Configure* tab, enter the following values.
+1. In the *Validate + Configure* tab, enter the following values.
| Setting | Value | ||| | Domain name | This value should match the domain name you entered in the previous step (with the `www` subdomain). | | Hostname record type | Select **CNAME**. |
-6. Select **Add**.
+1. Select **Add**.
- Your `CNAME` record is being created and the DNS settings are being updated. Since DNS settings need to propagate, this process can take up to an hour or longer to complete.
+ Azure creates your `CNAME` record and updates the DNS settings. Since DNS settings need to propagate, this process can take up to an hour or longer to complete.
-7. Once the domain settings are in effect, open a new browser tab and go to your domain with the `www` subdomain.
+1. When the update completes, open a new browser tab and go to your domain with the `www` subdomain.
- After the DNS records are updated, you should see your static web app in the browser. Also, inspect the location to verify that your site is served securely using `https`.
+ You should see your static web app in the browser. Also, inspect the location to verify that your site is served securely using `https`.
## Next steps
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
Title: "Quickstart: Azure Blob Storage library v12 - Java"
-description: In this quickstart, you learn how to use the Azure Blob Storage client library version 12 for Java to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
+ Title: "Quickstart: Azure Blob Storage library - Java"
+description: In this quickstart, you learn how to use the Azure Blob Storage client library for Java to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
Previously updated : 12/01/2020 Last updated : 10/07/2022 ms.devlang: java
-# Quickstart: Manage blobs with Java v12 SDK
+# Quickstart: Azure Blob Storage client library for Java
-In this quickstart, you learn to manage blobs by using Java. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and list blobs, and you'll create and delete containers.
+Get started with the Azure Blob Storage client library for Java to manage blobs and containers. Follow steps to install the package and try out example code for basic tasks.
-Additional resources:
--- [API reference documentation](/java/api/overview/azure/storage-blob-readme)-- [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-blob)-- [Package (Maven)](https://mvnrepository.com/artifact/com.azure/azure-storage-blob)-- [Samples](../common/storage-samples-java.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
+[API reference documentation](/jav?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#blob-samples)
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An Azure Storage account. [Create a storage account](../common/storage-account-create.md).
+- Azure account with an active subscription - [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- Azure Storage account - [create a storage account](../common/storage-account-create.md).
- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above. - [Apache Maven](https://maven.apache.org/download.cgi). ## Setting up
-This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for Java.
+This section walks you through preparing a project to work with the Azure Blob Storage client library for Java.
### Create the project
-Create a Java application named *blob-quickstart-v12*.
+Create a Java application named *blob-quickstart*.
-1. In a console window (such as cmd, PowerShell, or Bash), use Maven to create a new console app with the name *blob-quickstart-v12*. Type the following **mvn** command to create a "Hello world!" Java project.
+1. In a console window (such as PowerShell or Bash), use Maven to create a new console app with the name *blob-quickstart*. Type the following **mvn** command to create a "Hello world!" Java project.
# [PowerShell](#tab/powershell)
Create a Java application named *blob-quickstart-v12*.
mvn archetype:generate ` --define interactiveMode=n ` --define groupId=com.blobs.quickstart `
- --define artifactId=blob-quickstart-v12 `
+ --define artifactId=blob-quickstart `
--define archetypeArtifactId=maven-archetype-quickstart ` --define archetypeVersion=1.4 ```
Create a Java application named *blob-quickstart-v12*.
mvn archetype:generate \ --define interactiveMode=n \ --define groupId=com.blobs.quickstart \
- --define artifactId=blob-quickstart-v12 \
+ --define artifactId=blob-quickstart \
--define archetypeArtifactId=maven-archetype-quickstart \ --define archetypeVersion=1.4 ```
Create a Java application named *blob-quickstart-v12*.
[INFO] Using following parameters for creating project from Archetype: maven-archetype-quickstart:1.4 [INFO] - [INFO] Parameter: groupId, Value: com.blobs.quickstart
- [INFO] Parameter: artifactId, Value: blob-quickstart-v12
+ [INFO] Parameter: artifactId, Value: blob-quickstart
[INFO] Parameter: version, Value: 1.0-SNAPSHOT [INFO] Parameter: package, Value: com.blobs.quickstart [INFO] Parameter: packageInPathFormat, Value: com/blobs/quickstart [INFO] Parameter: version, Value: 1.0-SNAPSHOT [INFO] Parameter: package, Value: com.blobs.quickstart [INFO] Parameter: groupId, Value: com.blobs.quickstart
- [INFO] Parameter: artifactId, Value: blob-quickstart-v12
- [INFO] Project created from Archetype in dir: C:\QuickStarts\blob-quickstart-v12
+ [INFO] Parameter: artifactId, Value: blob-quickstart
+ [INFO] Project created from Archetype in dir: C:\QuickStarts\blob-quickstart
[INFO] [INFO] BUILD SUCCESS [INFO]
Create a Java application named *blob-quickstart-v12*.
[INFO] ```
-1. Switch to the newly created *blob-quickstart-v12* folder.
+1. Switch to the newly created *blob-quickstart* folder.
```console
- cd blob-quickstart-v12
+ cd blob-quickstart
```
-1. In side the *blob-quickstart-v12* directory, create another directory called *data*. This is where the blob data files will be created and stored.
+1. Inside the *blob-quickstart* directory, create another directory called *data*. This folder is where the blob data files will be created and stored.
```console mkdir data ```
-### Install the package
+### Install the packages
+
+Open the `pom.xml` file in your text editor.
-Open the *pom.xml* file in your text editor. Add the following dependency element to the group of dependencies.
+Add **azure-sdk-bom** to take a dependency on the latest version of the library. In the following snippet, replace the `{bom_version_to_target}` placeholder with the version number. Using **azure-sdk-bom** keeps you from having to specify the version of each individual dependency. To learn more about the BOM, see the [Azure SDK BOM README](https://github.com/Azure/azure-sdk-for-jav).
+
+```xml
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-sdk-bom</artifactId>
+ <version>{bom_version_to_target}</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
+```
+
+Then add the following dependency elements to the group of dependencies. The **azure-identity** dependency is needed for passwordless connections to Azure services.
```xml <dependency> <groupId>com.azure</groupId> <artifactId>azure-storage-blob</artifactId>
- <version>12.13.0</version>
+</dependency>
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
</dependency> ``` ### Set up the app framework
-From the project directory:
+From the project directory, follow these steps to create the basic structure of the app:
1. Navigate to the */src/main/java/com/blobs/quickstart* directory
-1. Open the *App.java* file in your editor
-1. Delete the `System.out.println("Hello world!");` statement
-1. Add `import` directives
+1. Open the `App.java` file in your editor
+1. Delete the line `System.out.println("Hello world!");`
+1. Add the necessary `import` directives
-Here's the code:
+The code should resemble this framework:
```java package com.blobs.quickstart; /**
- * Azure blob storage v12 SDK quickstart
+ * Azure Blob Storage quickstart
*/
+import com.azure.identity.*;
import com.azure.storage.blob.*; import com.azure.storage.blob.models.*; import java.io.*; public class App {
- public static void main( String[] args ) throws IOException
+ public static void main(String[] args) throws IOException
{
+ // Quickstart code goes here
} } ``` - ## Object model
-Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
+Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data doesn't adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
- The storage account - A container in the storage account
Use the following Java classes to interact with these resources:
## Code examples
-These example code snippets show you how to perform the following with the Azure Blob Storage client library for Java:
+These example code snippets show you how to perform the following actions with the Azure Blob Storage client library for Java:
-- [Get the connection string](#get-the-connection-string)
+- [Authenticate the client](#authenticate-the-client)
- [Create a container](#create-a-container) - [Upload blobs to a container](#upload-blobs-to-a-container) - [List the blobs in a container](#list-the-blobs-in-a-container) - [Download blobs](#download-blobs) - [Delete a container](#delete-a-container)
+
+> [!IMPORTANT]
+> Make sure you have the correct dependencies in `pom.xml` and the necessary `import` directives for the code samples to work, as described in the [setting up](#setting-up) section.
-### Get the connection string
+### Authenticate the client
-The code below retrieves the connection string for the storage account from the environment variable created in the [Configure your storage connection string](#configure-your-storage-connection-string) section.
+Application requests to Azure Blob Storage must be authorized. Using the `DefaultAzureCredential` class provided by the **azure-identity** client library is the recommended approach for implementing passwordless connections to Azure services in your code, including Blob Storage.
-Add this code inside the `Main` method:
+You can also authorize requests to Azure Blob Storage by using the account access key. However, this approach should be used with caution. Developers must be diligent to never expose the access key in an insecure location. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` offers improved management and security benefits over the account key to allow passwordless authentication. Both options are demonstrated in the following example.
-```java
-System.out.println("Azure Blob Storage v12 - Java quickstart sample\n");
-
-// Retrieve the connection string for use with the application. The storage
-// connection string is stored in an environment variable on the machine
-// running the application called AZURE_STORAGE_CONNECTION_STRING. If the environment variable
-// is created after the application is launched in a console or with
-// Visual Studio, the shell or application needs to be closed and reloaded
-// to take the environment variable into account.
-String connectStr = System.getenv("AZURE_STORAGE_CONNECTION_STRING");
-```
+### [Passwordless (Recommended)](#tab/managed-identity)
-### Create a container
+`DefaultAzureCredential` is a class provided by the Azure Identity client library for Java. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
-Decide on a name for the new container. The code below appends a UUID value to the container name to ensure that it is unique.
+The order and locations in which `DefaultAzureCredential` looks for credentials can be found in the [Azure Identity library overview](/java/api/overview/azure/identity-readme#defaultazurecredential).
-> [!IMPORTANT]
-> Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-Next, create an instance of the [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) class, then call the [create](/java/api/com.azure.storage.blob.blobcontainerclient.create) method to actually create the container in your storage account.
+For example, your app can authenticate using your Visual Studio Code sign-in credentials when developing locally. Your app can then use a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) once it has been deployed to Azure. No code changes are required for this transition.
-Add this code to the end of the `Main` method:
+#### Assign roles to your Azure AD user account
-```java
-// Create a BlobServiceClient object which will be used to create a container client
-BlobServiceClient blobServiceClient = new BlobServiceClientBuilder().connectionString(connectStr).buildClient();
-//Create a unique name for the container
-String containerName = "quickstartblobs" + java.util.UUID.randomUUID();
+#### Sign in and connect your app code to Azure using DefaultAzureCredential
-// Create the container and return a container client object
-BlobContainerClient containerClient = blobServiceClient.createBlobContainer(containerName);
-```
+You can authorize access to data in your storage account using the following steps:
-### Upload blobs to a container
+1. Make sure you're authenticated with the same Azure AD account you assigned the role to on your storage account. You can authenticate via the Azure CLI, Visual Studio Code, or Azure PowerShell.
-The following code snippet:
+ #### [Azure CLI](#tab/sign-in-azure-cli)
-1. Creates a text file in the local *data* directory.
-1. Gets a reference to a [BlobClient](/java/api/com.azure.storage.blob.blobclient) object by calling the [getBlobClient](/java/api/com.azure.storage.blob.blobcontainerclient.getblobclient) method on the container from the [Create a container](#create-a-container) section.
-1. Uploads the local text file to the blob by calling the [uploadFromFile](/java/api/com.azure.storage.blob.blobclient.uploadfromfile) method. This method creates the blob if it doesn't already exist, but will not overwrite it if it does.
+ Sign-in to Azure through the Azure CLI using the following command:
-Add this code to the end of the `Main` method:
+ ```azurecli
+ az login
+ ```
-```java
-// Create a local file in the ./data/ directory for uploading and downloading
-String localPath = "./data/";
-String fileName = "quickstart" + java.util.UUID.randomUUID() + ".txt";
-File localFile = new File(localPath + fileName);
+ #### [Visual Studio Code](#tab/sign-in-visual-studio-code)
-// Write text to the file
-FileWriter writer = new FileWriter(localPath + fileName, true);
-writer.write("Hello, World!");
-writer.close();
+ You'll need to [install the Azure CLI](/cli/azure/install-azure-cli) to work with `DefaultAzureCredential` through Visual Studio Code.
-// Get a reference to a blob
-BlobClient blobClient = containerClient.getBlobClient(fileName);
+ On the main menu of Visual Studio Code, navigate to **Terminal > New Terminal**.
-System.out.println("\nUploading to Blob storage as blob:\n\t" + blobClient.getBlobUrl());
+ Sign-in to Azure through the Azure CLI using the following command:
-// Upload the blob
-blobClient.uploadFromFile(localPath + fileName);
+ ```azurecli
+ az login
+ ```
+
+ #### [PowerShell](#tab/sign-in-powershell)
+
+ Sign-in to Azure using PowerShell via the following command:
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+2. To use `DefaultAzureCredential`, make sure that the **azure-identity** dependency is added in `pom.xml`:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ </dependency>
+ ```
+
+3. Add this code to the `Main` method. When the code runs on your local workstation, it authenticates to Azure by using the developer credentials of the prioritized tool you're signed in to, such as the Azure CLI or Visual Studio Code. A sketch of this client setup follows these steps.
+
+ :::code language="java" source="~/azure-storage-snippets/blobs/quickstarts/Java/blob-quickstart/src/main/java/com/blobs/quickstart/App.java" id="Snippet_CreateServiceClientDAC":::
+
+4. Make sure to update the storage account name in the URI of your `BlobServiceClient`. The storage account name can be found on the overview page of the Azure portal.
+
+ :::image type="content" source="./media/storage-quickstart-blobs-java/storage-account-name.png" alt-text="A screenshot showing how to find the storage account name.":::
+
+ > [!NOTE]
+ > When deployed to Azure, this same code can be used to authorize requests to Azure Storage from an application running in Azure. However, you'll need to enable managed identity on your app in Azure. Then configure your storage account to allow that managed identity to connect. For detailed instructions on configuring this connection between Azure services, see the [Auth from Azure-hosted apps](/dotnet/azure/sdk/authentication-azure-hosted-apps) tutorial.
+
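For reference, a minimal sketch of this kind of client setup, assuming the imports shown in the app framework; the `<storage-account-name>` placeholder and the `blobServiceClient` variable name are illustrative:

```java
// Build a BlobServiceClient that authenticates with DefaultAzureCredential.
// Replace <storage-account-name> with the name of your storage account.
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
        .endpoint("https://<storage-account-name>.blob.core.windows.net/")
        .credential(new DefaultAzureCredentialBuilder().build())
        .buildClient();
```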
+### [Connection String](#tab/connection-string)
+
+A connection string includes the storage account access key and uses it to authorize requests. Always be careful to never expose the keys in an insecure location.
+
+> [!NOTE]
+> If you plan to use connection strings, you'll need permissions for the following Azure RBAC action: [Microsoft.Storage/storageAccounts/listkeys/action](/azure/role-based-access-control/resource-provider-operations#microsoftstorage). The least privilege built-in role with permissions for this action is [Storage Account Key Operator Service Role](/azure/role-based-access-control/built-in-roles#storage-account-key-operator-service-role), but any role which includes this action will work.
++
+#### Configure your storage connection string
+
+After you copy the connection string, write it to a new environment variable on the local machine running the application. To set the environment variable, open a console window, and follow the instructions for your operating system. Replace `<yourconnectionstring>` with your actual connection string.
+
+**Windows**:
+
+```cmd
+setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"
```
-### List the blobs in a container
+After you add the environment variable in Windows, you must start a new instance of the command window.
-List the blobs in the container by calling the [listBlobs](/java/api/com.azure.storage.blob.blobcontainerclient.listblobs) method. In this case, only one blob has been added to the container, so the listing operation returns just that one blob.
+**Linux**:
+
+```bash
+export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
+```
+
+The code below retrieves the connection string for the storage account from the environment variable created earlier, and uses the connection string to construct a service client object.
Add this code to the end of the `Main` method: ```java
-System.out.println("\nListing blobs...");
+// Retrieve the connection string for use with the application.
+String connectStr = System.getenv("AZURE_STORAGE_CONNECTION_STRING");
+
+// Create a BlobServiceClient object using a connection string
+BlobServiceClient client = new BlobServiceClientBuilder()
+ .connectionString(connectStr)
+ .buildClient();
-// List the blob(s) in the container.
-for (BlobItem blobItem : containerClient.listBlobs()) {
- System.out.println("\t" + blobItem.getName());
-}
```
-### Download blobs
+> [!IMPORTANT]
+> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
-Download the previously created blob by calling the [downloadToFile](/java/api/com.azure.storage.blob.specialized.blobclientbase.downloadtofile) method. The example code adds a suffix of "DOWNLOAD" to the file name so that you can see both files in local file system.
++
+### Create a container
+
+Decide on a name for the new container. The code below appends a UUID value to the container name to ensure that it's unique.
+
+> [!IMPORTANT]
+> Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
+
+Next, create an instance of the [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) class, then call the [create](/java/api/com.azure.storage.blob.blobcontainerclient.create) method to actually create the container in your storage account.
Add this code to the end of the `Main` method:
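A minimal sketch of this step, assuming a `BlobServiceClient` object named `blobServiceClient` from the [Authenticate the client](#authenticate-the-client) section:

```java
// Create a unique, lowercase name for the container
String containerName = "quickstartblobs" + java.util.UUID.randomUUID();

// Create the container and get a client object for it
BlobContainerClient containerClient = blobServiceClient.createBlobContainer(containerName);
```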
-```java
-// Download the blob to a local file
-// Append the string "DOWNLOAD" before the .txt extension so that you can see both files.
-String downloadFileName = fileName.replace(".txt", "DOWNLOAD.txt");
-File downloadedFile = new File(localPath + downloadFileName);
-System.out.println("\nDownloading blob to\n\t " + localPath + downloadFileName);
+### Upload blobs to a container
-blobClient.downloadToFile(localPath + downloadFileName);
-```
+Add this code to the end of the `Main` method:
-### Delete a container
-The following code cleans up the resources the app created by removing the entire container using the [delete](/java/api/com.azure.storage.blob.blobcontainerclient.delete) method. It also deletes the local files created by the app.
+The code snippet completes the following steps, as sketched after this list:
-The app pauses for user input by calling `System.console().readLine()` before it deletes the blob, container, and local files. This is a good chance to verify that the resources were created correctly, before they are deleted.
+1. Creates a text file in the local *data* directory.
+1. Gets a reference to a [BlobClient](/java/api/com.azure.storage.blob.blobclient) object by calling the [getBlobClient](/java/api/com.azure.storage.blob.blobcontainerclient.getblobclient) method on the container from the [Create a container](#create-a-container) section.
+1. Uploads the local text file to the blob by calling the [uploadFromFile](/java/api/com.azure.storage.blob.blobclient.uploadfromfile) method. This method creates the blob if it doesn't already exist, but won't overwrite it if it does.
+
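A minimal sketch of these steps, assuming the `containerClient` object from the [Create a container](#create-a-container) section:

```java
// Create a local file in the ./data/ directory for uploading and downloading
String localPath = "./data/";
String fileName = "quickstart" + java.util.UUID.randomUUID() + ".txt";

// Write text to the file
FileWriter writer = new FileWriter(localPath + fileName, true);
writer.write("Hello, World!");
writer.close();

// Get a reference to a blob in the container
BlobClient blobClient = containerClient.getBlobClient(fileName);

System.out.println("\nUploading to Blob storage as blob:\n\t" + blobClient.getBlobUrl());

// Upload the local file; uploadFromFile creates the blob if it doesn't already exist
blobClient.uploadFromFile(localPath + fileName);
```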
+### List the blobs in a container
+
+List the blobs in the container by calling the [listBlobs](/java/api/com.azure.storage.blob.blobcontainerclient.listblobs) method. In this case, only one blob has been added to the container, so the listing operation returns just that one blob.
Add this code to the end of the `Main` method:
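A minimal sketch, assuming the `containerClient` object created earlier:

```java
System.out.println("\nListing blobs...");

// List the blob(s) in the container
for (BlobItem blobItem : containerClient.listBlobs()) {
    System.out.println("\t" + blobItem.getName());
}
```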
-```java
-// Clean up
-System.out.println("\nPress the Enter key to begin clean up");
-System.console().readLine();
-System.out.println("Deleting blob container...");
-containerClient.delete();
+### Download blobs
-System.out.println("Deleting the local source and downloaded files...");
-localFile.delete();
-downloadedFile.delete();
+Download the previously created blob by calling the [downloadToFile](/java/api/com.azure.storage.blob.specialized.blobclientbase.downloadtofile) method. The example code adds a suffix of "DOWNLOAD" to the file name so that you can see both files in the local file system.
-System.out.println("Done");
-```
+Add this code to the end of the `Main` method:
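A minimal sketch, assuming the `blobClient`, `localPath`, and `fileName` variables from the upload step:

```java
// Append "DOWNLOAD" before the .txt extension so you can see both files locally
String downloadFileName = fileName.replace(".txt", "DOWNLOAD.txt");

System.out.println("\nDownloading blob to\n\t " + localPath + downloadFileName);

// Download the blob's contents to a local file
blobClient.downloadToFile(localPath + downloadFileName);
```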
-## Run the code
-This app creates a test file in your local folder and uploads it to Blob storage. The example then lists the blobs in the container and downloads the file with a new name so that you can compare the old and new files.
+### Delete a container
-Navigate to the directory containing the *pom.xml* file and compile the project by using the following `mvn` command.
+The following code cleans up the resources the app created by removing the entire container using the [delete](/java/api/com.azure.storage.blob.blobcontainerclient.delete) method. It also deletes the local files created by the app.
-```console
-mvn compile
-```
+The app pauses for user input by calling `System.console().readLine()` before it deletes the blob, container, and local files. This is a good chance to verify that the resources were created correctly, before they're deleted.
-Then, build the package.
+Add this code to the end of the `Main` method:
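A minimal sketch, assuming the `containerClient`, `localPath`, `fileName`, and `downloadFileName` variables from the earlier steps:

```java
// Pause so you can verify the resources before they're deleted
System.out.println("\nPress the Enter key to begin clean up");
System.console().readLine();

System.out.println("Deleting blob container...");
containerClient.delete();

System.out.println("Deleting the local source and downloaded files...");
new File(localPath + fileName).delete();
new File(localPath + downloadFileName).delete();

System.out.println("Done");
```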
-```console
-mvn package
-```
+
+## Run the code
+
+This app creates a test file in your local folder and uploads it to Blob storage. The example then lists the blobs in the container and downloads the file with a new name so that you can compare the old and new files.
-Run the following `mvn` command to execute the app.
+Follow these steps to compile, package, and run the code:
-```console
-mvn exec:java -Dexec.mainClass="com.blobs.quickstart.App" -Dexec.cleanupDaemonThreads=false
-```
+1. Navigate to the directory containing the `pom.xml` file and compile the project by using the following `mvn` command:
+ ```console
+ mvn compile
+ ```
+1. Package the compiled code in its distributable format:
+ ```console
+ mvn package
+ ```
+1. Run the following `mvn` command to execute the app:
+ ```console
+ mvn exec:java -D exec.mainClass=com.blobs.quickstart.App -D exec.cleanupDaemonThreads=false
+ ```
+ To simplify the run step, you can add `exec-maven-plugin` to `pom.xml` and configure as shown below:
+ ```xml
+ <plugin>
+ <groupId>org.codehaus.mojo</groupId>
+ <artifactId>exec-maven-plugin</artifactId>
+ <version>1.4.0</version>
+ <configuration>
+ <mainClass>com.blobs.quickstart.App</mainClass>
+ <cleanupDaemonThreads>false</cleanupDaemonThreads>
+ </configuration>
+ </plugin>
+ ```
+ With this configuration, you can execute the app with the following command:
+ ```console
+ mvn exec:java
+ ```
+
-The output of the app is similar to the following example:
+The output of the app is similar to the following example (UUID values omitted for readability):
```output
-Azure Blob Storage v12 - Java quickstart sample
+Azure Blob Storage - Java quickstart sample
Uploading to Blob storage as blob:
- https://mystorageacct.blob.core.windows.net/quickstartblobsf9aa68a5-260e-47e6-bea2-2dcfcfa1fd9a/quickstarta9c3a53e-ae9d-4863-8b34-f3d807992d65.txt
+ https://mystorageacct.blob.core.windows.net/quickstartblobsUUID/quickstartUUID.txt
Listing blobs...
- quickstarta9c3a53e-ae9d-4863-8b34-f3d807992d65.txt
+ quickstartUUID.txt
Downloading blob to
- ./data/quickstarta9c3a53e-ae9d-4863-8b34-f3d807992d65DOWNLOAD.txt
+ ./data/quickstartUUIDDOWNLOAD.txt
Press the Enter key to begin clean up
Deleting the local source and downloaded files...
Done ```
-Before you begin the clean up process, check your *data* folder for the two files. You can open them and observe that they are identical.
+Before you begin the clean-up process, check your *data* folder for the two files. You can open them and observe that they're identical.
After you've verified the files, press the **Enter** key to delete the test files and finish the demo.
In this quickstart, you learned how to upload, download, and list blobs using Ja
To see Blob storage sample apps, continue to: > [!div class="nextstepaction"]
-> [Azure Blob Storage SDK v12 Java samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-blob/src/samples/java/com/azure/storage/blob)
+> [Azure Blob Storage SDK for Java samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-blob/src/samples/java/com/azure/storage/blob)
- To learn more, see the [Azure SDK for Java](https://github.com/Azure/azure-sdk-for-jav). - For tutorials, samples, quickstarts, and other documentation, visit [Azure for Java cloud developers](/azure/developer/java/).
storage Elastic San Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md
Remove-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName
# [Azure CLI](#tab/azure-cli) ```azurecli
-az elastic-san delete -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName
+az elastic-san volume delete -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName
```
Remove-AzElasticSanVolumeGroup -ResourceGroupName $resourceGroupName -ElasticSan
# [Azure CLI](#tab/azure-cli) ```azurecli
-az elastic-san delete -e $sanName -g $resourceGroupName -n $volumeGroupName
+az elastic-san volume-group delete -e $sanName -g $resourceGroupName -n $volumeGroupName
```
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
This error occurs when the Azure file share is not accessible. To troubleshoot:
1. [Verify the storage account exists.](#troubleshoot-storage-account) 2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
+3. Verify the **SMB security settings** on the storage account allow the **SMB 3.1.1** protocol version, **NTLM v2** authentication, and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
If the Azure file share was deleted, you need to create a new file share and then recreate the sync group.
synapse-analytics Get Started Analyze Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-spark.md
Previously updated : 03/24/2021 Last updated : 10/10/2022 # Analyze with Apache Spark
In this tutorial, you'll learn the basic steps to load and analyze data with Apa
## Understanding serverless Apache Spark pools
-A serverless Spark pool is a way of indicating how a user wants to work with Spark. When you start using a pool, a Spark session is created if needed. The pool controls how many Spark resources will be used by that session and how long the session will last before it automatically pauses. You pay for spark resources used during that session not for the pool itself. In this way a Spark pool lets you work with Spark, without having to worry managing clusters. This is similar to how a serverless SQL pool works.
+A serverless Spark pool is a way of indicating how a user wants to work with Spark. When you start using a pool, a Spark session is created if needed. The pool controls how many Spark resources will be used by that session and how long the session will last before it automatically pauses. You pay for Spark resources used during that session and not for the pool itself. This way, a Spark pool lets you use Apache Spark without managing clusters. This is similar to how a serverless SQL pool works.
## Analyze NYC Taxi data with a Spark pool
Data is available via the dataframe named **df**. Load it into a Spark database
``` 1. Run the cell to show the NYC Taxi data we loaded into the **nyctaxi** Spark database.
-1. Create a new code cell and enter the following code. We will analyze this data and save the results into a table called **nyctaxi.passengercountstats**.
+1. Create a new code cell and enter the following code. We'll analyze this data and save the results into a table called **nyctaxi.passengercountstats**.
```py %%pyspark
synapse-analytics On Demand Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/on-demand-workspace-overview.md
Previously updated : 01/19/2022 Last updated : 10/11/2022
-# Serverless SQL pool in Azure Synapse Analytics
+# Serverless SQL pool in Azure Synapse Analytics
Every Azure Synapse Analytics workspace comes with serverless SQL pool endpoints that you can use to query data in the [Azure Data Lake](query-data-storage.md) ([Parquet](query-data-storage.md#query-parquet-files), [Delta Lake](query-delta-lake-format.md), [delimited text](query-data-storage.md#query-csv-files) formats), [Azure Cosmos DB](query-cosmos-db-analytical-store.md?toc=%2Fazure%2Fsynapse-analytics%2Ftoc.json&bc=%2Fazure%2Fsynapse-analytics%2Fbreadcrumb%2Ftoc.json&tabs=openrowset-key), or Dataverse. Serverless SQL pool is a query service over the data in your data lake. It enables you to access your data through the following functionalities:
-
-- A familiar [T-SQL syntax](overview-features.md) to query data in place without the need to copy or load data into a specialized store. -- Integrated connectivity via the T-SQL interface that offers a wide range of business intelligence and ad-hoc querying tools, including the most popular drivers. +
+- A familiar [T-SQL syntax](overview-features.md) to query data in place without the need to copy or load data into a specialized store. To learn more, see the [T-SQL support](#t-sql-support) section.
+- Integrated connectivity via the T-SQL interface that offers a wide range of business intelligence and ad-hoc querying tools, including the most popular drivers. To learn more, see the [Client tools](#client-tools) section.
Serverless SQL pool is a distributed data processing system, built for large-scale data and computational functions. Serverless SQL pool enables you to analyze your Big Data in seconds to minutes, depending on the workload. Thanks to built-in query execution fault-tolerance, the system provides high reliability and success rates even for long-running queries involving large data sets.
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
Title: Azure Virtual Desktop for Azure Stack HCI (preview) - Azure
-description: A brief overview of Azure Virtual Desktop for Azure Stack HCI (preview).
-
+ Title: Azure Virtual Desktop for Azure Stack HCI (preview) overview
+description: Overview of Azure Virtual Desktop for Azure Stack HCI (preview).
+ Previously updated : 11/02/2021- Last updated : 10/14/2022++ # Azure Virtual Desktop for Azure Stack HCI (preview)
-> [!IMPORTANT]
-> Azure Virtual Desktop for Azure Stack HCI is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Azure Virtual Desktop for Azure Stack HCI (preview) lets you deploy Azure Virtual Desktop session hosts on your on-premises Azure Stack HCI infrastructure. You manage your session hosts from the Azure portal.
+
+## Overview
-Azure Virtual Desktop for Azure Stack HCI (preview) lets you deploy Azure Virtual Desktop session hosts to your on-premises Azure Stack HCI infrastructure. You can also use Azure Virtual Desktop for Azure Stack HCI to manage your session hosts from the Azure portal. If you already have an existing on-premises Virtual Desktop Infrastructure (VDI) deployment, Azure Virtual Desktop for Azure Stack HCI can improve your administrator and end-user experience. If you're already using Azure Virtual Desktop in the cloud, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs.
+If you already have an existing on-premises Virtual Desktop Infrastructure (VDI) deployment, Azure Virtual Desktop for Azure Stack HCI can improve your experience. If you're already using Azure Virtual Desktop in the cloud, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs.
-Azure Virtual Desktop for Azure Stack HCI is currently in public preview. Azure Stack HCI doesn't currently support certain important Azure Virtual Desktop features. Because of these limitations, we don't recommend using this feature for production workloads yet.
+Azure Virtual Desktop for Azure Stack HCI is currently in public preview. As such, it doesn't yet support certain important Azure Virtual Desktop features. Because of these limitations, we don't recommend using this feature for production workloads.
-## Key benefits
+> [!IMPORTANT]
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta.
-We've established what Azure Virtual Desktop for Azure Stack HCI is. The question remains: what can it do for you?
+> [!NOTE]
+> Azure Virtual Desktop for Azure Stack HCI is not an Azure Arc-enabled service. As such, it is not supported outside of Azure, in a multi-cloud environment, or on Azure Arc-enabled servers besides Azure Stack HCI virtual machines as described in this article.
+
+## Benefits
With Azure Virtual Desktop for Azure Stack HCI, you can:
With Azure Virtual Desktop for Azure Stack HCI, you can:
- Simplify your VDI deployment and management compared to traditional on-premises VDI solutions by using the Azure portal.
+- Achieve the best performance by using [RDP Shortpath](rdp-shortpath.md?tabs=managed-networks) for low-latency user access.
+
+- Deploy the latest fully patched images quickly and easily using [Azure Marketplace images](/azure-stack/hci/manage/virtual-machine-image-azure-marketplace?tabs=azurecli).
++
+## Supported platforms
+
+Azure Virtual Desktop for Azure Stack HCI supports the same [Remote Desktop clients](user-documentation/index.yml) as Azure Virtual Desktop, and supports the following x64 operating system images:
+
+- Windows 11 Enterprise multi-session
+- Windows 11 Enterprise
+- Windows 10 Enterprise multi-session, version 21H2
+- Windows 10 Enterprise, version 21H2
+- Windows Server 2022
+- Windows Server 2019
+ ## Pricing The following things affect how much it costs to run Azure Virtual Desktop for Azure Stack HCI:+
+ - **Infrastructure costs.** You'll pay monthly service fees for Azure Stack HCI. Learn more at [Azure Stack HCI pricing](https://azure.microsoft.com/pricing/details/azure-stack/hci/).
-- User access rights for Azure Virtual Desktop. The same licenses that grant access to Azure Virtual Desktop in the cloud also apply to Azure Virtual Desktop for Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+- **User access rights.** The same licenses that grant access to Azure Virtual Desktop in the cloud also apply to Azure Virtual Desktop for Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+
+- **Hybrid service fee.** This fee requires you to pay for each active virtual CPU (vCPU) of Azure Virtual Desktop session hosts you're running on Azure Stack HCI. This fee will become active once the preview period ends.
-- The Azure Virtual Desktop hybrid service fee. This fee requires you to pay for each active virtual CPU (vCPU) of Azure Virtual Desktop session hosts you're running on Azure Stack HCI. This fee will become active once the preview period ends.
+## Data storage
+
+Azure Virtual Desktop for Azure Stack HCI doesn't guarantee that all data is stored on-premises. You can choose to store user data on-premises by locating session host virtual machines (VMs) and associated services such as file servers on-premises. However, some customer data, diagnostic data, and service-generated data are still stored in Azure. For more information on how Azure Virtual Desktop stores different kinds of data, see [Data locations for Azure Virtual Desktop](data-locations.md).
## Known issues and limitations
-We're aware of the following issues affecting the public preview version of Azure Virtual Desktop for Azure Stack HCI:
+The following issues affect the preview version of Azure Virtual Desktop for Azure Stack HCI:
+
+- Templates may show failures in certain cases at the domain-joining step. To proceed, you can manually join the session hosts to the domain. For more information, see [VM provisioning through Azure portal on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
- Azure Stack HCI host pools don't currently support the following Azure Virtual Desktop features: - [Azure Monitor for Azure Virtual Desktop](azure-monitor.md) - [Session host scaling with Azure Automation](set-up-scaling-script.md)
- - [Autoscale (preview)](autoscale-scaling-plan.md)
- - [Start VM on connect](start-virtual-machine-connect.md)
+ - [Autoscale plan](autoscale-scaling-plan.md)
+ - [Start VM On Connect](start-virtual-machine-connect.md)
- [Multimedia redirection (preview)](multimedia-redirection.md) - [Per-user access pricing](./remote-app-streaming/licensing.md) -- The Azure Virtual Desktop tab in the Azure portal can't create new virtual machines directly on Azure Stack HCI infrastructure. Instead, admins must create on-premises virtual machines separately, then register them with an Azure Virtual Desktop host pool.--- When connecting to a Windows 10 or 11 Enterprise multi-session virtual desktop, users may see activation issues, such as a desktop watermark saying "Activate Windows," even if they have an eligible license.- - Azure Virtual Desktop for Azure Stack HCI doesn't currently support host pools containing both cloud and on-premises session hosts. Each host pool in the deployment must have only one type of host pool.
+- When connecting to a Windows 10 or Windows 11 Enterprise multi-session virtual desktop, users may see activation issues, such as a desktop watermark saying "Activate Windows", even if they have an eligible license.
+ - Session hosts on Azure Stack HCI don't support certain cloud-only Azure services. - Because Azure Stack HCI supports so many types of hardware and on-premises networking capabilities, performance and user density may vary widely compared to session hosts running in the Azure cloud. Azure Virtual Desktop's [virtual machine sizing guidelines](/windows-server/remote/remote-desktop-services/virtual-machine-recs) are broad, so you should only use them for initial performance estimates.
-If there are any issues you encounter during the preview that aren't on this list, we encourage you to report them.
- ## Next steps
-Now that youΓÇÖre familiar with Azure Virtual Desktop for Azure Stack HCI, learn how to deploy this feature at [Set up Azure Virtual Desktop for Azure Stack HCI (preview)](azure-stack-hci.md).
+[Set up Azure Virtual Desktop for Azure Stack HCI (preview)](azure-stack-hci.md).
virtual-desktop Azure Stack Hci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci.md
Title: Set up Azure Virtual Desktop for Azure Stack HCI (preview) - Azure description: How to set up Azure Virtual Desktop for Azure Stack HCI (preview).-+ Previously updated : 11/02/2021- Last updated : 10/14/2022++ # Set up Azure Virtual Desktop for Azure Stack HCI (preview)
+With Azure Virtual Desktop for Azure Stack HCI (preview), you can use Azure Virtual Desktop session hosts in your on-premises Azure Stack HCI infrastructure. For more information, see [Azure Virtual Desktop for Azure Stack HCI (preview)](azure-stack-hci-overview.md).
+ > [!IMPORTANT] > Azure Virtual Desktop for Azure Stack HCI is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-With Azure Virtual Desktop for Azure Stack HCI (preview), you can use Azure Virtual Desktop session hosts in your on-premises Azure Stack HCI infrastructure. For more information, see [Azure Virtual Desktop for Azure Stack HCI (preview)](azure-stack-hci-overview.md).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply.
-## Requirements
+## Prerequisites
In order to use Azure Virtual Desktop for Azure Stack HCI, you'll need the following things: -- An [Azure Stack HCI cluster registered with Azure](/azure-stack/hci/deploy/register-with-azure).
+- An Azure subscription for Azure Virtual Desktop session host pool creation with all required admin permissions. For more information, see [Built-in Azure RBAC roles for Azure Virtual Desktop](rbac.md).
-- An Azure subscription for Azure Virtual Desktop session host pool creation with all required admin permissions.
+- An [Azure Stack HCI cluster registered with Azure](/azure-stack/hci/deploy/register-with-azure) in the same subscription.
-- [An on-premises Active Directory (AD) synced with Azure Active Directory](/azure/architecture/reference-architectures/identity/azure-ad).
+- Azure Arc VM management should be set up on the Azure Stack HCI cluster. For more information, see [VM provisioning through Azure portal on Azure Stack HCI (preview)](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+
+- [An on-premises Active Directory (AD) synced with Azure Active Directory](/azure/architecture/reference-architectures/identity/azure-ad). The AD domain should resolve using DNS. For more information, see [Prerequisites for Azure Virtual Desktop](prerequisites.md#network).
- A stable connection to Azure from your on-premises network. - Access from your on-premises network to all the required URLs listed in Azure Virtual Desktop's [required URL list](safe-url-list.md) for virtual machines.
-## Configure Azure Virtual Desktop for Azure Stack HCI
-
-To set up Azure Virtual Desktop for Azure Stack HCI:
-
-1. Create a new host pool with no virtual machines by following the instructions in [Begin the host pool setup process](create-host-pools-azure-marketplace.md#begin-the-host-pool-setup-process). At the end of that section, come back to this article and start on step 2.
-
-2. Configure the newly created host pool to be a validation host pool by following the steps in [Define your host pool as a validation host pool](create-validation-host-pool.md#define-your-host-pool-as-a-validation-host-pool) to enable the Validation environment property.
-
-3. Follow the instructions in [Workspace information](create-host-pools-azure-marketplace.md#workspace-information) to create a workspace for yourself.
-
-4. Deploy a new virtual machine on your Azure Stack HCI infrastructure by following the instructions in [Create a new VM](/azure-stack/hci/manage/vm#create-a-new-vm). Deploy a VM with a supported OS and join it to a domain.
-
- >[!NOTE]
- >Install the Remote Desktop Session Host (RDSH) role if the VM is running a Windows Server OS.
-
-5. Enable Azure to manage the new virtual machine through Azure Arc by installing the Connected Machine agent to it. Follow the directions in [Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md) to install the Windows agent to the virtual machine.
-
-6. Add the virtual machine to the Azure Virtual Desktop host pool you created earlier by installing the [Azure Virtual Desktop Agent](agent-overview.md). After that, follow the instructions in [Register the VMs to the Azure Virtual Desktop host pool](create-host-pools-powershell.md#register-the-virtual-machines-to-the-azure-virtual-desktop-host-pool) to register the VM to the Azure Virtual Desktop service.
+- There should be at least one Windows OS image available on the cluster. For more information, see [Create Azure Stack HCI VM image using Azure Marketplace images](/azure-stack/hci/manage/virtual-machine-image-azure-marketplace?tabs=azurecli) and [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/).
-7. Follow the directions in [Create app groups and manage user assignments](manage-app-groups.md) to create an app group for testing and assign user access to it.
+## Set up and configuration
-8. Go to [the web client](./user-documentation/connect-web.md) and grant your users access to the new deployment.
+Follow the steps below for a simplified process of setting up Azure Virtual Desktop on Azure Stack HCI. This deployment is based on an Azure Resource Manager template that automates creating the host pool and workspace, creating the session hosts on the HCI cluster, joining the domain, downloading and installing the Azure Virtual Desktop agents, and then registering them to the host pool.
-## Optional configurations
+1. Sign in to the Azure portal.
-Now that you've set up Azure Virtual Desktop for Azure Stack HCI, here are a few extra things you can do depending on your deployment's needs.
+1. On the Azure portal menu or from the Home page, select **Azure Stack HCI**.
-### Create a profile container using a file share on Azure Stack HCI
+1. Select your Azure Stack HCI cluster.
-To create a profile container using a file share:
+ :::image type="content" source="media/azure-virtual-desktop-hci/azure-portal.png" alt-text="Screenshot of Azure portal." lightbox="media/azure-virtual-desktop-hci/azure-portal.png":::
-1. Deploy a file share on a single or clustered Windows Server VM deployment. The Windows Server VMs with file server role can also be colocated on the same cluster where the session host VMs are deployed.
+1. On the **Overview** page, select the **Get Started** tab.
-2. Connect to the virtual machine with the credentials you provided when creating the virtual machine.
-3. On the virtual machine, launch **Control Panel** and select **System**.
+1. Select the **Deploy** button on the **Azure Virtual Desktop** tile. The **Custom deployment** page will open.
-4. Select Computer name, select **Change settings**, and then select **Change…**.
+ :::image type="content" source="media/azure-virtual-desktop-hci/custom-template.png" alt-text="Screenshot of custom deployment template." lightbox="media/azure-virtual-desktop-hci/custom-template.png":::
-5. Select **Domain**, then enter the Active Directory domain on the virtual network.
+1. Select the correct subscription under **Project details**.
-6. Authenticate with a domain account that has privileges to domain-join machines.
+1. Select either **Create new** to create a new resource group or select an existing resource group from the drop-down menu.
-7. Follow the directions in [Prepare the VM to act as a file share](create-host-pools-user-profile.md#prepare-the-virtual-machine-to-act-as-a-file-share-for-user-profiles) to prepare your VM for deployment.
+1. Select the Azure region for the host pool that's right for you and your customers.
-8. Follow the directions in [Configure the FSLogix profile container](create-host-pools-user-profile.md#configure-the-fslogix-profile-container) to configure your profile container for use.
+1. Enter a unique name for your host pool.
-### Download supported OS images from Azure Marketplace
+1. In **Location**, enter a region where the host pool, workspace, and VMs will be created. The metadata for these objects is stored in the geography associated with the region. *Example: East US*.
-You can run any OS images that both Azure Virtual Desktop and Azure Stack HCI support on your deployment. To learn which OSes Azure Virtual Desktop supports, see [Supported VM OS images](prerequisites.md#operating-systems-and-licenses).
+ > [!NOTE]
+ > This location must match the Azure region you selected in step 8 above.
-You have two options to download an image:
+1. In **Custom Location Id**, enter the resource ID of the deployment target for creating VMs, which is associated with an Azure Stack HCI cluster.
+*Example: /subscriptions/My_subscriptionID/resourcegroups/Contoso-rg/providers/microsoft.extendedlocation/customlocations/Contoso-CL*.
-- Deploy a VM with your preferred OS image, then follow the instructions in [Download a Windows VHD from Azure](../virtual-machines/windows/download-vhd.md).-- Download a Windows Virtual Hard Disk (VHD) from Azure without deploying a VM.
+1. Enter a value for **Virtual Processor Count** (vCPU) and for **Memory GB** for your VM. Defaults are 4 vCPU and 8 GB, respectively.
-Downloading a Windows VHD without deploying a VM has several extra steps. To download a VHD from Azure without deploying a VM, you'll need to complete the instructions in the following sections in order.
+1. Enter a unique name for **Workspace Name**.
-### Requirements to download a VHD without a VM
+1. Enter local administrator credentials for **Vm Administrator Account Username** and **Vm Administrator Account Password**.
-Before you begin, make sure you're connected to Azure and are running [Azure Cloud Shell](../cloud-shell/quickstart.md) in either a command prompt or in the bash environment. You can also run CLI reference commands via the Azure CLI.
+1. Enter the **OU Path** value for domain join. *Example: OU=unit1,DC=contoso,DC=com*.
-If you're using a local installation, run the [az login](/cli/azure/reference-index#az-login) command to sign into Azure.
+1. Enter the **Domain** name to join your session hosts to the required domain.
-After that, follow any other prompts you see to finish signing in. For additional sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+1. Enter domain administrator credentials for **Domain Administrator Username** and **Domain Administrator Password** to join your session hosts to the domain. These are mandatory fields.
-If this is your first time using Azure CLI, install any required extensions by following the instructions in [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+1. Enter the number of VMs to be created for **Vm Number of Instances**. Default is 1.
-Finally, run the [az version](/cli/azure/reference-index?#az-version) command to make sure your client is up to date. If it's out of date, run the [az upgrade](/cli/azure/reference-index?#az-upgrade) command to upgrade to the latest version.
+1. Enter a prefix for the VMs for **Vm Name Prefix**.
-### Search Azure Marketplace for Azure Virtual Desktop images
+1. Enter the **Image Id** of the image to be used. This can be a custom image or an Azure Marketplace image. *Example: /subscriptions/My_subscriptionID/resourceGroups/Contoso-rg/providers/microsoft.azurestackhci/marketplacegalleryimages/Contoso-Win11image*.
-You can find the image you're looking for by using the **Search** function in Azure Marketplace in the Azure portal. To find images specifically for Azure Virtual Desktop, you can run one of the following example queries.
+1. Enter the **Virtual Network Id** of the virtual network. *Example: /subscriptions/My_subscriptionID/resourceGroups/Contoso-rg/providers/Microsoft.AzureStackHCI/virtualnetworks/Contoso-virtualnetwork*.
-If you're looking for Windows 10 multi-session, you can run a search with this criteria:
+1. Enter the **Token Expiration Time**. If left blank, the default will be the current UTC time.
-```azurecli
-az vm image list --all --publisher "microsoftwindowsdesktop" --offer "windows-10" --sku "21h1-evd-g2"
-```
+1. Enter values for **Tags**. *Example format: { "CreatedBy": "name", "Test": "Test2" }*
-This command should return the following URN:
+1. Enter the **Deployment Id**. A new GUID will be created by default.
-```output
-MicrosoftWindowsDesktop:Windows-10:21h1-evd-g2:latest
-```
+1. Select the **Validation Environment** - it's **false** by default.
-If you're looking for Windows Server 2019 datacenter, you can run the following criteria in your Azure CLI:
+> [!NOTE]
+> For more session host configurations, use the Full Configuration [(CreateHciHostpoolTemplate.json)](https://github.com/Azure/RDS-Templates/blob/master/ARM-wvd-templates/HCI/CreateHciHostpoolTemplate.json) template, which offers all the features that can be used to deploy Azure Virtual Desktop on Azure Stack HCI.
-```azurecli
-az vm image list --all --publisher "microsoftwindowsserver" --offer "WindowsServer" --sku "2019-Datacenter-gen2"
-```
+## Optional configuration
-This command should return the following URN:
+Now that you've set up Azure Virtual Desktop for Azure Stack HCI, here are a few optional things you can do depending on your deployment needs:
+
+### Add session hosts
-```output
-MicrosoftWindowsServer:windowsserver-gen2preview:2019-datacenter-gen2:latest
-```
+You can add new session hosts to an existing host pool that was created either manually or using the custom template. Use the **Quick Deploy** [(AddHciVirtualMachinesQuickDeployTemplate.json)](https://github.com/Azure/RDS-Templates/blob/master/ARM-wvd-templates/HCI/QuickDeploy/AddHciVirtualMachinesQuickDeployTemplate.json) template to get started.
->[!IMPORTANT]
->Make sure to only use generation 2 ("gen2") images. Azure Virtual Desktop for Azure Stack HCI doesn't support creating a VM with a first-generation ("gen1") image. Avoid SKUs with a "-g1" suffix.
+For information on how to deploy a custom template, see [Quickstart: Create and deploy ARM templates by using the Azure portal](/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal).
-### Create a new Azure managed disk from the image
+> [!NOTE]
+> For more session host configurations, use the **Full Configuration** [(AddHciVirtualMachinesTemplate.json)](https://github.com/Azure/RDS-Templates/blob/master/ARM-wvd-templates/HCI/AddHciVirtualMachinesTemplate.json) template, which offers all the features that can be used to deploy Azure Virtual Desktop on Azure Stack HCI. Learn more at [RDS-Templates](https://github.com/Azure/RDS-Templates/blob/master/ARM-wvd-templates/HCI/Readme.md).
-Next, you'll need to create an Azure managed disk from the image you downloaded from the Azure Marketplace.
+### Create a profile container
-To create an Azure managed disk:
+To create a profile container using a file share on Azure Stack HCI, do the following:
-1. Run the following commands in an Azure command-line prompt to set the parameters of your managed disk. Make sure to replace the items in brackets with the values relevant to your scenario.
+1. Deploy a file share on a single or clustered Windows Server VM deployment. The Windows Server VMs with the file server role can also be co-located on the same cluster where the session host VMs are deployed.
-```console
-$urn = <URN of the Marketplace image> #Example: ΓÇ£MicrosoftWindowsServer:WindowsServer:2019-Datacenter:LatestΓÇ¥
-$diskName = <disk name> #Name for new disk to be created
-$diskRG = <resource group> #Resource group that contains the new disk
-```
+1. Connect to the VM with the credentials you provided when creating the VM.
-2. Run these commands to create the disk and generate a Serial Attached SCSI (SAS) access URL.
+1. Join the VM to an Active Directory domain.
-```azurecli
-az disk create -g $diskRG -n $diskName --image-reference $urn
-$sas = az disk grant-access --duration-in-seconds 36000 --access-level Read --name $diskName --resource-group $diskRG
-$diskAccessSAS = ($sas | ConvertFrom-Json)[0].accessSas
-```
-
-### Export a VHD from the managed disk to Azure Stack HCI cluster
-
-After that, you'll need to export the VHD you created from the managed disk to your Azure Stack HCI cluster, which will let you create new VMs. You can use the following method in a regular web browser or Storage Explorer.
-
-To export the VHD:
-
-1. Open a browser and go to the SAS URL of the managed disk you generated in [Create a new Azure managed disk from the image](#create-a-new-azure-managed-disk-from-the-image). You can download the VHD image for the image you downloaded at the Azure Marketplace at this URL.
-
-2. Download the VHD image. The downloading process may take several minutes, so be patient. Make sure the image has fully downloaded before going to the next section.
-
->[!NOTE]
->If you're running azcopy, you may need to skip the md5check by running this command:
->
-> ```azurecli
-> azcopy copy ΓÇ£$sas" "destination_path_on_cluster" --check-md5 NoCheck
-> ```
-
-### Clean up the managed disk
-
-When you're done with your VHD, you'll need to free up space by deleting the managed disk.
-
-To delete the managed disk you created, run these commands:
-
-```azurecli
-az disk revoke-access --name $diskName --resource-group $diskRG
-az disk delete --name $diskName --resource-group $diskRG --yes
-```
-
-This command may take a few minutes to finish, so be patient.
-
->[!NOTE]
->Optionally, you can also convert the download VHD to a dynamic VHDx by running this command:
->
-> ```powershell
-> Convert-VHD -Path " destination_path_on_cluster\file_name.vhd" -DestinationPath " destination_path_on_cluster\file_name.vhdx" -VHDType Dynamic
-> ```
+1. Follow the instructions in [Create a profile container for a host pool using a file share](create-host-pools-user-profile.md) to prepare your VM and configure your profile container.
## Next steps
-If you need to refresh your memory about the basics or pricing information, go to [Azure Virtual Desktop for Azure Stack HCI](azure-stack-hci-overview.md).
-
-If you have additional questions, check out our [FAQ](azure-stack-hci-faq.yml).
+For an overview and pricing information, see [Azure Virtual Desktop for Azure Stack HCI](azure-stack-hci-overview.md).
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
description: Learn about ultra disks for Azure VMs
Previously updated : 10/12/2022 Last updated : 10/13/2022
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared-enable.md
description: Configure an Azure managed disk with shared disks so that you can s
Previously updated : 07/19/2022 Last updated : 10/14/2022
Shared disks support several operating systems. See the [Windows](./disks-shared
To deploy a managed disk with the shared disk feature enabled, use the new property `maxShares` and define a value greater than 1. This makes the disk shareable across multiple VMs. > [!IMPORTANT]
+> Host caching isn't supported for shared disks.
+>
> The value of `maxShares` can only be set or changed when a disk is unmounted from all VMs. See the [Disk sizes](#disk-sizes) for the allowed values for `maxShares`. # [Portal](#tab/azure-portal)
Before using the following template, replace `[parameters('dataDiskName')]`, `[r
To deploy a managed disk with the shared disk feature enabled, use the new property `maxShares` and define a value greater than 1. This makes the disk shareable across multiple VMs. > [!IMPORTANT]
+> Host caching isn't supported for shared disks.
+>
> The value of `maxShares` can only be set or changed when a disk is unmounted from all VMs. See the [Disk sizes](#disk-sizes) for the allowed values for `maxShares`. # [Portal](#tab/azure-portal)
Before using the following template, replace `[parameters('dataDiskName')]`, `[r
To share an existing disk, or update how many VMs it can mount to, set the `maxShares` parameter with either the Azure PowerShell module or Azure CLI. You can also set `maxShares` to 1, if you want to disable sharing. > [!IMPORTANT]
+> Host caching isn't supported for shared disks.
+>
> The value of `maxShares` can only be set or changed when a disk is unmounted from all VMs. See the [Disk sizes](#disk-sizes) for the allowed values for `maxShares`. > Before detaching a disk, record the LUN ID for when you re-attach it.
az disk update --name mySharedDisk --max-shares 5
Once you've deployed a shared disk with `maxShares>1`, you can mount the disk to one or more of your VMs. > [!NOTE]
+> Host caching isn't supported for shared disks.
+>
> If you are deploying an ultra disk, make sure it matches the necessary requirements. See [Using Azure ultra disks](disks-enable-ultra-ssd.md) for details. ```azurepowershell-interactive
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 07/19/2022 Last updated : 10/14/2022
virtual-machines Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows.md
New-AzVM -VM $VirtualMachine -ResourceGroupName "MyVirtualMachineResourceGroup"
## Enable encryption on a newly added data disk You can [add a new disk to a Windows VM using PowerShell](attach-disk-ps.md), or [through the Azure portal](attach-managed-disk-portal.md).
+ >[!NOTE]
+ > Newly added data disk encryption must be enabled via PowerShell or CLI only. Currently, the Azure portal does not support enabling encryption on new disks.
+ ### Enable encryption on a newly added disk with Azure PowerShell When using PowerShell to encrypt a new disk for Windows VMs, a new sequence version should be specified. The sequence version has to be unique. The script below generates a GUID for the sequence version. In some cases, a newly added data disk might be encrypted automatically by the Azure Disk Encryption extension. Auto encryption usually occurs when the VM reboots after the new disk comes online. This typically happens because "All" was specified for the volume type when disk encryption previously ran on the VM. If auto encryption occurs on a newly added data disk, we recommend running the Set-AzVMDiskEncryptionExtension cmdlet again with a new sequence version. If your new data disk is auto encrypted and you don't want it to be encrypted, decrypt all drives first, then re-encrypt with a new sequence version, specifying OS for the volume type.
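The script referenced above isn't reproduced in this digest. As a rough, hedged sketch of the idea (hypothetical VM and key vault names), generating a unique sequence version and re-running the encryption extension could look like:

```azurepowershell-interactive
# Hypothetical names - adjust to your VM and key vault.
$rgName   = 'MyVirtualMachineResourceGroup'
$vmName   = 'MySecureVM'
$keyVault = Get-AzKeyVault -VaultName 'MySecureVault' -ResourceGroupName $rgName

# A unique sequence version makes the extension pick up the newly added data disk.
$sequenceVersion = [Guid]::NewGuid()

Set-AzVMDiskEncryptionExtension -ResourceGroupName $rgName -VMName $vmName `
    -DiskEncryptionKeyVaultUrl $keyVault.VaultUri `
    -DiskEncryptionKeyVaultId $keyVault.ResourceId `
    -VolumeType 'All' `
    -SequenceVersion $sequenceVersion
```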
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
| Appliance Templates | Link | | -- | : |
-| **SAP S/4HANA 2021 FPS01, Fully-Activated Appliance** April 26 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) ||
| **SAP S/4HANA 2021 FPS02, Fully-Activated Appliance** July 19 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) |
+| **SAP S/4HANA 2021 FPS01, Fully-Activated Appliance** April 26 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) ||
| **SAP BW/4HANA 2021 including BW/4HANA Content 2.0 SP08 - Dev Edition** May 11 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=06725b24-b024-4757-860d-ac2db7b49577&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |This solution offers you an insight into SAP BW/4HANA. SAP BW/4HANA is the next generation Data Warehouse optimized for HANA. Besides the basic BW/4HANA options, the solution offers a bunch of HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. As the system is pre-configured, you can start directly implementing your scenarios. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/06725b24-b024-4757-860d-ac2db7b49577) |
-| **SAP Business One 10.0 PL02, version for SAP HANA** August 04 2020 | [Create Appliance](https://cal.sap.com/registration?sguid=371edc8c-56c6-4d21-acb4-2d734722c712&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|Trusted by over 70,000 small and midsize businesses in 170+ countries, SAP Business One is a flexible, affordable, and scalable ERP solution with the power of SAP HANA. The solution is pre-configured using a 31-day trial license and has a demo database of your choice pre-installed. See the getting started guide to learn about the scope of the solution and how to easily add new demo databases. To secure your system against the CVE-2021-44228 vulnerability, apply SAP Support Note 3131789. For more information, see the Getting Started Guide of this solution (check the "Security Aspects" chapter). | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/371edc8c-56c6-4d21-acb4-2d734722c712) |
-| **SAP Product Lifecycle Costing 4.0 SP4 Hotfix 3** August 10 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=61af97ea-be7e-4531-ae07-f1db561d0847&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|SAP Product Lifecycle Costing is a solution to calculate costs and other dimensions for new products or product related quotations in an early stage of the product lifecycle, to quickly identify cost drivers and to easily simulate and compare alternatives. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/61af97ea-be7e-4531-ae07-f1db561d0847) |
+| **SAP BusinessObjects BI Platform 4.3 SP02** October 05 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=f4626c2f-d9d8-49e0-b1ce-59371e0f8749&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|This appliance contains SAP BusinessObjects BI Platform 4.3 Support Package 2 Patch 4: (i) On the Linux instance, the BI Platform, Web Intelligence, and Live Data Connect servers are running on the default installed Tomcat; (ii) On the Windows instance, you can use the SAP BI SP2 Patch 4 version of the client tools to connect to the server: Web Intelligence Rich Client, Information Design Tool, Universe Design Tool. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4626c2f-d9d8-49e0-b1ce-59371e0f8749) |
+| **SAP Solution Manager 7.2 SP15 & Focused Solutions SP10 (Baseline)** October 03 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=65c36516-6779-46e5-9c30-61b5d145209f&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|This template contains a partly configured SAP Solution Manager 7.2 SP15 (incl. Focused Build and Focused Insights 2.0 SP10). Only the Mandatory Configuration and Focused Build configuration are performed. The system is clean and does not contain pre-defined demo scenarios. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/65c36516-6779-46e5-9c30-61b5d145209f) |
| **SAP NetWeaver 7.5 SP15 on SAP ASE** January 20 2020 | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |SAP NetWeaver 7.5 SP15 on SAP ASE | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) |
virtual-machines Hana Vm Operations Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations-storage.md
Some guiding principles in selecting your storage configuration for HANA can be
- Decide on the type of storage based on [Azure Storage types for SAP workload](./planning-guide-storage.md) and [Select a disk type](../../disks-types.md) - Keep the overall VM I/O throughput and IOPS limits in mind when sizing or deciding on a VM. Overall VM storage throughput is documented in the article [Memory optimized virtual machine sizes](../../sizes-memory.md). - When deciding on the storage configuration, try to stay below the overall throughput of the VM with your **/hana/data** volume configuration. When writing savepoints, SAP HANA can be aggressive issuing I/Os. It's easily possible to push up to the throughput limits of your **/hana/data** volume when writing a savepoint. If the disk(s) that build the **/hana/data** volume have a higher throughput than your VM allows, you could run into situations where the throughput utilized by the savepoint writing interferes with the throughput demands of the redo log writes. A situation that can impact the application throughput-- If you're considering using HANA System Replication, you need to use exactly the same type of Azure storage for **/hana/data** and **/hana/log** for all the VMs participating in the HANA System Replication configuration. For example, using Azure premium storage for **/hana/data** with one VM and Azure Ultra disk for **/hana/log** in another VM within the same HANA System replication configuration, isn't supported
+- If you're considering using HANA System Replication, the storage used for **/hana/data** on each replica must be the same, and the storage type used for **/hana/log** on each replica must be the same. For example, using Azure premium storage for **/hana/data** with one VM and Azure Ultra disk for **/hana/data** in another VM running a replica of the same HANA System Replication configuration isn't supported.
> [!IMPORTANT]
virtual-machines Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-premium-ssd-v2.md
For general considerations around stripe sizes when using LVM, HANA data volume
The major differences of Premium SSD v2 compared to the existing NetWeaver and HANA certified storage types can be listed like: - With Premium SSD v2, you pay the exact deployed capacity. Unlike with premium disk and Ultra disk, where brackets of sizes are being taken to determine the costs of capacity-- Every Premium SSD v2 storage disk comes with 3,000 IOPS and 125MBps on throughput that is included in the capacity pricing
+- Every Premium SSD v2 storage disk comes with 3,000 IOPS and 125 MBps of throughput that is included in the capacity pricing
- Extra IOPS and throughput on top of the default ones that come with each disk can be provisioned at any point in time and are charged separately-- Changes to the provisioned IOPS and throughput can be executed ones in 6 hours
+- Changes to the provisioned IOPS and throughput can be executed once every 6 hours
- Latency of Premium SSD v2 is lower than premium storage, but higher than Ultra disk. But it's submillisecond, so it passes the SAP HANA KPIs without the help of any other functionality, like Azure Write Accelerator - **Like with Ultra disk, you can use Premium SSD v2 for /hana/data and /hana/log volumes without the need of any accelerators or other caches**. - Like Ultra disk, Azure Premium SSD v2 doesn't offer caching options as premium storage does
Not having Azure Write Accelerator support or support by other caches makes the
## Production recommended storage solution based on Azure premium storage > [!NOTE]
-> The configurations we suggest below keep the HANA minimum KPIs, as we listed them in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) in mind. Our tests so far gave no indications that with the values listed, SAP HCMT tests would fail in throughput or latency. That stated, we did not test all possible combinations and possibilities around stripe sets stretched across multiple disks or different stripe sizes. Thests we condcuted with striped volumes across multiple disks were done with the stripe sizes we documented in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
+> The configurations suggested below keep the HANA minimum KPIs, as listed in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md), in mind. Our tests so far gave no indications that with the values listed, SAP HCMT tests would fail in throughput or latency. That stated, not all possible variations and combinations of stripe sets stretched across multiple disks or different stripe sizes were tested. Tests conducted with striped volumes across multiple disks were done with the stripe sizes documented in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
> [!NOTE]
Not having Azure Write Accelerator support or support by other caches makes the
When you look up the price list for Azure managed disks, then it becomes apparent that the cost scheme introduced with Premium SSD v2, gives you two general paths to pursue: -- You try to simplify your storage architecture by using a single disk for **/hana/data** and **/hana/log** and pay for more IOPS and throughput as needed to achieve the levels we recommend below. With the awareness that a single disk has a throughput level of 1,200MBps and 80,000 IOPS.
+- You try to simplify your storage architecture by using a single disk for **/hana/data** and **/hana/log** and pay for more IOPS and throughput as needed to achieve the levels we recommend below, with the awareness that a single disk has a throughput level of 1,200 MBps and 80,000 IOPS.
- You want to benefit from the 3,000 IOPS and 125 MBps that come for free with each disk. To do so, you would build multiple smaller disks that sum up to the capacity you need and then build a striped volume with a logical volume manager across these multiple disks. Striping across multiple disks would give you the possibility to reduce the IOPS and throughput cost factors, but would result in some more effort in automating deployments and operating such solutions.
-Since we don't want to define which direction you should go, we are leaving the decision to you on whether to take the single disk approach or to take the multiple disk approach. Though keep in mind that the single disk approach can hit its limitations with the 1,200MB/sec throughput. There might be a point where you need to stretch /hana/data across multiple volumes. also keep in mind that the capabilities of Azure VMs in providing storage throughput are going to grow over time. And that HANA savepoints are extremely critical and demand high throughput for the **/hana/data** volume
+Since we don't want to define which direction you should go, we're leaving the decision to you on whether to take the single disk approach or the multiple disk approach. Though keep in mind that the single disk approach can hit its limitations with the 1,200 MBps throughput. There might be a point where you need to stretch /hana/data across multiple volumes. Also keep in mind that the capabilities of Azure VMs in providing storage throughput are going to grow over time. And that HANA savepoints are extremely critical and demand high throughput for the **/hana/data** volume.
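As an illustration of the single-disk approach, a minimal Azure PowerShell sketch (hypothetical names, zone, and region; capacity and performance values roughly follow the E64ds_v4 row in the table below) for deploying one Premium SSD v2 data disk with extra provisioned performance could look like:

```azurepowershell-interactive
# Hypothetical values - one Premium SSD v2 data disk for /hana/data with extra provisioned performance.
$diskConfig = New-AzDiskConfig -Location 'westeurope' -Zone '1' `
    -CreateOption Empty -DiskSizeGB 608 -SkuName 'PremiumV2_LRS' `
    -DiskIOPSReadWrite 3000 -DiskMBpsReadWrite 425

New-AzDisk -ResourceGroupName 'myHanaRG' -DiskName 'hana-data-disk' -Disk $diskConfig
```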
**Recommendation: The recommended configurations with Azure premium storage for production scenarios look like:**
Configuration for SAP **/hana/data** volume:
| VM SKU | RAM | Max. VM I/O<br /> Throughput | Max VM IOPS | /hana/data capacity | /hana/data throughput| /hana/data IOPS | | | | | | | | |
-| E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 192 MB | 425 MBps | 3,000 |
-| E20(d)s_v5| 160 GiB | 750 MBps | 32,000 | 192 MB | 425 MBps | 3,000 |
-| E32ds_v4 | 256 GiB | 768 MBps | 51,200| 304 MB | 425 MBps | 3,000 |
-| E32ds_v5 | 256 GiB | 865 MBps | 51,200| 304 MB | 425 MBps | 3,000 |
-| E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 464 MB |425 MBps | 3,000 |
-| E48ds_v4 | 384 GiB | 1,315 MBps | 76,800 | 464 MB |425 MBps | 3,000 |
-| E64ds_v4 | 504 GiB | 1,200 MBps | 80,000 | 608 MB | 425 MBps | 3,000 |
-| E64(d)s_v5 | 512 GiB | 1,735 MBps | 80,000 | 608 MB | 425 MBps | 3,000 |
-| E96(d)s_v5 | 672 GiB | 2,600 MBps | 80,000 | 800 MB | 425 MBps | 3,000 |
-| M32ts | 192 GiB | 500 MBps | 20,000 | 224 MB | 425 MBps | 3,000|
-| M32ls | 256 GiB | 500 MBps | 20,000 | 304 MB | 425 MBps | 3,000 |
-| M64ls | 512 GiB | 1,000 MBps | 40,000 | 608 MB | 425 MB | 3,000 |
-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 30,000 | 1056 MB | 425 MBps | 3,000 |
-| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1232 MB | 680 MBps | 5,000 |
-| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 50,000 | 2144 MB | 600 MBps | 5,000 |
-| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2464 MB | 800 MBps | 12,000|
-| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 80,000| 2464 MB | 800 MBps | 12,000|
-| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 4672 MB | 800 MBps | 12,000 |
-| M192ims, M192idms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 4912 MB | 800 MBps | 12,000 |
-| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 3424 MB | 1,000 MBps| 15,000 |
-| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 6,848 MB | 1,000 MBps | 15,000 |
-| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 6,848 MB | 1,200 MBps| 17,000 |
-| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 13,680 MB | 1,200 MBps| 25,000 |
+| E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 192 GB | 425 MBps | 3,000 |
+| E20(d)s_v5| 160 GiB | 750 MBps | 32,000 | 192 GB | 425 MBps | 3,000 |
+| E32ds_v4 | 256 GiB | 768 MBps | 51,200| 304 GB | 425 MBps | 3,000 |
+| E32ds_v5 | 256 GiB | 865 MBps | 51,200| 304 GB | 425 MBps | 3,000 |
+| E48ds_v4 | 384 GiB | 1,152 MBps | 76,800 | 464 GB |425 MBps | 3,000 |
+| E48ds_v5 | 384 GiB | 1,315 MBps | 76,800 | 464 GB | 425 MBps | 3,000 |
+| E64ds_v4 | 504 GiB | 1,200 MBps | 80,000 | 608 GB | 425 MBps | 3,000 |
+| E64(d)s_v5 | 512 GiB | 1,735 MBps | 80,000 | 608 GB| 425 MBps | 3,000 |
+| E96(d)s_v5 | 672 GiB | 2,600 MBps | 80,000 | 800 GB | 425 MBps | 3,000 |
+| M32ts | 192 GiB | 500 MBps | 20,000 | 224 GB | 425 MBps | 3,000|
+| M32ls | 256 GiB | 500 MBps | 20,000 | 304 GB | 425 MBps | 3,000 |
+| M64ls | 512 GiB | 1,000 MBps | 40,000 | 608 GB | 425 MBps | 3,000 |
+| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 30,000 | 1056 GB | 425 MBps | 3,000 |
+| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1232 GB | 680 MBps | 5,000 |
+| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 50,000 | 2144 GB | 600 MBps | 5,000 |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2464 GB | 800 MBps | 12,000|
+| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 80,000| 2464 GB | 800 MBps | 12,000|
+| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 4672 GB | 800 MBps | 12,000 |
+| M192ims, M192idms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 4912 GB | 800 MBps | 12,000 |
+| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 3424 GB | 1,000 MBps| 15,000 |
+| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 6,848 GB | 1,000 MBps | 15,000 |
+| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 6,848 GB | 1,200 MBps| 17,000 |
+| M416ms_v2 | 11,400 GiB | 2,000 MBps | 80,000 | 13,680 GB | 1,200 MBps| 25,000 |
For the **/hana/log** volume, the configuration would look like:
-| VM SKU | RAM | Max. VM I/O<br /> Throughput | Max VM IOPS | **/hana/log** capacity | **/hana/log** throughput** | **/hana/log** IOPS | **/hana/shared** capacity <br />using default IOPS <br /> and throughput |
+| VM SKU | RAM | Max. VM I/O<br /> Throughput | Max VM IOPS | **/hana/log** capacity | **/hana/log** throughput | **/hana/log** IOPS | **/hana/shared** capacity <br />using default IOPS <br /> and throughput |
| | | | | | | | | E20ds_v4 | 160 GiB | 480 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB | | E20(d)s_v5 | 160 GiB | 750 MBps | 32,000 | 80 GB | 275 MBps | 3,000 | 160 GB |
For the **/hana/log** volume, the configuration would look like:
Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires higher throughput for **/hana/data** and **/hana/log**, you need to increase IOPS and/or throughput on the individual disks you're using.
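If a volume turns out to need more performance later, a hedged Azure PowerShell sketch (hypothetical names and values) for raising the provisioned IOPS and throughput of an existing Premium SSD v2 disk could look like:

```azurepowershell-interactive
# Hypothetical names - raise provisioned performance on an existing Premium SSD v2 data disk.
# Keep in mind that provisioned IOPS/throughput can only be changed once every 6 hours.
$diskUpdate = New-AzDiskUpdateConfig -DiskIOPSReadWrite 12000 -DiskMBpsReadWrite 800

Update-AzDisk -ResourceGroupName 'myHanaRG' -DiskName 'hana-data-disk' -DiskUpdate $diskUpdate
```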
-A few examples on how combining multiple Premium SSD v2 disk with a stripe set could impact the requirement to provision more IOPS or throughput for **/hana/data** is displayed in this table:
+A few examples on how combining multiple Premium SSD v2 disks with a stripe set could impact the requirement to provision more IOPS or throughput for **/hana/data** is displayed in this table:
-| VM SKU | RAM | number of <br />disks | individual disk<br /> size | Proposed IOPS | Default IOPS provisioned | Extra IOPS <br />provisioned | Proposed throughput<br /> for volume | Default throughput provisioned | Extra throughput <br />provisioned |
+| VM SKU | RAM | number of <br />disks | individual disk<br /> capacity | Proposed IOPS | Default IOPS provisioned | Extra IOPS <br />provisioned | Proposed throughput<br /> for volume | Default throughput provisioned | Extra throughput <br />provisioned |
| | | | | | | | | | | | E32(d)s_v5 | 256 GiB | 1 | 304 GB | 3,000 | 3,000 | 0 | 425 MBps | 125 MBps | 300 MBps | | E32(d)s_v5 | 256 GiB | 2 | 152 GB | 3,000 | 6,000 | 0 | 425 MBps | 250 MBps | 175 MBps |
A few examples on how combining multiple Premium SSD v2 disk with a stripe set c
| E96(d)s_v5 | 672 GiB | 4 | 76 GB | 3,000 | 12,000 | 0 | 425 MBps | 500 MBps | 0 MBps | | M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 1 | 2,464 GB | 12,000 | 3,000 | 9,000 | 800 MBps | 125 MBps | 675 MBps | | M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 1,232 GB | 12,000 | 6,000 | 6,000 | 800 MBps | 250 MBps | 550 MBps |
-| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 4 | 2,464 GB | 12,000 | 12,000 | 0 | 800 MBps | 500 MBps | 300 MBps |
+| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 4 | 616 GB | 12,000 | 12,000 | 0 | 800 MBps | 500 MBps | 300 MBps |
| M416ms_v2 | 11,400 GiB | 1 | 13,680 GB | 25,000 | 3,000 | 22,000 | 1,200 MBps | 125 MBps | 1,075 MBps |
-| M416ms_v2 | 11,400 GiB | 2 | 13,680 | 25,000 | 6,000 | 19,000 | 1,200 MBps | 250 MBps | 950 MBps |
-| M416ms_v2 | 11,400 GiB | 4 | 13,680 | 25,000 | 12,000 | 13,000 | 1,200 MBps | 500 MBps | 700 MBps |
+| M416ms_v2 | 11,400 GiB | 2 | 6,840 GB | 25,000 | 6,000 | 19,000 | 1,200 MBps | 250 MBps | 950 MBps |
+| M416ms_v2 | 11,400 GiB | 4 | 3,420 GB | 25,000 | 12,000 | 13,000 | 1,200 MBps | 500 MBps | 700 MBps |
For **/hana/log**, a similar approach of using two disks could look like:
-| VM SKU | RAM | number of <br />disks | individual disk<br /> size | Proposed IOPS | Default IOPS provisioned | Extra IOPS <br />provisioned | Proposed throughput<br /> for volume | Default throughput provisioned | Extra throughput <br />provisioned |
+| VM SKU | RAM | number of <br />disks | individual disk<br /> capacity | Proposed IOPS | Default IOPS provisioned | Extra IOPS <br />provisioned | Proposed throughput<br /> for volume | Default throughput provisioned | Extra throughput <br />provisioned |
| | | | | | | | | | | | E32(d)s_v5 | 256 GiB | 1 | 128 GB | 3,000 | 3,000 | 0 | 275 MBps | 125 MBps | 150 MBps | | E32(d)s_v5 | 256 GiB | 2 | 64 GB | 3,000 | 6,000 | 0 | 275 MBps | 250 MBps | 25 MBps | | E96(d)s_v5 | 672 GiB | 1 | 512 GB | 3,000 | 3,000 | 0 | 275 MBps | 125 MBps | 150 MBps |
-| E96(d)s_v5 | 672 GiB | 2 | 512 GB | 3,000 | 6,000 | 0 | 275 MBps | 250 MBps | 25 MBps |
+| E96(d)s_v5 | 672 GiB | 2 | 256 GB | 3,000 | 6,000 | 0 | 275 MBps | 250 MBps | 25 MBps |
| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 1 | 512 GB | 4,000 | 3,000 | 1,000 | 300 MBps | 125 MBps | 175 MBps | | M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2 | 256 GB | 4,000 | 6,000 | 0 | 300 MBps | 250 MBps | 50 MBps | | M416ms_v2 | 11,400 GiB | 1 | 512 GB | 5,000 | 3,000 | 2,000 | 400 MBps | 125 MBps | 275 MBps |
virtual-machines Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide-storage.md
Before going into the details, we're presenting the summary and recommendations
| DBMS log volume non-HANA non-M/Mv2 VM families | Not supported | restricted suitable (non-prod) | Suitable for up to medium workload | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
-<sup>1</sup> With usage of [Azure Write Accelerator](../../how-to-enable-write-accelerator.md) for M/Mv2 VM families for log/redo log volumes
-<sup>2</sup> Using ANF requires /hana/data and /hana/log to be on ANF
-<sup>3</sup> So far tested on SLES only
+<sup>1</sup> With usage of [Azure Write Accelerator](../../how-to-enable-write-accelerator.md) for M/Mv2 VM families for log/redo log volumes
+
+<sup>2</sup> Using ANF requires /hana/data and /hana/log to be on ANF
+
+<sup>3</sup> So far tested on SLES only
Characteristics you can expect from the different storage types look like:
Characteristics you can expect from the different storage types look like:
<sup>2</sup> Costs depend on provisioned IOPS and throughput
-<sup>3</sup> Creation of different ANF capacity pools doesn't guarantee deployment of capacity pools onto different storage units
+<sup>3</sup> Creation of different ANF capacity pools doesn't guarantee deployment of capacity pools onto different storage units
> [!IMPORTANT]
This type of storage is targeting DBMS workloads, storage traffic that requires
- The storage is organized in ranges. For example, a disk in the range 513 GiB to 1024 GiB capacity share the same capabilities and the same monthly costs - The IOPS per GiB aren't tracking linear across the size categories. Smaller disks below 32 GiB have higher IOPS rates per GiB. For disks beyond 32 GiB to 1024 GiB, the IOPS rate per GiB is between 4-5 IOPS per GiB. For larger disks up to 32,767 GiB, the IOPS rate per GiB is going below 1-- The I/O throughput for this storage isn't linear with the size of the disk category. For smaller disks, like the category between 65 GiB and 128 GiB capacity, the throughput is around 780KB per GiB. Whereas for the extreme large disks like a 32,767 GiB disk, the throughput is around 28KB per GiB-- The IOPS and throughput SLAs cannot be changed without changing the capacity of the disk
+- The I/O throughput for this storage isn't linear with the size of the disk category. For smaller disks, like the category between 65 GiB and 128 GiB capacity, the throughput is around 780 KB per GiB, whereas for extremely large disks like a 32,767 GiB disk, the throughput is around 28 KB per GiB
+- The IOPS and throughput SLAs can't be changed without changing the capacity of the disk
The capability matrix for SAP workload looks like:
The capability matrix for SAP workload looks like:
| Costs | Higher than Premium storage | - |
-**Summary:** Azure ultra disks are a suitable storage with low submillisecond latency for all kinds of SAP workload. So far, Ultra disk can only be used in combinations with VMs that have been deployed through Availability Zones (zonal deployment). Ultra disk isn't supporting storage snapshots. In opposite to all other storage, Ultra disk cannot be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
+**Summary:** Azure ultra disks are a suitable storage with low submillisecond latency for all kinds of SAP workload. So far, Ultra disk can only be used in combination with VMs that have been deployed through Availability Zones (zonal deployment). Ultra disk doesn't support storage snapshots. In contrast to all other storage, Ultra disk can't be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
## Azure NetApp files (ANF)
Other built-in functionality of ANF storage:
**Summary**: Azure NetApp Files is a HANA certified low latency storage that allows you to deploy NFS and SMB volumes or shares. The storage comes with three different service levels that provide different throughput and IOPS in a linear manner per GiB capacity of the volume. ANF storage enables you to deploy SAP HANA scale-out scenarios with a standby node. The storage is suitable for providing file shares as needed for /sapmnt or the SAP global transport directory. ANF storage comes with availability functionality that is available as native NetApp functionality. ## Azure Premium Files
-[Azure Premium Files](../../../storage/files/storage-files-planning.md) is a shared storage that offers SMB and NFS for a moderate price and sufficient latency to handle shares of the SAP application layer. On top, Azure premium Files offers synchronous zonal replication of the shares with an automatism that in case one replica fails, another replica in another zone can take over. In opposite to Azure NetApp Files, there are no performance tiers. There also is no need for a capacity pool. Charging is based on the real provisioned capacity of the different shares. Azure Premium Files have not been tested as DBMS storage for SAP workload at all. But instead the usage scenario for SAP workload focused on all types of SMB and NFS shares as they're used on the SAP application layer. Azure Premium Files is also suited for the usage for **/hana/shared**.
+[Azure Premium Files](../../../storage/files/storage-files-planning.md) is a shared storage that offers SMB and NFS for a moderate price and sufficient latency to handle shares of the SAP application layer. On top, Azure Premium Files offers synchronous zonal replication of the shares, so that if one replica fails, another replica in another zone can take over automatically. In contrast to Azure NetApp Files, there are no performance tiers, and there's no need for a capacity pool. Charging is based on the real provisioned capacity of the different shares. Azure Premium Files hasn't been tested as DBMS storage for SAP workload at all. Instead, the usage scenario for SAP workload focuses on all types of SMB and NFS shares as they're used on the SAP application layer. Azure Premium Files is also suited for **/hana/shared**.
> [!NOTE] > So far no SAP DBMS workloads are supported on shared volumes based on Azure Premium Files.
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
Title: Upgrading a basic public IP address to standard SKU - Guidance description: Overview of upgrade options and guidance for migrating basic public IP to standard public IP for future basic public IP address retirement -++ Last updated 09/19/2022
We recommend the following approach to upgrade to Standard SKU public IP address
1. Create a migration plan for planned downtime. 1. Depending on the resource associated with your Basic SKU public IP addresses, perform the upgrade based on the following table:
- | Resource using Basic SKU public IP addresses | Decision path |
- | | |
- | Virtual Machine or Virtual Machine Scale Sets | Use the [following upgrade options](#upgrade-using-portal-powershell-and-azure-cli). |
- | Load Balancer (Basic) | Use this [guidance to upgrade from Basic to Standard Load Balancer](../../load-balancer/load-balancer-basic-upgrade-guidance.md). |
- | VPN Gateway (Basic) | Cannot dissociate and upgrade. Create a [new VPN gateway with a SKU type other than Basic](../../vpn-gateway/tutorial-create-gateway-portal.md). |
- | Application Gateway (v1) | Cannot dissociate and upgrade. Use this [migration script to migrate from v1 to v2](../../application-gateway/migrate-v1-v2.md). |
-1. Verify your application and workloads are receiving traffic through the Standard SKU public IP address. Then delete your Basic SKU public IP address resource.
+ | Resource using Basic SKU public IP addresses | Decision path |
+ | | |
+ | Virtual Machine or Virtual Machine Scale Sets (flex model) | Disassociate IP(s) and utilize the upgrade options detailed after the table (see the sketch below). |
+ | Load Balancer (Basic SKU) | New LB SKU required. Use the upgrade scripts for [virtual machines](../../load-balancer/upgrade-basic-standard.md) or [Virtual Machine Scale Sets (without IPs per VM)](../../load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md) to upgrade to Standard Load Balancer. |
+ | VPN Gateway (Basic SKU or VpnGw1-5 SKU using Basic IPs) | New VPN Gateway SKU required. Create a [new Virtual Network Gateway with a Standard SKU IP](../../vpn-gateway/tutorial-create-gateway-portal.md). |
+ | ExpressRoute Gateway (using Basic IPs) | New ExpressRoute Gateway required. Create a [new ExpressRoute Gateway with a Standard SKU IP](../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md). |
+ | Application Gateway (v1 SKU) | New AppGW SKU required. Use this [migration script to migrate from v1 to v2](../../application-gateway/migrate-v1-v2.md). |
+
+> [!NOTE]
+> If you have a virtual machine scale set (uniform model) with public IP configurations per instance, note these are not Public IP resources and as such cannot be upgraded; a new virtual machine scale set is required. You can use the SKU property to specify that Standard IP configurations are required for each VMSS instance as shown [here](../../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine).
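For the virtual machine row in the preceding table, a minimal Azure PowerShell sketch of the disassociate-and-upgrade path could look like the following (hypothetical names; it assumes the Basic SKU public IP is no longer associated with any resource and that the in-place SKU upgrade described in the linked upgrade options applies):

```azurepowershell-interactive
# Hypothetical names - the Basic SKU public IP must be disassociated before the SKU can be changed.
$pip = Get-AzPublicIpAddress -Name 'myBasicPublicIP' -ResourceGroupName 'myResourceGroup'

$pip.Sku.Name = 'Standard'
$pip.PublicIpAllocationMethod = 'Static'   # Standard SKU public IPs use static allocation

Set-AzPublicIpAddress -PublicIpAddress $pip
```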
## Basic SKU vs. Standard SKU
virtual-wan About Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing-preference.md
Title: 'Virtual WAN virtual hub routing preference - Preview'
+ Title: 'Virtual WAN virtual hub routing preference'
description: Learn about Virtual WAN Virtual virtual hub routing preference.
Last updated 05/31/2022
-# Virtual hub routing preference (Preview)
+# Virtual hub routing preference
A Virtual WAN virtual hub connects to virtual networks (VNets) and on-premises sites using connectivity gateways, such as site-to-site (S2S) VPN gateway, ExpressRoute (ER) gateway, point-to-site (P2S) gateway, and SD-WAN Network Virtual Appliance (NVA). The virtual hub router provides central route management and enables advanced routing scenarios using route propagation, route association, and custom route tables. The virtual hub router makes routing decisions using a built-in route selection algorithm. To influence routing decisions in the virtual hub router towards on-premises, we now have a new Virtual WAN hub feature called **Hub routing preference** (HRP). When a virtual hub router learns multiple routes across S2S VPN, ER, and SD-WAN NVA connections for a destination route-prefix in on-premises, the virtual hub router's route selection algorithm adapts based on the hub routing preference configuration and selects the best routes. You can now configure **Hub routing preference** using the Azure portal. For steps, see [How to configure virtual hub routing preference](howto-virtual-hub-routing-preference.md).
-> [!IMPORTANT]
-> The Virtual WAN feature **Hub routing preference** is currently in public preview. If you are interested in trying this feature, please follow the documentation below.
-This public preview is provided without a service-level agreement and shouldn't be used for production workloads. Certain features might not be supported, might have constrained capabilities, or might not be available in all Azure locations. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
+ ## <a name="selection"></a>Route selection algorithm for virtual hub
This section explains the route selection algorithm in a virtual hub along with
* If all the routes are local to the virtual hub, then choose routes from ER connections. * If all the routes are through remote virtual hubs, then choose routes from S2S VPN connections.
-1. If there are still multiple routes, load-balance across all paths using equal-cost multi-path (ECMP) routing.
- **Things to note:** * When there are multiple virtual hubs in a Virtual WAN scenario, a virtual hub selects the best routes using the route selection algorithm described above, and then advertises them to the other virtual hubs in the virtual WAN.
-* **Limitation:** If a route-prefix is reachable via ER or VPN connections, and via virtual hub SD-WAN NVA, then the latter route is ignored by the route-selection algorithm. Therefore, the flows to prefixes reachable only via virtual hub SD-WAN NVA will take the route through the NVA. This is a limitation during the Preview phase of the **Hub routing preference** feature.
- ## Routing scenarios Virtual WAN hub routing preference is beneficial when multiple on-premises are advertising routes to same destination prefixes, which can happen in customer Virtual WAN scenarios that use any of the following setups.
virtual-wan Howto Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-virtual-hub-routing-preference.md
Title: 'Configure virtual hub routing preference - Preview'
+ Title: 'Configure virtual hub routing preference'
description: Learn how to configure Virtual WAN virtual hub routing preference.
Last updated 05/30/2022
-# Configure virtual hub routing preference (Preview)
+# Configure virtual hub routing preference
The following steps help you configure virtual hub routing preference settings. For information about this feature, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md).
-> [!IMPORTANT]
-> The Virtual WAN feature **Hub routing preference** is currently in public preview. If you are interested in trying this feature, please follow the documentation below.
-This public preview is provided without a service-level agreement and shouldn't be used for production workloads. Certain features might not be supported, might have constrained capabilities, or might not be available in all Azure locations. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
## Configure
virtual-wan Scenario Bgp Peering Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-bgp-peering-hub.md
The virtual hub router now also exposes the ability to peer with it, thereby exc
||| | Number of routes each BGP peer can advertise to the virtual hub.| The hub can only accept a maximum number of 10,000 routes (total) from its connected resources. For example, if a virtual hub has a total of 6000 routes from the connected virtual networks, branches, virtual hubs etc., then when a new BGP peering is configured with an NVA, the NVA can only advertise up to 4000 routes. | * Routes from NVA in a virtual network that are more specific than the virtual network address space, when advertised to the virtual hub through BGP are not propagated further to on-premises.
-* Currently we only support 1,000 routes from the NVA to the virtual hub.
+* Currently we only support 4,000 routes from the NVA to the virtual hub.
* Traffic destined for addresses in the virtual network directly connected to the virtual hub cannot be configured to route through the NVA using BGP peering between the hub and NVA. This is because the virtual hub automatically learns about system routes associated with addresses in the spoke virtual network when the spoke virtual network connection is created. These automatically learned system routes are preferred over routes learned by the hub through BGP. * This feature is not supported for setting up BGP peering between an NVA in a spoke VNet and a virtual hub with Azure Firewall. * In order for the NVA to exchange routes with VPN and ER connected sites, branch to branch routing must be turned on.
+* When configuring BGP peering with the hub, you will see two IP addresses. Peering with both these addresses is required. Not peering with both addresses can cause routing issues.
+
+* The next hop IP address on the routes advertised from the NVA to the virtual hub route server has to be the same as the IP address of the NVA that is configured on the BGP peer. Advertising a different IP address as the next hop isn't supported on Virtual WAN at the moment.
+ ## BGP peering scenarios This section describes scenarios where the BGP peering feature can be utilized to configure routing.