Updates from: 04/19/2022 07:27:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Administrators can assign a Conditional Access policy to the following cloud app
- [Office 365](#office-365) - Azure Analysis Services - Azure DevOps
+- [Azure Data Explorer](/azure/data-explorer/security-conditional-access)
- Azure Event Hubs - Azure Service Bus - [Azure SQL Database and Azure Synapse Analytics](../../azure-sql/database/conditional-access-configure.md)
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
By default the policy will provide an option to exclude the current user from th
![Warning, don't lock yourself out!](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups-lockout-warning.png)
-If you do find yourself locked out[What to do if you are locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal)
+If you do find yourself locked out, see [What to do if you are locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal)
## Next steps
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples illustrate web applications that sign in users. Some sampl
> | ASP.NET Core|[Use the Conditional Access auth context to perform step\-up authentication](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | &#8226; MSAL.NET <br/> &#8226; Microsoft.Identity.Web | Authorization code | > | ASP.NET Core|[Active Directory FS to Azure AD migration](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | MSAL.NET | &#8226; SAML <br/> &#8226; OpenID connect | > | ASP.NET | &#8226; [Microsoft Graph Training Sample](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) <br/> &#8226; [Sign in users and call Microsoft Graph with admin restricted scope](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) <br/> &#8226; [Quickstart: Sign in users](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | MSAL.NET | &#8226; OpenID connect <br/> &#8226; Authorization code |
-> | Java </p> Spring |Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/4-Deployment/deploy-to-azure-app-service) | &#8226; MSAL Java <br/> &#8226; Azure AD Boot Starter | Authorization code |
-> | Java </p> Servlets | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java | Authorization code |
-> | Java | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-webapp)| MSAL Java | Authorization code |
-> | Java </p> Spring| Sign in users and call Microsoft Graph via OBO </p> &#8226; [Web API](https://github.com/Azure-Samples/ms-identity-java-webapi) | MSAL Java | &#8226; Authorization code <br/> &#8226; On-Behalf-Of (OBO) |
+> | Java </p> Spring |Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) <br/> &#8226; [Protect a web API](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | &#8226; MSAL Java <br/> &#8226; Azure AD Boot Starter | Authorization code |
+> | Java </p> Servlets | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) | MSAL Java | Authorization code |
> | Node.js </p> Express | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) <br/> &#8226; [Web app that sign in users](https://github.com/Azure-Samples/ms-identity-node) | MSAL Node | Authorization code | > | Python </p> Flask | Flask Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>&#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | MSAL Python | Authorization code | > | Python </p> Django | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| MSAL Python | Authorization code |
The following samples show how to protect a web API with the Microsoft identity
> | -- | -- |-- |-- | > | ASP.NET | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapi-onbehalfof) | MSAL.NET | On-Behalf-Of (OBO) | > | ASP.NET Core | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | MSAL.NET | On-Behalf-Of (OBO) |
-> | Java | [Sign in users](https://github.com/Azure-Samples/ms-identity-java-webapi) | MSAL Java | On-Behalf-Of (OBO) |
+> | Java | [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | MSAL Java | On-Behalf-Of (OBO) |
> | Node.js | &#8226; [Protect a Node.js web API](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2) <br/> &#8226; [Protect a Node.js Web API with Azure AD B2C](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | MSAL Node | Authorization bearer | ## Desktop
The following samples show public client desktop applications that access the Mi
> | .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Micrsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) | MSAL.NET |&#8226; Authorization code with PKCE <br/> &#8226; Device code | > | .NET | &#8226; [Call Microsoft Graph with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/2-Call-OwnApi/README.md) | MSAL.NET | Authorization code with PKCE | > | .NET | [Invoke protected API with integrated Windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | MSAL.NET | Integrated Windows authentication |
-> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-desktop/) | MSAL Java | Integrated Windows authentication |
+> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | MSAL Java | Integrated Windows authentication |
> | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE | > | PowerShell | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | MSAL.NET | Resource owner password credentials | > | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | MSAL Python | Resource owner password credentials |
The following samples show an application that accesses the Microsoft Graph API
> | -- | -- |-- |-- | > |.NET Core| &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi)<br/> &#8226; [Call own web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/4-Call-OwnApi-Pop) <br/> &#8226; [Using managed identity and Azure key vault](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/3-Using-KeyVault)| MSAL.NET | Client credentials grant| > | ASP.NET|[Multi-tenant with Microsoft identity platform endpoint](https://github.com/Azure-Samples/ms-identity-aspnet-daemon-webapp) | MSAL.NET | Client credentials grant|
-> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-daemon)| MSAL Java| Client credentials grant|
+> | Java | &#8226; [Call Microsoft Graph with Secret](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-secret) <br/> &#8226; [Call Microsoft Graph with Certificate](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-certificate)| MSAL Java | Client credentials grant|
> | Node.js | [Sign in users and call web API](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console) | MSAL Node | Client credentials grant | > | Python | &#8226; [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> &#8226; [Call Microsoft Graph with certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | MSAL Python| Client credentials grant|
The following sample shows a public client application running on a device witho
> | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow | > | -- | -- |-- |-- | > | .NET core | [Invoke protected API from text-only device](https://github.com/azure-samples/active-directory-dotnetcore-devicecodeflow-v2) | MSAL.NET | Device code|
-> | Java | [Sign in users and invoke protected API](https://github.com/Azure-Samples/ms-identity-java-devicecodeflow) | MSAL Java | Device code |
+> | Java | [Sign in users and invoke protected API from text-only device](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Device-Code-Flow) | MSAL Java | Device code |
> | Python | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | MSAL Python | Device code | ## Microsoft Teams applications
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
It's not supported to use this extension on Azure Kubernetes Service (AKS) clust
If you choose to install and use the CLI locally, you must be running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+> [!NOTE]
+> This functionality is also available for [Azure Arc-enabled servers](../../azure-arc/servers/ssh-arc-overview.md).
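For orientation, a minimal sketch of the CLI flow described above: check the CLI version, sign in, and connect to the VM with Azure AD credentials. `MyVm` and `MyResourceGroup` are placeholder names, not values from the article.

```azurecli
# Confirm the local Azure CLI is 2.22.1 or later (upgrade with `az upgrade` if needed)
az --version

# Sign in with your Azure AD account, then open an SSH session to the VM
az login
az ssh vm --name MyVm --resource-group MyResourceGroup
```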
+ ## Requirements for login with Azure AD using OpenSSH certificate-based authentication To enable Azure AD login using SSH certificate-based authentication for Linux VMs in Azure, ensure the following network, virtual machine, and client (SSH client) requirements are met.
For customers who are using previous version of Azure AD login for Linux that wa
```azurecli az vm extension delete -g MyResourceGroup --vm-name MyVm -n AADLoginForLinux ```
-> [!NOTE]
-> The extension uninstall can fail if there are any Azure AD users currently logged in on the VM. Make sure all users are logged off first.
+ > [!NOTE]
+ > The extension uninstall can fail if there are any Azure AD users currently logged in on the VM. Make sure all users are logged off first.
1. Enable system-assigned managed identity on your VM.
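The step above can also be performed from the CLI; a hedged sketch with placeholder names:

```azurecli
# Enable a system-assigned managed identity on the VM
az vm identity assign --resource-group MyResourceGroup --name MyVm
```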
Solution 1: Upgrade the Azure CLI client to version 2.21.0 or higher.
After the user has successfully signed in using az login, connection to the VM using `az ssh vm --ip <address>` or `az ssh vm --name <vm_name> -g <resource_group>` fails with *Connection closed by <ip_address> port 22*.
-Cause 1: The user isn't assigned to the either the Virtual Machine Administrator/User Login Azure RBAC roles within the scope of this VM.
+Cause 1: The user isn't assigned to either of the Virtual Machine Administrator/User Login Azure RBAC roles within the scope of this VM.
Solution 1: Add the user to either of the Virtual Machine Administrator/User Login Azure RBAC roles within the scope of this VM.
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
After a root domain is added to Azure Active Directory (Azure AD), all subsequen
In the Azure AD portal, when the parent domain is federated and the admin tries to verify a managed subdomain on the **Custom domain names** page, you'll get a 'Failed to add domain' error with the reason "One or more properties contains invalid values." If you try to add this subdomain from the Microsoft 365 admin center, you will receive a similar error. For more information about the error, see [A child domain doesn't inherit parent domain changes in Office 365, Azure, or Intune](/office365/troubleshoot/administration/child-domain-fails-inherit-parent-domain-changes).
-## How to verify a custom subdomain
- Because subdomains inherit the authentication type of the root domain by default, you must promote the subdomain to a root domain in Azure AD using the Microsoft Graph so you can set the authentication type to your desired type.
-### Add the subdomain and view its authentication type
+## Add the subdomain
1. Use PowerShell to add the new subdomain, which has its root domain's default authentication type. The Azure AD and Microsoft 365 admin centers don't yet support this operation.
Because subdomains inherit the authentication type of the root domain by default
}, ```
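The article adds the subdomain with PowerShell; as an alternative sketch, the same Microsoft Graph request can be issued with `az rest` (the domain name is the article's example, and `az rest` acquires a Graph token for you):

```azurecli
# Add the subdomain; it inherits the root domain's authentication type by default
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/domains" \
  --headers "Content-Type=application/json" \
  --body '{"id": "foo.contoso.com"}'
```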
-### Use Microsoft Graph API to make this a root domain
+## Change subdomain to a root domain
Use the following command to promote the subdomain:
POST https://graph.microsoft.com/v1.0/domains/foo.contoso.com/promote ```
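A hedged way to send the promote request shown above from the Azure CLI, using `az rest`:

```azurecli
az rest --method post --url "https://graph.microsoft.com/v1.0/domains/foo.contoso.com/promote"
```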
-#### Promote command error conditions
+### Promote command error conditions
Scenario | Method | Code | Message
-- | -- | -- | --
active-directory Zscaler Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-provisioning-tutorial.md
Previously updated : 03/27/2019 Last updated : 04/01/2022
This section guides you through the steps to configure the Azure AD provisioning
> You may also choose to enable SAML-based single sign-on for Zscaler, following the instructions provided in the [Zscaler single sign-on tutorial](zscaler-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other. > [!NOTE]
-> When users and groups are provisioned or de-provisioned we recommend to periodically restart provisioning to ensure that group memberships are properly updated. Doing a restart will force our service to re-evaluate all the groups and update the memberships.
+> When users and groups are provisioned or de-provisioned, we recommend periodically restarting provisioning to ensure that group memberships are properly updated. Doing a restart forces our service to re-evaluate all the groups and update the memberships. Be aware that the restart can take time if you're syncing all users and groups in your tenant or have assigned large groups with 50K+ members.
### To configure automatic user provisioning for Zscaler in Azure AD:
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Use [az aks update][az-aks-update] with the `--enable-azure-keyvault-kms` and `-
az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $KEY_ID ```
+Use the following command to update all secrets. Otherwise, the old secrets aren't encrypted.
+
+```azurecli-interactive
+kubectl get secrets --all-namespaces -o json | kubectl replace -f -
+```
+ <!-- LINKS - Internal --> [aks-support-policies]: support-policies.md [aks-faq]: faq.md
api-management Api Management Api Import Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-import-restrictions.md
# API import restrictions and known issues When importing an API, you might encounter some restrictions or need to identify and rectify issues before you can successfully import. In this article, you'll learn:+ * API Management's behavior during OpenAPI import. * OpenAPI import limitations and how OpenAPI export works. * Requirements and limitations for WSDL and WADL import.
When importing an API, you might encounter some restrictions or need to identify
## API Management during OpenAPI import During OpenAPI import, API Management:+ * Checks specifically for query string parameters marked as required. * Converts the query string parameters to template parameters. If you prefer a different behavior, you can either: + * Manually change via form-based editor, or * Remove the "required" attribute from the OpenAPI definition, thus not converting them to template parameters. ## <a name="open-api"> </a>OpenAPI/Swagger import limitations If you receive errors while importing your OpenAPI document, make sure you've validated it beforehand by either:+ * Using the designer in the Azure portal (Design > Front End > OpenAPI Specification Editor), or * With a third-party tool, such as <a href="https://editor.swagger.io">Swagger Editor</a>.
If you receive errors while importing your OpenAPI document, make sure you've va
**Supported versions** API Management only supports:+ * OpenAPI version 2. * OpenAPI version 3.0.x (up to version 3.0.3).-
-OpenAPI version 3.1 is currently not supported in API Management.
+* OpenAPI version 3.1 (import only)
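For the versions listed above, importing a validated definition from the CLI might look like the following sketch; it assumes the `az apim api import` command and uses placeholder resource names and a placeholder specification URL.

```azurecli
az apim api import \
  --resource-group MyResourceGroup \
  --service-name my-apim-instance \
  --api-id my-api \
  --path /my-api \
  --specification-format OpenApi \
  --specification-url https://example.com/openapi.yaml
```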
**Size limitations**
OpenAPI version 3.1 is currently not supported in API Management.
| **Size limit doesn't apply** | When an OpenAPI document is provided via a URL to a location accessible from your API Management service. | #### Supported extensions+ The only supported extensions are: | Extension | Description |
The only supported extensions are:
| **`x-servers`** | A backport of the [OpenAPI 3 `servers` object](https://swagger.io/docs/specification/api-host-and-base-path/) for OpenAPI 2. | #### Unsupported extensions+ | Extension | Description | | -- | -- | | **`Recursion`** | API Management doesn't support definitions defined recursively.<br />For example, schemas referring to themselves. |
The only supported extensions are:
| **`Produces` keyword** | Describes MIME types returned by an API. <br />Not supported. | #### Custom extensions-- Are ignored on import.-- Aren't saved or preserved for export.+
+* Are ignored on import.
+* Aren't saved or preserved for export.
#### Unsupported definitions + Inline schema definitions for API operations aren't supported. Schema definitions:-- Are defined in the API scope.-- Can be referenced in API operations request or response scopes.+
+* Are defined in the API scope.
+* Can be referenced in API operations request or response scopes.
#### Ignored definitions+ Security definitions are ignored.
+#### Definition restrictions
+
+<!-- Ref: 1851786 Query parameter handling -->
+When importing query parameters, only the default array serialization method (`style: form`, `explode: true`) is supported. For more details on query parameters in OpenAPI specifications, refer to [the serialization specification](https://swagger.io/docs/specification/serialization/).
+
+<!-- Ref: 1795433 Parameter limitations -->
+Parameters [defined in cookies](https://swagger.io/docs/specification/describing-parameters/#cookie-parameters) are not supported. You can still use policy to decode and validate the contents of cookies.
+ ### <a name="open-api-v2"> </a>OpenAPI version 2 OpenAPI version 2 support is limited to JSON format only.
-### <a name="open-api-v3"> </a>OpenAPI version 3.0.x
+<!-- Ref: 1795433 Parameter limitations -->
+["Form" type parameters](https://swagger.io/specification/v2/#parameter-object) are not supported. You can still use policy to decode and validate `application/x-www-form-urlencoded` and `multipart/form-data` payloads.
+
+### <a name="open-api-v3"> </a>OpenAPI version 3.x
-The latest supported OpenAPI version 3.0 is 3.0.3.
+API Management supports the following specification versions:
+
+* [OpenAPI 3.0.3](https://swagger.io/specification/)
+* [OpenAPI 3.1.0](https://spec.openapis.org/oas/v3.1.0) (import only)
#### HTTPS URLs-- If multiple `servers` are specified, API Management will use the first HTTPS URL it finds. -- If there aren't any HTTPS URLs, the server URL will be empty.+
+* If multiple `servers` are specified, API Management will use the first HTTPS URL it finds.
+* If there aren't any HTTPS URLs, the server URL will be empty.
#### Supported+ - `example` #### Unsupported+ The following fields are included in [OpenAPI version 3.0.x](https://swagger.io/specification/), but are not supported: | Object | Field |
The following fields are included in [OpenAPI version 3.0.x](https://swagger.io/
### <a name="open-import-export-general"> </a>General API definitions exported from an API Management service are:+ * Primarily intended for external applications that need to call the API hosted in API Management service. * Not intended to be imported into the same or different API Management service.
app-service App Service Configure Premium Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configure-premium-tier.md
If your app runs in an App Service deployment where **PremiumV3** isn't availabl
![Screenshot showing how to clone your app.](media/app-service-configure-premium-tier/clone-app.png) In the **Clone app** page, you can create an App Service plan using **PremiumV3** in the region you want, and specify the app settings and configuration that you want to clone.-
-If you are
+
## Moving from Premium Container to Premium V3 SKU
New-AzAppServicePlan -ResourceGroupName <resource_group_name> `
* [Scale up an app in Azure](manage-scale-up.md) * [Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md)
-* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
+* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 4/11/2022 Last updated : 4/15/2022
The App Service platform will review your App Service Environment to confirm mig
If your App Service Environment doesn't pass the validation checks or you try to perform a migration step in the incorrect order, you may see one of the following error messages:
-|Error message |Description |
-|||
-|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the migration feature. |
-|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |
-|Migration cannot be called on this ASE, please contact support for help migrating. |Support will need to be engaged for migrating this App Service Environment. This is potentially due to custom settings used by this environment. |
-|Migrate cannot be called on Zone Pinned ASEs. |App Service Environment v2s that are zone pinned can't be migrated using the migration feature at this time. |
-|Migrate cannot be called if IP SSL is enabled on any of the sites|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |
-|Migrate is not available for this kind|App Service Environment v1 can't be migrated using the migration feature at this time. |
-|Full migration cannot be called before IP addresses are generated|You'll see this error if you attempt to migrate before finishing the pre-migration steps. |
-|Migration to ASEv3 is not allowed for this ASE|You won't be able to migrate using the migration feature. |
-|Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](/azure/azure-resource-manager/management/azure-subscription-service-limits#app-service-limits) has been met. You'll need to remove unneeded environments or contact support to review your options.|
-|`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location|You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |
+|Error message |Description |Recommendation |
+|||-|
+|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
+|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to be available in your region. |
+|Migration cannot be called on this ASE, please contact support for help migrating. |Support will need to be engaged for migrating this App Service Environment. This is potentially due to custom settings used by this environment. |Engage support to resolve your issue. |
+|Migrate cannot be called on Zone Pinned ASEs. |App Service Environment v2s that are zone pinned can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
+|Migrate cannot be called if IP SSL is enabled on any of the sites|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
+|Migrate is not available for this kind|App Service Environment v1 can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
+|Full migration cannot be called before IP addresses are generated|You'll see this error if you attempt to migrate before finishing the pre-migration steps. |Ensure you've completed all pre-migration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). |
+|Migration to ASEv3 is not allowed for this ASE|You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
+|Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](/azure/azure-resource-manager/management/azure-subscription-service-limits#app-service-limits) has been met. |Remove unneeded environments or contact support to review your options. |
+|`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location|You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
## Overview of the migration process using the migration feature
app-service Cli Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-ftp.md
Title: 'CLI: Deploy app files with FTP'
description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to create an app and deploy files with FTP. tags: azure-service-management- ms.devlang: azurecli Previously updated : 12/12/2017 Last updated : 04/15/2022
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/deploy-ftp/deploy-ftp.sh "Create an app and deploy files with FTP")]
+
+### Run the script
++
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
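For reference, the FTP endpoint and user name the script's deployment step depends on can be listed with a command along these lines (app and resource group names are placeholders):

```azurecli
az webapp deployment list-publishing-profiles \
  --name MyWebApp \
  --resource-group MyResourceGroup \
  --query "[?publishMethod=='FTP'].{url:publishUrl, user:userName}" \
  --output table
```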
app-service Cli Deploy Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-github.md
Title: 'CLI: Deploy an app from GitHub'
-description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to deploy an app from GitHub.
+description: Learn how to use the Azure CLI to automate deployment and management of your App Service app. This sample shows how to create an app and deploy it from GitHub.
tags: azure-service-management ms.assetid: 0205c991-0989-4ca3-bb41-237dcc964460 ms.devlang: azurecli Previously updated : 12/11/2017 Last updated : 04/15/2022
This sample script creates an app in App Service with its related resources. It
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/app-service/deploy-github/deploy-github.sh?highlight=3 "Create an app with deployment from GitHub")]
+
+### Run the script
++
+## Clean up resources
+
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
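The key step of the sample is wiring the app to a GitHub repository; a minimal sketch, assuming the `$webapp`, `$resourceGroup`, and `$gitrepo` variables are defined as in the sample script:

```azurecli
# Configure continuous deployment from a public GitHub repository (manual integration, no webhook)
az webapp deployment source config \
  --name $webapp \
  --resource-group $resourceGroup \
  --repo-url $gitrepo \
  --branch master \
  --manual-integration
```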
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
# Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on Azure.
-When you are finished, you will have a [Spring Boot](https://projects.spring.io/spring-boot/) application storing data in [Azure Cosmos DB](../cosmos-db/index.yml) running on [Azure App Service on Linux](overview.md).
+When you are finished, you will have a [Spring Boot](https://spring.io/projects/spring-boot) application storing data in [Azure Cosmos DB](../cosmos-db/index.yml) running on [Azure App Service on Linux](overview.md).
![Spring Boot application storing data in Azure Cosmos DB](./media/tutorial-java-spring-cosmosdb/spring-todo-app-running-locally.jpg)
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
# Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
-In this tutorial, you will deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. The Python app is hosted in a fully managed **[Azure App Service](./overview.md#app-service-on-linux)** which supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment. You can start with a basic pricing tier that can be scaled up at any later time.
+In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. The Python app is hosted in a fully managed **[Azure App Service](./overview.md#app-service-on-linux)** which supports [Python 3.7 or higher](https://www.python.org/downloads/) in a Linux server environment. You can start with a basic pricing tier that can be scaled up at any later time.
:::image type="content" border="False" source="./media/tutorial-python-postgresql-app/python-postgresql-app-architecture-240px.png" lightbox="./media/tutorial-python-postgresql-app/python-postgresql-app-architecture.png" alt-text="An architecture diagram showing an App Service with a PostgreSQL database in Azure."::: **To complete this tutorial, you'll need:**
-* An Azure account with an active subscription exists. If you do not have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
+* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
* Knowledge of Python with Flask development or [Python with Django development](/learn/paths/django-create-data-driven-websites/) * [Python 3.7 or higher](https://www.python.org/downloads/) installed locally. * [PostgreSQL](https://www.postgresql.org/download/) installed locally.
Install the dependencies:
pip install -r requirements.txt ```
+> [!NOTE]
+> If you're following along with this tutorial using your own app, look at the *requirements.txt* file description in each project's *README.md* file ([Flask](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md), [Django](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/README.md)) to see what packages you'll need.
+ Set environment variables to specify how to connect to a local PostgreSQL instance. This sample application requires an *.env* file describing how to connect to your local PostgreSQL instance. Create an *.env* file using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. This tutorial assumes the database name is *restaurant*. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
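A hedged example of what the resulting *.env* file could contain for a local instance; the user and password values are placeholders:

```
DBNAME=restaurant
DBHOST=localhost
DBUSER=<your-local-postgres-user>
DBPASS=<your-local-postgres-password>
```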
-If you want to run SQLite locally instead, follow the instructions in the comments of the *settings.py* file.
+For Django, you can use SQLite locally instead of PostgreSQL by following the instructions in the comments of the [*settings.py*](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app/blob/main/azureproject/settings.py) file.
Create the `restaurant` and `review` database tables:
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
| [!INCLUDE [A screenshot showing the location of the Create button on the Azure Database for PostgreSQL Flexible server deployment option page in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-3.png" alt-text="A screenshot showing the location of the Create Flexible Server button on the Azure Database for PostgreSQL deployment option page in the Azure portal." ::: | | [!INCLUDE [A screenshot showing how to fill out the form to create a new Azure Database for PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-4.png" alt-text="A screenshot showing how to fill out the form to create a new Azure Database for PostgreSQL in the Azure portal." ::: | | [!INCLUDE [A screenshot showing how to select and configure the compute and storage for PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-5.png" alt-text="A screenshot showing how to select and configure the basic database service plan in the Azure portal." ::: |
-| [!INCLUDE [A screenshot showing creating administrator account information for the PostgreSQL Flexible server in in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6.png" alt-text="Creating administrator account information for the PostgreSQL Flexible server in in the Azure portal." ::: |
+| [!INCLUDE [A screenshot showing creating administrator account information for the PostgreSQL Flexible server in in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-6.png" alt-text="Creating administrator account information for the PostgreSQL Flexible server in the Azure portal." ::: |
| [!INCLUDE [A screenshot showing adding current IP as a firewall rule for the PostgreSQL Flexible server in the Azure portal](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7-240px.png" lightbox="./media/tutorial-python-postgresql-app/create-postgres-service-azure-portal-7.png" alt-text="A screenshot showing adding current IP as a firewall rule for the PostgreSQL Flexible server in the Azure portal." ::: | [!INCLUDE [A screenshot showing creating the restaurant database in the Azure Cloud Shell](<./includes/tutorial-python-postgresql-app/create-postgres-service-azure-portal-8.md>)]
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
After the Azure Database for PostgreSQL server is created, configure access to the server from the web app by adding a firewall rule. This can be done through the Azure portal or the Azure CLI.
-If you are working in VS Code, right-click the database server and select **Open in Portal** to go to the Azure portal. Or, go to the [Azure Cloud Shell](https://shell.azure.com) and run the Azure CLI commands.
+If you're working in VS Code, right-click the database server and select **Open in Portal** to go to the Azure portal. Or, go to the [Azure Cloud Shell](https://shell.azure.com) and run the Azure CLI commands.
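If you take the Cloud Shell route, the firewall rule can be created with a command along these lines (server name, rule name, and IP address are placeholders):

```azurecli
az postgres flexible-server firewall-rule create \
  --resource-group MyResourceGroup \
  --name my-postgres-server \
  --rule-name AllowMyClientIP \
  --start-ip-address <your-public-ip> \
  --end-ip-address <your-public-ip>
```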
### [Azure portal](#tab/azure-portal-access) | Instructions | Screenshot |
To deploy a web app from VS Code, you must have the [Azure Tools extension pack]
|:-|--:| | [!INCLUDE [VS Code deploy step 1](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension.png" alt-text="A screenshot showing how to locate the Azure Tools extension in VS Code." ::: | | [!INCLUDE [VS Code deploy step 2](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-1-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-1.png" alt-text="A screenshot showing how to deploy a web app in VS Code." ::: |
-| [!INCLUDE [VS Code deploy step 3](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-2.png" alt-text="A screenshot showing how to deploy a web app in VS Code: selecting the code to deploy." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-3.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box to confirm deploy." ::: |
+| [!INCLUDE [VS Code deploy step 3](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-2.png" alt-text="A screenshot showing how to deploy a web app in VS Code: selecting the code to deploy." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-3-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-3.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box to confirm deployment." ::: |
| [!INCLUDE [VS Code deploy step 4](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-4-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-4.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box to choose to always deploy to the app service." ::: | | [!INCLUDE [VS Code deploy step 5](<./includes/tutorial-python-postgresql-app/deploy-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-5-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-5.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box with choice to browse to website." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-6-240px.png" lightbox="./media/tutorial-python-postgresql-app/deploy-web-app-visual-studio-code-6.png" alt-text="A screenshot showing how to deploy a web app in VS Code: a dialog box with choice to view deployment details." ::: |
Having issues? Refer first to the [Troubleshooting guide](configure-language-pyt
## 7 - Migrate app database
-With the code deployed and the database in place, the app is almost ready to use. The only piece that remains is to establish the necessary schema in the database itself. You do this by "migrating" the data models in the Django app to the database.
+With the code deployed and the database in place, the app is almost ready to use. First, you need to establish the necessary schema in the database itself. You do this by "migrating" the data models in the Django app to the database.
**Step 1.** Create SSH session and connect to web app server.
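Besides the VS Code steps that follow, the SSH session can also be opened from the CLI; a sketch with placeholder names:

```azurecli
az webapp ssh --name <app-name> --resource-group <resource-group-name>
```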
In the **App Service** section of the Azure Tools extension:
### [Flask](#tab/flask)
-When deploying the Flask sample app to Azure App Service, the database tables are created automatically in Azure PostgreSQL. If the tables aren't created, try the following command:
+When you deploy the Flask sample app to Azure App Service, the database tables are created automatically in Azure PostgreSQL. If the tables aren't created, try the following command:
```bash # Create database tables
python manage.py migrate
-If you encounter any errors related to connecting to the database, check the values of the application settings of the App Service created in the previous section, namely `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`. Without those settings, the migrate command cannot communicate with the database.
+If you encounter any errors related to connecting to the database, check the values of the application settings of the App Service created in the previous section, namely `DBHOST`, `DBNAME`, `DBUSER`, and `DBPASS`. Without those settings, the migrate command can't communicate with the database.
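If any of those settings are missing, one hedged way to set them from the CLI (all values are placeholders except the `restaurant` database name used by the tutorial):

```azurecli
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group-name> \
  --settings DBHOST=<server-name>.postgres.database.azure.com DBNAME=restaurant DBUSER=<admin-user> DBPASS=<admin-password>
```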
> [!TIP] > In an SSH session, for Django you can also create users with the `python manage.py createsuperuser` command like you would with a typical Django app. For more information, see the documentation for [django django-admin and manage.py](https://docs.djangoproject.com/en/1.8/ref/django-admin/). Use the superuser account to access the `/admin` portion of the web site. For Flask, use an extension such as [Flask-admin](https://github.com/flask-admin/flask-admin) to provide the same functionality.
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. It can take a minute or two for the app to start, so if you see a default app page, wait a minute and refresh the browser.
-When you see your sample web app, it is running in a Linux container in App Service using a built-in image **Congratulations!** You've deployed your Python app to App Service.
+When you see your sample web app, it's running in a Linux container in App Service using a built-in image. **Congratulations!** You've deployed your Python app to App Service.
### [Flask](#tab/flask)
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
You can leave the app and database running as long as you want for further development work and skip ahead to [Next steps](#next-steps).
-However, when you are finished with the sample app, you can remove all of the resources for the app from Azure to ensure you do not incur other charges and keep your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
+However, when you're finished with the sample app, you can remove all of the resources for the app from Azure to ensure you don't incur other charges and keep your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
### [Azure portal](#tab/azure-portal)
application-gateway Proxy Buffers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/proxy-buffers.md
# Configure Request and Response Proxy Buffers
-Azure Application Gateway Standard v2 and WAF v2 SKUs support buffering Requests (from clients) or Responses (from the backend servers). Based on the processing capabilities of the clients that interact with your Application Gateway, you can use these buffers to configure the speed of packet delivery.
+Azure Application Gateway Standard v2 SKU supports buffering Requests (from clients) or Responses (from the backend servers). Based on the processing capabilities of the clients that interact with your Application Gateway, you can use these buffers to configure the speed of packet delivery.
## Response Buffer
You can change this setting by using GlobalConfiguration in the ARM template as
} ``` For reference, visit [Azure SDK for .NET](/dotnet/api/microsoft.azure.management.network.models.applicationgatewayglobalconfiguration)+
+## Limitations
+- API version 2020-01-01 or later should be used to configure buffers.
+- Currently, these changes are supported only through ARM templates.
+- Request and Response Buffers cannot be disabled for WAF v2 SKU.
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
For example, say you have the following header rewrite rule for the header `"Acc
Here, with only header rewrite configured, the WAF evaluation will be done on `"Accept" : "text/html"`. But when you configure URL rewrite or host header rewrite, then the WAF evaluation will be done on `"Accept" : "image/png"`.
->[!NOTE]
-> URL rewrite operations may cause a minor increase in the compute utilization of your WAF Application Gateway. In application gateway v1 deployments, it is recommended that you monitor the [CPU utilization metric](high-traffic-support.md) for a brief period of time after enabling the URL rewrite rules on your WAF Application Gateway.
- ### Common scenarios for header rewrite #### Remove port information from the X-Forwarded-For header
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 03/17/2022 Last updated : 04/15/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.12 - October 2021
+
+### Fixed
+
+- Improved reliability when validating signatures of extension packages.
+- `azcmagent_proxy remove` command on Linux now correctly removes environment variables on Red Hat Enterprise Linux and related distributions.
+- `azcmagent logs` now includes the computer name and timestamp to help disambiguate log files.
## Version 1.11 - September 2021 ### Fixed
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 03/17/2022 Last updated : 04/18/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.17 - April 2022
+
+### New features
+
+- The default resource name for AWS EC2 instances is now the instance ID instead of the hostname. To override this behavior, use the `--resource-name PreferredResourceName` parameter to specify your own resource name when connecting a server to Azure Arc.
+- The network connectivity check during onboarding now verifies private endpoint configuration if you specify a private link scope. You can run the same check anytime by running [azcmagent check](manage-agent.md#check) with the new `--use-private-link` parameter.
+- You can now disable the extension manager with the [local agent security controls](security-overview.md#local-agent-security-controls).
+
+### Fixed
+
+- If you attempt to run `azcmagent connect` on a server that is already connected to Azure, the resource ID is now printed to the console to help you locate the resource in Azure.
+- The `azcmagent connect` timeout has been extended to 10 minutes.
+- `azcmagent show` no longer prints the private link scope ID. You can check if the server is associated with an Azure Arc private link scope by reviewing the machine details in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/servers), [CLI](/cli/azure/connectedmachine?view=azure-cli-latest#az-connectedmachine-show), [PowerShell](/powershell/module/az.connectedmachine/get-azconnectedmachine), or [REST API](/rest/api/hybridcompute/machines/get).
+- `azcmagent logs` collects only the 2 most recent logs for each service to reduce ZIP file size.
+- `azcmagent logs` collects Guest Configuration logs again.
+ ## Version 1.16 - March 2022
+### Known issues
+
+- `azcmagent logs` doesn't collect Guest Configuration logs in this release. You can locate the log directories in the [agent installation details](deployment-options.md#agent-installation-details).
+ ### New features - You can now granularly control which extensions are allowed to be deployed to your server and whether or not Guest Configuration should be enabled. See [local agent controls to enable or disable capabilities](security-overview.md#local-agent-security-controls) for more information.
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached. - Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services.
-## Version 1.12 - October 2021
-
-### Fixed
--- Improved reliability when validating signatures of extension packages.-- `azcmagent_proxy remove` command on Linux now correctly removes environment variables on Red Hat Enterprise Linux and related distributions.-- `azcmagent logs` now includes the computer name and timestamp to help disambiguate log files.- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 03/17/2022 Last updated : 04/15/2022
When running a network connectivity check, you must provide the name of the Azur
`azcmagent check --location <regionName> --verbose`
+If you expect your server to communicate with Azure through an Azure Arc Private Link Scope, use the `--use-private-link` parameter to run additional tests that verify the hostnames and IP addresses resolved for the Azure Arc services are private endpoints.
+
+`azcmagent check --location <regionName> --use-private-link --verbose`
+ ### connect This parameter specifies a resource in Azure Resource Manager and connects it to Azure Arc. You must specify the subscription and resource group of the resource to connect. Data about the machine is stored in the Azure region specified by the `--location` setting. The default resource name is the hostname of the machine unless otherwise specified.
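A hedged sketch of onboarding with an explicit resource name (the resource group, tenant, subscription, location, and resource name values are placeholders):

```bash
# Connect the local machine to Azure Arc and override the default resource name.
azcmagent connect \
  --resource-group "myResourceGroup" \
  --tenant-id "<tenant-id>" \
  --subscription-id "<subscription-id>" \
  --location "eastus" \
  --resource-name "my-preferred-name"
```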
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md
Title: Security overview description: Security information about Azure Arc-enabled servers. Previously updated : 03/17/2022 Last updated : 04/15/2022 # Azure Arc-enabled servers security overview
azcmagent config set guestconfiguration.enabled false
When Guest Configuration is disabled, any Guest Configuration policies assigned to the machine in Azure will report as non-compliant. Consider [creating an exemption](../../governance/policy/concepts/exemption-structure.md) for these machines or [changing the scope](../../governance/policy/concepts/assignment-structure.md#excluded-scopes) of your policy assignments if you don't want to see these machines reported as non-compliant.
+### Enable or disable the extension manager
+
+The extension manager is responsible for installing, updating, and removing [VM Extensions](manage-vm-extensions.md) on your server. You can disable the extension manager to prevent managing any extensions on your server, but we recommend using the [allowlists and blocklists](#extension-allowlists-and-blocklists) instead for more granular control.
+
+```bash
+azcmagent config set extensions.enabled false
+```
+
+Disabling the extension manager will not remove any extensions already installed on your server. Extensions that are hosted in their own Windows or Linux services, such as the Log Analytics Agent, may continue to run even if the extension manager is disabled. Other extensions that are hosted by the extension manager itself, like the Azure Monitor Agent, will not run if the extension manager is disabled. You should [remove any extensions](manage-vm-extensions-portal.md#remove-extensions) before disabling the extension manager to ensure no extensions continue to run on the server.
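For example, a hedged sketch of removing an extension with the Azure CLI before disabling the extension manager (assumes the `connectedmachine` CLI extension is installed; the machine, resource group, and extension names are placeholders):

```bash
# Remove a VM extension from an Arc-enabled server before turning off the extension manager.
az connectedmachine extension delete \
  --machine-name "myArcServer" \
  --resource-group "myResourceGroup" \
  --name "AzureMonitorLinuxAgent"
```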
+ ### Locked down machine best practices When configuring the Azure Connected Machine agent with a reduced set of capabilities, it is important to consider the mechanisms that someone could use to remove those restrictions and implement appropriate controls. Anybody capable of running commands as an administrator or root user on the server can change the Azure Connected Machine agent configuration. Extensions and guest configuration policies execute in privileged contexts on your server, and as such may be able to change the agent configuration. If you apply these security controls to lock down the agent, Microsoft recommends the following best practices to ensure only local server admins can update the agent configuration:
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
+
+ Title: (Preview) SSH access to Azure Arc-enabled servers
+description: Leverage SSH remoting to access and manage Azure Arc-enabled servers.
Last updated : 03/25/2022++++
+# SSH access to Azure Arc-enabled servers
+SSH for Arc-enabled servers enables SSH-based connections to Arc-enabled servers without requiring a public IP address or additional open ports.
+This functionality can be used interactively, in automation, or with existing SSH-based tooling,
+allowing existing management tools to have a greater impact on Azure Arc-enabled servers.
+
+> [!IMPORTANT]
+> SSH for Arc-enabled servers is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Key benefits
+SSH access to Arc-enabled servers provides the following key benefits:
+ - No public IP address or open SSH ports required
+ - Access to Windows and Linux machines
+ - Ability to log in as a local user or an [Azure user (Linux only)](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md)
+ - Support for other OpenSSH based tooling with config file support
+
+## Prerequisites
+To use this functionality, ensure the following:
+ - Ensure the Arc-enabled server has a hybrid agent version of "1.13.21320.014" or higher.
+ - Run: ```azcmagent show``` on your Arc-enabled Server.
+ - Ensure the Arc-enabled server has the "sshd" service enabled.
+ - Ensure you have the Virtual Machine Local User Login role assigned (role ID: 602da2baa5c241dab01d5360126ab525); see the role assignment sketch after this list.
+
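A hedged sketch of assigning that role with the Azure CLI (the assignee and scope values are placeholders, following the placeholder style used later in this article):

```bash
# Assign the Virtual Machine Local User Login role, scoped to a single Arc-enabled server.
az role assignment create \
  --role "Virtual Machine Local User Login" \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>"
```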
+### Availability
+SSH access to Arc-enabled servers is currently supported in the following regions:
+- eastus2euap, eastus, eastus2, westus2, southeastasia, westeurope, northeurope, westcentralus, southcentralus, uksouth, australiaeast, francecentral, japaneast, eastasia, koreacentral, westus3, westus, centralus, northcentralus.
+
+### Supported operating systems
+ - Windows: Windows 7+ and Windows Server 2012+
+ - Linux:
+ - CentOS: CentOS 7, CentOS 8
+ - RedHat Enterprise Linux (RHEL): RHEL 7.4 to RHEL 7.10, RHEL 8.3+
+ - SUSE Linux Enterprise Server (SLES): SLES 12, SLES 15.1+
+ - Ubuntu Server: Ubuntu Server 16.04 to Ubuntu Server 20.04
+
+## Getting started
+### Register the HybridConnectivity resource provider
+> [!NOTE]
+> This is a one-time operation that needs to be performed on each subscription.
+
+Check if the HybridConnectivity resource provider (RP) has been registered:
+
+```az provider show -n Microsoft.HybridConnectivity```
+
+If the RP has not been registered, run the following:
+
+```az provider register -n Microsoft.HybridConnectivity```
+
+This operation can take 2-5 minutes to complete. Before moving on, check that the RP has been registered.
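For example, one way to check just the registration state (a sketch using a JMESPath query):

```bash
# Print only the registration state; expect "Registered" before moving on.
az provider show -n Microsoft.HybridConnectivity --query registrationState -o tsv
```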
+
+### Install az CLI extension
+This functionality is currently packaged in an az CLI extension.
+To install this extension, run:
+
+```az extension add --name ssh```
+
+If you already have the extension installed, it can be updated by running:
+
+```az extension update --name ssh```
+
+> [!NOTE]
+> The Azure CLI extension version must be greater than 1.0.1.
+
+### Create default connectivity endpoint
+> [!NOTE]
+> The following actions must be completed for each Arc-enabled server.
+
+Run the following commands:
+ ```az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview --body '{\"properties\": {\"type\": \"default\"}}'```
+
+ ```az rest --method get --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview```
++
+### Enable functionality on your Arc-enabled server
+To use the SSH connect feature, you must enable connections on the hybrid agent.
+
+> [!NOTE]
+> The following actions must be completed in an elevated terminal session.
+
+View your current incoming connections:
+
+```azcmagent config list```
+
+If you have existing ports, you will need to include them in the following command.
+
+To add access to SSH connections, run the following:
+
+```azcmagent config set incomingconnections.ports 22<,other open ports,...>```
+
+> [!NOTE]
+> If you are using a non-default port for your SSH connection, replace port 22 with your desired port in the previous command.
+
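To verify the change afterward, a hedged sketch (assuming `azcmagent config get` is available on your agent version):

```bash
# Confirm that port 22 (or your custom port) now appears in the allowed incoming ports.
azcmagent config get incomingconnections.ports
```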
+## Examples
+To view examples of using the ```az ssh vm``` command, view the az CLI documentation page for [az ssh](/cli/azure/ssh).
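For instance, a hedged sketch of connecting to an Arc-enabled server as a local user (the resource group, server, and user names are placeholders):

```bash
# Open an SSH session to the Arc-enabled server through the service; no public IP is required.
az ssh vm --resource-group "myResourceGroup" --vm-name "myArcServer" --local-user "myLocalUser"
```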
azure-arc Ssh Arc Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-troubleshoot.md
+
+ Title: Troubleshoot SSH access to Azure Arc-enabled servers issues
+description: This article tells how to troubleshoot and resolve issues with the SSH access to Arc-enabled servers.
Last updated : 03/21/2022+++
+# Troubleshoot SSH access to Azure Arc-enabled servers
+
+This article provides information on troubleshooting and resolving issues that may occur while attempting to connect to Azure Arc-enabled servers via SSH.
+For general information, see [SSH access to Arc-enabled servers overview](./ssh-arc-overview.md).
+
+> [!IMPORTANT]
+> SSH for Arc-enabled servers is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Client-side issues
+These issues are due to errors that occur on the machine that the user is connecting from.
+
+### Incorrect Azure subscription
+This occurs when the active subscription in Azure CLI isn't the same subscription that contains the server being connected to.
+Possible errors:
+ - "Unable to determine the target machine type as Azure VM or Arc Server"
+ - "Unable to determine that the target machine is an Arc Server"
+ - "Unable to determine that the target machine is an Azure VM"
+ - "The resource \<name\> in the resource group \<resource group\> was not found"
+
+Resolution:
+ - Run ```az account set -s <AzureSubscriptionId>``` where "AzureSubscriptionId" corresponds to the subscription that contains the target resource.
+
+### Unable to locate client binaries
+This issue occurs when the client-side SSH binaries required to connect cannot be found.
+Error:
+ - "Failed to create ssh key file with error: \<ERROR\>."
+ - "Failed to run ssh command with error: \<ERROR\>."
+ - "Failed to get certificate info with error: \<ERROR\>."
+
+Resolution:
+ - Provide the path to the folder that contains the SSH client executables by using the ```--ssh-client-folder``` parameter (see the sketch after this list).
+
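For example, a hedged sketch on a Windows client where the OpenSSH binaries live in a non-default folder (the path and names are placeholders):

```bash
# Point the az ssh command at the folder that contains ssh.exe and ssh-keygen.exe.
az ssh vm --resource-group "myResourceGroup" --vm-name "myArcServer" --local-user "myLocalUser" --ssh-client-folder "C:\Program Files\OpenSSH"
```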
+## Server-side issues
+### SSH traffic is not allowed on the server
+This issue occurs when SSHD isn't running on the server, or SSH traffic isn't allowed on the server.
+Possible errors:
+ - {"level":"fatal","msg":"sshproxy: error copying information from the connection: read tcp 192.168.1.180:60887-\u003e40.122.115.96:443: wsarecv: An existing connection was forcibly closed by the remote host.","time":"2022-02-24T13:50:40-05:00"}
+
+Resolution:
+ - Ensure that the SSHD service is running on the Arc-enabled server
+ - Ensure that port 22 (or other non-default port) is listed in allowed incoming connections. Run ```azcmagent config list``` on the Arc-enabled server in an elevated session
+
+## Azure permissions issues
+
+### Incorrect role assignments
+This issue occurs when the current user does not have the proper role assignment on the target resource, specifically a lack of "read" permissions.
+Possible errors:
+ - "Unable to determine the target machine type as Azure VM or Arc Server"
+ - "Unable to determine that the target machine is an Arc Server"
+ - "Unable to determine that the target machine is an Azure VM"
+ - "Permission denied (publickey)."
+ - "Request for Azure Relay Information Failed: (AuthorizationFailed) The client '\<user name\>' with object id '\<ID\>' does not have authorization to perform action 'Microsoft.HybridConnectivity/endpoints/listCredentials/action' over scope '/subscriptions/\<Subscription ID\>/resourceGroups/\<Resource Group\>/providers/Microsoft.HybridCompute/machines/\<Machine Name\>/providers/Microsoft.HybridConnectivity/endpoints/default' or the scope is invalid. If access was recently granted, please refresh your credentials."
+
+Resolution:
+ - Ensure that you have Contributor or Owner permissions on the resource you are connecting to.
+ - If using Azure AD login, ensure you have the Virtual Machine User Login or the Virtual Machine Administrator Login roles
+
+### HybridConnectivity RP was not registered
+This issue occurs when the HybridConnectivity RP has not been registered for the subscription.
+Error:
+ - Request for Azure Relay Information Failed: (NoRegisteredProviderFound) Code: NoRegisteredProviderFound
+
+Resolution:
+ - Run ```az provider register -n Microsoft.HybridConnectivity```
+ - Confirm success by running ```az provider show -n Microsoft.HybridConnectivity```, verify that "registrationState" is set to "Registered"
+ - Restart the hybrid agent on the Arc-enabled server
+
+ ## Disable SSH to Arc-enabled servers
+ This functionality can be disabled by completing the following actions:
+ - Remove the SSH port from the allowed incoming ports: ```azcmagent config set incomingconnections.ports <other open ports,...>```
+ - Delete the default connectivity endpoint: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview```
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
$Total = ($Outputs | Measure-Object -Sum).Sum
Invoke-DurableActivity -FunctionName 'F3' -Input $Total ```
-The fan-out work is distributed to multiple instances of the `F2` function. Please note the usage of the `NoWait` switch on the `F2` function invocation: this switch allows the orchestrator to proceed invoking `F2` without for activity completion. The work is tracked by using a dynamic list of tasks. The `Wait-ActivityFunction` command is called to wait for all the called functions to finish. Then, the `F2` function outputs are aggregated from the dynamic task list and passed to the `F3` function.
+The fan-out work is distributed to multiple instances of the `F2` function. Please note the usage of the `NoWait` switch on the `F2` function invocation: this switch allows the orchestrator to proceed invoking `F2` without waiting for activity completion. The work is tracked by using a dynamic list of tasks. The `Wait-ActivityFunction` command is called to wait for all the called functions to finish. Then, the `F2` function outputs are aggregated from the dynamic task list and passed to the `F3` function.
The automatic checkpointing that happens at the `Wait-ActivityFunction` call ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
You can install this version of the extension in your function app by registerin
# [Extension 4.x+ (preview)](#tab/extensionv4/csharp-script)
-This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file:
+This extension version is available from the preview extension bundle v4 by adding the following lines in your `host.json` file:
```json { "version": "2.0", "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.0.0, 5.0.0)"
} } ```
This extension version is available from the extension bundle v3 by adding the f
The Cosmos DB extension is part of an [extension bundle], which is specified in your host.json project file. You may need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn more, see [extension bundle].
-# [Bundle v2.x](#tab/functionsv2)
+# [Bundle v2.x and v3.x](#tab/functionsv2)
-You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x or 3.x.
-# [Bundle v3.x](#tab/extensionv4)
+
+# [Bundle v4.x (Preview)](#tab/extensionv4)
This version of the bundle contains a preview version of the Cosmos DB bindings extension (version 4.x) that introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
This version of the bundle contains a preview version of the Cosmos DB bindings
[!INCLUDE [functions-cosmosdb-extension-java-note](../../includes/functions-cosmosdb-extension-java-note.md)] ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-python,programming-language-java,programming-language-powershell"
-You can add this version of the extension from the preview extension bundle v3 by adding or replacing the following code in your `host.json` file:
+
+You can add this version of the extension from the preview extension bundle v4 by adding or replacing the following code in your `host.json` file:
```json { "version": "2.0", "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.0.0, 5.0.0)"
} } ```
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
The Event Grid extension is part of an [extension bundle], which is specified in
# [Bundle v3.x](#tab/extensionv3)
-This version of the extension supports updated Event Grid binding parameter types of [Azure.Messaging.CloudEvent](/dotnet/api/azure.messaging.cloudevent) and [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent).
+You can add this version of the extension from the extension bundle v3 by adding or replacing the following configuration in your `host.json` file:
-You can add this version of the extension from the extension bundle v3 by adding or replacing the following code in your `host.json` file:
-
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
- }
-}
-```
To learn more, see [Update your extensions].
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
The functionality of the extension varies depending on the extension version:
# [Extension 5.x+](#tab/extensionv5/in-process)
-Version 5.x of the Service Bus bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). This extension version also changes the types that you can bind to, replacing the types from `Microsoft.ServiceBus.Messaging` and `Microsoft.Azure.ServiceBus` with newer types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
+
+This version allows you to bind to types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
This extension version is available by installing the [NuGet package], version 5.x or later.
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
# [Extension 5.x+](#tab/extensionv5/isolated-process) +
+This version allows you to bind to types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
+ Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.ServiceBus), version 5.x. # [Functions 2.x+](#tab/functionsv2/isolated-process)
Functions version 1.x doesn't support isolated process.
# [Extension 5.x+](#tab/extensionv5/csharp-script)
-Version 5.x of the Service Bus bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). This extension version also changes the types that you can bind to, replacing the types from `Microsoft.ServiceBus.Messaging` and `Microsoft.Azure.ServiceBus` with newer types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
-This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file:
+This version allows you to bind to types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus).
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
- }
-}
-```
+This extension is available from the extension bundle v3 by adding the following lines in your `host.json` file:
+ To learn more, see [Update your extensions].
The Service Bus binding is part of an [extension bundle], which is specified in
# [Bundle v3.x](#tab/extensionv3)
-Version 3.x of the extension bundle contains version 5.x of the Service Bus bindings extension, which introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
-You can add this version of the extension from the preview extension bundle v3 by adding or replacing the following code in your `host.json` file:
+You can add this version of the extension from the extension bundle v3 by adding or replacing the following code in your `host.json` file:
-```json
-{
- "version": "3.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
- }
-}
-```
To learn more, see [Update your extensions].
When you set the `isSessionsEnabled` property or attribute on [the trigger](func
| **webProxy**| n/a | The proxy to use for communicating with Service Bus over web sockets. A proxy cannot be used with the `amqpTcp` transport. | |**autoCompleteMessages**|`true`|Determines whether or not to automatically complete messages after successful execution of the function and should be used in place of the `autoComplete` configuration setting.| |**maxAutoLockRenewalDuration**|`00:05:00`|The maximum duration within which the message lock will be renewed automatically. This setting only applies for functions that receive a single message at a time.|
-|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that the should be initiate per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting only applies for functions that receive a single message at a time.|
+|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that should be initiated per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting only applies for functions that receive a single message at a time.|
|**maxConcurrentSessions**|`8`|The maximum number of sessions that can be handled concurrently per scaled instance. This setting only applies for functions that receive a single message at a time.| |**maxMessageBatchSize**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.| |**sessionIdleTimeout**|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session. This setting only applies for functions that receive a single message at a time.|
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
See the [Image resize with Event Grid](../event-grid/resize-images-on-storage-bl
#### Storage Extension 5.x and higher
-When using the preview storage extension, there is built-in support for Event Grid in the Blob trigger, which requires setting the `source` parameter to Event Grid in your existing Blob trigger.
+When using the storage extension, there is built-in support for Event Grid in the Blob trigger, which requires setting the `source` parameter to Event Grid in your existing Blob trigger.
For more information on how to use the Blob Trigger based on Event Grid, refer to the [Event Grid Blob Trigger guide](./functions-event-grid-blob-trigger.md).
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
The functionality of the extension varies depending on the extension version:
# [Extension 5.x and higher](#tab/extensionv5/in-process)
-A Blob-specific version of the Storage bindings extension is available. With this version, you can [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the tutorial [creating a function app with identity-based connections](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn more about these new types are different and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md).
-This extension version is available by installing the [NuGet package], version 5.x.
+This version allows you to bind to types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn more about how these new types are different from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md).
+
+This extension is available by installing the [Microsoft.Azure.WebJobs.Extensions.Storage.Blobs NuGet package], version 5.x.
+
+Using the .NET CLI:
+
+```dotnetcli
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Blobs --version 5.0.0
+```
+ # [Functions 2.x and higher](#tab/functionsv2/in-process)
-Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install NuGet package, version 3.x. The package is used for .NET class libraries while the extension bundle is used for all other application types.
+Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [Microsoft.Azure.WebJobs.Extensions.Storage NuGet package, version 4.x]. The package is used for .NET class libraries while the extension bundle is used for all other application types.
# [Functions 1.x](#tab/functionsv1/in-process)
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
-Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage), version 5.x.
+
+This version allows you to bind to types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn more about how these new types are different from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md).
+
+Add the extension to your project by installing the [Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs NuGet package], version 5.x.
+
+Using the .NET CLI:
+
+```dotnetcli
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs --version 5.0.0
+```
+ # [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage), version 4.x.
+Add the extension to your project by installing the [Microsoft.Azure.Functions.Worker.Extensions.Storage NuGet package, version 4.x].
# [Functions 1.x](#tab/functionsv1/isolated-process)
Functions version 1.x doesn't support isolated process.
# [Extension 5.x and higher](#tab/extensionv5/csharp-script) + This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file:
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
- }
-}
-```
To learn more, see [Update your extensions].
-You can install this version of the extension in your function app by registering the [extension bundle], version 3.x.
# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
The Blob storage binding is part of an [extension bundle], which is specified in
# [Bundle v3.x](#tab/extensionv3)
-You can add this version of the extension from the preview extension bundle v3 by adding or replacing the following code in your `host.json` file:
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
- }
-}
-```
+You can add this version of the extension from the extension bundle v3 by adding or replacing the following code in your `host.json` file:
+ To learn more, see [Update your extensions]. + # [Bundle v2.x](#tab/extensionv2) You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
Functions 1.x apps automatically have a reference to the extension.
::: zone-end ## host.json settings
This section describes the function app configuration settings available for fun
[core tools]: ./functions-run-local.md [extension bundle]: ./functions-bindings-register.md#extension-bundles
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
+[Microsoft.Azure.WebJobs.Extensions.Storage.Blobs NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage.Blobs
+[Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs
+[Microsoft.Azure.WebJobs.Extensions.Storage NuGet package, version 4.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/4.0.5
+[Microsoft.Azure.Functions.Worker.Extensions.Storage NuGet package, version 4.x]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage/4.0.4
[Update your extensions]: ./functions-bindings-register.md [Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
The functionality of the extension varies depending on the extension version:
# [Extension 5.x+](#tab/extensionv5/in-process) <a name="storage-extension-5x-and-higher"></a>
-A new version of the Storage bindings extension is available in preview. It introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Storage.Queues](/dotnet/api/azure.storage.queues).
-This extension version is available by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage.Queues), version 5.x.
+
+This version allows you to bind to types from [Azure.Storage.Queues](/dotnet/api/azure.storage.queues).
+
+This extension is available by installing the [Microsoft.Azure.WebJobs.Extensions.Storage.Queues NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage.Queues), version 5.x.
+
+Using the .NET CLI:
+
+```dotnetcli
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Queues --version 5.0.0
+```
[!INCLUDE [functions-bindings-storage-extension-v5-tables-note](../../includes/functions-bindings-storage-extension-v5-tables-note.md)]
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
# [Extension 5.x+](#tab/extensionv5/isolated-process) +
+This version allows you to bind to types from [Azure.Storage.Queues](/dotnet/api/azure.storage.queues).
+ Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues), version 5.x. +
+Using the .NET CLI:
+
+```dotnetcli
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues --version 5.0.0
+```
+ # [Functions 2.x+](#tab/functionsv2/isolated-process)
Functions version 1.x doesn't support isolated process.
# [Extension 5.x+](#tab/extensionv5/csharp-script) + This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file:
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
- }
-}
-```
To learn more, see [Update your extensions].
-You can install this version of the extension in your function app by registering the [extension bundle], version 3.x.
- # [Functions 2.x+](#tab/functionsv2/csharp-script)
The Blob storage binding is part of an [extension bundle], which is specified in
# [Bundle v3.x](#tab/extensionv3) + You can add this version of the extension from the preview extension bundle v3 by adding or replacing the following code in your `host.json` file:
-```json
-{
- "version": "3.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.0, 4.0.0)"
- }
-}
-```
To learn more, see [Update your extensions]. # [Bundle v2.x](#tab/extensionv2)
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
For more information about how to use CloudTable, see [Get started with Azure Ta
If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
-# [Table API extension (preview)](#tab/table-api/in-process)
+# [Table API extension](#tab/table-api/in-process)
The following example shows a [C# function](./functions-dotnet-class-library.md) that reads a single table row. For every message sent to the queue, the function will be triggered.
To return a specific entity by key, use a binding parameter that derives from [T
To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
-# [Table API extension (preview)](#tab/table-api/in-process)
+# [Table API extension](#tab/table-api/in-process)
To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
The following types are supported for `out` parameters and return types:
You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
-# [Table API extension (preview)](#tab/table-api/in-process)
+# [Table API extension](#tab/table-api/in-process)
The following types are supported for `out` parameters and return types:
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
The process for installing the extension varies depending on the extension versi
<a name="storage-extension"></a> <a name="table-api-extension"></a>
-# [Combined Azure Storage extension](#tab/storage-extension/in-process)
--
-Working with the bindings requires that you reference the appropriate NuGet package. Tables are included in a combined package for Azure Storage. Install the [Microsoft.Azure.WebJobs.Extensions.Storage NuGet package][storage-4.x], version 3.x or 4.x.
-
-> [!NOTE]
-> Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x of the extension NuGet package or additionally include the [Table API extension](#table-api-extension) when using version 5.x.
+# [Table API extension](#tab/table-api/in-process)
-# [Table API extension (preview)](#tab/table-api/in-process)
-A new Table API extension is now in preview. The new version introduces the ability to use Cosmos DB Table APIs and to [connect to Azure Storage using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the tutorial [creating a function app with identity-based connections](./functions-identity-based-connections-tutorial.md). For .NET applications, the new extension version also changes the types that you can bind to, replacing the types from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` with newer types from [Azure.Data.Tables](/dotnet/api/azure.data.tables).
+This version allows you to bind to types from [Azure.Data.Tables](/dotnet/api/azure.data.tables). It also introduces the ability to use Cosmos DB Table APIs.
-This new extension is available by installing the [Microsoft.Azure.WebJobs.Extensions.Tables NuGet package][table-api-package] to a project using version 5.x or higher of the storage extension for [blobs](./functions-bindings-storage-blob.md?tabs=in-process%2Cextensionv5) and [queues](./functions-bindings-storage-queue.md?tabs=in-process%2Cextensionv5).
+This extension is available by installing the [Microsoft.Azure.WebJobs.Extensions.Tables NuGet package][table-api-package] into a project using version 5.x or higher of the extensions for [blobs](./functions-bindings-storage-blob.md?tabs=in-process%2Cextensionv5) and [queues](./functions-bindings-storage-queue.md?tabs=in-process%2Cextensionv5).
Using the .NET CLI: ```dotnetcli # Install the Tables API extension
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Tables --version 1.0.0-beta.1
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Tables --version 1.0.0
# Update the combined Azure Storage extension (to a version which no longer includes Tables) dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0 ```
-> [!IMPORTANT]
-> If you install the Table API extension with the [Microsoft.Azure.WebJobs.Extensions.Tables NuGet package][table-api-package], ensure that you are using [Microsoft.Azure.WebJobs.Extensions.Storage version 5.x or higher][storage-5.x], as prior versions of that package also include the older version of the table bindings. Using an older version of the storage extension will result in conflicts.
-Any existing functions in your project which use table bindings may need to be updated to account for changes in allowed parameter types.
+# [Combined Azure Storage extension](#tab/storage-extension/in-process)
+
+Working with the bindings requires that you reference the appropriate NuGet package. Tables are included in a combined package for Azure Storage. Install the [Microsoft.Azure.WebJobs.Extensions.Storage NuGet package][storage-4.x], version 3.x or 4.x.
+
+> [!NOTE]
+> Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x of the extension NuGet package or additionally include the [Table API extension](#table-api-extension) when using version 5.x.
# [Functions 1.x](#tab/functionsv1/in-process)
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md
The following table describes the available deployment methods for your Function
| Deployment&nbsp;type | Methods | Best for... | | -- | -- | -- |
-| Tools-based | &bull;&nbsp;[Visual&nbsp;Studio&nbsp;Code&nbsp;publish](functions-develop-vs-code.md#publish-to-azure)<br/>&bull;&nbsp;[Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>&bull;&nbsp;[Core Tools publish](functions-run-local.md#publish) | Deployments during development and other ad-hock deployments. Deployments are managed locally by the tooling. |
+| Tools-based | &bull;&nbsp;[Visual&nbsp;Studio&nbsp;Code&nbsp;publish](functions-develop-vs-code.md#publish-to-azure)<br/>&bull;&nbsp;[Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>&bull;&nbsp;[Core Tools publish](functions-run-local.md#publish) | Deployments during development and other ad hoc deployments. Deployments are managed locally by the tooling. |
| App Service-managed| &bull;&nbsp;[Deployment&nbsp;Center&nbsp;(CI/CD)](functions-continuous-deployment.md)<br/>&bull;&nbsp;[Container&nbsp;deployments](functions-create-function-linux-custom-image.md#enable-continuous-deployment-to-azure) | Continuous deployment (CI/CD) from source control or from a container registry. Deployments are managed by the App Service platform (Kudu).| | External pipelines|&bull;&nbsp;[Azure Pipelines](functions-how-to-azure-devops.md)<br/>&bull;&nbsp;[GitHub actions](functions-how-to-github-actions.md) | Production and DevOps pipelines that include additional validation, testing, and other actions to be run as part of an automated deployment. Deployments are managed by the pipeline. |
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 04/12/2022 Last updated : 04/18/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* You can configure only one Active Directory (AD) connection per subscription and per region.
- Azure NetApp Files does not support multiple AD connections in a single *region*, even if the AD connections are in different NetApp accounts. However, you can have multiple AD connections in a single subscription if the AD connections are in different regions. If you need multiple AD connections in a single region, you can use separate subscriptions to do so.
+ Azure NetApp Files doesn't support multiple AD connections in a single *region*, even if the AD connections are in different NetApp accounts. However, you can have multiple AD connections in a single subscription if the AD connections are in different regions. If you need multiple AD connections in a single region, you can use separate subscriptions to do so.
- The AD connection is visible only through the NetApp account it is created in. However, you can enable the Shared AD feature to allow NetApp accounts that are under the same subscription and same region to use an AD server created in one of the NetApp accounts. See [Map multiple NetApp accounts in the same subscription and region to an AD connection](#shared_ad). When you enable this feature, the AD connection becomes visible in all NetApp accounts that are under the same subscription and same region.
+ The AD connection is visible only through the NetApp account it's created in. However, you can enable the Shared AD feature to allow NetApp accounts that are under the same subscription and same region to use an AD server created in one of the NetApp accounts. See [Map multiple NetApp accounts in the same subscription and region to an AD connection](#shared_ad). When you enable this feature, the AD connection becomes visible in all NetApp accounts that are under the same subscription and same region.
-* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you will specify.
+* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you'll specify.
-* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you will specify. In some cases, `msDS-SupportedEncryptionTypes` write permission is required to set account attributes within AD.
+* The admin account you use must have the capability to create machine accounts in the organizational unit (OU) path that you'll specify. In some cases, `msDS-SupportedEncryptionTypes` write permission is required to set account attributes within AD.
-* Group Managed Service Accounts (GMSA) cannot be used with the Active Directory connection user account.
+* Group Managed Service Accounts (GMSA) can't be used with the Active Directory connection user account.
-* If you change the password of the Active Directory user account that is used in Azure NetApp Files, be sure to update the password configured in the [Active Directory Connections](#create-an-active-directory-connection). Otherwise, you will not be able to create new volumes, and your access to existing volumes might also be affected depending on the setup.
+* If you change the password of the Active Directory user account that is used in Azure NetApp Files, be sure to update the password configured in the [Active Directory Connections](#create-an-active-directory-connection). Otherwise, you won't be able to create new volumes, and your access to existing volumes might also be affected depending on the setup.
* Before you can remove an Active Directory connection from your NetApp account, you need to first remove all volumes associated with it.
Several features of Azure NetApp Files require that you have an Active Directory
* You can enable AES encryption for AD Authentication by checking the **AES Encryption** box in the [Join Active Directory](#create-an-active-directory-connection) window. Azure NetApp Files supports DES, Kerberos AES 128, and Kerberos AES 256 encryption types (from the least secure to the most secure). If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled that matches the capabilities enabled for your Active Directory.
- For example, if your Active Directory has only the AES-128 capability, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory does not have any Kerberos encryption capability, Azure NetApp Files uses DES by default.
+ For example, if your Active Directory has only the AES-128 capability, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory doesn't have any Kerberos encryption capability, Azure NetApp Files uses DES by default.
You can enable the account options in the properties of the Active Directory Users and Computers Microsoft Management Console (MMC): ![Active Directory Users and Computers MMC](../media/azure-netapp-files/ad-users-computers-mmc.png)
-* Azure NetApp Files supports [LDAP signing](/troubleshoot/windows-server/identity/enable-ldap-signing-in-windows-server), which enables secure transmission of LDAP traffic between the Azure NetApp Files service and the targeted [Active Directory domain controllers](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview). If you are following the guidance of Microsoft Advisory [ADV190023](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023) for LDAP signing, then you should enable the LDAP signing feature in Azure NetApp Files by checking the **LDAP Signing** box in the [Join Active Directory](#create-an-active-directory-connection) window.
+* Azure NetApp Files supports [LDAP signing](/troubleshoot/windows-server/identity/enable-ldap-signing-in-windows-server), which enables secure transmission of LDAP traffic between the Azure NetApp Files service and the targeted [Active Directory domain controllers](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview). If you're following the guidance of Microsoft Advisory [ADV190023](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023) for LDAP signing, then you should enable the LDAP signing feature in Azure NetApp Files by checking the **LDAP Signing** box in the [Join Active Directory](#create-an-active-directory-connection) window.
[LDAP channel binding](https://support.microsoft.com/help/4034879/how-to-add-the-ldapenforcechannelbinding-registry-entry) configuration alone has no effect on the Azure NetApp Files service. However, if you use both LDAP channel binding and secure LDAP (for example, LDAPS or `start_tls`), then the SMB volume creation will fail.
Several features of Azure NetApp Files require that you have an Active Directory
| Unix groups | 24-hour TTL, 1-minute negative TTL | | Unix users | 24-hour TTL, 1-minute negative TTL |
- Caches have a specific timeout period called *Time to Live*. After the timeout period, entries age out so that stale entries do not linger. The *negative TTL* value is where a lookup that has failed resides to help avoid performance issues due to LDAP queries for objects that might not exist.
+ Caches have a specific timeout period called *Time to Live*. After the timeout period, entries age out so that stale entries don't linger. The *negative TTL* value is where a lookup that has failed resides to help avoid performance issues due to LDAP queries for objects that might not exist.
-* Azure NetApp Files does not support the use of Active Directory Domain Services Read-Only Domain Controllers (RODC). To ensure that Azure NetApp Files does not try to use an RODC domain controller, configure the **AD Site** field of the Azure NetApp Files Active Directory connection with an Active Directory site that does not contain any RODC domain controllers.
+* Azure NetApp Files doesn't support the use of Active Directory Domain Services Read-Only Domain Controllers (RODC). To ensure that Azure NetApp Files doesn't try to use an RODC domain controller, configure the **AD Site** field of the Azure NetApp Files Active Directory connection with an Active Directory site that doesn't contain any RODC domain controllers.
## Decide which Domain Services to use
For more information, see [Compare self-managed Active Directory Domain Services
### Active Directory Domain Services
-You can use your preferred [Active Directory Sites and Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) scope for Azure NetApp Files. This option enables reads and writes to Active Directory Domain Services (AD DS) domain controllers that are [accessible by Azure NetApp Files](azure-netapp-files-network-topologies.md). It also prevents the service from communicating with domain controllers that are not in the specified Active Directory Sites and Services site.
+You can use your preferred [Active Directory Sites and Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) scope for Azure NetApp Files. This option enables reads and writes to Active Directory Domain Services (AD DS) domain controllers that are [accessible by Azure NetApp Files](azure-netapp-files-network-topologies.md). It also prevents the service from communicating with domain controllers that aren't in the specified Active Directory Sites and Services site.
To find your site name when you use AD DS, you can contact the administrative group in your organization that is responsible for Active Directory Domain Services. The example below shows the Active Directory Sites and Services plugin where the site name is displayed:
Additional AADDS considerations apply for Azure NetApp Files:
* If you use another VNet in the region where Azure NetApp Files is deployed, you should create a peering between the two VNets. * Azure NetApp Files supports `user` and `resource forest` types. * For synchronization type, you can select `All` or `Scoped`.
- If you select `Scoped`, ensure the correct Azure AD group is selected for accessing SMB shares. If you are uncertain, you can use the `All` synchronization type.
+ If you select `Scoped`, ensure the correct Azure AD group is selected for accessing SMB shares. If you're uncertain, you can use the `All` synchronization type.
* If you use AADDS with a dual-protocol volume, you must be in a custom OU in order to apply POSIX attributes. See [Manage LDAP POSIX Attributes](create-volumes-dual-protocol.md#manage-ldap-posix-attributes) for details. When you create an Active Directory connection, note the following specifics for AADDS:
This setting is configured in the **Active Directory Connections** under **NetAp
1. From your NetApp account, select **Active Directory connections**, then select **Join**.
- Azure NetApp Files supports only one Active Directory connection within the same region and the same subscription. If Active Directory is already configured by another NetApp account in the same subscription and region, you cannot configure and join a different Active Directory from your NetApp account. However, you can enable the Shared AD feature to allow an Active Directory configuration to be shared by multiple NetApp accounts within the same subscription and the same region. See [Map multiple NetApp accounts in the same subscription and region to an AD connection](#shared_ad).
+ Azure NetApp Files supports only one Active Directory connection within the same region and the same subscription. If Active Directory is already configured by another NetApp account in the same subscription and region, you can't configure and join a different Active Directory from your NetApp account. However, you can enable the Shared AD feature to allow an Active Directory configuration to be shared by multiple NetApp accounts within the same subscription and the same region. See [Map multiple NetApp accounts in the same subscription and region to an AD connection](#shared_ad).
![Active Directory Connections](../media/azure-netapp-files/azure-netapp-files-active-directory-connections.png)
This setting is configured in the **Active Directory Connections** under **NetAp
* **Organizational unit path** This is the LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. That is, OU=second level, OU=first level.
- If you are using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
+ If you're using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
![Join Active Directory](../media/azure-netapp-files/azure-netapp-files-join-active-directory.png)
This setting is configured in the **Active Directory Connections** under **NetAp
||| | `SeSecurityPrivilege` | Manage log operations. |
- For example, user accounts used for installing SQL Server in certain scenarios must (temporarily) be granted elevated security privilege. If you are using a non-administrator (domain) account to install SQL Server and the account does not have the security privilege assigned, you should add security privilege to the account.
+ For example, user accounts used for installing SQL Server in certain scenarios must (temporarily) be granted elevated security privilege. If you're using a non-administrator (domain) account to install SQL Server and the account doesn't have the security privilege assigned, you should add security privilege to the account.
> [!IMPORTANT] > Using the **Security privilege users** feature requires that you submit a waitlist request through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using this feature.
This setting is configured in the **Active Directory Connections** under **NetAp
||| | `SeBackupPrivilege` | Back up files and directories, overriding any ACLs. | | `SeRestorePrivilege` | Restore files and directories, overriding any ACLs. <br> Set any valid user or group SID as the file owner. |
- | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege are not required to have traverse (`x`) permissions to traverse folders or symlinks. |
+ | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege aren't required to have traverse (`x`) permissions to traverse folders or symlinks. |
![Active Directory backup policy users](../media/azure-netapp-files/active-directory-backup-policy-users.png)
This setting is configured in the **Active Directory Connections** under **NetAp
||| | `SeBackupPrivilege` | Back up files and directories, overriding any ACLs. | | `SeRestorePrivilege` | Restore files and directories, overriding any ACLs. <br> Set any valid user or group SID as the file owner. |
- | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege are not required to have traverse (`x`) permissions to traverse folders or symlinks. |
+ | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege aren't required to have traverse (`x`) permissions to traverse folders or symlinks. |
| `SeTakeOwnershipPrivilege` | Take ownership of files or other objects. | | `SeSecurityPrivilege` | Manage log operations. |
- | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege are not required to have traverse (`x`) permissions to traverse folders or symlinks. |
+ | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege aren't required to have traverse (`x`) permissions to traverse folders or symlinks. |
![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
na Previously updated : 04/05/2022 Last updated : 04/18/2022 # Dynamically change the service level of a volume
The capacity pool that you want to move the volume to must already exist. The ca
* After the volume is moved to another capacity pool, you will no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool. * If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*). You can always change to higher service level without wait time.-
-## Register the feature
-
-The feature to move a volume to another capacity pool is currently in preview. If you are using this feature for the first time, you need to register the feature first.
-
-If you have multiple Azure subscriptions, ensure that you are registering for the intended subscription by using the ['Set-AzContext'](/powershell/module/az.accounts/set-azcontext) command. <!-- GitHub #74191 -->
-
-1. Register the feature:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFTierChange
- ```
-
-2. Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFTierChange
- ```
-You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
## Move a volume to another capacity pool
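
If you script this operation, the pool change itself is a single call. The following is a minimal Azure CLI sketch; the resource group, account, pool, and volume names are placeholder assumptions, and the destination capacity pool must already exist in the same NetApp account:

```azurecli-interactive
# Sketch: move volume "vol1" to the capacity pool "premiumpool" (placeholder names).
az netappfiles volume pool-change \
  --resource-group myRG \
  --account-name myanfaccount \
  --pool-name standardpool \
  --name vol1 \
  --new-pool-resource-id "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.NetApp/netAppAccounts/myanfaccount/capacityPools/premiumpool"
```
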
azure-netapp-files Troubleshoot Capacity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-capacity-pools.md
na Previously updated : 03/24/2022 Last updated : 04/18/2022 # Troubleshoot capacity pool errors
This article describes resolutions to issues you might have when managing capaci
| Error condition | Resolution | |-|-|
-| Changing the capacity pool for a volume is not permitted. | You might not be authorized yet to use this feature. <br> The feature to move a volume to another capacity pool is currently in preview. If you're using this feature for the first time, you need to register the feature first and set `-FeatureName ANFTierChange`. See the registration steps in [Dynamically change the service level of a volume](dynamic-change-volume-service-level.md). |
| The capacity pool size is too small for total volume size. | The error is a result of the destination capacity pool not having the available capacity for the volume being moved. <br> Increase the size of the destination pool, or choose another pool that is larger. See [Resize a capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md). | | The pool change cannot be completed because a volume called `'{source pool name}'` already exists in the target pool `'{target pool name}'` | This error occurs because the volume with same name already exists in the target capacity pool. Select another capacity pool that does not have a volume with same name. | | Error changing volume's pool. Pool: `'{target pool name}'` not available or does not exit | You cannot change a volume's capacity pool when the destination capacity pool is not healthy. Check the status of the destination capacity pool. If the pool is in a failed state (not "Succeeded"), try performing an update on the capacity pool by adding a tag name and value pair, then save. |
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
## April 2022
-* The [Administrators privilege users](create-active-directory-connections.md#administrators-privilege-users) feature is now generally available (GA).
+* Features that are now generally available (GA)
- You no longer need to register this feature before using it.
+ The following features are now GA. You no longer need to register the features before using them.
+ * [Dynamic change of service level](dynamic-change-volume-service-level.md)
+ * [Administrators privilege users](create-active-directory-connections.md#administrators-privilege-users)
## March 2022
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types are used to extend the capabilities of other resource types. Previously updated : 03/23/2022 Last updated : 04/18/2022 # Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Authorization
+* accessReviewHistoryDefinitions
* batchResourceCheckAccess * denyAssignments * eligibleChildResources
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.CostManagement * Alerts
+* BenefitRecommendations
* BenefitUtilizationSummaries * Budgets * CheckNameAvailability
An extension resource is a resource that adds to another resource's capabilities
* Insights * OperationResults * OperationStatus
+* Pricesheets
+* Publish
* Query * Reportconfigs * Reports
An extension resource is a resource that adds to another resource's capabilities
* DatabaseMigrations
+## Microsoft.DataProtection
+
+* backupInstances
+ ## Microsoft.Diagnostics
-* InsightDiagnostics
-* Solutions
+* apollo
+* insights
+* solutions
## Microsoft.EventGrid
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.GuestConfiguration
-* configurationProfileAssignments
* guestConfigurationAssignments
-* software
## Microsoft.HybridConnectivity
An extension resource is a resource that adds to another resource's capabilities
* InformationProtectionPolicies * insights * jitPolicies
+* secureScoreControls
+* secureScores
* serverVulnerabilityAssessments * sqlVulnerabilityAssessments
An extension resource is a resource that adds to another resource's capabilities
* enrichment * entities * entityQueryTemplates
+* fileImports
* incidents * listrepositories * metadata * MitreCoverageRecords * onboardingStates
+* securityMLAnalyticsSettings
* settings * sourceControls * threatIntelligence
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 02/22/2022 Last updated : 04/18/2022 # Move operation support for resources
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | - |
-> | holographicsbroadcastaccounts | No | No | No |
> | objectunderstandingaccounts | No | No | No | > | remoterenderingaccounts | Yes | Yes | No | > | spatialanchorsaccounts | Yes | Yes | No |
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 03/23/2022 Last updated : 04/18/2022 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.AlertsManagement * prometheusRuleGroups
-* resourceHealthAlertRules
* smartDetectorAlertRules ## Microsoft.Automation
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.GuestConfiguration
-* autoManagedVmConfigurationProfiles
-* configurationProfileAssignments
* guestConfigurationAssignments
-* software
-* softwareUpdateProfile
-* softwareUpdates
## Microsoft.HybridCompute
Some resources have a limit on the number instances per region. This limit is di
* netAppAccounts/capacityPools/volumes/mountTargets * netAppAccounts/capacityPools/volumes/snapshots * netAppAccounts/capacityPools/volumes/subvolumes
+* netAppAccounts/capacityPools/volumes/volumeQuotaRules
* netAppAccounts/snapshotPolicies * netAppAccounts/volumeGroups
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/recovery-using-backups.md
Previously updated : 01/10/2022 Last updated : 04/18/2022 # Recover using automated database backups - Azure SQL Database & SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
For a large or very active database, the restore might take several hours. If th
For a single subscription, there are limitations on the number of concurrent restore requests. These limitations apply to any combination of point-in-time restores, geo-restores, and restores from long-term retention backup.
-> [!NOTE]
-> Very large restores on Managed Instance lasting for more than 36 hours will be prolonged in case of pending critical system update. In such case current restore operation will be paused, critical system update will be applied, and restore resumed after the update has completed.
+> [!TIP]
+> For Azure SQL Managed Instance, system updates take precedence over database restores in progress. If a system update occurs on Managed Instance, all pending restores are suspended and then resumed once the update has been applied. This behavior might prolong restore times and can be especially impactful to long-running restores. To achieve a predictable database restore time, consider configuring a [maintenance window](maintenance-window.md) that allows scheduling of system updates for a specific day and time, and consider running database restores outside of the scheduled maintenance window.
| **Deployment option** | **Max # of concurrent requests being processed** | **Max # of concurrent requests being submitted** | | : | --: | --: |
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-copy-paste.md
Title: 'Copy and paste to and from a Windows virtual machine: Azure Bastion'
+ Title: 'Copy and paste to and from a Windows virtual machine: Azure'
+ description: Learn how copy and paste to and from a Windows VM using Bastion.- - Previously updated : 08/30/2021 Last updated : 04/18/2022 # Customer intent: I want to copy and paste to and from VMs using Azure Bastion.
# Copy and paste to a Windows virtual machine: Azure Bastion
-This article helps you copy and paste text to and from virtual machines when using Azure Bastion. Before you work with a VM, make sure you have followed the steps to [Create a Bastion host](./tutorial-create-host-portal.md). Then, connect to the VM that you want to work with using either [RDP](bastion-connect-vm-rdp-windows.md) or [SSH](bastion-connect-vm-ssh-windows.md).
+This article helps you copy and paste text to and from virtual machines when using Azure Bastion.
-For browsers that support the advanced Clipboard API access, you can copy and paste text between your local device and the remote session in the same way you copy and paste between applications on your local device. For other browsers, you can use the Bastion clipboard access tool palette.
+## Prerequisites
->[!NOTE]
->Only text copy/paste is currently supported.
->
+Before you proceed, make sure you have the following items.
+
+* A VNet with [Azure Bastion](./tutorial-create-host-portal.md) deployed.
+* A Windows VM deployed to your VNet.
+
+## <a name="configure"></a> Configure the bastion host
- ![Allow clipboard](./media/bastion-vm-manage/allow.png)
+By default, copy and paste is enabled for all sessions connected through the bastion resource. You don't need to configure anything else. This applies to both the Basic and the Standard SKU tiers. If you want to disable the copy and paste feature, the Standard SKU is required.
-Only text copy/paste is supported. For direct copy and paste, your browser may prompt you for clipboard access when the Bastion session is being initialized. **Allow** the web page to access the clipboard. If you are working from a Mac, the keyboard shortcut to paste is **SHIFT-CTRL-V**.
+1. To view or change your configuration, in the portal, go to your Bastion resource.
+1. Go to the **Configuration** page.
+ * To enable, select the **Copy and paste** checkbox if it isn't already selected.
+ * To disable, clear the checkbox. Disable is only available with the Standard SKU. You can upgrade the SKU if necessary.
+1. **Apply** changes. The bastion host will update.
-## <a name="to"></a>Copy to a remote session
+ :::image type="content" source="./media/bastion-vm-copy-paste/configure.png" alt-text="Screenshot that shows the configuration page." lightbox="./media/bastion-vm-copy-paste/configure.png":::
-After you connect to the virtual machine using the [Azure portal ](https://portal.azure.com), complete the following steps:
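
If you prefer not to use the portal, one way to toggle the same setting is with the generic resource commands in Azure CLI. This is only a sketch: it assumes a bastion host named `VNet1Bastion` in resource group `TestRG1`, the Standard SKU (required to disable copy and paste), and the `disableCopyPaste` property on the bastion resource:

```azurecli-interactive
# Sketch: disable copy and paste on an existing bastion host (Standard SKU assumed).
# Resource group and bastion host names are placeholders.
az resource update \
  --resource-group TestRG1 \
  --name VNet1Bastion \
  --resource-type Microsoft.Network/bastionHosts \
  --set properties.disableCopyPaste=true
```
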
+## <a name="to"></a> Copy and paste
+
+For browsers that support the advanced Clipboard API access, you can copy and paste text between your local device and the remote session in the same way you copy and paste between applications on your local device. For other browsers, you can use the Bastion clipboard access tool palette.
+
+> [!NOTE]
+> Only text copy/paste is currently supported.
+>
-1. Copy the text/content from the local device into local clipboard.
-1. During the remote session, launch the Bastion clipboard access tool palette by selecting the two arrows. The arrows are located on the left center of the session.
+### <a name="advanced"></a> Advanced Clipboard API browsers
- ![Screenshot that shows the launch arrows for the tool palette highlighted on the left-side of the window.](./media/bastion-vm-manage/left.png)
+1. Connect to your VM.
+1. For direct copy and paste, your browser may prompt you for clipboard access when the Bastion session is being initialized. **Allow** the web page to access the clipboard.
- ![Screenshot shows a clipboard for text copied in Bastion.](./media/bastion-vm-manage/clipboard.png)
-1. Typically, the copied text automatically shows on the Bastion copy paste palette. If your text is not there, then paste the text in the text area on the palette.
-1. Once the text is in the text area, you can paste it to the remote session.
+ :::image type="content" source="./media/bastion-vm-copy-paste/copy-paste.png" alt-text="Screenshot that shows allow clipboard access." lightbox="./media/bastion-vm-copy-paste/copy-paste.png":::
+1. You can now use keyboard shortcuts as usual to copy and paste. If you're working from a Mac, the keyboard shortcut to paste is **SHIFT-CTRL-V**.
- ![Screenshot that shows the copy/paste button highlighted and a sample text string copied into the remote session.](./media/bastion-vm-manage/local.png)
+### <a name="other"></a>Non-advanced Clipboard API browsers
-## <a name="from"></a>Copy from a remote session
+To copy text from your local computer to a VM, use the following steps.
-After you connect to the virtual machine using the [Azure portal ](https://portal.azure.com), complete the following steps:
+1. Connect to your VM.
+1. Copy the text/content from the local device into your local clipboard.
+1. On the VM, launch the Bastion clipboard access tool palette by selecting the two arrows. The arrows are located on the left center of the session.
-1. Copy the text/content from the remote session into remote clipboard (using Ctrl-C).
+ :::image type="content" source="./media/bastion-vm-copy-paste/left.png" alt-text="Screenshot that shows the launch arrows for the clipboard access tool palette." lightbox="./media/bastion-vm-copy-paste/left.png":::
+1. Copy the text from your local computer. Typically, the copied text automatically shows on the Bastion clipboard access tool palette. If it doesn't show up on the tool palette, paste the text in the text area on the tool palette. Once the text is in the text area, you can paste it to the remote session. In this example, we copied text to the Bastion clipboard tool palette, then pasted it to the VM Notepad app.
- ![tool palette](./media/bastion-vm-manage/remote.png)
-1. During the remote session, launch the Bastion clipboard access tool palette by selecting the two arrows. The arrows are located on the left center of the session.
+ :::image type="content" source="./media/bastion-vm-copy-paste/clipboard-paste.png" alt-text="Screenshot shows a clipboard for text copied in Bastion." lightbox="./media/bastion-vm-copy-paste/clipboard-paste.png":::
- ![clipboard](./media/bastion-vm-manage/clipboard2.png)
-1. Typically, the copied text automatically shows on the Bastion copy paste palette. If your text is not there, then paste the text in the text area on the palette.
-1. Once the text is in the text area, you can paste it to the local device.
+1. If you want to copy the text from the VM to your local computer, copy the text to the clipboard access tool. Once your text is in the text area on the palette, paste it to your local computer.
- ![paste](./media/bastion-vm-manage/local2.png)
-
## Next steps For more VM features, see [About VM connections and features](vm-about.md).
bastion Vm About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-about.md
description: Learn about VM connections and features when connecting using Azure
Previously updated : 03/16/2022 Last updated : 04/18/2022
You can use a variety of different methods to connect to a target VM. Some conne
## <a name="copy-paste"></a>Copy and paste
-For browsers that support the advanced Clipboard API access, you can copy and paste text between your local device and the remote session in the same way you copy and paste between applications on your local device. For other browsers, you can use the Bastion clipboard access tool palette. Only text copy/paste is supported.
+You can copy and paste text between your local device and the remote session. Only text copy/paste is supported. By default, this feature is enabled. If you want to disable it, you can change the setting on the configuration page for your bastion host. To disable the feature, your bastion host must be configured with the Standard SKU tier.
For steps and more information, see [Copy and paste - Windows VMs](bastion-vm-copy-paste.md).
For steps and more information, see [Upload or download files to a VM using a na
## <a name="faq"></a>FAQ
-For FAQs, see [Bastion FAQ - VM connectons and features](bastion-faq.md#vm).
+For FAQs, see [Bastion FAQ - VM connections and features](bastion-faq.md#vm).
## Next steps
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
Title: Configure managed identities in Batch pools description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes. Previously updated : 08/18/2021 Last updated : 04/18/2022 ms.devlang: csharp # Configure managed identities in Batch pools
This topic explains how to enable user-assigned managed identities on Batch pool
## Create a user-assigned identity
-First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity) in the same tenant as your Batch account. This managed identity does not need to be in the same resource group or even in the same subscription.
+First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) in the same tenant as your Batch account. You can create the identity using the Azure portal, the Azure Command-Line Interface (Azure CLI), PowerShell, Azure Resource Manager, or the Azure REST API. This managed identity does not need to be in the same resource group or even in the same subscription.
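
For example, a minimal Azure CLI sketch for creating the identity and capturing its resource ID (the resource group and identity names below are placeholders):

```azurecli-interactive
# Create a user-assigned managed identity (placeholder names).
az identity create --resource-group myResourceGroup --name myBatchPoolIdentity

# Capture the identity's resource ID for later use when configuring the pool.
az identity show --resource-group myResourceGroup --name myBatchPoolIdentity --query id --output tsv
```
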
## Create a Batch pool with user-assigned managed identities
-After you've created one or more user-assigned managed identities, you can create a Batch pool with that managed identity by using the [Batch .NET management library](/dotnet/api/overview/azure/batch#management-library).
+After you've created one or more user-assigned managed identities, you can create a Batch pool with that identity or those identities. You can:
+
+- [Use the Azure portal to create the Batch pool](#create-batch-pool-in-azure-portal)
+- [Use the Batch .NET management library to create the Batch pool](#create-batch-pool-with-net)
+
+### Create Batch pool in Azure portal
+
+To create a Batch pool with a user-assigned managed identity through the Azure portal:
+
+1. [Sign in to the Azure portal](https://portal.azure.com/).
+1. In the search bar, enter and select **Batch accounts**.
+1. On the **Batch accounts** page, select the Batch account where you want to create a Batch pool.
+1. In the menu for the Batch account, under **Features**, select **Pools**.
+1. In the **Pools** menu, select **Add** to add a new Batch pool.
+1. For **Pool ID**, enter an identifier for your pool.
+1. For **Identity**, change the setting to **User assigned**.
+1. Under **User assigned managed identity**, select **Add**.
+1. Select the user assigned managed identity or identities you want to use. Then, select **Add**.
+1. Under **Operating System**, select the publisher, offer, and SKU to use.
+1. Optionally, enable the managed identity in the container registry:
+ 1. For **Container configuration**, change the setting to **Custom**. Then, select your custom configuration.
+ 1. For **Start task** select **Enabled**. Then, select **Resource files** and add your storage container information.
+ 1. Enable **Container settings**.
+ 1. Change **Container registry** to **Custom**
+ 1. For **Identity reference**, select the storage container.
+
+### Create Batch pool with .NET
+
+To create a Batch pool with a user-assigned managed identity by using the [Batch .NET management library](/dotnet/api/overview/azure/batch#management-library), use the following example code:
```csharp var poolParameters = new Pool(name: "yourPoolName")
communication-services Media Comp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-comp.md
These media streams are typically arrayed in a grid and broadcast to call partic
- Connect devices and services using streaming protocols such as [RTMP](https://datatracker.ietf.org/doc/html/rfc7016) or [SRT](https://datatracker.ietf.org/doc/html/draft-sharabayko-srt) - Compose media streams into complex scenes
-RTMP & SRT connectivity can be used for both input and output. Using RTMP/SRT input, a videography studio that emits RTMP/SRT can join an Azure Communication Services call. RTMP/SRT output allows you to stream media from Azure Communication Services into [Azure Media Services](/media-services/latest/concepts-overview), YouTube Live, and many other broadcasting channels. The ability to attach industry standard RTMP/SRT emitters and to output content to RTMP/SRT subscribers for broadcasting transforms a small group call into a virtual event that reaches millions of people in real time.
+RTMP & SRT connectivity can be used for both input and output. Using RTMP/SRT input, a videography studio that emits RTMP/SRT can join an Azure Communication Services call. RTMP/SRT output allows you to stream media from Azure Communication Services into [Azure Media Services](/azure/media-services/latest/concepts-overview), YouTube Live, and many other broadcasting channels. The ability to attach industry standard RTMP/SRT emitters and to output content to RTMP/SRT subscribers for broadcasting transforms a small group call into a virtual event that reaches millions of people in real time.
Media Composition REST APIs (and open-source SDKs) allow you to command the Azure service to cloud compose these media streams. For example, a **presenter layout** can be used to compose a speaker and a translator together in a classic picture-in-picture style. Media Composition allows for all clients and services connected to the media data plane to enjoy a particular dynamic layout without local processing or application complexity.
The presenter layout is one of several layouts available through the media compo
<!-To try out media composition, check out following content:--> <!- [Quick Start - Applying Media Composition to a video call](../../quickstarts/media-composition/get-started-media-composition.md) -->
-<!- [Tutorial - Media Composition Layouts](../../quickstarts/media-composition/media-composition-layouts.md) -->
+<!- [Tutorial - Media Composition Layouts](../../quickstarts/media-composition/media-composition-layouts.md) -->
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Communication Services connections require internet connectivity to specific por
| Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- | | Media traffic | [Range of Azure public cloud IP addresses](https://www.microsoft.com/download/confirmation.aspx?id=56519) | UDP 3478 through 3481, TCP ports 443 |
-| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azureedge.net, *.office.com, *.trouter.io | TCP 443, 80 |
+| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azure.com, *.azureedge.net, *.office.com, *.trouter.io | TCP 443, 80 |
## Network optimization
container-apps Deploy Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio.md
In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to
## Prerequisites - An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Visual Studio 2022 Preview 2 or higher, available as a [free download](https://visualstudio.microsoft.com/vs/preview/).
+- Visual Studio 2022 Preview 3 or higher, available as a [free download](https://visualstudio.microsoft.com/vs/preview/).
- [Docker Desktop](https://hub.docker.com/editions/community/docker-ce-desktop-windows) for Windows. Visual Studio uses Docker Desktop for various containerization features. ## Create the project
Begin by creating the containerized ASP.NET Core application to deploy to Azure.
If this is your first time creating a project using Docker, you may get a prompt instructing you to install Docker Desktop. This installation is required for working with containerized apps, as mentioned in the prerequisites, so click **Yes**. You can also download and [install Docker Desktop for Windows from the official Docker site](https://hub.docker.com/editions/community/docker-ce-desktop-windows).
-Visual Studio launches the Docker Desktop for Windows installer. You can follow the installation instructions on this page to setup Docker, which requires a system reboot.
+Visual Studio launches the Docker Desktop for Windows installer. You can follow the installation instructions on this page to set up Docker, which requires a system reboot.
## Deploy to Azure Container Apps
The Visual Studio publish dialogs will help you choose existing Azure resources,
7) Once the resources are created, choose **Next**.
+ :::image type="content" source="media/visual-studio/container-apps-select-resource.png" alt-text="A screenshot showing how to select the created resource.":::
+ 8) On the **Registry** screen, you can either select an existing Registry if you have one, or create a new one. To create a new one, click the green **+** icon on the right. On the **Create new** registry screen, fill in the following values: - **DNS prefix**: Enter a value of `msdocscontainerregistry` or a name of your choosing.
Choose **Publish** in the upper right of the publishing profile screen to deploy
:::image type="content" source="media/visual-studio/container-apps-publish.png" alt-text="A screenshot showing how to publish the app.":::
-When the app finishes deploying, Visual Studio opens a browser to the the URL of your deployed site. This page may initially display an error if all of the proper resources have not finished provisioning. You can continue to refresh the browser periodically to check if the deployment has fully completed.
+When the app finishes deploying, Visual Studio opens a browser to the URL of your deployed site. This page may initially display an error if all of the proper resources have not finished provisioning. You can continue to refresh the browser periodically to check if the deployment has fully completed.
:::image type="content" source="media/visual-studio/container-apps-site.png" alt-text="A screenshot showing the published site.":::
cosmos-db Access Previews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-previews.md
+
+ Title: Request access to Azure Cosmos DB previews
+description: Learn how to request access to Azure Cosmos DB previews
+++ Last updated : 04/13/2022+++
+# Access Azure Cosmos DB Preview Features
++
+## Steps to register for a preview feature from the portal
+
+Azure Cosmos DB offers several preview features that you can request access to. Here are the steps to request access to these preview features.
+
+1. Go to **Preview Features** area in your Azure subscription.
+2. Under **Type**, select "Microsoft.DocumentDB".
+3. Click on the feature you would like access to in the list of available preview features.
+4. Click the **Register** button at the bottom of the page to join the preview.
++
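
If you'd rather script the registration, the equivalent Azure CLI calls look like the sketch below; replace the placeholder feature name with the programmatic name of the preview feature you want:

```azurecli-interactive
# Register a Microsoft.DocumentDB preview feature (the feature name is a placeholder).
az feature register --namespace "Microsoft.DocumentDB" --name "<preview-feature-name>"

# Check the registration state; it can take some time to move from "Registering" to "Registered".
az feature show --namespace "Microsoft.DocumentDB" --name "<preview-feature-name>" --query properties.state
```
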
+## Next steps
+
+- Learn [how to choose an API](choose-api.md) in Azure Cosmos DB
+- [Get started with Azure Cosmos DB SQL API](create-sql-api-dotnet.md)
+- [Get started with Azure Cosmos DB API for MongoDB](mongodb/create-mongodb-nodejs.md)
+- [Get started with Azure Cosmos DB Cassandra API](cassandr)
+- [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
+- [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
cosmos-db Lwt Cassandra Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/lwt-cassandra-api.md
# Azure Cosmos DB Cassandra API Lightweight Transactions with Conditions [!INCLUDE[appliesto-cassandra-api](../includes/appliesto-cassandra-api.md)]
-> [!IMPORTANT]
-> Lightweight Transactions for Azure Cosmos DB API for Cassandra is currently in public preview.
-> This preview version is provided without a service level agreement. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Apache Cassandra, like most NoSQL database platforms, gives precedence to availability and partition tolerance over consistency because it does not support ACID transactions as relational databases do. For details on how consistency levels work with LWT, see [Azure Cosmos DB Cassandra API consistency levels](apache-cassandra-consistency-mapping.md). Cassandra supports lightweight transactions (LWT), which border on ACID. They help perform a read before a write for operations that require the data being inserted or updated to be unique. ## LWT support within Azure Cosmos DB Cassandra API
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Depending on which API you use, an Azure Cosmos item can represent either a docu
| Resource | Default limit | | | |
-| Maximum size of an item | 2 MB (UTF-8 length of JSON representation) |
+| Maximum size of an item | 2 MB (UTF-8 length of JSON representation) <sup>*</sup> |
| Maximum length of partition key value | 2048 bytes | | Maximum length of ID value | 1023 bytes | | Maximum number of properties per item | No practical limit |
Depending on which API you use, an Azure Cosmos item can represent either a docu
There are no restrictions on the item payloads like number of properties and nesting depth, except for the length restrictions on partition key and ID values, and the overall size restriction of 2 MB. You may have to configure indexing policy for containers with large or complex item structures to reduce RU consumption. See [Modeling items in Cosmos DB](how-to-model-partition-example.md) for a real-world example, and patterns to manage large items.
+<sup>*</sup> Large document sizes up to 16 MB are currently in preview with the Azure Cosmos DB API for MongoDB only. To try it, sign up for the feature "Azure Cosmos DB API For MongoDB 16MB Document Support" from the [Preview Features blade in the portal](./access-previews.md).
+ ## Per-request limits Azure Cosmos DB supports [CRUD and query operations](/rest/api/cosmos-db/) against resources like containers, items, and databases. It also supports [transactional batch requests](/dotnet/api/microsoft.azure.cosmos.transactionalbatch) against multiple items with the same partition key in a container.
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-indexing.md
In the preceding example, omitting the ```"university":1``` clause returns an er
`cannot create unique index over {student_id : 1.0} with shard key pattern { university : 1.0 }`
+#### Note
+
+Support for unique index on existing collections with data is available in preview. You can sign up for the feature "Azure Cosmos DB API for MongoDB New Unique Indexes in existing collection" through the [Preview Features blade in the portal](./../access-previews.md).
+ #### Limitations On accounts that have continuous backup or synapse link enabled, unique indexes will need to be created while the collection is empty.
db.books.createIndex(
) ```
-To delete a partial unique index using om Mongo Shell, use the command `getIndexes()` to list the indexes in the collection.
+To delete a partial unique index from the Mongo Shell, use the command `getIndexes()` to list the indexes in the collection.
Then drop the index with the following command: ```shell
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
description: Learn how to provision an account with continuous backup and point
Previously updated : 04/06/2022 Last updated : 04/18/2022
This article explains how to provision an account with continuous backup and poi
> You can provision continuous backup mode account only if the following conditions are true: > > * If the account is of type SQL API or API for MongoDB.
+> * If the account is of type Table API or Gremlin API.
> * If the account has a single write region.
When creating a new Azure Cosmos DB account, in the **Backup policy** tab, choos
:::image type="content" source="./media/provision-account-continuous-backup/configure-continuous-backup-portal.png" alt-text="Provision an Azure Cosmos DB account with continuous backup configuration." border="true" lightbox="./media/provision-account-continuous-backup/configure-continuous-backup-portal.png":::
+Table API and Gremlin API are in preview and can be provisioned with PowerShell and Azure CLI.
+ ## <a id="provision-powershell"></a>Provision using Azure PowerShell Before provisioning the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps?view=azps-6.2.1&preserve-view=true) or version higher than 6.2.0. Next connect to your Azure account and select the required subscription with the following commands:
Before provisioning the account, install the [latest version of Azure PowerShell
#### <a id="provision-powershell-sql-api"></a>SQL API account
-To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
+To provision an account with continuous backup, add the argument `-BackupPolicyType Continuous` along with the regular provisioning command.
The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
New-AzCosmosDBAccount `
```
+#### <a id="provision-powershell-table-api"></a>Table API account
+
+To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
+
+The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
+
+```azurepowershell
+
+New-AzCosmosDBAccount `
+ -ResourceGroupName "MyRG" `
+ -Location "West US" `
+ -BackupPolicyType Continuous `
+ -Name "pitracct" `
+ -ApiKind "Table"
+
+```
+
+#### <a id="provision-powershell-graph-api"></a>Gremlin API account
+
+To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
+
+The following cmdlet is an example of a single region write account *Pitracct* with continuous backup policy created in *West US* region under *MyRG* resource group:
+
+```azurepowershell
+
+New-AzCosmosDBAccount `
+ -ResourceGroupName "MyRG" `
+ -Location "West US" `
+ -BackupPolicyType Continuous `
+ -Name "pitracct" `
+ -ApiKind "Gremlin"
+
+```
+ ## <a id="provision-cli"></a>Provision using Azure CLI Before provisioning the account, install Azure CLI with the following steps:
az cosmosdb create \
### <a id="provision-cli-mongo-api"></a>API for MongoDB
-The following command shows an example of a single region write account named *Pitracct* with continuous backup policy created the *West US* region under *MyRG* resource group:
+The following command shows an example of a single region write account named *Pitracct* with continuous backup policy created in the *West US* region under *MyRG* resource group:
```azurecli-interactive
az cosmosdb create \
--locations regionName="West US" ```
+### <a id="provision-cli-table-api"></a>Table API account
+
+The following command shows an example of a single region write account named *Pitracct* with continuous backup policy created in the *West US* region under *MyRG* resource group:
+```azurecli-interactive
+
+az cosmosdb create \
+ --name Pitracct \
+ --kind GlobalDocumentDB \
+ --resource-group MyRG \
+ --capabilities EnableTable \
+ --backup-policy-type Continuous \
+ --default-consistency-level Session \
+ --locations regionName="West US"
+```
+### <a id="provision-cli-graph-api"></a>Gremlin API account
+
+The following command shows an example of a single region write account named *Pitracct* with continuous backup policy created in the *West US* region under *MyRG* resource group:
+```azurecli-interactive
+
+az cosmosdb create \
+ --name Pitracct \
+ --kind GlobalDocumentDB \
+ --resource-group MyRG \
+ --capabilities EnableGremlin \
+ --backup-policy-type Continuous \
+ --default-consistency-level Session \
+ --locations regionName="West US"
+```
## <a id="provision-arm-template"></a>Provision using Resource Manager template
You can use Azure Resource Manager templates to deploy an Azure Cosmos DB accoun
} ```
-Next deploy the template by using Azure PowerShell or CLI. The following example shows how to deploy the template with a CLI command:
+Next, deploy the template by using Azure PowerShell or CLI. The following example shows how to deploy the template with a CLI command:
```azurecli-interactive az group deployment create -g <ResourceGroup> --template-file <ProvisionTemplateFilePath>
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Publisher document:
Book documents: {"id": "1","name": "Azure Cosmos DB 101", "pub-id": "mspress"} {"id": "2","name": "Azure Cosmos DB for RDBMS Users", "pub-id": "mspress"}
-{"id": "3","name": "Taking over the world one JSON doc at a time"}
+{"id": "3","name": "Taking over the world one JSON doc at a time", "pub-id": "mspress"}
... {"id": "100","name": "Learn about Azure Cosmos DB", "pub-id": "mspress"} ...
cosmos-db Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-bicep.md
+
+ Title: Quickstart - Create an Azure Cosmos DB and a container using Bicep
+description: Quickstart showing how to create an Azure Cosmos database and a container using Bicep
++
+tags: azure-resource-manager, bicep
+++ Last updated : 04/18/2022+
+#Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
++
+# Quickstart: Create an Azure Cosmos DB and a container using Bicep
++
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. This quickstart focuses on the process of deploying a Bicep file to create an Azure Cosmos database and a container within that database. You can later store data in this container.
++
+## Prerequisites
+
+An Azure subscription or free Azure Cosmos DB trial account.
+
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cosmosdb-sql/).
++
+Three Azure resources are defined in the Bicep file:
+
+- [Microsoft.DocumentDB/databaseAccounts](/azure/templates/microsoft.documentdb/databaseaccounts): Create an Azure Cosmos account.
+
+- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases): Create an Azure Cosmos database.
+
+- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos container.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters primaryRegion=<primary-region> secondaryRegion=<secondary-region>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -primaryRegion "<primary-region>" -secondaryRegion "<secondary-region>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<primary-region\>** with the primary replica region for the Cosmos DB account, such as **WestUS**. Replace **\<secondary-region\>** with the secondary replica region for the Cosmos DB account, such as **EastUS**.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place.
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created an Azure Cosmos account, a database and a container by using a Bicep file and validated the deployment. To learn more about Azure Cosmos DB and Bicep, continue on to the articles below.
+
+- Read an [Overview of Azure Cosmos DB](../introduction.md).
+- Learn more about [Bicep](../../azure-resource-manager/bicep/overview.md).
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Sql Api Sdk Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-core.md
ms.devlang: csharp Previously updated : 03/04/2022 Last updated : 04/18/2022
Because version 3 of the Azure Cosmos DB .NET SDK includes updated features and
## <a name="recommended-version"></a> Recommended version
-Different sub versions of .NET SDKs are available under the 2.x.x version. **The minimum recommended version is 2.16.2**.
+Different sub versions of .NET SDKs are available under the 2.x.x version. **The minimum recommended version is 2.18.0**.
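
For example, to move an SDK-style project onto the recommended minimum, you can update the NuGet reference with the .NET CLI; this sketch assumes the .NET Core/.NET Standard flavor of the v2 SDK package:

```shell
# Update the v2 SDK package reference to the recommended minimum version.
dotnet add package Microsoft.Azure.DocumentDB.Core --version 2.18.0
```
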
## <a name="known-issues"></a> Known issues
-Below is a list of any know issues affecting the [recommended minimum version](#recommended-version):
+Below is a list of any known issues affecting the [recommended minimum version](#recommended-version):
| Issue | Impact | Mitigation | Tracking link | | | | | |
-| When using Direct mode with an account with multiple write locations, the SDK might not detect when a region is added to the account. The background process that [refreshes the account information](troubleshoot-sdk-availability.md#adding-a-region-to-an-account) fails to start. |If a new region is added to the account which is part of the PreferredLocations on a higher order than the current region, the SDK won't detect the new available region. |Upgrade to 2.17.0. |https://github.com/Azure/azure-cosmos-dotnet-v2/issues/852 |
## See Also
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet.md
ms.devlang: csharp Previously updated : 03/04/2022 Last updated : 04/18/2022
Because version 3 of the Azure Cosmos DB .NET SDK includes updated features and
## <a name="recommended-version"></a> Recommended version
-Different sub versions of .NET SDKs are available under the 2.x.x version. **The minimum recommended version is 2.16.2**.
+Different sub versions of .NET SDKs are available under the 2.x.x version. **The minimum recommended version is 2.18.0**.
## <a name="known-issues"></a> Known issues
-Below is a list of any know issues affecting the [recommended minimum version](#recommended-version):
+Below is a list of any known issues affecting the [recommended minimum version](#recommended-version):
| Issue | Impact | Mitigation | Tracking link | | | | | |
-| When using Direct mode with an account with multiple write locations, the SDK might not detect when a region is added to the account. The background process that [refreshes the account information](troubleshoot-sdk-availability.md#adding-a-region-to-an-account) fails to start. |If a new region is added to the account which is part of the PreferredLocations on a higher order than the current region, the SDK won't detect the new available region. |Upgrade to 2.17.0. |https://github.com/Azure/azure-cosmos-dotnet-v2/issues/852 |
## FAQ
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mariadb.md
Previously updated : 09/09/2021 Last updated : 04/18/2022 # Copy data from MariaDB using Azure Data Factory or Synapse Analytics
You can copy data from MariaDB to any supported sink data store. For a list of d
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
-This connector currently supports MariaDB of version 10.0 to 10.2.
+This connector currently supports MariaDB of version 10.0 to 10.5.
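+
+To copy from MariaDB, you first define a linked service for the connector. The following is only a rough sketch of what such a definition can look like; the `type` name and the connection string format are assumptions here and should be checked against the connector's linked service properties:
+
+```json
+{
+    "name": "MariaDBLinkedService",
+    "properties": {
+        "type": "MariaDB",
+        "typeProperties": {
+            "connectionString": "Server=<server>;Port=3306;Database=<database>;UID=<user>;PWD=<password>"
+        }
+    }
+}
+```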
## Prerequisites
data-factory Data Factory Build Your First Pipeline Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-powershell.md
Previously updated : 10/22/2021 Last updated : 04/18/2022 # Tutorial: Build your first Azure data factory using Azure PowerShell
Last updated 10/22/2021
> [!NOTE]
-> This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [Quickstart: Create a data factory using Azure Data Factory](../quickstart-create-data-factory-powershell.md).
+> This article applies to version 1 of Data Factory, which is in maintenance mode; this article is kept for legacy users. If you are using the current version of the Data Factory service, see [Quickstart: Create a data factory using Azure Data Factory](../quickstart-create-data-factory-powershell.md).
In this article, you use Azure PowerShell to create your first Azure data factory. To do the tutorial using other tools/SDKs, select one of the options from the drop-down list.
databox-online Azure Stack Edge Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-connect-powershell-interface.md
Previously updated : 09/30/2020 Last updated : 04/14/2022 # Manage an Azure Stack Edge Pro FPGA device via Windows PowerShell
databox-online Azure Stack Edge Gpu 2203 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2203-release-notes.md
Previously updated : 03/23/2022 Last updated : 04/14/2022
The following table provides a summary of known issues in this release.
| No. | Feature | Issue | Workaround/comments |
| --- | --- | --- | --- |
|**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. |
+|**2.**|HPN VMs |For this release, the Standard_F12_HPN can only support one network interface and cannot be used for Multi-Access Edge Computing (MEC) deployments. | |
## Known issues from previous releases
databox-online Azure Stack Edge Gpu Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md
Previously updated : 08/10/2021 Last updated : 04/14/2022 # Manage an Azure Stack Edge Pro GPU device via Windows PowerShell
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Password Reset Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension.md
+
+ Title: Install the password reset extension on VMs for your Azure Stack Edge Pro GPU device
+description: Describes how to install the password reset extension on virtual machines (VMs) on an Azure Stack Edge Pro GPU device.
++++++ Last updated : 04/14/2022+
+#Customer intent: As an IT admin, I need to understand how to install the password reset extension on virtual machines (VMs) on my Azure Stack Edge Pro GPU device.
+
+# Install the password reset extension on VMs for your Azure Stack Edge Pro GPU device
++
+This article covers steps to install, verify, and remove the password reset extension using Azure Resource Manager templates on both Windows and Linux VMs.
+
+## Prerequisites
+
+Before you install the password reset extension on the VMs running on your device:
+
+1. Make sure to have access to an Azure Stack Edge device on which you've deployed one or more VMs. For more information, see [Deploy VMs on your Azure Stack Edge Pro GPU device via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+
+ Here's an example where Port 2 was used to enable the compute network. If Kubernetes isn't deployed in your environment, you can skip the Kubernetes node IP and external service IP assignment.
+
+ ![Screenshot of the Advanced networking pane for an Azure Stack Edge device. Network settings for Port 2 are highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension/enable-compute-device-1.png)
+
+1. [Download the templates](https://aka.ms/ase-vm-templates) to your client machine. Unzip the files into a directory you'll use as a working directory.
+1. Verify that the client you'll use to access your device is connected to the local Azure Resource Manager over Azure PowerShell. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
+
+ The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If your connection expires, any cmdlets that you execute will return error messages to the effect that you aren't connected to Azure. In this case, sign in again.
+
+## Edit parameters file
+
+Depending on the operating system for your VM, you can install the extension for Windows or for Linux. You'll find the parameter and template files in the *PasswordResetExtension* folder.
+
+### [Windows](#tab/windows)
+
+To change the password for an existing VM, edit the `addPasswordResetExtensionTemplate.parameters.json` parameters file and then deploy the template `addPasswordResetExtensionTemplate.json`.
+
+The file `addPasswordResetExtensionTemplate.parameters.json` takes the following parameters:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "<Name of the VM>"
+ },
+ "extensionType": {
+ "value": "<OS type of the VM, for example, Linux or Windows>"
+ },
+ "username": {
+ "value": "<Existing username for connecting to your VM>"
+ },
+ "Password": {
+ "value": "<New password for the user>"
+ }
+ }
+}
+```
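+
+For example, a filled-in parameters file might look like the following. The VM name, username, and password values are illustrative placeholders only:
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {
+    "vmName": { "value": "mywindowsvm" },
+    "extensionType": { "value": "Windows" },
+    "username": { "value": "azureuser" },
+    "Password": { "value": "<new password>" }
+  }
+}
+```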
+
+### [Linux](#tab/linux)
+
+To change the password for an existing VM, edit the `addPasswordResetExtensionTemplate.parameters.json` parameters file and then deploy the template `addPasswordResetExtensionTemplate.json`.
+
+The file `addPasswordResetExtensionTemplate.parameters.json` takes the following parameters:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "<Name of the VM>"
+ },
+ "extensionType": {
+ "value": "<OS type of the VM, for example, Linux or Windows>"
+ },
+ "username": {
+ "value": "<Existing username for connecting to your VM>"
+ },
+ "Password": {
+ "value": "<New password for the user>"
+ }
+ }
+}
+```
+++
+## Deploy template
+
+### [Windows](#tab/windows)
+
+Set some parameters. Run the following command:
+
+```powershell
+$templateFile = "<Path to addPasswordResetExtensionTemplate.json file>"
+$templateParameterFile = "<Path to addPasswordResetExtensionTemplate.parameters.json file>"
+$RGName = "<Name of resource group>"
+New-AzResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Deployment name>" -AsJob
+```
+
+The extension deployment is a long-running job and takes about 10 minutes to complete.
+
+Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> $templateFile = "C:\PasswordResetVmExtensionTemplates\addPasswordResetExtensionTemplate.json"
+PS C:\WINDOWS\system32> $templateParameterFile = "C:\PasswordResetVmExtensionTemplates\addPasswordResetExtensionTemplate.parameters.json"
+PS C:\WINDOWS\system32> $RGName = "myasepro2rg"
+PS C:\WINDOWS\system32> New-AzResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "windowsvmdeploy" -AsJob
+Id Name PSJobTypeName State HasMoreData Location Command
+-- - - -- -- -- -
+9 Long Running... AzureLongRun... Running True localhost New-AzResourceGro...
+
+PS C:\WINDOWS\system32>
+
+```
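+
+Because the deployment was started with `-AsJob`, you can check on it before the 10 minutes are up. A minimal sketch, assuming the resource group and the deployment name used above:
+
+```powershell
+# Check the status of the background job started by -AsJob
+Get-Job | Format-Table Id, Name, State
+
+# Or query the deployment directly (deployment name from the earlier command)
+Get-AzResourceGroupDeployment -ResourceGroupName $RGName -Name "windowsvmdeploy" |
+    Select-Object DeploymentName, ProvisioningState, Timestamp
+```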
+
+### [Linux](#tab/linux)
+
+Set some parameters. Run the following command:
+
+```powershell
+$templateFile = "<Path to addPasswordResetExtensionTemplate.json file>"
+$templateParameterFile = "<Path to addPasswordResetExtensionTemplate.parameters.json file>"
+$RGName = "<Name of resource group>"
+New-AzResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Deployment name>" -AsJob
+```
+
+The extension deployment is a long-running job and takes about 10 minutes to complete.
+
+Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> $templateFile = "C:\PasswordResetVmExtensionTemplates\addPasswordResetExtensionTemplate.json"
+PS C:\WINDOWS\system32> $templateParameterFile = "C:\PasswordResetVmExtensionTemplates\addPasswordResetExtensionTemplate.parameters.json"
+PS C:\WINDOWS\system32> $RGName = "myasepro2rg"
+PS C:\WINDOWS\system32> New-AzResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "linuxvmdeploy" -AsJob
+Id Name PSJobTypeName State HasMoreData Location Command
+-- - - -- -- -- -
+4 Long Running... AzureLongRun... Running True localhost New-AzResourceGroupDep...
+```
+++
+## Track deployment
+
+### [Windows](#tab/windows)
+
+To check the deployment status of extensions for a given VM, run the following command:
+
+```powershell
+Get-AzVMExtension -ResourceGroupName <MyResourceGroup> -VMName <MyWindowsVM> -Name <Name of the extension>
+```
+Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32>
+Get-AzVMExtension -ResourceGroupName myasepro2rg -VMName mywindowsvm -Name windowsVMAccessExt
+
+ResourceGroupName : myasepro2rg
+VMName : mywindowsvm
+Name : windowsVMAccessExt
+Location : dbelocal
+Etag : null
+Publisher : Microsoft.Compute
+ExtensionType : VMAccessAgent
+TypeHandlerVersion : 2.4
+Id : /subscriptions/04a485ed-7a09-44ab-6671-66db7f111122/resourceGroups/myasepro2rg/provi
+ ders/Microsoft.Compute/virtualMachines/mywindowsvm/extensions/windowsVMAccessExt
+PublicSettings : {
+ "username": "azureuser"
+ }
+ProtectedSettings :
+ProvisioningState : Succeeded
+Statuses :
+SubStatuses :
+AutoUpgradeMinorVersion : True
+ForceUpdateTag :
+
+PS C:\WINDOWS\system32>
+```
+You can see below that the extension has been installed successfully.
+
+ ![Screenshot of the VM details pane with call-outs for the network interface and installed extensions on Windows.](media/azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension/installed-extension-windows-vm.png)
+
+### [Linux](#tab/linux)
+
+To check the deployment status of extensions for a given VM, run the following command:
+
+```powershell
+Get-AzVMExtension -ResourceGroupName <MyResourceGroup> -VMName <MyLinuxVM> -Name <Name of the extension>
+```
+Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32>
+Get-AzVMExtension -ResourceGroupName myasepro2rg -VMName mylinuxvm5 -Name linuxVMAccessExt
+
+ResourceGroupName : myasepro2rg
+VMName : mylinuxvm5
+Name : linuxVMAccessExt
+Location : dbelocal
+Etag : null
+Publisher : Microsoft.OSTCExtensions
+ExtensionType : VMAccessForLinux
+TypeHandlerVersion : 1.5
+Id : /subscriptions/04a485ed-7a09-44ab-6671-66db7f111122/resourceGroups
+ /myasepro2rg/providers/Microsoft.Compute/virtualMachines/mylinuxvm
+ 5/extensions/linuxVMAccessExt
+PublicSettings : {}
+ProtectedSettings :
+ProvisioningState : Succeeded
+Statuses :
+SubStatuses :
+AutoUpgradeMinorVersion : True
+ForceUpdateTag :
+
+PS C:\WINDOWS\system32>
+```
+You can see below that the extension has been installed successfully.
+
+ ![Screenshot of the VM details pane with call-outs for the network interface and installed extensions on Linux.](media/azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension/installed-extension-linux-vm.png)
+++
+## Verify the updated VM password
+
+### [Windows](#tab/windows)
+
+To verify the VM password update, connect to the VM using the new password. For detailed instructions, see [Connect to a Windows VM.](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#connect-to-a-windows-vm)
+
+ ![Screenshot of the Remote Desktop Connection dialog to connect to a VM.](media/azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension/connect-to-vm.png)
+
+### [Linux](#tab/linux)
+
+To verify the VM password update, connect to the VM using the new password. For detailed instructions, see [Connect to a Linux VM.](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#connect-to-a-linux-vm)
+
+Here's a sample output:
+
+```powershell
+
+Microsoft Windows [Version 10.0.22000.556]
+(c) Microsoft Corporation. All rights reserved.
+
+C:\WINDOWS\system32>ssh -l azureuser 10.57.51.13
+azureuser@10.57.51.13's password:
+Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 5.0.0-1027-azure x86_64)
+
+* Documentation: https://help.ubuntu.com
+* Management: https://landscape.canonical.com
+* Support: https://ubuntu.com/advantage
+
+ System information as of Wed Mar 30 21:22:24 UTC 2022
+
+ System load: 1.06 Processes: 113
+ Usage of /: 5.4% of 28.90GB Users logged in: 0
+ Memory usage: 14% IP address for eth0: 10.57.51.13
+ Swap usage: 0%
+
+* Super-optimized for small spaces - read how we shrank the memory
+ footprint of MicroK8s to make it the smallest full K8s around.
+
+ https://ubuntu.com/blog/microk8s-memory-optimisation
+
+230 packages can be updated.
+160 updates are security updates.
+
+New release '20.04.4 LTS' available.
+Run 'do-release-upgrade' to upgrade to it.
+
+*** System restart required ***
+Last login: Wed Mar 30 21:16:52 2022 from 10.191.227.85
+To run a command as administrator (user "root"), use "sudo <command>".
+See "man sudo_root" for details.
+
+azureuser@mylinuxvm5:~$
+
+```
++
+## Remove the extension
+
+### [Windows](#tab/windows)
+
+To remove the password reset extension, run the following command:
+
+```powershell
+Remove-AzVMExtension -ResourceGroupName <Resource group name> -VMName <VM name> -Name <Name of the extension>
+```
+
+Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> Remove-AzVMExtension -ResourceGroupName myasepro2rg -VMName mywindowsvm5 -Name windowsVMAccessExt
+
+Virtual machine extension removal operation
+This cmdlet will remove the specified virtual machine extension. Do you want to continue?
+[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Yes
+
+RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+ - -
+ True OK OK
+
+PS C:\WINDOWS\system32>
+```
+
+### [Linux](#tab/linux)
+
+To remove the password reset extension, run the following command:
+
+```powershell
+Remove-AzVMExtension -ResourceGroupName <Resource group name> -VMName <VM name> -Name <Name of the extension>
+```
+
+Here's a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> Remove-AzVMExtension -ResourceGroupName myasepro2rg -VMName mylinuxvm5 -Name linuxVMAccessExt
+
+Virtual machine extension removal operation
+This cmdlet will remove the specified virtual machine extension. Do you want to continue?
+[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Yes
+
+RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+ - -
+ True OK OK
+
+PS C:\WINDOWS\system32>
+```
++
+## Next steps
+
+Learn how to:
+
+- [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md)
+- [Manage VM disks](azure-stack-edge-gpu-manage-virtual-machine-disks-portal.md)
+- [Manage VM network interfaces](azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md)
+- [Manage VM sizes](azure-stack-edge-gpu-manage-virtual-machine-resize-portal.md)
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md
Previously updated : 02/26/2021 Last updated : 04/18/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device. I want to use APIs so that I can efficiently manage my VMs.
Successfully created Resource Group:rg191113014333
```
-## Create a storage account
+## Create a local storage account
-Create a new storage account by using the resource group that you created in the preceding step. This is a local storage account that you use to upload the virtual disk image for the VM.
-### [Az](#tab/az)
-
-1. Set some parameters.
-
- ```powershell
- $StorageAccountName = "<Storage account name>"
- ```
-
-1. Create a new local storage account on your device.
-
- ```powershell
- New-AzStorageAccount -Name $StorageAccountName -ResourceGroupName $ResourceGroupName -Location DBELocal -SkuName Standard_LRS
- ```
-
- > [!NOTE]
- > By using Azure Resource Manager, you can create only local storage accounts, such as locally redundant storage (standard or premium). To create tiered storage accounts, see [Tutorial: Transfer data via storage accounts with Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-deploy-add-storage-accounts.md).
-
- Here's an example output:
-
- ```output
- PS C:\WINDOWS\system32> New-AzStorageAccount -Name myaseazsa -ResourceGroupName myaseazrg -Location DBELocal -SkuName Standard_LRS
-
- StorageAccountName ResourceGroupName PrimaryLocation SkuName Kind AccessTier CreationTime
- -- - - -
- myaseazsa myaseazrg DBELocal Standard_LRS Storage 6/10/2021 11:45...
-
- PS C:\WINDOWS\system32>
- ```
-
-1. Get the storage account key for the account that you created in the earlier step. When prompted, provide the resource group name and the storage account name.
-
- ```powershell
- Get-AzStorageAccountKey
- ```
-
- Here's an example output:
-
- ```output
- PS C:\WINDOWS\system32> Get-AzStorageAccountKey
-
- cmdlet Get-AzStorageAccountKey at command pipeline position 1
- Supply values for the following parameters:
- (Type !? for Help.)
- ResourceGroupName: myaseazrg
- Name: myaseazsa
-
- KeyName Value Permissions
- - --
- key1 gv3OF57tuPDyzBNc1M7fhil2UAiiwnhTT6zgiwE3TlF/CD217Cvw2YCPcrKF47joNKRvzp44leUe5HtVkGx8RQ== Full
- key2 kmEynIs3xnpmSxWbU41h5a7DZD7v4gGV3yXa2NbPbmhrPt10+QmE5PkOxxypeSqbqzd9si+ArNvbsqIRuLH2Lw== Full
-
- PS C:\WINDOWS\system32>
- ```
-
-### [AzureRM](#tab/azure-rm)
-
-```powershell
-New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Resource group name> -Location DBELocal -SkuName Standard_LRS
-```
-
-> [!NOTE]
-> By using Azure Resource Manager, you can create only local storage accounts, such as locally redundant storage (standard or premium). To create tiered storage accounts, see [Tutorial: Transfer data via storage accounts with Azure Stack Edge Pro with GPU](./azure-stack-edge-gpu-deploy-add-storage-accounts.md).
-
-Here's some example output:
-
-```output
-New-AzureRmStorageAccount -Name sa191113014333 -ResourceGroupName rg191113014333 -SkuName Standard_LRS -Location DBELocal
-
-ResourceGroupName : rg191113014333
-StorageAccountName : sa191113014333
-Id : /subscriptions/.../resourceGroups/rg191113014333/providers/Microsoft.Storage/storageaccounts/sa191113014333
-Location : DBELocal
-Sku : Microsoft.Azure.Management.Storage.Models.Sku
-Kind : Storage
-Encryption : Microsoft.Azure.Management.Storage.Models.Encryption
-AccessTier :
-CreationTime : 11/13/2019 9:43:49 PM
-CustomDomain :
-Identity :
-LastGeoFailoverTime :
-PrimaryEndpoints : Microsoft.Azure.Management.Storage.Models.Endpoints
-PrimaryLocation : DBELocal
-ProvisioningState : Succeeded
-SecondaryEndpoints :
-SecondaryLocation :
-StatusOfPrimary : Available
-StatusOfSecondary :
-Tags :
-EnableHttpsTrafficOnly : False
-NetworkRuleSet :
-Context : Microsoft.WindowsAzure.Commands.Common.Storage.LazyAzureStorageContext
-ExtendedProperties : {}
-```
-
-To get the storage account key, run the `Get-AzureRmStorageAccountKey` command. Here's some example output:
-
-```output
-PS C:\windows\system32> Get-AzureRmStorageAccountKey
-
-cmdlet Get-AzureRmStorageAccountKey at command pipeline position 1
-Supply values for the following parameters:
-(Type !? for Help.)
-ResourceGroupName: my-resource-ase
-Name:myasestoracct
-
-KeyName Value
-- --
-key1 /IjVJN+sSf7FMKiiPLlDm8mc9P4wtcmhhbnCa7...
-key2 gd34TcaDzDgsY9JtDNMUgLDOItUU0Qur3CBo6Q...
-```
- ## Add the blob URI to the host file
databox-online Azure Stack Edge Gpu Deploy Vm Specialized Image Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-vm-specialized-image-powershell.md
Follow these steps to copy VHD to local storage account:
1. Take note of the resulting URI. You'll use this URI in a later step.
- To create and access a local storage account, see the sections [Create a storage account](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#create-a-storage-account) through [Upload a VHD](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#upload-a-vhd) in the article: [Deploy VMs on your Azure Stack Edge device via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
+ To create and access a local storage account, see the sections [Create a local storage account](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#create-a-local-storage-account) through [Upload a VHD](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#upload-a-vhd) in the article: [Deploy VMs on your Azure Stack Edge device via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
## Create a managed disk from VHD
databox-online Azure Stack Edge Gpu Manage Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-storage-accounts.md
Previously updated : 08/13/2021 Last updated : 04/18/2022 # Use the Azure portal to manage Edge storage accounts on your Azure Stack Edge Pro GPU [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes how to manage Edge storage accounts on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro GPU via the Azure portal or via the local web UI. Use the Azure portal to add or delete Edge storage accounts on your device.
+This article describes how to manage Edge storage accounts and local storage accounts on your Azure Stack Edge. You can manage the Azure Stack Edge Pro device via the Azure portal or via the local web UI. Use the Azure portal to add or delete Edge storage accounts on your device. Use Azure PowerShell to add local storage accounts on your device.
## About Edge storage accounts
In this article, you learn how to:
> * Add an Edge storage account > * Delete an Edge storage account - ## Add an Edge storage account To create an Edge storage account, do the following procedure: [!INCLUDE [Add an Edge storage account](../../includes/azure-stack-edge-gateway-add-storage-account.md)]
+## Create a local storage account
++
+## Get access keys for a local storage account
+
+Before you get the access keys, you must configure your client to connect to the device via Azure Resource Manager over Azure PowerShell. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
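+
+Once connected, a minimal sketch of retrieving the keys with `Get-AzStorageAccountKey`; the resource group and storage account names below are examples only:
+
+```powershell
+# List the access keys for a local storage account on the device
+# (replace the resource group and account names with your own)
+Get-AzStorageAccountKey -ResourceGroupName "myaserg" -Name "myasesa1"
+```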
++ ## Delete an Edge storage account Take the following steps to delete an Edge storage account.
Take the following steps to delete an Edge storage account.
The list of storage accounts is updated to reflect the deletion. - ## Add, delete a container You can also add or delete the containers for these storage accounts.
ddos-protection Manage Ddos Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-cli.md
na Previously updated : 09/28/2020 Last updated : 04/18/2022 # Quickstart: Create and configure Azure DDoS Protection Standard using Azure CLI
-Get started with Azure DDoS Protection Standard by using Azure CLI.
+Get started with Azure DDoS Protection Standard by using Azure CLI.
-A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
-In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
+In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
## Prerequisites
In this quickstart, you'll create a DDoS protection plan and link it to a virtua
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the CLI locally, this quickstart requires Azure CLI version 2.0.28 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this quickstart requires Azure CLI version 2.0.56 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
## Create a DDoS Protection plan
az network vnet create \
  --resource-group MyResourceGroup \
  --name MyVnet \
  --location eastus \
+ --ddos-protection-plan MyDdosProtectionPlan \
--ddos-protection true
- --ddos-protection-plan MyDdosProtectionPlan
```
-You cannot move a virtual network to another resource group or subscription when DDoS Standard is enabled for the virtual network. If you need to move a virtual network with DDoS Standard enabled, disable DDoS Standard first, move the virtual network, and then enable DDoS standard. After the move, the auto-tuned policy thresholds for all the protected public IP addresses in the virtual network are reset.
### Enable DDoS protection for an existing virtual network
az group create \
az network ddos-protection create \
  --resource-group MyResourceGroup \
- --name MyDdosProtectionPlan
+ --name MyDdosProtectionPlan
  --vnets MyVnet
```
Alternatively, you can enable DDoS protection for a given virtual network:
az network vnet update \
  --resource-group MyResourceGroup \
  --name MyVnet \
+ --ddos-protection-plan MyDdosProtectionPlan \
--ddos-protection true
- --ddos-protection-plan MyDdosProtectionPlan
```

## Validate and test
Verify that the command returns the correct details of your DDoS protection plan
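For reference, a check along the following lines (the plan name matches the one used in this quickstart) returns those details:

```azurecli-interactive
az network ddos-protection show \
    --resource-group MyResourceGroup \
    --name MyDdosProtectionPlan
```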
## Clean up resources
-You can keep your resources for the next tutorial. If no longer needed, delete the _MyResourceGroup_ resource group. When you delete the resource group, you also delete the DDoS protection plan and all its related resources.
+You can keep your resources for the next tutorial. If no longer needed, delete the _MyResourceGroup_ resource group. When you delete the resource group, you also delete the DDoS protection plan and all its related resources.
To delete the resource group use [az group delete](/cli/azure/group#az-group-delete):
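A minimal call, assuming the resource group name used in this quickstart:

```azurecli-interactive
az group delete \
    --name MyResourceGroup
```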
Update a given virtual network to disable DDoS protection:
az network vnet update \
  --resource-group MyResourceGroup \
  --name MyVnet \
+ --ddos-protection-plan MyDdosProtectionPlan \
--ddos-protection false
- --ddos-protection-plan ""
+
```
-If you want to delete a DDoS protection plan, you must first dissociate all virtual networks from it.
+If you want to delete a DDoS protection plan, you must first dissociate all virtual networks from it.
## Next steps
ddos-protection Manage Ddos Protection Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell.md
na Previously updated : 09/28/2020 Last updated : 04/18/2022 # Quickstart: Create and configure Azure DDoS Protection Standard using Azure PowerShell
-Get started with Azure DDoS Protection Standard by using Azure PowerShell.
+Get started with Azure DDoS Protection Standard by using Azure PowerShell.
-A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
+A DDoS protection plan defines a set of virtual networks that have DDoS protection standard enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
-In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
+In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
## Prerequisites
New-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtecti
### Enable DDoS for a new virtual network
-You can enable DDoS protection when creating a virtual network. In this example, we'll name our virtual network _MyVnet_:
+You can enable DDoS protection when creating a virtual network. In this example, we'll name our virtual network _MyVnet_:
```azurepowershell-interactive
-New-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup -Location "East US" -AddressPrefix 10.0.0.0/16
+#Gets the DDoS protection plan ID
+$ddosProtectionPlanID = Get-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtectionPlan
+
+#Creates the virtual network
+New-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup -Location "East US" -AddressPrefix 10.0.0.0/16 -DdosProtectionPlan $ddosProtectionPlanID -EnableDdosProtection
```

### Enable DDoS for an existing virtual network
New-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup -Location "
You can associate an existing virtual network when creating a DDoS protection plan:

```azurepowershell-interactive
-# Creates the DDoS protection plan
-$ddosProtectionPlan = New-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtectionPlan -Location "East US"
+#Gets the DDoS protection plan ID
+$ddosProtectionPlanID = Get-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtectionPlan
# Gets the most updated version of the virtual network
$vnet = Get-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId

# Update the properties and enable DDoS protection
-$vnet.DdosProtectionPlan.Id = $ddosProtectionPlan.Id
+$vnet.DdosProtectionPlan.Id = $ddosProtectionPlanID.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
-```
+```
## Validate and test
-First, check the details of your DDoS protection plan:
+Check the details of your DDoS protection plan, and verify that the command returns the correct details of the plan:
```azurepowershell-interactive
Get-AzDdosProtectionPlan -ResourceGroupName MyResourceGroup -Name MyDdosProtectionPlan
```
-Verify that the command returns the correct details of your DDoS protection plan.
+Check the details of your virtual network, and verify that the DDoS protection plan is enabled:
+
+```azurepowershell-interactive
+Get-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup
+```
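+
+To narrow the output to the properties that matter here, a small sketch:
+
+```azurepowershell-interactive
+# Confirm that DDoS protection is enabled and which plan is linked
+Get-AzVirtualNetwork -Name MyVnet -ResourceGroupName MyResourceGroup |
+    Select-Object Name, EnableDdosProtection, DdosProtectionPlan
+```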
## Clean up resources
-You can keep your resources for the next tutorial. If no longer needed, delete the _MyResourceGroup_ resource group. When you delete the resource group, you also delete the DDoS protection plan and all its related resources.
+You can keep your resources for the next tutorial. If no longer needed, delete the _MyResourceGroup_ resource group. When you delete the resource group, you also delete the DDoS protection plan and all its related resources.
```azurepowershell-interactive
Remove-AzResourceGroup -Name MyResourceGroup
```
-To disable DDoS protection for a virtual network:
+To disable DDoS protection for a virtual network:
```azurepowershell-interactive # Gets the most updated version of the virtual network
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
Under **Protected resources**, you can view your protected virtual networks and
You can keep your resources for the next tutorial. If no longer needed, delete the _MyResourceGroup_ resource group. When you delete the resource group, you also delete the DDoS protection plan and all its related resources. If you don't intend to use this DDoS protection plan, you should remove resources to avoid unnecessary charges.

>[!WARNING]
- >This action is irreversable.
+ >This action is irreversible.
1. In the Azure portal, search for and select **Resource groups**, or select **Resource groups** from the Azure portal menu.
devtest-labs Encrypt Disks Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/encrypt-disks-customer-managed-keys.md
Last updated 09/29/2021+ # Encrypt disks using customer-managed keys in Azure DevTest Labs
The following section shows how a lab owner can set up encryption using a custom
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/encrypt-disks-customer-managed-keys/managed-keys.png" alt-text="Managed keys":::
-1. For the lab to handle encryption for all the lab disks, lab owner needs to explicitly grant the labΓÇÖs **system-assigned identity** reader role on the disk encryption set as well as virtual machine contributor role on the underlying Azure subscription. Lab owner can do so by completing the following steps:
+1. For the lab to handle encryption for all the lab disks, the lab owner needs to explicitly grant the lab's **system-assigned identity** the Reader role on the disk encryption set, as well as the Virtual Machine Contributor role on the underlying Azure subscription. The lab owner can do so by completing the following steps:
-
- 1. Ensure you are a member of [User Access Administrator role](../role-based-access-control/built-in-roles.md#user-access-administrator) at the Azure subscription level so that you can manage user access to Azure resources.
- 1. On the **Disk Encryption Set** page, select **Access control (IAM)** on the left menu.
- 1. Select **+ Add** on the toolbar and select **Add a role assignment**.
-
- :::image type="content" source="./media/encrypt-disks-customer-managed-keys/add-role-management-menu.png" alt-text="Add role management - menu":::
- 1. On the **Add role assignment** page, select the **Reader** role or a role that allows more access.
- 1. Type the lab name for which the disk encryption set will be used and select the lab name (system-assigned identity for the lab) from the dropdown-list.
-
- :::image type="content" source="./media/encrypt-disks-customer-managed-keys/select-lab.png" alt-text="Select system-managed identity of the lab":::
- 1. Select **Save** on the toolbar.
-
- :::image type="content" source="./media/encrypt-disks-customer-managed-keys/save-role-assignment.png" alt-text="Save role assignment":::
-3. Add the lab's **system-assigned identity** to the **Virtual Machine Contributor** role using the **Subscription** -> **Access control (IAM)** page. The steps are similar to the ones in the previous steps.
-
-
- 1. Navigate to the **Subscription** page in the Azure portal.
- 1. Select **Access control (IAM)**.
- 1. Select **+Add** on the toolbar, and select **Add a role assignment**.
-
- :::image type="content" source="./media/encrypt-disks-customer-managed-keys/subscription-access-control-page.png" alt-text="Subscription -> Access control (IAM) page":::
- 1. On the **Add role assignment** page, select **Virtual Machine Contributor** for the role.
- 1. Type the lab name, and select the **lab name** (system-assigned identity for the lab) from the dropdown-list.
- 1. Select **Save** on the toolbar.
+ 1. Ensure you are a member of [User Access Administrator role](../role-based-access-control/built-in-roles.md#user-access-administrator) at the Azure subscription level so that you can manage user access to Azure resources.
+
+ 1. On the **Disk Encryption Set** page, assign at least the **Reader** role to the lab that will use the disk encryption set (select the lab name, which represents the lab's system-assigned identity).
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ 1. Navigate to the **Subscription** page in the Azure portal.
+
+ 1. Assign the Virtual Machine Contributor role to the lab name (system-assigned identity for the lab).
## Encrypt lab OS disks with a customer-managed key
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
-+ Last updated 03/13/2020
You need to create an Azure App registration ID that the on-premises hybrid work
4. After App ID registration is completed, make a note of the **Application (client) ID**, which you'll use while installing the hybrid worker.
-5. In the Azure portal, navigate to Azure Database Migration Service, select **Access control (IAM)**, and then select **Add role assignment** to assign contributor access to the App ID.
+5. In the Azure portal, navigate to Azure Database Migration Service.
- ![Azure Database Migration Service hybrid mode assign contributor role](media/quickstart-create-data-migration-service-hybrid-portal/dms-app-assign-contributor.png)
+6. In the navigation menu, select **Access control (IAM)**.
-6. Select **Contributor** as the role, assign access to **Azure AD user, or service principal**, and then select the App ID name.
+7. Select **Add** > **Add role assignment**.
- ![Azure Database Migration Service hybrid mode assign contributor role details](media/quickstart-create-data-migration-service-hybrid-portal/dms-add-role-assignment.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
-7. Select **Save** to save the role assignment for the App ID on the Azure Database Migration Service resource.
+8. On the **Role** tab, select the **Contributor** role.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot showing Add role assignment page with Role tab selected.":::
+
+9. On the **Members** tab, select **User, group, or service principal**, and then select the App ID name.
+
+10. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
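+
+If you prefer scripting to the portal, the same assignment can be sketched with the Azure CLI. The IDs below are placeholders, and the scope shown (the Azure Database Migration Service resource) is an assumption about where you want the role applied:
+
+```azurecli
+az role assignment create \
+    --assignee "<application-client-id>" \
+    --role "Contributor" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataMigration/services/<dms-service-name>"
+```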
## Download and install the hybrid worker
event-grid Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/delivery-retry.md
By default, Event Grid on Kubernetes delivers one event at a time to the subscri
[!INCLUDE [event-grid-preview-feature-note.md](../includes/event-grid-preview-feature-note.md)] > [!NOTE]
-> During the preview, Event Grid on Kubernetes features are supported through API version [2020-10-15-Preview](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update).
+> During the preview, Event Grid on Kubernetes features are supported through API version [2020-10-15-Preview](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions/create-or-update).
## Retry schedule
There are two configurations that determine retry policy. They are:
An event is dropped if either of the limits of the retry policy is reached. Configuration of these limits is done on a per-subscription basis. The following section describes each one in further detail.

### Configuring defaults per subscriber
-You can also specify retry policy limits on a per subscription basis. See our [API documentation](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update) for information on configuring defaults per subscriber. Subscription level defaults override the Event Grid module on Kubernetes level configurations.
+You can also specify retry policy limits on a per subscription basis. See our [API documentation](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions/create-or-update) for information on configuring defaults per subscriber. Subscription level defaults override the Event Grid module on Kubernetes level configurations.
The following example sets up a webhook subscription with `maxNumberOfAttempts` set to 3 and `eventTimeToLiveInMinutes` set to 30 minutes.
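A sketch of such a subscription payload, assuming the standard event subscription schema with a `retryPolicy` object (the endpoint URL is a placeholder):

```json
{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": {
        "endpointUrl": "https://<your-webhook-endpoint>/api/events"
      }
    },
    "retryPolicy": {
      "maxNumberOfAttempts": 3,
      "eventTimeToLiveInMinutes": 30
    }
  }
}
```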
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/event-handlers.md
# Event handlers destinations in Event Grid on Kubernetes

An event handler is any system that exposes an endpoint and is the destination for events sent by Event Grid. An event handler receiving an event acts upon it and uses the event payload to execute some logic, which might lead to the occurrence of new events.
-The way to configure Event Grid to send events to a destination is through the creation of an event subscription. It can be done through [Azure CLI](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create), [management SDK](../sdk-overview.md#management-sdks), or using direct HTTPs calls using the [2020-10-15-preview API](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update) version.
+The way to configure Event Grid to send events to a destination is through the creation of an event subscription. It can be done through [Azure CLI](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create), [management SDK](../sdk-overview.md#management-sdks), or direct HTTPS calls using the [2020-10-15-preview API](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions/create-or-update) version.
In general, Event Grid on Kubernetes can send events to any destination via **Webhooks**. Webhooks are HTTP(s) endpoints exposed by a service or workload to which Event Grid has access. The webhook can be a workload hosted in the same cluster, in the same network space, on the cloud, on-prem or anywhere that Event Grid can reach.
In addition to Webhooks, Event Grid on Kubernetes can send events to the followi
## Feature parity
-Event Grid on Kubernetes offers a good level of feature parity with Azure Event Grid's support for event subscriptions. The following list enumerates the main differences in event subscription functionality. Apart from those differences, you can use Azure Event Grid's [REST api version 2020-10-15-preview](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions) as a reference when managing event subscriptions on Event Grid on Kubernetes.
+Event Grid on Kubernetes offers a good level of feature parity with Azure Event Grid's support for event subscriptions. The following list enumerates the main differences in event subscription functionality. Apart from those differences, you can use Azure Event Grid's [REST api version 2020-10-15-preview](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions) as a reference when managing event subscriptions on Event Grid on Kubernetes.
-1. Use [REST api version 2020-10-15-preview](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions).
+1. Use [REST api version 2020-10-15-preview](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions).
2. [Azure Event Grid trigger for Azure Functions](../../azure-functions/functions-bindings-event-grid-trigger.md?tabs=csharp%2Cconsole) isn't supported. You can use a WebHook destination type to deliver events to Azure Functions. 3. There's no [dead letter location](../manage-event-delivery.md#set-dead-letter-location) support. That means that you cannot use ``properties.deadLetterDestination`` in your event subscription payload. 4. Azure Relay's Hybrid Connections as a destination isn't supported yet.
-5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
-6. Labels ([properties.labels](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they are not available.
-7. [Delivery with resource identity](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update#eventsubscriptionidentity) aren't supported.
+5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions/create-or-update#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
+6. Labels ([properties.labels](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions/create-or-update#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they are not available.
+7. [Delivery with resource identity](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions/create-or-update#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions/create-or-update#eventsubscriptionidentity) aren't supported.
8. [Destination endpoint validation](../webhook-event-delivery.md#endpoint-validation-with-event-grid-events) isn't supported yet.

## Event filtering in event subscriptions
event-grid Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/features.md
# Event Grid on Kubernetes with Azure Arc features
-Event Grid on Kubernetes offers a rich set of features that help you integrate your Kubernetes workloads and realize hybrid architectures. It shares the same [rest API](/rest/api/eventgrid/controlplane-version2021-06-01-preview/topics) (starting with version 2020-10-15-preview), [Event Grid CLI](/cli/azure/eventgrid), Azure portal experience, [management SDKs](../sdk-overview.md#management-sdks), and [data plane SDKs](../sdk-overview.md#data-plane-sdks) with Azure Event Grid, the other edition of the same service. When you're ready to publish events, you can use the [data plane SDK examples provided in different languages](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) that work for both editions of Event Grid.
+Event Grid on Kubernetes offers a rich set of features that help you integrate your Kubernetes workloads and realize hybrid architectures. It shares the same [rest API](/rest/api/eventgrid/controlplane-version2021-10-15-preview/topics) (starting with version 2020-10-15-preview), [Event Grid CLI](/cli/azure/eventgrid), Azure portal experience, [management SDKs](../sdk-overview.md#management-sdks), and [data plane SDKs](../sdk-overview.md#data-plane-sdks) with Azure Event Grid, the other edition of the same service. When you're ready to publish events, you can use the [data plane SDK examples provided in different languages](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) that work for both editions of Event Grid.
Although Event Grid on Kubernetes and Azure Event Grid share many features and the goal is to provide the same user experience, there are some differences given the unique requirements they seek to meet and the stage each is at in its software lifecycle. For example, the only type of topic available in Event Grid on Kubernetes is Event Grid topics, which are sometimes also referred to as custom topics. Other types of topics (see below) are either not applicable or support for them is not yet available. The main differences between the two editions of Event Grid are presented in the table below.
Although Event Grid on Kubernetes and Azure Event Grid share many features and t
| Feature | Event Grid on Kubernetes | Azure Event Grid |
|:--|:-:|:-:|
-| [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-06-01-preview/topics) | Γ£ö | Γ£ö |
+| [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-10-15-preview/topics) | ✔ | ✔ |
| [CNCF Cloud Events schema](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md) | ✔ | ✔ |
| Event Grid and custom schemas | ✘* | ✔ |
| Reliable delivery | ✔ | ✔ |
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/overview.md
Event Grid on Kubernetes supports various event-driven integration scenarios. Ho
"As an owner of a system deployed to a Kubernetes cluster, I want to communicate my system's state changes by publishing events and configuring routing of those events so that event handlers, under my control or otherwise, can process my system's events in a way they see fit."
-**Feature** that helps you realize above requirement: [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-06-01-preview/topics).
+**Feature** that helps you realize above requirement: [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-10-15-preview/topics).
### Event Grid on Kubernetes at a glance

From the user perspective, Event Grid on Kubernetes is composed of the following resources in blue:
With Event Grid on Kubernetes, you can forward events to Azure for further proce
Event handler destinations can be any HTTPS or HTTP endpoint that Event Grid can reach through the network, public or private, and to which it has access (that is, the endpoint isn't protected by an authentication mechanism). You define event delivery destinations when you create an event subscription. For more information, see [event handlers](event-handlers.md).

## Features
-Event Grid on Kubernetes supports [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-06-01-preview/topics), which is a feature also offered by [Azure Event Grid](../custom-topics.md). Event Grid topics help you realize the [primary integration use case](#use-case) where your requirements call for integrating your system with another workload that you own or otherwise is made accessible to your system.
+Event Grid on Kubernetes supports [Event Grid Topics](/rest/api/eventgrid/controlplane-version2021-10-15-preview/topics), which is a feature also offered by [Azure Event Grid](../custom-topics.md). Event Grid topics help you realize the [primary integration use case](#use-case) where your requirements call for integrating your system with another workload that you own or otherwise is made accessible to your system.
Some of the capabilities you get with Azure Event Grid on Kubernetes are:
governance Export Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/export-resources.md
Title: Export Azure Policy resources description: Learn to export Azure Policy resources to GitHub, such as policy definitions and policy assignments. Previously updated : 08/17/2021 Last updated : 04/18/2022 ms.devlang: azurecli++ # Export Azure Policy resources
az policy definition show --name 'VirtualMachineStorage'
Azure Policy definitions, initiatives, and assignments can each be exported as JSON with [Azure PowerShell](/powershell/azure/). Each of these cmdlets uses a **Name** parameter to specify which
-object to get the JSON for. The **Name** property is often a _GUID_ and isn't the **displayName** of
+object to get the JSON for. The **Name** property is often a _GUID_ (Globally Unique Identifier) and isn't the **displayName** of
the object.

- Definition - [Get-AzPolicyDefinition](/powershell/module/az.resources/get-azpolicydefinition)
- Initiative - [Get-AzPolicySetDefinition](/powershell/module/az.resources/get-azpolicysetdefinition)
- Assignment - [Get-AzPolicyAssignment](/powershell/module/az.resources/get-azpolicyassignment)
-Here is an example of getting the JSON for a policy definition with **Name** of
-_VirtualMachineStorage_:
+Here is an example of getting the JSON for a policy definition whose **Name** (as mentioned previously, a GUID) is
+_d7fff7ea-9d47-4952-b854-b7da261e48f2_:
```azurepowershell-interactive
-Get-AzPolicyDefinition -Name 'VirtualMachineStorage' | ConvertTo-Json -Depth 10
+Get-AzPolicyDefinition -Name 'd7fff7ea-9d47-4952-b854-b7da261e48f2' | ConvertTo-Json -Depth 10
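+
+# To keep the exported definition for source control, you might also pipe it to a file (a sketch):
+Get-AzPolicyDefinition -Name 'd7fff7ea-9d47-4952-b854-b7da261e48f2' |
+    ConvertTo-Json -Depth 10 |
+    Out-File -FilePath .\policy-definition.json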
```

## Next steps
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 03/08/2022 Last updated : 04/18/2022 ++ # Azure Resource Graph table and resource type reference
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.migrate/migrateprojects
- microsoft.migrate/movecollections
- Microsoft.Migrate/projects (Migration projects)
-- microsoft.mixedreality/holographicsbroadcastaccounts
- Microsoft.MixedReality/objectAnchorsAccounts (Object Anchors Accounts)
- Microsoft.MixedReality/objectUnderstandingAccounts (Object Understanding Accounts)
- Microsoft.MixedReality/remoteRenderingAccounts (Remote Rendering Accounts)
load-testing How To High Scale Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md
In this section, you configure the scaling settings of your load test.
1. Select **Apply** to modify the test and use the new configuration when you rerun it.
+## Service quotas and limits
+
+All Azure services set default limits and quotas for resources and features. The following table describes the maximum limits for Azure Load Testing.
+
+|Resource |Limit |
+| --- | --- |
+|Maximum concurrent engine instances that can be utilized per region per subscription | 100 |
+|Maximum concurrent test runs per region per subscription | 25 |
+
+You can increase the default limits and quotas by requesting the increase through an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+1. Select **create a support ticket**.
+
+1. Provide a summary of your issue.
+
+1. Select **Issue type** as *Technical*.
+
+1. Select your subscription. Then, select **Service Type** as *Azure Load Testing - Preview*.
+
+1. Select **Problem type** as *Test Execution*.
+
+1. Select **Problem subtype** as *Provisioning stalls or fails*.
## Next steps

- For more information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md).
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
On the left, select **AI + Machine Learning**, then select **Azure Machine Learn
The following screenshot shows the cost estimation by using the calculator: As you add new resources to your workspace, return to this calculator and add the same resource here to update your cost estimates.
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Microsoft Sentinel is a security solution that can integrate with Azure Machine
Microsoft Sentinel can automatically create a workspace for you if you are OK with a public endpoint. In this configuration, the security operations center (SOC) analysts and system administrators connect to notebooks in your workspace through Sentinel.
-For information on this process, see [Create an Azure ML workspace from Microsoft Sentinel](../sentinel/notebooks.md?tabs=public-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
+For information on this process, see [Create an Azure ML workspace from Microsoft Sentinel](../sentinel/notebooks-hunt.md?tabs=public-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
:::image type="content" source="./media/how-to-network-security-overview/common-public-endpoint-deployment.svg" alt-text="Diagram showing Microsoft Sentinel public connection.":::
For information on this process, see [Create an Azure ML workspace from Microsof
If you want to secure your workspace and associated resources in a VNet, you must create the Azure Machine Learning workspace first. You must also create a virtual machine 'jump box' in the same VNet as your workspace, and enable Azure Bastion connectivity to it. Similar to the public configuration, SOC analysts and administrators can connect using Microsoft Sentinel, but some operations must be performed using Azure Bastion to connect to the VM.
-For more information on this configuration, see [Create an Azure ML workspace from Microsoft Sentinel](../sentinel/notebooks.md?tabs=private-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
+For more information on this configuration, see [Create an Azure ML workspace from Microsoft Sentinel](../sentinel/notebooks-hunt.md?tabs=private-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel)
:::image type="content" source="./media/how-to-network-security-overview/private-endpoint-deploy-bastion.svg" alt-text="Diagram showing Microsoft Sentinel connection through a VNet.":::
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
ws = Workspace.from_config()
### Get the data
-The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html).
+The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
### Prepare training script
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
| Support for deep learning and other advanced learners | GA | YES | YES | | Large data support (up to 100 GB) | Public Preview | YES | YES | | Azure Databricks integration | GA | NO | NO |
-| SQL, CosmosDB, and HDInsight integrations | GA | YES | YES |
+| SQL, Azure Cosmos DB, and HDInsight integrations | GA | YES | YES |
| **[Machine Learning pipelines](concept-ml-pipelines.md)** | | | | | Create, run, and publish pipelines using the Azure ML SDK | GA | YES | YES | | Create pipeline endpoints using the Azure ML SDK | GA | YES | YES |
The information in the rest of this document provides information on what featur
| [.NET integration ML.NET 1.0](/dotnet/machine-learning/tutorials/object-detection-model-builder) | GA | YES | YES | | **Inference** | | | | | [Batch inferencing](tutorial-pipeline-batch-scoring-classification.md) | GA | YES | YES |
-| [Data Box Edge with FPGA](how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO |
+| [Azure Stack Edge with FPGA](how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO |
| **Other** | | | | | [Open Datasets](../open-datasets/samples.md) | Public Preview | YES | YES | | [Custom Cognitive Search](how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |
The information in the rest of this document provides information on what featur
|-|::|:--:|:-:| | **Automated machine learning** | | | | | Create and run experiments in notebooks | GA | YES | N/A |
-| Create and run experiments in studio web experience | Public Preview | YES | N/A |
+| Create and run experiments in studio web experience | Preview | YES | N/A |
| Industry-leading forecasting capabilities | GA | YES | N/A | | Support for deep learning and other advanced learners | GA | YES | N/A |
-| Large data support (up to 100 GB) | Public Preview | YES | N/A |
-| Azure Databricks Integration | GA | NO | N/A |
-| SQL, CosmosDB, and HDInsight integrations | GA | YES | N/A |
+| Large data support (up to 100 GB) | Preview | YES | N/A |
+| Azure Databricks Integration | GA | YES | N/A |
+| SQL, Azure Cosmos DB, and HDInsight integrations | GA | YES | N/A |
| **Machine Learning pipelines** | | | | | Create, run, and publish pipelines using the Azure ML SDK | GA | YES | N/A | | Create pipeline endpoints using the Azure ML SDK | GA | YES | N/A | | Create, edit, and delete scheduled runs of pipelines using the Azure ML SDK | GA | YES | N/A | | View pipeline run details in studio | GA | YES | N/A | | Create, run, visualize, and publish pipelines in Azure ML designer | GA | YES | N/A |
-| Azure Databricks Integration with ML Pipeline | GA | NO | N/A |
+| Azure Databricks Integration with ML Pipeline | GA | YES | N/A |
| Create pipeline endpoints in Azure ML designer | GA | YES | N/A | | **Integrated notebooks** | | | | | Workspace notebook and file sharing | GA | YES | N/A | | R and Python support | GA | YES | N/A |
-| Virtual Network support | Public Preview | NO | N/A |
+| Virtual Network support | Preview | YES | N/A |
| **Compute instance** | | | | | Managed compute Instances for integrated Notebooks | GA | YES | N/A | | Jupyter, JupyterLab Integration | GA | YES | N/A |
The information in the rest of this document provides information on what featur
| **Security** | | | | | Virtual Network (VNet) support for training | GA | YES | N/A | | Virtual Network (VNet) support for inference | GA | YES | N/A |
-| Scoring endpoint authentication | Public Preview | YES | N/A |
-| Workplace Private Endpoint | GA | NO | N/A |
-| ACI behind VNet | Public Preview | NO | N/A |
+| Scoring endpoint authentication | Preview | YES | N/A |
+| Workplace Private Endpoint | GA | YES | N/A |
+| ACI behind VNet | Preview | NO | N/A |
| ACR behind VNet | GA | YES | N/A |
-| Private IP of AKS cluster | Public Preview | NO | N/A |
+| Private IP of AKS cluster | Preview | NO | N/A |
| **Compute** | | | | | quota management across workspaces | GA | YES | N/A | | **Data for machine learning** | | | | | Create, view, or edit datasets and datastores from the SDK | GA | YES | N/A | | Create, view, or edit datasets and datastores from the UI | GA | YES | N/A |
-| View, edit, or delete dataset drift monitors from the SDK | Public Preview | YES | N/A |
-| View, edit, or delete dataset drift monitors from the UI | Public Preview | YES | N/A |
+| View, edit, or delete dataset drift monitors from the SDK | Preview | YES | N/A |
+| View, edit, or delete dataset drift monitors from the UI | Preview | YES | N/A |
| **Machine learning lifecycle** | | | |
-| Model profiling | GA | PARTIAL | N/A |
+| Model profiling | GA | YES | N/A |
| The Azure DevOps extension for Machine Learning & the Azure ML CLI | GA | YES | N/A |
-| FPGA-based Hardware Accelerated Models | GA | NO | N/A |
-| Visual Studio Code integration | Public Preview | NO | N/A |
-| Event Grid integration | Public Preview | YES | N/A |
-| Integrate Azure Stream Analytics with Azure Machine Learning | Public Preview | NO | N/A |
+| FPGA-based Hardware Accelerated Models | Deprecating | Deprecating | N/A |
+| Visual Studio Code integration | Preview | NO | N/A |
+| Event Grid integration | Preview | YES | N/A |
+| Integrate Azure Stream Analytics with Azure Machine Learning | Preview | NO | N/A |
| **Labeling** | | | | | Labeling Project Management Portal | GA | YES | N/A | | Labeler Portal | GA | YES | N/A | | Labeling using private workforce | GA | YES | N/A |
-| ML assisted labeling (Image classification and object detection) | Public Preview | YES | N/A |
+| ML assisted labeling (Image classification and object detection) | Preview | YES | N/A |
| **Responsible AI** | | | |
-| Explainability in UI | Public Preview | NO | N/A |
+| Explainability in UI | Preview | NO | N/A |
| Differential privacy SmartNoise toolkit | OSS | NO | N/A |
-| custom tags in Azure Machine Learning to implement datasheets | GA | NO | N/A |
-| Fairness AzureML Integration | Public Preview | NO | N/A |
+| custom tags in Azure Machine Learning to implement datasheets | GA | YES | N/A |
+| Fairness AzureML Integration | Preview | NO | N/A |
| Interpretability SDK | GA | YES | N/A | | **Training** | | | | | Experimentation log streaming | GA | YES | N/A |
-| Reinforcement Learning | Public Preview | NO | N/A |
+| Reinforcement Learning | Deprecating | Deprecating | N/A |
| Experimentation UI | GA | YES | N/A | | .NET integration ML.NET 1.0 | GA | YES | N/A | | **Inference** | | | | | Batch inferencing | GA | YES | N/A |
-| Data Box Edge with FPGA | Public Preview | NO | N/A |
+| Azure Stack Edge with FPGA | Deprecating | Deprecating | N/A |
| **Other** | | | |
-| Open Datasets | Public Preview | YES | N/A |
-| Custom Cognitive Search | Public Preview | YES | N/A |
+| Open Datasets | Preview | YES | N/A |
+| Custom Cognitive Search | Preview | YES | N/A |
The information in the rest of this document provides information on what featur
## Next steps
-To learn more about the regions that Azure Machine learning is available in, see [Products by region](https://azure.microsoft.com/global-infrastructure/services/).
+To learn more about the regions where Azure Machine Learning is available, see [Products by region](https://azure.microsoft.com/global-infrastructure/services/).
managed-grafana Grafana App Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/grafana-app-ui.md
+
+ Title: Grafana UI
+description: Learn about the main Grafana UI components, including panels, visualizations, and dashboards.
++++ Last updated : 3/31/2022
+
+
+# Grafana UI
+
+This reference covers the Grafana web application's main UI components, including panels, visualizations, and dashboards. For consistency, it links to the corresponding topics in the Grafana documentation.
+
+## Panels
+
+A Grafana panel is a basic building block in Grafana. Each panel displays a dataset from a data source query using a [visualization](#visualizations). For more information about panels, refer to the following items:
+
+* [Working with Grafana panels](https://grafana.com/docs/grafana/latest/panels/working-with-panels/)
+* [Query a data source](https://grafana.com/docs/grafana/latest/panels/query-a-data-source/)
+* [Modify visualization text and background colors](https://grafana.com/docs/grafana/latest/panels/specify-thresholds/)
+* [Override field values](https://grafana.com/docs/grafana/latest/panels/override-field-values/)
+* [Transform data](https://grafana.com/docs/grafana/latest/panels/transform-data/)
+* [Format data using value mapping](https://grafana.com/docs/grafana/latest/panels/format-data/)
+* [Create reusable Grafana panels](https://grafana.com/docs/grafana/latest/panels/library-panels/)
+* [Enable template variables to add panels dynamically](https://grafana.com/docs/grafana/latest/panels/add-panels-dynamically/)
+* [Reference: Query options](https://grafana.com/docs/grafana/latest/panels/reference-query-options/)
+* [Reference: Calculation types](https://grafana.com/docs/grafana/latest/panels/reference-calculation-types/)
+* [Reference: Standard field definitions](https://grafana.com/docs/grafana/latest/panels/reference-standard-field-definitions/)
+
+## Visualizations
+
+Grafana [panels](#panels) support various visualizations, which are visual representations of underlying data. These representations are often graphical and include:
+
+* Graphs and charts
+ * [Time series](https://grafana.com/docs/grafana/latest/visualizations/time-series/)
+ * [State timeline](https://grafana.com/docs/grafana/latest/visualizations/state-timeline/)
+ * [Status history](https://grafana.com/docs/grafana/latest/visualizations/status-history/)
+ * [Bar chart](https://grafana.com/docs/grafana/latest/visualizations/bar-chart/)
+ * [Histogram](https://grafana.com/docs/grafana/latest/visualizations/histogram/)
+ * [Heatmap](https://grafana.com/docs/grafana/latest/visualizations/heatmap/)
+ * [Pie chart](https://grafana.com/docs/grafana/latest/visualizations/pie-chart-panel/)
+ * [Candlestick](https://grafana.com/docs/grafana/latest/visualizations/candlestick/)
+* Stats and numbers
+ * [Stat](https://grafana.com/docs/grafana/latest/visualizations/stat-panel/)
+ * [Gauge](https://grafana.com/docs/grafana/latest/visualizations/gauge-panel/)
+ * [Bar gauge](https://grafana.com/docs/grafana/latest/visualizations/bar-gauge-panel/)
+* Others
+ * [Table](https://grafana.com/docs/grafana/latest/visualizations/table/)
+ * [Logs](https://grafana.com/docs/grafana/latest/visualizations/logs-panel/)
+ * [Node graph](https://grafana.com/docs/grafana/latest/visualizations/node-graph/)
+ * [Text](https://grafana.com/docs/grafana/latest/visualizations/text-panel/)
+ * [News](https://grafana.com/docs/grafana/latest/visualizations/news-panel/)
+ * [Alert list](https://grafana.com/docs/grafana/latest/visualizations/alert-list-panel/)
+ * [Dashboard list](https://grafana.com/docs/grafana/latest/visualizations/dashboard-list-panel/)
+
+## Dashboards
+
+A Grafana dashboard is a collection of [panels](#panels) arranged in rows and columns. Panels typically show datasets that are related. You can create multiple dashboards in Grafana. For more information about dashboards, refer to the following links:
+
+* [Working with Grafana dashboard UI](https://grafana.com/docs/grafana/latest/dashboards/dashboard-ui/)
+* [Dashboard folders](https://grafana.com/docs/grafana/latest/dashboards/)
+* [Create dashboard](https://grafana.com/docs/grafana/latest/dashboards/dashboard-create/)
+* [Manage dashboards](https://grafana.com/docs/grafana/latest/dashboards/dashboard-manage/)
+* [Annotations](https://grafana.com/docs/grafana/latest/dashboards/annotations/)
+* [Playlist](https://grafana.com/docs/grafana/latest/dashboards/playlist/)
+* [Search](https://grafana.com/docs/grafana/latest/dashboards/search/)
+* [Keyboard shortcuts](https://grafana.com/docs/grafana/latest/dashboards/shortcuts/)
+* [Reporting](https://grafana.com/docs/grafana/latest/dashboards/reporting/)
+* [Time range controls](https://grafana.com/docs/grafana/latest/dashboards/time-range-controls/)
+* [Dashboard version history](https://grafana.com/docs/grafana/latest/dashboards/)
+* [Dashboard export and import](https://grafana.com/docs/grafana/latest/dashboards/export-import/)
+* [Dashboard JSON model](https://grafana.com/docs/grafana/latest/dashboards/json-model/)
+* [Scripted dashboards](https://grafana.com/docs/grafana/latest/dashboards/scripted-dashboards/)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to share an Azure Managed Grafana Preview workspace](./how-to-share-grafana-workspace.md)
managed-grafana How To Api Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-api-calls.md
+
+ Title: 'How to call Grafana APIs in your automation: Azure Managed Grafana Preview'
+description: Learn how to call Grafana APIs in your automation with Azure Active Directory (Azure AD) and an Azure service principal
++++ Last updated : 3/31/2022 ++
+# How to call Grafana APIs in your automation within Azure Managed Grafana Preview
+
+In this article, you'll learn how to call Grafana APIs within Azure Managed Grafana Preview using a service principal.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./quickstart-managed-grafana-portal.md).
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+## Assign roles to the service principal of your application and of your Azure Managed Grafana workspace
+
+1. Start by [Creating an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). This guide takes you through creating an application and assigning a role to its service principal. For simplicity, use an application located in the same Azure Active Directory (Azure AD) tenant as your Grafana workspace.
+1. Assign the role of your choice to the service principal for your Grafana resource. Refer to [How to share a Managed Grafana workspace](how-to-share-grafana-workspace.md) to learn how to grant access to a Grafana instance. Instead of selecting a user, select **Service principal**.
+
+## Get an access token
+
+To access Grafana APIs, you first need to get an access token. Here's an example showing how you can call Azure AD to retrieve a token:
+
+```bash
+curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
+-d 'grant_type=client_credentials&client_id=<client-id>&client_secret=<application-secret>&resource=ce34e7e5-485f-4d76-964f-b3d2b16d1e4f' \
+https://login.microsoftonline.com/<tenant-id>/oauth2/token
+```
+
+Replace `<tenant-id>` with your Azure AD tenant ID, `<client-id>` with the application (client) ID, and `<application-secret>` with the client secret of the application you registered.
+
+Here's an example response:
+
+```bash
+{
+ "token_type": "Bearer",
+ "expires_in": "599",
+ "ext_expires_in": "599",
+ "expires_on": "1575500555",
+ "not_before": "1575499766",
+ "resource": "ce34...1e4f",
+ "access_token": "eyJ0eXAiOiJ......AARUQ"
+}
+```
+
+## Call a Grafana API
+
+You can now call the Grafana API by passing the access token retrieved in the previous step in the `Authorization` header. For example:
+
+```bash
+curl -X GET \
+-H 'Authorization: Bearer <access-token>' \
+https://<grafana-url>/api/user
+```
+
+Replace `<access-token>` with the access token retrieved in the previous step and `<grafana-url>` with the URL of your Grafana instance, for example `https://grafanaworkspace-abcd.cuse.grafana.azure.com`. This URL is displayed in the Azure portal, on the **Overview** page of your Managed Grafana workspace.
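+
+Putting the two steps together, here's a minimal bash sketch that retrieves a token and then calls the same `/api/user` endpoint. It assumes the `jq` utility is available for JSON parsing; the `<tenant-id>`, `<client-id>`, `<application-secret>`, and `<grafana-url>` placeholders are the same as above.
+
+```bash
+# Minimal sketch: retrieve a token and call the Grafana API in one script.
+# Assumes jq is installed; replace the placeholders with your own values.
+ACCESS_TOKEN=$(curl -s -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
+  -d 'grant_type=client_credentials&client_id=<client-id>&client_secret=<application-secret>&resource=ce34e7e5-485f-4d76-964f-b3d2b16d1e4f' \
+  'https://login.microsoftonline.com/<tenant-id>/oauth2/token' | jq -r '.access_token')
+
+# Call the Grafana API with the retrieved bearer token.
+curl -s -X GET -H "Authorization: Bearer $ACCESS_TOKEN" 'https://<grafana-url>/api/user'
+```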
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Grafana UI](./grafana-app-ui.md)
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
+
+ Title: How to configure data sources for Azure Managed Grafana Preview with Managed Identity
+description: In this how-to guide, discover how you can configure data sources for Azure Managed Grafana using Managed Identity.
++++ Last updated : 3/31/2022 ++
+# How to configure data sources for Azure Managed Grafana Preview with Managed Identity
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./quickstart-managed-grafana-portal.md).
+- An Azure resource containing monitoring data that Managed Grafana has permission to access. Read [how to configure permissions](how-to-permissions.md) for more information.
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+## Supported Grafana data sources
+
+By design, Grafana can be configured with multiple data sources. A data source is an externalized storage backend that holds your telemetry information. Azure Managed Grafana supports many popular data sources. Azure-specific data sources are:
+
+- [Azure Data Explorer](https://github.com/grafana/azure-data-explorer-datasource?utm_source=grafana_add_ds)
+- [Azure Monitor](https://grafana.com/docs/grafana/latest/datasources/azuremonitor/)
+
+Other data sources include:
+
+- [Alertmanager](https://grafana.com/docs/grafana/latest/datasources/alertmanager/)
+- [CloudWatch](https://grafana.com/docs/grafana/latest/datasources/aws-cloudwatch/)
+- Direct Input
+- [Elasticsearch](https://grafana.com/docs/grafana/latest/datasources/elasticsearch/)
+- [Google Cloud Monitoring](https://grafana.com/docs/grafana/latest/datasources/google-cloud-monitoring/)
+- [Graphite](https://grafana.com/docs/grafana/latest/datasources/graphite/)
+- [InfluxDB](https://grafana.com/docs/grafana/latest/datasources/influxdb/)
+- [Jaeger](https://grafana.com/docs/grafana/latest/datasources/jaeger/)
+- [Loki](https://grafana.com/docs/grafana/latest/datasources/loki/)
+- [Microsoft SQL Server](https://grafana.com/docs/grafana/latest/datasources/mssql/)
+- [MySQL](https://grafana.com/docs/grafana/latest/datasources/mysql/)
+- [OpenTSDB](https://grafana.com/docs/grafana/latest/datasources/opentsdb/)
+- [PostgreSQL](https://grafana.com/docs/grafana/latest/datasources/postgres/)
+- [Prometheus](https://grafana.com/docs/grafana/latest/datasources/prometheus/)
+- [Tempo](https://grafana.com/docs/grafana/latest/datasources/tempo/)
+- [TestData DB](https://grafana.com/docs/grafana/latest/datasources/testdata/)
+- [Zipkin](https://grafana.com/docs/grafana/latest/datasources/zipkin/)
+
+You can find all available Grafana data sources by going to your workspace and selecting **Configuration** > **Data sources** > **Add a data source** from the left menu. Search for the data source you need in the list. For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
+
+ :::image type="content" source="media/managed-grafana-how-to-source-plugins.png" alt-text="Screenshot of the Add data source page.":::
+
+## Default data sources in an Azure Managed Grafana workspace
+
+The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow these steps in your workspace endpoint:
+
+1. From the left menu, select **Configuration** > **Data sources**.
+
+ :::image type="content" source="media/managed-grafana-how-to-source-configuration.png" alt-text="Screenshot of the Add data sources page.":::
+
+1. Azure Monitor should be listed as a built-in data source for your workspace. Select **Azure Monitor**.
+1. In **Settings**, authenticate through **Managed Identity** and select your subscription from the dropdown list, or enter your **App Registration** details.
+
+ :::image type="content" source="media/managed-grafana-how-to-source-configuration-Azure-Monitor-settings.png" alt-text="Screenshot of the Azure Monitor page in data sources.":::
+
+Authentication and authorization are then performed through the provided managed identity. With managed identity, you can assign permissions for your Managed Grafana workspace to access Azure Monitor data without having to manually manage service principals in Azure Active Directory (Azure AD).
+
+## Manually assign permissions for Managed Grafana to access data in Azure
+
+Azure Managed Grafana automatically configures the **Monitoring Reader** role for accessing all the Azure Monitor data and Log Analytics resources in your subscription. To change this scope, follow these steps in the Azure portal (an Azure CLI alternative is shown after the steps):
+
+1. Go to the Log Analytics resource that contains the monitoring data you want to visualize.
+1. Select **Access Control (IAM)**.
+1. Search for your Managed Grafana workspace and change the permission.
+
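+Alternatively, you can make the same change from the command line. The following Azure CLI sketch narrows the default subscription-wide **Monitoring Reader** assignment to a single Log Analytics workspace; the placeholder values are examples you need to replace with your own IDs, and the `az grafana` commands assume the Azure Managed Grafana CLI extension is installed.
+
+```bash
+# Look up the principal ID of the workspace's managed identity
+# (one-time setup for the extension: az extension add --name amg).
+PRINCIPAL_ID=$(az grafana show --name <grafana-name> --resource-group <resource-group> \
+  --query identity.principalId --output tsv)
+
+# Remove the default subscription-wide Monitoring Reader assignment.
+az role assignment delete --assignee "$PRINCIPAL_ID" --role "Monitoring Reader" \
+  --scope /subscriptions/<subscription-id>
+
+# Grant Monitoring Reader only on the Log Analytics workspace you want to expose.
+az role assignment create --assignee "$PRINCIPAL_ID" --role "Monitoring Reader" \
+  --scope <log-analytics-workspace-resource-id>
+```
+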
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
managed-grafana How To Monitor Managed Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-monitor-managed-grafana-workspace.md
+
+ Title: 'How to monitor your workspace with logs in Azure Managed Grafana Preview'
+description: Learn how to monitor your workspace in Azure Managed Grafana Preview with logs
++++ Last updated : 3/31/2022 ++
+# How to monitor your workspace with logs in Azure Managed Grafana Preview
+
+In this article, you'll learn how to monitor an Azure Managed Grafana Preview workspace by configuring diagnostic settings and accessing event logs.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure Managed Grafana workspace with access to at least one data source. If you don't have a workspace yet, [create an Azure Managed Grafana workspace](./quickstart-managed-grafana-portal.md) and [add a data source](how-to-data-source-plugins-managed-identity.md).
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+## Add diagnostic settings
+
+To monitor an Azure Managed Grafana workspace, the first step to take is to configure diagnostic settings. In this process, you'll configure the streaming export of your workspace's logs to a destination of your choice.
+
+You can create up to five different diagnostic settings to send different logs to independent destinations.
+
+1. Open a Managed Grafana workspace, and go to **Diagnostic settings**, under **Monitoring**
+
+ :::image type="content" source="media/managed-grafana-monitoring-diagnostic-overview.png" alt-text="Screenshot of the Azure platform. Diagnostic settings.":::
+
+1. Select **+ Add diagnostic setting**
+
+ :::image type="content" source="media/managed-grafana-monitoring-add-settings.png" alt-text="Screenshot of the Azure platform. Add diagnostic settings.":::
+
+1. Enter a unique **Diagnostic setting name** for your diagnostic setting.
+
+1. Select a category: **allLogs** to stream all logs, **audit** to stream audit logs, or **GrafanaLoginEvents** to stream login events. For this example, select **allLogs**.
+
+1. Under **Destination details**, select one or more destinations, fill out details and select **Save**.
+
+ | Destination | Description | Settings |
+ |-|-|-|
+ | Log Analytics workspace | Send data to a Log Analytics workspace | Select the **subscription** containing an existing Log Analytics workspace, then select the **Log Analytics workspace** |
+ | Storage account | Archive data to a storage account | Select the **subscription** containing an existing storage account, then select the **storage account**. Only storage accounts in the same region as the Grafana workspace are displayed in the dropdown menu. |
+ | Event hub | Stream to an event hub | Select a **subscription** and an existing Azure Event Hub **namespace**. Optionally also choose an existing **event hub**. Lastly, choose an **event hub policy** from the list. Only event hubs in the same region as the Grafana workspace are displayed in the dropdown menu. |
+ | Partner solution | Send to a partner solution | Select a **subscription** and a **destination**. For more information about available destinations, go to [partner destinations](/azure/azure-monitor/partners). |
+
+ :::image type="content" source="media/managed-grafana-monitoring-settings.png" alt-text="Screenshot of the Azure platform. Diagnostic settings configuration.":::
+
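+You can also create a diagnostic setting from the command line. The following Azure CLI sketch routes all logs to an existing Log Analytics workspace; the resource IDs are placeholders, and the `categoryGroup` syntax assumes a recent Azure CLI version.
+
+```bash
+# Sketch: route all Managed Grafana logs to a Log Analytics workspace.
+# Replace the placeholders with your own resource IDs.
+az monitor diagnostic-settings create \
+  --name grafana-diagnostics \
+  --resource <grafana-resource-id> \
+  --workspace <log-analytics-workspace-resource-id> \
+  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
+```
+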
+## Access logs
+
+Now that you've configured your diagnostic settings, Azure will stream all new events to your selected destinations and generate logs. You can now create queries and access logs to monitor your application.
+
+1. In your Managed Grafana workspace, select **Logs** from the left menu. The Azure platform displays a **Queries** page, with suggestions of queries to choose from.
+
+ :::image type="content" source="media/managed-grafana-monitoring-logs-menu.png" alt-text="Screenshot of the Azure platform. Open Logs.":::
+
+1. Select a query from the suggestions displayed under the **Queries** page, or close the page to create your own query.
+ 1. To use a suggested query, select a query and select **Run**, or select **Load to editor** to review the code.
+ 1. To create your own query, enter your query in the code editor and select **Run**. You can also perform some actions, such as editing the scope and the range of the query, as well as saving and sharing the query. The result of the query is displayed in the lower part of the screen.
+
+ :::image type="content" source="media/managed-grafana-monitoring-logs-query.png" alt-text="Screenshot of the Azure platform. Log query editing." lightbox="media/managed-grafana-monitoring-logs-query-expanded.png":::
+
+1. Select **Schema and Filter** on the left side of the screen to access tables, queries and functions. You can also filter and group results, as well as find your favorites.
+1. Select **Columns** on the right of **Results** to edit the columns of the results table, and manage the table like a pivot table.
+
+ :::image type="content" source="media/managed-grafana-monitoring-logs-filters.png" alt-text="Screenshot of the Azure platform. Log query filters and columns." lightbox="media/managed-grafana-monitoring-logs-filters-expanded.png":::
+
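+If you prefer to query from the command line, the following sketch runs a simple query against the Log Analytics workspace that receives the Grafana logs. It assumes the logs land in the `AzureDiagnostics` table and that you know the workspace's customer ID (a GUID); verify both assumptions in your environment.
+
+```bash
+# Sketch: list the most recent Grafana diagnostic log entries.
+# <log-analytics-workspace-guid> is the workspace (customer) ID, not the resource ID.
+az monitor log-analytics query \
+  --workspace <log-analytics-workspace-guid> \
+  --analytics-query "AzureDiagnostics | sort by TimeGenerated desc | take 20" \
+  --output table
+```
+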
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Grafana UI](./grafana-app-ui.md)
+> [How to share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
+
+ Title: How to configure permissions for Azure Managed Grafana
+description: Learn how to manually configure access permissions with roles for your Azure Managed Grafana Preview workspace
++++ Last updated : 3/31/2022 ++
+# How to configure permissions for Azure Managed Grafana Preview
+
+By default, when a Grafana workspace is created, Azure Managed Grafana grants it the Monitoring Reader role for all Azure Monitor data and Log Analytics resources within a subscription.
+
+This means that the new Grafana workspace can access and search all monitoring data in the subscription, including viewing the Azure Monitor metrics and logs from all resources, and any logs stored in Log Analytics workspaces in the subscription.
+
+In this article, you'll learn how to manually edit permissions for a specific resource.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./quickstart-managed-grafana-portal.md).
+- An Azure resource containing monitoring data, and a role that lets you manage access to it, such as [User Access Administrator](../../articles/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../articles/role-based-access-control/built-in-roles.md#owner)
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+## Assign permissions for an Azure Managed Grafana workspace to access data in Azure
+
+To edit permissions for a specific resource, follow these steps in the Azure portal (an equivalent Azure CLI command is shown after the steps):
+
+1. Open a resource that contains the monitoring data you want to retrieve. In this example, we're configuring an Application Insights resource.
+1. Select **Access Control (IAM)**.
+1. Under **Grant access to this resource**, select **Add role assignment**.
+
+ :::image type="content" source="media/managed-grafana-how-to-permissions-iam.png" alt-text="Screenshot of the Azure platform to add role assignment in App Insights.":::
+
+1. The portal lists the roles you can give to your Managed Grafana resource. Select a role, for instance **Monitoring Reader**.
+1. Click **Next**.
+ :::image type="content" source="media/managed-grafana-how-to-permissions-role.png" alt-text="Screenshot of the Azure platform and choose Monitor Reader.":::
+
+1. For **Assign access to**, select **Managed Identity**.
+1. Click **Select members**.
+
+ :::image type="content" source="media/managed-grafana-how-to-permissions-members.png" alt-text="Screenshot of the Azure platform selecting members.":::
+
+1. Select the **Subscription** containing your Managed Grafana workspace.
+1. Select a **Managed identity** from the options in the dropdown list.
+1. Select your Managed Grafana workspace from the list.
+1. Click **Select** to confirm.
+
+ :::image type="content" source="media/managed-grafana-how-to-permissions-identity.png" alt-text="Screenshot of the Azure platform selecting the workspace.":::
+
+1. Click **Next**, then **Review + assign** to confirm the application of the new permission.
+
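+The same role assignment can be made with a single Azure CLI command. This is a sketch; `<grafana-managed-identity-principal-id>` and `<resource-id>` are placeholders for the principal ID of your workspace's managed identity and the ID of the resource that holds the monitoring data.
+
+```bash
+# Sketch: grant the Grafana workspace's managed identity read access to a
+# resource's monitoring data. Replace the placeholders with your own values.
+az role assignment create \
+  --assignee <grafana-managed-identity-principal-id> \
+  --role "Monitoring Reader" \
+  --scope <resource-id>
+```
+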
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure data source plugins for Azure Managed Grafana with Managed Identity](./how-to-data-source-plugins-managed-identity.md)
managed-grafana How To Share Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md
+
+ Title: How to share an Azure Managed Grafana Preview workspace
+description: 'Azure Managed Grafana: learn how you can share access permissions and dashboards with your team and customers.'
++++ Last updated : 3/31/2022 ++
+# How to share an Azure Managed Grafana Preview workspace
+
+A DevOps team may build dashboards to monitor and diagnose an application or infrastructure that it manages. Likewise, a support team may use a Grafana monitoring solution to troubleshoot customer issues. In these scenarios, multiple users access a single Grafana workspace. Azure Managed Grafana enables such sharing by allowing you to set custom permissions on a workspace that you own. This article explains which permissions are supported and how to grant them to share dashboards with your internal teams or external customers.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./quickstart-managed-grafana-portal.md).
+
+## Supported Grafana roles
+
+Azure Managed Grafana supports the Admin, Viewer, and Editor roles:
+
+- The Admin role provides full control of the workspace, including viewing, editing, and configuring data sources.
+- The Editor role provides read-write access to the dashboards in the workspace.
+- The Viewer role provides read-only access to dashboards in the workspace.
+
+The Admin role is automatically assigned to the creator of a Grafana workspace. More details on Admin, Editor, and Viewer roles can be found at [Grafana organization roles](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#compare-roles).
+
+Grafana user roles and assignments are fully integrated with Azure Active Directory. You can manage these permissions from the Azure portal or the command line. This section explains how to assign users to the Viewer or Editor role in the Azure portal.
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+## Assign an Admin, Viewer or Editor role to a user
+
+1. Open your Managed Grafana workspace.
+1. Select **Access control (IAM)** in the navigation menu.
+1. Click **Add**, then **Add role assignment**.
+
+ :::image type="content" source="media/managed-grafana-how-to-share-IAM.png" alt-text="Screenshot of Add role assignment in the Azure platform.":::
+
+1. Select one of the Grafana roles to assign to a user or security group. The available roles are:
+
+ - Grafana Admin
+ - Grafana Editor
+ - Grafana Viewer
+
+ :::image type="content" source="media/managed-grafana-how-to-share-role-assignment.png" alt-text="Screenshot of the Grafana roles in the Azure platform.":::
+
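+You can also assign these roles with the Azure CLI. The sketch below grants a user the Grafana Viewer role on the workspace; the object ID and resource ID are placeholders to replace with your own values.
+
+```bash
+# Sketch: assign the built-in Grafana Viewer role to a user or group on the workspace.
+# Use "Grafana Admin" or "Grafana Editor" for the other roles.
+az role assignment create \
+  --assignee <user-or-group-object-id> \
+  --role "Grafana Viewer" \
+  --scope <grafana-workspace-resource-id>
+```
+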
+> [!NOTE]
+> Dashboard and data source level sharing is done from within the Grafana application. For more details, refer to [Grafana permissions](https://grafana.com/docs/grafana/latest/permissions/).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure permissions for Managed Grafana](./how-to-permissions.md)
+> [Configure data source plugins for Azure Managed Grafana with Managed Identity](./how-to-data-source-plugins-managed-identity.md)
+> [How to call Grafana APIs in your automation with Azure Managed Grafana Preview](./how-to-api-calls.md)
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
+
+ Title: What is Azure Managed Grafana Preview?
+description: Read an overview of Azure Managed Grafana. Understand why and how to use Managed Grafana.
++++ Last updated : 3/31/2022
+
+
+# What is Azure Managed Grafana Preview?
+
+Azure Managed Grafana is a data visualization platform built on top of the Grafana software by Grafana Labs. It's built as a fully managed Azure service operated and supported by Microsoft. Grafana helps you bring together metrics, logs and traces into a single user interface. With its extensive support for data sources and graphing capabilities, you can view and analyze your application and infrastructure telemetry data in real-time.
+
+Azure Managed Grafana is optimized for the Azure environment. It works seamlessly with many Azure services. Specifically, for the current preview, it provides the following integration features:
+
+* Built-in support for Azure Monitor and Azure Data Explorer
+* User authentication and access control using Azure Active Directory identities
+* Direct import of existing charts from Azure portal
+
+To learn more about how Grafana works, visit the [Getting Started documentation](https://grafana.com/docs/grafana/latest/getting-started/) on the Grafana Labs website.
+
+## Why use Azure Managed Grafana Preview?
+
+Managed Grafana lets you bring together all your telemetry data into one place. It can access a wide variety of supported data sources, including your data stores in Azure and elsewhere. By combining charts, logs, and alerts into one view, you can get a holistic view of your application and infrastructure, and correlate information across multiple datasets.
+
+As a fully managed service, Azure Managed Grafana lets you deploy Grafana without having to deal with setup. The service provides high availability, SLA guarantees, and automatic software updates.
+
+You can share Grafana dashboards with people inside and outside of your organization and allow others to join in for monitoring or troubleshooting.
+
+Managed Grafana uses the centralized identity management of Azure Active Directory (Azure AD), which lets you control which users can use a Grafana instance. You can also use managed identities to access Azure data stores, such as Azure Monitor.
+
+You can create dashboards instantaneously by importing existing charts directly from the Azure portal or by using prebuilt dashboards.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create a workspace in Azure Managed Grafana Preview using the Azure portal](./quickstart-managed-grafana-portal.md).
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
+
+ Title: 'Quickstart: create a workspace in Azure Managed Grafana Preview using the Azure portal'
+description: Learn how to create a Managed Grafana workspace using the Azure portal
++++ Last updated : 03/31/2022
+
+
+# Quickstart: Create a workspace in Azure Managed Grafana Preview using the Azure portal
+
+Get started by using the Azure portal to create a new workspace in Azure Managed Grafana Preview.
+
+## Prerequisite
+
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+
+## Create a Managed Grafana workspace
+
+1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+
+1. In the upper-left corner of the home page, select **Create a resource**. In the **Search services and Marketplace** box, enter *Grafana* and select **Enter**.
+
+1. Select **Grafana Workspaces** from the search results, and then **Create**.
+
+ :::image type="content" source="media/managed-grafana-quickstart-portal-grafana-create.png" alt-text="Screenshot of the Azure portal. Create Grafana workspace.":::
+
+1. In the Create Grafana Workspace pane, enter the following settings.
+
+ :::image type="content" source="media/managed-grafana-quickstart-portal-form.png" alt-text="Screenshot of the Azure portal. Create workspace form.":::
+
+ | Setting | Sample value | Description |
+ ||||
+ | Subscription ID | mysubscription | Select the Azure subscription you want to use. |
+ | Resource group name | myresourcegroup | Select or create a resource group for your Azure Managed Grafana resources. |
+ | Location | East US | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. |
+ | Name | mygrafanaworkspace | Enter a unique resource name. It will be used as the domain name in your workspace URL. |
+
+1. Select **Next : Permission >** to configure access rights for your Grafana dashboard and data sources:
+ 1. Make sure the **System assigned identity** is set to **On**. The box **Add role assignment to this identity with 'Monitoring Reader' role on target subscription** should also be checked for this Managed Identity to get access to your current subscription.
+
+ 1. Make sure that you're listed as a Grafana administrator. You can also add more users as administrators at this point or later.
+
+    If you uncheck this option (or if the option is grayed out), someone with the Owner role on the subscription can later assign the Grafana Admin role to you.
+
+ > [!NOTE]
+    > If creating a Grafana workspace fails the first time, please try again. The failure might be due to a limitation in our backend, and we are actively working to fix it.
+
+1. Optionally select **Next : Tags** and add tags to categorize resources.
+
+1. Select **Next : Review + create >** and then **Create**. Your Azure Managed Grafana resource is deploying.
+
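+If you prefer the command line, you can create a workspace with the Azure CLI instead. This is a sketch that assumes the Azure Managed Grafana CLI extension (`amg`) is installed and that the resource group already exists; the names are placeholders.
+
+```bash
+# Sketch: create a Managed Grafana workspace with the Azure CLI.
+az extension add --name amg   # one-time: install the Managed Grafana extension
+az grafana create --name <workspace-name> --resource-group <resource-group>
+```
+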
+## Connect to your Managed Grafana workspace
+
+1. Once the deployment is complete, select **Go to resource** to open your resource.
+
+ :::image type="content" source="media/managed-grafana-quickstart-portal-deployment-complete.png" alt-text="Screenshot of the Azure portal. Message: Your deployment is complete.":::
+
+1. In the **Overview** tab's Essentials section, note the **Endpoint** URL. Open it to access the newly created Managed Grafana workspace. Single sign-on via Azure Active Directory should have been configured for you automatically. If prompted, sign in with your Azure account.
+
+ :::image type="content" source="media/managed-grafana-quickstart-workspace-overview.png" alt-text="Screenshot of the Azure portal. Endpoint URL display.":::
+
+ :::image type="content" source="media/managed-grafana-quickstart-portal-grafana-workspace.png" alt-text="Screenshot of a Managed Grafana dashboard.":::
+
+You can now start interacting with the Grafana application to configure data sources and create dashboards, reports, and alerts.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure permissions for Azure Managed Grafana Preview](./how-to-permissions.md)
+> [Configure data source plugins for Azure Managed Grafana Preview with Managed Identity](./how-to-data-source-plugins-managed-identity.md)
marketplace Customer Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/customer-dashboard.md
Previously updated : 9/27/2021 Last updated : 04/18/2022 # Customers dashboard in commercial marketplace analytics
_**Table 1: Dictionary of data terms**_
| Customer ID | Customer ID | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. | CustomerId | | Billing Account ID | Billing Account ID | The identifier of the account on which billing is generated. Map **Billing Account ID** to **customerID** to connect your Payout Transaction Report with the Customer, Order, and Usage Reports. | BillingAccountId | | Customer Type | Customer Type | The value of this field signifies the type of the customer. The possible values are:<ul><li>individual</li> <li>organization</li></ul> | CustomerType |
-|||||
+| OfferName | OfferName | The name of the commercial marketplace offer | OfferName|
+| PlanID | PlanID | The display name of the plan entered when the offer was created in Partner Center | PlanID |
+| SKU | SKU | The plan associated with the offer | SKU |
+| N/A | lastModifiedAt | The latest timestamp for customer purchases. Use this field, via programmatic API access, to pull the latest snapshot of all customer purchase transactions since a specific date | lastModifiedAt |
### Customers page filters
marketplace Deprecate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/deprecate-vm.md
+
+ Title: Deprecate or restore a virtual machine offer from Azure Marketplace
+description: Deprecate or restore a virtual machine, image, plan, or offer.
++++++ Last updated : 04/18/2022++
+# Deprecate or restore a virtual machine offer
+
+This article describes how to deprecate or restore virtual machine images, plans, and offers. The deprecation feature replaces the _stop sell_ feature. It complies with the Azure 90-day wind down period as it allows deprecation to be scheduled in advance.
+
+## What is deprecation?
+
+Deprecation is the delisting of a VM offer, or a subset of the offer, from Azure Marketplace so that customers can no longer deploy additional instances from it. Reasons to deprecate vary; common ones include security issues or end of life. You can deprecate image versions, plans, or an entire VM offer:
+
+- **Deprecation of an image version** - The removal of an individual VM image version
+- **Deprecation of a plan** - The removal of a plan and subsequently all images within the plan
+- **Deprecation of an offer** - The removal of an entire VM offer, including all plans within the offer and subsequently all images within each plan
+
+To ensure your customers are provided with ample notification, deprecation is scheduled in advance.
+
+> [!IMPORTANT]
+> Existing deployments are not impacted by deprecation.
+
+## How deprecation affects customers
+
+Here are some important things to understand about the deprecation process.
+
+Before the scheduled deprecation date:
+
+- Customers with active deployments are notified.
+- Customers can continue to deploy new instances up until the deprecation date.
+- If deprecating an offer or plan, the offer or plan will no longer be available in the marketplace. This is to reduce the discoverability of the offer or plan.
+
+After the scheduled deprecation date:
+
+- Customers will not be able to deploy new instances using the affected images. If deprecating a plan, all images within the plan will no longer be available and if deprecating an offer, all images within the offer will no longer be available following deprecation.
+- Active VM instances will not be impacted.
+- Existing virtual machine scale sets (VMSS) deployments cannot be scaled out if configured with any of the impacted images. If deprecating a plan or offer, all existing VMSS deployments pinned to any image within the plan or offer respectively cannot be scaled out.
+
+> [!TIP]
+> Before you deprecate an offer or plan, make sure you understand the current usage by reviewing the [Usage dashboard in commercial marketplace analytics](usage-dashboard.md). If usage is high, consider hiding the plan or offer to minimize discoverability within the commercial marketplace. This will steer new customers towards other available options.
+
+## Deprecate an image
+
+Keep the following things in mind when deprecating an image:
+
+- You can deprecate any image within a plan.
+- Each plan must have at least one image.
+- Publish the offer after scheduling the deprecation of an image.
+- Images that are published to preview can be deprecated or deleted immediately.
+
+**To deprecate an image**:
+
+1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer with the image you want to deprecate.
+1. On the **Offer overview** page, under **Plan overview**, select the plan with the image.
+1. In the left nav, select the **Technical Configuration** page.
+1. Under **VM images**, select the **Active** tab.
+1. In the **Action** column, select **Deprecate** for the image you want to deprecate. Upon confirming the deprecation, the image is listed on the **Deprecated** tab.
+1. Save your changes on the **Technical configuration** page.
+1. For the change to take effect and for customers to be notified, select **Review and publish** and publish the offer.
+
+## Restore a deprecated image
+
+Keep the following things in mind when restoring a deprecated image:
+
+- Publish the offer after restoring an image for it to become available to customers.
+- You can undo or cancel the deprecation anytime up until the scheduled date.
+- You can restore an image for a period of time after deprecation. After the window has expired, the image can no longer be restored.
+
+**To restore a deprecated image**:
+
+1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer with the image you want to restore.
+1. On the **Offer overview** page, under **Plan overview**, select the plan with the image.
+1. In the left nav, select the **Technical configuration** page.
+1. Under **VM images**, select the **Deprecated** tab. The status is shown in the **Status** column.
+1. In the **Action** column, select one of the following:
+ - If the deprecation date shown in the **Status** column is in the future, you can select **Cancel deprecation**. The image version will then be listed under the Active tab.
+ - If the deprecation date shown in the **Status** column is in the past, select **Restore image**. The image is then listed on the **Active** tab.
+ > [!NOTE]
+ > If the image can no longer be restored, then no actions will be available.
+1. Save your changes on the **Technical configuration** page.
+1. For the change to take effect, select **Review and publish** and publish the offer.
+
+## Deprecate a plan
+
+Keep the following things in mind when deprecating a plan:
+
+- Publish the offer after scheduling the deprecation of a plan.
+- Upon scheduling the deprecation of a plan, free trials are disabled immediately.
+- If a test drive is enabled on your offer and it's configured to use the plan that's being deprecated, be sure to reconfigure the test drive to use another plan in the offer. Otherwise, disable the test drive on the **Offer Setup** page.
+
+**To deprecate a plan**:
+
+1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer with the plan you want to deprecate.
+1. On the **Offer overview** page, under **Plan overview**, in the **Action** column, select **Deprecate plan**.
+1. In the confirmation box that appears, enter the Plan ID and confirm that you want to deprecate the plan.
+1. For the change to take effect and for customers to be notified, select **Review and publish** and publish the offer.
+
+## Restore a deprecated plan
+
+Keep the following things in mind when restoring a plan:
+
+- Ensure that there is at least one active image version available on the **Technical Configuration** page of the plan. You can either restore a deprecated image or provide a new one.
+- Publish the offer after restoring a plan for it to become available to customers.
+
+**To restore a plan**:
+
+1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer with the plan you want to restore.
+1. On the **Offer overview** page, under **Plan overview**, in the **Action** column of the plan you want to restore, select **Restore plan**.
+1. In the confirmation dialog box that appears, confirm that you want to restore the plan.
+1. Ensure that there is at least one active image version available on the **Technical Configuration** page of the plan. Note that all deprecated images are listed under **VM Images** on the **Deprecated** tab. You can either [restore a deprecated image](#restore-a-deprecated-image) or [add a new VM image](azure-vm-plan-technical-configuration.md#vm-images). Remember, if the restore window has expired, the image can no longer be restored.
+1. Save your changes on the **Technical configuration** page.
+1. For the change to take effect, select **Review and publish** and publish the offer.
+
+## Deprecate an offer
+
+On the **Offer Overview** page, you can deprecate the entire offer. This deprecates all plans and images within the offer.
+
+Keep the following things in mind when deprecating an offer:
+
+- The deprecation will be scheduled 90 days into the future and customers will be notified.
+- Test drive and any free trials will be disabled immediately upon scheduling deprecation of an offer.
+
+**To deprecate an offer**:
+
+1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer you want to deprecate.
+1. On the **Offer overview** page, in the upper right, select **Deprecate offer**.
+1. In the confirmation dialog box that appears, enter the Offer ID and then confirm that you want to deprecate the offer.
+ > [!NOTE]
+ > On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, the **Status column** of the offer will say **Deprecation scheduled**. On the **Offer overview** page, under **Publish status**, the scheduled deprecation date is shown.
+
+## Restore a deprecated offer
+
+You can restore an offer only if the offer contains at least one active plan and at least one active image.
+
+**To restore a deprecated offer**:
+
+1. On the [Marketplace offers](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, in the **Offer alias** column, select the offer you want to restore.
+1. In the left nav, select **Plan overview**.
+1. In the **Action** column of the plan you want to restore, select **Restore**. You can optionally [create a new plan](azure-vm-plan-overview.md) within the offer.
+1. Ensure that there is at least one active image version available on the **Technical Configuration** page of the plan. Note that all deprecated images are listed under **VM Images** on the **Deprecated** tab. You can either [restore a deprecated image](#restore-a-deprecated-image) or [add a new VM image](azure-vm-plan-technical-configuration.md#vm-images). Remember, if the restore window has expired, the image can no longer be restored.
+1. Save your changes on the **Technical configuration** page.
+1. For the changes to take effect, select **Review and publish** and publish the offer.
marketplace Revenue Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/revenue-dashboard.md
Previously updated : 04/06/2022 Last updated : 04/18/2022 # Revenue dashboard in commercial marketplace analytics
In the lower left of most widgets, you'll see a thumbs up and thumbs down icon
## Data dictionary table
-| Data field | Definition |
+| Column name in user interface | Definition |
|-||
-| <img width=130/> | |
| Billed revenue | Represents billed sales of a partner for customer's offer purchases and consumption through the commercial marketplace. This is in transaction currency and will always be present in download reports. | | Estimated revenue (USD) | Estimated revenue reported in US dollars. This column will always be present in download reports. | | Estimated revenue (PC) | Estimated revenue reported in partner preferred currency. This column will always be present in download reports. |
In the lower left of most widgets, youΓÇÖll see a thumbs up and thumbs down icon
| Exchange rate date | The date used to calculate exchange rates for currency conversions | | Estimated pay out month | The month for receiving your estimated earnings | | Sales channel | Represents the sales channel for the customer. It is the same as `Azure license type` in the orders report and usage report. The possible values are:<ul><li>Cloud Solution Provider (CSP)</li><li>Enterprise (EA)</li><li>Enterprise through Reseller</li><li>Pay as You Go</li><li>Go to market (GTM)</li></ul> |
-| Plan Id | Unique identifier for the plan in the offer |
+| PlanId | The display name of the plan entered when the offer was created in Partner Center. Note that PlanId was originally a numeric value. |
| Billing model | Subscription or consumption-based billing model used for calculation of estimated revenue. It can have one of these two values:<ul><li>UsageBased</li><li>SubscriptionBased</li></ul> | | Customer postal code | The postal code name provided by the bill-to customer | | Customer city | The city name provided by the bill-to customer |
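
As a hedged illustration of how these columns can be used together, the following sketch aggregates a downloaded revenue report with pandas. The file name is a placeholder, and the column headers are assumed to match the names shown in the user interface above.

```python
# Minimal sketch: summarize a downloaded revenue report, assuming a hypothetical
# CSV export whose headers match the column names shown in the user interface.
import pandas as pd

report = pd.read_csv("revenue-report.csv")  # placeholder file name

# Total estimated revenue (USD) per estimated pay out month and sales channel.
summary = (
    report.groupby(["Estimated pay out month", "Sales channel"])["Estimated revenue (USD)"]
    .sum()
    .reset_index()
)
print(summary)
```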
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Previously updated : 04/13/2022 Last updated : 04/18/2022 # What's new in the Microsoft commercial marketplace
Learn about important updates in the commercial marketplace program of Partner C
| Offers | While [private plans](private-plans.md) were previously only available on the Azure portal, they are now also available on Microsoft AppSource. | 2021-09-10 | | Analytics | Publishers of Azure application offers can view offer deployment health in the Quality of service (QoS) reports. QoS helps publishers understand the reasons for offer deployment failures and provides actionable insights for their remediation. For details, see [Quality of service (QoS) dashboard](quality-of-service-dashboard.md). | 2021-09-07 | | Policy | The SaaS customer [refund window](/marketplace/refund-policies) is now [72 hours](./marketplace-faq-publisher-guide.yml) for all offers. | 2021-09-01 |
-|
## Tax updates
Learn about important updates in the commercial marketplace program of Partner C
| Taxation | - Kenya, Moldova, Tajikistan, and Uzbekistan were moved from the Publisher/Developer managed list to the [End-customer taxation with differences in marketplace](/partner-center/tax-details-marketplace) list to show the difference in treatment between the two Marketplaces. <br> - Rwanda and Qatar were added to the [Publisher/Developer managed countries](/partner-center/tax-details-marketplace) list. <br> - Barbados was moved from the [Publisher/Developer managed countries](/partner-center/tax-details-marketplace) list to [Microsoft Managed country](/partner-center/tax-details-marketplace) list. | 2022-02-10 | | Payouts | We've updated the external tax form page, including instructions on how to reconcile 1099-k forms; see questions about tax forms at [Understand IRS tax forms issued by Microsoft](/partner-center/understand-irs-tax-forms). | 2022-01-06 | | Taxation | Nigeria and Thailand are now [Microsoft-managed countries](/partner-center/tax-details-marketplace) in Azure Marketplace. | 2021-09-13 |
-|
## Documentation updates | Category | Description | Date | | | - | - |
+| Policy | We've updated the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). For change history, see [Change history for Microsoft Publisher Agreement version 8.0 ΓÇô May 2022 update](/legal/marketplace/mpa-change-history-may-2022). | 2022-04-15 |
| Offers | Added new articles to lead you step-by-step through the process of [testing a SaaS offer](test-saas-overview.md). | 2022-03-30 | | Payouts | We updated the payment schedule for [Payout schedules and processes](/partner-center/payout-policy-details). | 2022-01-19 | | Analytics | Added questions and answers to the [Commercial marketplace analytics FAQ](./analytics-faq.yml), such as enrolling in the commercial marketplace, where to create a marketplace offer, getting started with programmatic access to commercial marketplace analytics reports, and more. | 2022-01-07 |
Learn about important updates in the commercial marketplace program of Partner C
| Offers | We moved the list of categories and industries from our [Marketing Best Practices](gtm-offer-listing-best-practices.md) topic to their [own page](marketplace-categories-industries.md). | 2021-08-20 | | Offers | The [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md) topic now includes a flowchart to help you determine the appropriate transactable offer type and pricing plan to sell your software in the commercial marketplace. | 2021-08-18 | | Policy | Updated [certification](/legal/marketplace/certification-policies?context=/azure/marketplace/context/context) policy; see [change history](/legal/marketplace/offer-policies-change-history). | 2021-08-06 |
-|
migrate Replicate Using Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/replicate-using-expressroute.md
Title: Replicate data over ExpressRoute with Azure Migrate Server Migration
+ Title: Replicate data over ExpressRoute for Azure Migrate projects with public endpoint connectivity
description: Use Azure ExpressRoute for replication with Azure Migrate Server Migration.
Last updated 02/22/2021
-# Replicate data over ExpressRoute with Azure Migrate: Server Migration
+# Replicate data over ExpressRoute for Azure Migrate projects with public endpoint connectivity
-In this article, you'll learn how to configure the [Azure Migrate: Server Migration](./migrate-services-overview.md#azure-migrate-server-migration-tool) tool to replicate data over an Azure ExpressRoute circuit while you migrate servers to Azure.
+In this article, you'll learn how to configure the [Azure Migrate: Server Migration](./migrate-services-overview.md#azure-migrate-server-migration-tool) tool to replicate data over an Azure ExpressRoute circuit while you migrate servers to Azure. Refer to this article if you want to use ExpressRoute for replication when using an Azure Migrate project with public endpoint connectivity. To use private endpoint support, create a new Azure Migrate project with private endpoint connectivity. See [Using Azure Migrate with private endpoints](./how-to-use-azure-migrate-with-private-endpoints.md).
## Understand Azure ExpressRoute circuits
In this article, you'll learn how to replicate data by using:
> * An ExpressRoute circuit with private peering. > * An ExpressRoute circuit with Microsoft peering.
+> [!Important]
+> Refer to this article if you want to use ExpressRoute for replication when using an Azure Migrate project with public endpoint connectivity.<br>
+> To use private endpoint support end-to-end, create a new Azure Migrate project with private endpoint connectivity. See [Using Azure Migrate with private endpoints](./how-to-use-azure-migrate-with-private-endpoints.md).
+ ## Replicate data by using an ExpressRoute circuit with private peering
-> [!Note]
-> This article shows how to replicate over a private peering circuit for [agentless migration of VMware virtual machines to Azure](./tutorial-migrate-vmware.md). To use private endpoint support for [other replication methods](./migrate-services-overview.md#azure-migrate-server-migration-tool), see [Using Azure Migrate with private endpoints](./how-to-use-azure-migrate-with-private-endpoints.md).
-
In the agentless method for migrating VMware virtual machines to Azure, the Azure Migrate appliance first uploads replication data to a storage account (cache storage account) in your subscription. Azure Migrate then moves the replicated data from the cache storage account to replica-managed disks in your subscription. To use a private peering circuit for replication, you'll create and attach a private endpoint to the cache storage account. Private endpoints use one or more private IP addresses from your virtual network, which effectively brings the storage account into your Azure virtual network. The private endpoint allows the Azure Migrate appliance to connect to the cache storage account by using ExpressRoute private peering. Data can then be transferred directly on the private IP address. <br/>
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This release of Azure Database for MySQL - Flexible Server includes the followin
- **GitHub actions support with Azure CLI**
- Flexible Server CLI now allows customers to automate workflows to deploy updates with GitHub actions. This feature helps set up and deploy database updates with MySQL GitHub action workflow. These CLI commands assist with setting up a repository to enable continuous deployment for ease of development. [Learn more](/cli/azure/mysql/flexible-server/deploy).
+  Flexible Server CLI now allows customers to automate workflows to deploy updates with GitHub Actions. This feature helps set up and deploy database updates with a MySQL GitHub Actions workflow. These CLI commands assist with setting up a repository to enable continuous deployment for ease of development. [Learn more](/cli/azure/mysql/flexible-server/deploy).
- **Zone redundant HA forced failover fixes**
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
After creating and connecting to the cluster, install the Open Liberty Operator.
4. Select **Install**. 5. In the page **Install Operator**, check **beta2** for **Update channel**, **All namespaces on the cluster (default)** for **Installation mode**, and **Automatic** for **Update approval**:
- ![create operator subscription for Open Liberty Operator](./media/howto-deploy-java-liberty-app/install-operator.png)
+ ![Screenshot of creating operator subscription for Open Liberty Operator.](./media/howto-deploy-java-liberty-app/install-operator.png)
6. Select **Install** and wait a minute or two until the installation completes. 7. Observe the Open Liberty Operator is successfully installed and ready for use. If you don't, diagnose and resolve the problem before continuing.
Follow the instructions below to create an OpenShift namespace for use with your
2. Navigate to **Administration** > **Namespaces** > **Create Namespace**. 3. Fill in `open-liberty-demo` for **Name** and select **Create**, as shown next.
- ![create namespace](./media/howto-deploy-java-liberty-app/create-namespace.png)
+ ![Screenshot of creating namespace.](./media/howto-deploy-java-liberty-app/create-namespace.png)
### Create an Azure Database for MySQL
Follow the instructions below to set up an Azure Database for MySQL for use with
2. Select **Add current client IP address**. 3. Set **Minimal TLS Version** to **>1.0** and select **Save**.
- ![configure mysql database connection security rule](./media/howto-deploy-java-liberty-app/configure-mysql-database-connection-security.png)
+ ![Screenshot of configuring mysql database connection security rule.](./media/howto-deploy-java-liberty-app/configure-mysql-database-connection-security.png)
3. Open **your SQL database** > **Connection strings** > Select **JDBC**. Write down the **Port number** following sql server address. For example, **3306** is the port number in the example below.
In the sample application, we've prepared Dockerfile-local and Dockerfile-wlp-lo
1. Open `http://localhost:9080/` in your browser to visit the application home page. The application will look similar to the following image:
- ![JavaEE Cafe Web UI](./media/howto-deploy-java-liberty-app/javaee-cafe-web-ui.png)
+ ![Screenshot of JavaEE Cafe Web UI.](./media/howto-deploy-java-liberty-app/javaee-cafe-web-ui.png)
1. Press **Control-C** to stop the application and Open Liberty server. The directory `2-simple` of your local clone shows the Maven project with the above changes already applied.
Because we use the Open Liberty Operator to manage Liberty applications, we need
1. In the middle of the page, select **Open Liberty Operator**. 1. In the middle of the page, select **Open Liberty Application**. The navigation of items in the user interface mirrors the actual containment hierarchy of technologies in use. <!-- Diagram source https://github.com/Azure-Samples/open-liberty-on-aro/blob/master/diagrams/aro-java-containment.vsdx -->
- ![ARO Java Containment](./media/howto-deploy-java-liberty-app/aro-java-containment.png)
+ ![Diagram of ARO Java Containment.](./media/howto-deploy-java-liberty-app/aro-java-containment.png)
1. Select **Create OpenLibertyApplication** 1. Replace the generated yaml with yours, which is located at `<path-to-repo>/3-integration/connect-db/mysql/target/openlibertyapplication.yaml`. 1. Select **Create**. You'll be returned to the list of OpenLibertyApplications.
Because we use the Open Liberty Operator to manage Liberty applications, we need
1. In the middle of the page, select **Open Liberty Operator**. 1. In the middle of the page, select **Open Liberty Application**. The navigation of items in the user interface mirrors the actual containment hierarchy of technologies in use. <!-- Diagram source https://github.com/Azure-Samples/open-liberty-on-aro/blob/master/diagrams/aro-java-containment.vsdx -->
- ![ARO Java Containment](./media/howto-deploy-java-liberty-app/aro-java-containment.png)
+ ![Diagram of ARO Java Containment.](./media/howto-deploy-java-liberty-app/aro-java-containment.png)
1. Select **Create OpenLibertyApplication** 1. Replace the generated yaml with yours, which is located at `<path-to-repo>/2-simple/openlibertyapplication.yaml`. 1. Select **Create**. You'll be returned to the list of OpenLibertyApplications.
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
After you restore the database, you can perform the following tasks to get your
No. Currently, Flexible Server supports a maximum of 35 days of retention. You can use manual backups for a long-term retention requirement.
-* **How do I manually back up my Postgres servers?**
+* **How do I manually back up my PostgreSQL servers?**
You can manually take a backup by using the PostgreSQL tool [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html). For examples, see [Migrate your PostgreSQL database by using dump and restore](../howto-migrate-using-dump-and-restore.md).
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
To register your resource, follow the **Prerequisites** and **Register** section
- [Register multiple sources in Azure Purview](register-scan-azure-multiple-sources.md#prerequisites)
-After you've registered your resources, you'll need to enable data use governance. Data use governance affects the security of your data, as it allows your users to manage access to resources from within Azure Purview.
+After you've registered your resources, you'll need to enable data use governance. Data use governance affects the security of your data, as it delegates to certain users the ability to manage access to data resources from within Azure Purview.
To ensure you securely enable data use governance, and follow best practices, follow this guide to enable data use governance for your resource group or subscription:
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
Previously updated : 04/15/2022 Last updated : 04/18/2022
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[Policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data use governance* in Azure Purview.
-
+[Access policies](concept-data-owner-policies.md) allow you to enable access to data sources that have been registered for *Data use governance* in Azure Purview.
This article describes how a data owner can delegate to Azure Purview the management of access to Azure Storage datasets. Currently, these two Azure Storage sources are supported: - Blob storage - Azure Data Lake Storage (ADLS) Gen2
To register your resources, follow the **Prerequisites** and **Register** sectio
- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Azure Purview](register-scan-adls-gen2.md#prerequisites)
-After you've registered your resources, you'll need to enable *Data use governance*. Data use governance can affect the security of your data, as it allows certain Azure Purview roles to manage access to data sources that have been registered. Secure practices related to *Data use governance* are described in this guide:
+After you've registered your resources, you'll need to enable *Data use governance*. Data use governance can affect the security of your data, as it delegates to certain Azure Purview roles the ability to manage access to data sources that have been registered. Secure practices related to *Data use governance* are described in this guide:
- [How to enable data use governance](./how-to-enable-data-use-governance.md)
-The expected outcome is that your data source will have the **Data use governance** toggle **Enabled**, as shown in the picture:
+Once your data source has the **Data use governance** toggle **Enabled**, it will look like this picture:
:::image type="content" source="./media/how-to-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Screenshot that shows how to register a data source for policy by toggling the enable tab in the resource editor."::: ## Create and publish a data owner policy
-Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
+Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish an access policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
:::image type="content" source="./media/how-to-data-owner-policies-storage/data-owner-policy-example-storage.png" alt-text="Screenshot that shows a sample data owner policy giving access to an Azure Storage account."::: - >[!Important] > - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in Storage account(s). - ## Additional information - Policy statements set below container level on a Storage account are supported. If no access has been provided at Storage account level or container level, then the App that requests the data must execute a direct access by providing a fully qualified name to the data object. If the App attempts to crawl down the hierarchy starting from the Storage account or Container (like Storage Explorer does), and there's no access at that level, the request will fail. The following documents show examples of how to perform a direct access. See also the blogs in the *Next steps* section of this how-to-guide. - [*abfs* for ADLS Gen2](../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md#access-files-from-the-cluster)
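
As a hedged illustration of such a direct access (separate from the linked documents), the following sketch reads a single blob by its fully qualified name with the `azure-storage-blob` Python SDK instead of enumerating the account or container hierarchy. The container and blob path are placeholders; the storage account name reuses the *marketinglake1* example above.

```python
# Minimal sketch: direct access to one data object by fully qualified name,
# assuming the azure-storage-blob and azure-identity packages.
# The container and blob path below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob_url = (
    "https://marketinglake1.blob.core.windows.net/"
    "rawdata/sales/2022/04/transactions.csv"
)

blob_client = BlobClient.from_blob_url(blob_url, credential=DefaultAzureCredential())

# Download the object directly; no listing of the account or container is attempted,
# so this works even when no access has been granted at those levels.
data = blob_client.download_blob().readall()
print(f"Downloaded {len(data)} bytes")
```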
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Previously updated : 3/02/2022 Last updated : 4/18/2022 # Authoring and publishing data owner access policies (Preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-Access policies allow data owners to manage access to datasets from Azure Purview. Data owners can author policies directly from Azure Purview Studio, and then have those policies enforced by the data source.
-
-This tutorial describes how a data owner can create, update, and publish access policies in Azure Purview Studio.
+Access policies allow a data owner to delegate access management for a data source to Azure Purview. These policies can be authored directly in Azure Purview Studio, and after they're published, they're enforced by the data source. This tutorial describes how to create, update, and publish access policies in Azure Purview Studio.
## Prerequisites
-### Required permissions
-
->[!IMPORTANT]
-> - Currently, policy operations are only supported at **root collection level** and not child collection level.
-
-These permissions are required in Azure Purview at root collection level:
-- *Policy authors* role can create or edit policies.-- *Data source administrator* role can publish a policy.-
-For more information, see the guide on [managing Azure Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
+## Configuration
### Data source configuration
purview How To Enable Data Use Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-governance.md
Previously updated : 3/24/2022 Last updated : 4/18/2022
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-*Data use governance* (DUG) is an option in the data source registration in Azure Purview. Its purpose is to make those data sources available in the policy authoring experience of Azure Purview Studio. In other words, access policies can only be written on data sources that have been previously registered and with DUG toggle set to enable.
+*Data use governance* (DUG) is an option (enabled/disabled) displayed when registering a data source in Azure Purview. Its purpose is to make that data source available in the policy authoring experience of Azure Purview Studio. In other words, access policies can only be written on data sources that have been previously registered with the DUG toggle enabled.
## Prerequisites [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
To disable data use governance for a source, resource group, or subscription, a
## Additional considerations related to Data use governance - Make sure you write down the **Name** you use when registering in Azure Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.-- To disable a source for *Data use governance*, remove it first from being bound (i.e. published) in any policy.
+- To disable a source for *Data use governance*, first remove it from any policy in which it is bound (that is, published).
- While a user needs both the data source *Owner* and Azure Purview *Data source admin* roles to enable a source for *Data use governance*, either of those roles can independently disable it. - Disabling *Data use governance* for a subscription also disables it for all assets registered in that subscription.
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
Previously updated : 08/16/2021 Last updated : 04/14/2022 # Azure Purview product glossary Below is a glossary of terminology used throughout Azure Purview.
+## Advanced resource sets
+A set of features activated at the Azure Purview instance level that, when enabled, enrich resource set assets by computing additional aggregations on the metadata to provide information such as partition counts, total size, and schema counts. Resource set pattern rules are also included.
## Annotation Information that is associated with data assets in Azure Purview, for example, glossary terms and classifications. After they are applied, annotations can be used within Search to aid in the discovery of the data assets. ## Approved
-The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request.
+The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request.
## Asset Any single object that is stored within an Azure Purview data catalog. > [!NOTE] > A single object in the catalog could potentially represent many objects in storage, for example, a resource set is an asset but it's made up of many partition files in storage. ## Azure Information Protection
-A cloud solution that supports labeling of documents and emails to classify and protect information. Labeled items can be protected by encryption, marked with a watermark, or restricted to specific actions or users, and is bound to the item. This cloud-based solution relies on Azure Rights Management Service (RMS) for enforcing restrictions.
+A cloud solution that supports labeling of documents and emails to classify and protect information. Labeled items can be protected by encryption, marked with a watermark, or restricted to specific actions or users, and is bound to the item. This cloud-based solution relies on Azure Rights Management Service (RMS) for enforcing restrictions.
## Business glossary A searchable list of specialized terms that an organization uses to describe key business words and their definitions. Using a business glossary can provide consistent data usage across the organization.
+## Capacity unit
+A measure of data map usage. All Azure Purview data maps include one capacity unit by default, which provides up to 2GB of metadata storage and has a throughput of 25 data map operations/second.
## Classification report A report that shows key classification details about the scanned data. ## Classification
A classification rule is a set of conditions that determine how scanned data sho
An asset where Azure Purview extracts schema and applies classifications during an automated scan. The scan rule set determines which assets get classified. If the asset is considered a candidate for classification and no classifications are applied during scan time, an asset is still considered a classified asset. ## Collection An organization-defined grouping of assets, terms, annotations, and sources. Collections allow for easier fine-grained access control and discoverability of assets within a data catalog.
+## Collection admin
+A role that can assign roles in Azure Purview. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
## Column pattern A regular expression included in a classification rule that represents the column names that you want to match. ## Contact An individual who is associated with an entity in the data catalog. ## Control plane operation
-Operations that manage resources in your subscription, such as role-based access control and Azure Policy, that are sent to the Azure Resource Manager end point.
+An operation that manages resources in your subscription, such as role-based access control and Azure Policy, and that is sent to the Azure Resource Manager endpoint. Control plane operations can also apply to resources outside of Azure across on-premises, multicloud, and SaaS sources.
## Credential A verification of identity or tool used in an access control system. Credentials can be used to authenticate an individual or group to grant access to a data asset. ## Data catalog Azure Purview features that enable customers to view and manage the metadata for assets in your data estate.
+## Data curator
+A role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
## Data map Azure Purview features that enable customers to manage their data estate, such as scanning, lineage, and movement.
+## Data map operation
+A create, read, update, or delete action performed on an entity in the data map. For example, creating an asset in the data map is considered a data map operation.
+## Data owner
+An individual or group responsible for managing a data asset.
## Data pattern A regular expression that represents the data that is stored in a data field. For example, a data pattern for employee ID could be Employee{GUID}. ## Data plane operation An operation within a specific Azure Purview instance, such as editing an asset or creating a glossary term. Each instance has predefined roles, such as "data reader" and "data curator" that control which data plane operations a user can perform.
+## Data reader
+A role that provides read-only access to data assets, classifications, classification rules, collections, glossary terms, and insights.
+## Data source admin
+A role that can manage data sources and scans. A user in the Data source admin role doesn't have access to Azure Purview studio. Combining this role with the Data reader or Data curator roles at any collection scope provides Azure Purview studio access.
+## Data steward
+An individual or group responsible for maintaining nomenclature, data quality standards, security controls, compliance requirements, and rules for the associated object.
+## Data dictionary
+A list of canonical names of database columns and their corresponding data types. It is often used to describe the format and structure of a database, and the relationship between its elements.
## Discovered asset An asset that Azure Purview identifies in a data source during the scanning process. The number of discovered assets includes all files or tables before resource set grouping. ## Distinct match threshold
An individual within an organization who understands the full context of a data
## Full scan A scan that processes all assets within a selected scope of a data source. ## Fully Qualified Name (FQN)
-A path that defines the location of an asset within its data source.
+A path that defines the location of an asset within its data source.
## Glossary term An entry in the Business glossary that defines a concept specific to an organization. Glossary terms can contain information on synonyms, acronyms, and related terms. ## Incremental scan
An area within Azure Purview where you can view reports that summarize informati
The compute infrastructure used to scan in a data source. ## Lineage How data transforms and flows as it moves from its origin to its destination. Understanding this flow across the data estate helps organizations see the history of their data, and aid in troubleshooting or impact analysis.
-## Management Center
-An area within Azure Purview where you can manage connections, users, roles, and credentials.
+## Management
+An area within Azure Purview where you can manage connections, users, roles, and credentials. Also referred to as "Management center."
## Minimum match threshold The minimum percentage of matches among the distinct data values in a column that must be found by the scanner for a classification to be applied. For example, a minimum match threshold of 60% for employee ID requires that 60% of all distinct values among the sampled data in a column match the data pattern set for employee ID. If the scanner samples 128 values in a column and finds 60 distinct values in that column, then at least 36 of the distinct values (60%) must match the employee ID data pattern for the classification to be applied.
+## Policy
+A statement or collection of statements that controls how access to data and data sources should be authorized.
+## Object type
+A categorization of assets based upon common data structures. For example, an Azure SQL Server table and Oracle database table both have an object type of table.
## On-premises data Data that is in a data center controlled by a customer, for example, not in the cloud or software as a service (SaaS). ## Owner
A single Azure Purview account.
## Registered source A source that has been added to an Azure Purview instance and is now managed as a part of the Data catalog. ## Related terms
-Glossary terms that are linked to other terms within the organization.
+Glossary terms that are linked to other terms within the organization.
## Resource set A single asset that represents many partitioned files or objects in storage. For example, Azure Purview stores partitioned Apache Spark output as a single resource set instead of unique assets for each individual file. ## Role
Permissions assigned to a user within an Azure Purview instance. Roles, such as
A system-generated collection that has the same friendly name as the Azure Purview account. All assets belong to the root collection by default. ## Scan An Azure Purview process that examines a source or set of sources and ingests its metadata into the data catalog. Scans can be run manually or on a schedule using a scan trigger.
-## Scan ruleset
+## Scan rule set
A set of rules that define which data types and classifications a scan ingests into a catalog. ## Scan trigger A schedule that determines the recurrence of when a scan runs.
+## Schema classification
+A classification applied to one of the columns in an asset schema.
## Search
-A data discovery feature of Azure Purview that returns a list of assets that match to a keyword.
+A feature that allows users to find items in the data catalog by entering a set of keywords.
## Search relevance The scoring of data assets that determine the order search results are returned. Multiple factors determine an asset's relevance score. ## Self-hosted integration runtime
A categorization of the registered sources used in an Azure Purview instance, fo
An individual who defines the standards for a glossary term. They are responsible for maintaining quality standards, nomenclature, and rules for the assigned entity. ## Term template A definition of attributes included in a glossary term. Users can either use the system-defined term template or create their own to include custom attributes.
+## Workflow
+An automated process that coordinates the creation and modification of catalog entities, including validation and approval. Workflows define repeatable business processes to achieve high quality data, policy compliance, and user collaboration across an organization.
+ ## Next steps To get started with Azure Purview, see [Quickstart: Create an Azure Purview account](create-catalog-portal.md).
security Security Code Analysis Customize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-customize.md
description: This article describes customizing the tasks in the Microsoft Secur
Previously updated : 01/31/2022 Last updated : 04/18/2022
# Configure and customize the build tasks > [!Note]
-> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
This article describes in detail the configuration options available in each of the build tasks. The article starts with the tasks for security code analysis tools. It ends with the post-processing tasks.
security Security Code Analysis Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-onboard.md
description: Learn how to onboard and install the Microsoft Security Code Analys
Previously updated : 01/31/2022 Last updated : 04/18/2022
# Onboarding and installing > [!Note]
-> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
Prerequisites to getting started with Microsoft Security Code Analysis:
security Security Code Analysis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-overview.md
description: Learn about the Microsoft Security Code Analysis extension. With th
Previously updated : 01/31/2022 Last updated : 04/18/2022
# About Microsoft Security Code Analysis > [!Note]
-> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
With the Microsoft Security Code Analysis extension, teams can add security code analysis to their Azure DevOps continuous integration and delivery (CI/CD) pipelines. This analysis is recommended by the [Secure Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/practices) experts at Microsoft.
security Security Code Analysis Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-releases.md
description: This article describes upcoming releases for the Microsoft Security
Previously updated : 01/31/2022 Last updated : 04/18/2022
# Microsoft Security Code Analysis releases and roadmap > [!Note]
-> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
Microsoft Security Code Analysis team in partnership with Developer Support is proud to announce recent and upcoming enhancements to our MSCA extension.
security Yaml Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/yaml-configuration.md
description: This article describes lists YAML configuration options for customi
Previously updated : 01/31/2022 Last updated : 04/18/2022
# YAML configuration options to customize the build tasks > [!Note]
-> Effective July 1, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through July 1, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
This article lists all YAML configuration options available in each of the build tasks. The article starts with the tasks for security code analysis tools. It ends with the post-processing tasks.
sentinel Notebooks Hunt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-hunt.md
+
+ Title: Hunt for security threats with Jupyter notebooks - Microsoft Sentinel
+description: Launch and run notebooks with the Microsoft Sentinel hunting capabilities.
++++ Last updated : 04/04/2022
+#Customer intent: As a security analyst, I want to deploy and launch a Jupyter notebook to hunt for security threats.
++
+# Hunt for security threats with Jupyter notebooks
+
+As part of your security investigations and hunting, launch and run Jupyter notebooks to programmatically analyze your data.
+
+In this how-to guide, you'll create an Azure Machine Learning (ML) workspace, launch a notebook from the Microsoft Sentinel portal into your Azure ML workspace, and run code in the notebook.
+
+## Prerequisites
+
+We recommend that you learn about Microsoft Sentinel notebooks in general before completing the steps in this article. See [Use Jupyter notebooks to hunt for security threats](notebooks.md).
+
+To use Microsoft Sentinel notebooks, you must have the following roles and permissions:
+
+|Type |Details |
+|||
+|**Microsoft Sentinel** |- The **Microsoft Sentinel Contributor** role, in order to save and launch notebooks from Microsoft Sentinel |
+|**Azure Machine Learning** |- A resource group-level **Owner** or **Contributor** role, to create a new Azure Machine Learning workspace if needed. <br>- A **Contributor** role on the Azure Machine Learning workspace where you run your Microsoft Sentinel notebooks. <br><br>For more information, see [Manage access to an Azure Machine Learning workspace](../machine-learning/how-to-assign-roles.md). |
+
+## Create an Azure ML workspace from Microsoft Sentinel
+
+To create your workspace, select one of the following tabs, depending on whether you'll be using a public or private endpoint.
+
+- We recommend using a *public endpoint* if your Microsoft Sentinel workspace has one, to avoid potential issues in network communication.
+- If you want to use an Azure ML workspace in a virtual network, use a *private endpoint*.
+
+# [Public endpoint](#tab/public-endpoint)
+
+1. From the Azure portal, go to **Microsoft Sentinel** > **Threat management** > **Notebooks** and then select **Create a new AML workspace**.
+
+1. Enter the following details, and then select **Next**.
+
+ |Field|Description|
+ |--|--|
+ |**Subscription**|Select the Azure subscription that you want to use.|
+ |**Resource group**|Use an existing resource group in your subscription or enter a name to create a new resource group. A resource group holds related resources for an Azure solution.|
+ |**Workspace name**|Enter a unique name that identifies your workspace. Names must be unique across the resource group. Use a name that's easy to recall and to differentiate from workspaces created by others.|
+ |**Region**|Select the location closest to your users and the data resources to create your workspace.|
+ |**Storage account**| A storage account is used as the default datastore for the workspace. You may create a new Azure Storage resource or select an existing one in your subscription.|
+ |**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.|
+ |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.|
+ |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
+ | | |
+
+1. On the **Networking** tab, select **Public endpoint (all networks)**.
+
+ Define any relevant settings in the **Advanced** or **Tags** tabs, and then select **Review + create**.
+
+1. On the **Review + create** tab, review the information to verify that it's correct, and then select **Create** to start deploying your workspace. For example:
+
+ :::image type="content" source="media/notebooks/machine-learning-create-last-step.png" alt-text="Review + create your Machine Learning workspace from Microsoft Sentinel.":::
+
+ It can take several minutes to create your workspace in the cloud. During this time, the workspace **Overview** page shows the current deployment status, and updates when the deployment is complete.
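+
+If you prefer to script this step rather than use the portal, the Azure Machine Learning Python SDK can create an equivalent workspace. The following is a minimal sketch, assuming the v1 `azureml-core` package and placeholder subscription, resource group, and workspace names:
+
+```python
+# Minimal sketch: create an Azure ML workspace programmatically, assuming the
+# azureml-core (SDK v1) package and placeholder names for all resources.
+from azureml.core import Workspace
+
+ws = Workspace.create(
+    name="sentinel-notebooks-ws",           # placeholder workspace name
+    subscription_id="<subscription-id>",    # placeholder subscription
+    resource_group="sentinel-rg",           # placeholder resource group
+    location="eastus",                      # choose the region closest to your data
+    create_resource_group=True,             # create the resource group if it doesn't exist
+)
+
+print(ws.name, ws.location)
+```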
++
+# [Private endpoint](#tab/private-endpoint)
+
+The steps in this procedure reference specific articles in the Azure Machine Learning documentation when relevant. For more information, see [How to create a secure Azure ML workspace](../machine-learning/tutorial-create-secure-workspace.md).
+
+1. Create a VM jump box within a VNet. Since the VNet restricts access from the public internet, the jump box is used as a way to connect to resources behind the VNet.
+
+1. Access the jump box, and then go to your Microsoft Sentinel workspace. We recommend using [Azure Bastion](../bastion/bastion-overview.md) to access the VM.
+
+1. In Microsoft Sentinel, select **Threat management** > **Notebooks** and then select **Create a new AML workspace**.
+
+1. Enter the following details, and then select **Next**.
+
+ |Field|Description|
+ |--|--|
+ |**Subscription**|Select the Azure subscription that you want to use.|
+ |**Resource group**|Use an existing resource group in your subscription or enter a name to create a new resource group. A resource group holds related resources for an Azure solution.|
+ |**Workspace name**|Enter a unique name that identifies your workspace. Names must be unique across the resource group. Use a name that's easy to recall and to differentiate from workspaces created by others.|
+ |**Region**|Select the location closest to your users and the data resources to create your workspace.|
+ |**Storage account**| A storage account is used as the default datastore for the workspace. You may create a new Azure Storage resource or select an existing one in your subscription.|
+ |**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.|
+ |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.|
+ |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
+ | | |
+
+1. On the **Networking** tab, select **Private endpoint**. Make sure to use the same VNet as you have in the VM jump box. For example:
+
+ :::image type="content" source="media/notebooks/create-private-endpoint.png" alt-text="Screenshot of the Create private endpoint page in Microsoft Sentinel." lightbox="media/notebooks/create-private-endpoint.png":::
+
+1. Define any relevant settings in the **Advanced** or **Tags** tabs, and then select **Review + create**.
+
+1. On the **Review + create** tab, review the information to verify that it's correct, and then select **Create** to start deploying your workspace. For example:
+
+ :::image type="content" source="media/notebooks/machine-learning-create-last-step.png" alt-text="Review + create your Machine Learning workspace from Microsoft Sentinel.":::
+
+ It can take several minutes to create your workspace in the cloud. During this time, the workspace **Overview** page shows the current deployment status, and updates when the deployment is complete.
+
+1. In the Azure Machine Learning studio, on the **Compute** page, create a new compute. On the **Advanced Settings** tab, make sure to select the same VNet that you'd used for your VM jump box. For more information, see [Create and manage an Azure Machine Learning compute instance](../machine-learning/how-to-create-manage-compute-instance.md?tabs=python).
+
+1. Configure your network traffic to access Azure ML from behind a firewall. For more information, see [Configure inbound and outbound network traffic](../machine-learning/how-to-access-azureml-behind-firewall.md?tabs=ipaddress%2cpublic).
+
+Continue with one of the following sets of steps:
+
+- **If you have one private link only**: You can now access the notebooks via any of the following methods:
+
+ - Clone and launch notebooks from Microsoft Sentinel to Azure Machine Learning
+ - Upload notebooks to Azure Machine Learning manually
+ - Clone the [Microsoft Sentinel notebooks GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) on the Azure Machine learning terminal
+
+- **If you have another private link that uses a different VNet**, do the following:
+
+ 1. In the Azure portal, go to the resource group of your Azure Machine Learning workspace, and then search for the **Private DNS zone** resources named **privatelink.api.azureml.ms** and **privatelink.notebooks.azure.ms**. For example:
+
+ :::image type="content" source="media/notebooks/select-private-dns-zone.png" alt-text="Screenshot of a private DNS zone resource selected." lightbox="media/notebooks/select-private-dns-zone.png":::
+
+ 1. For each resource, including both **privatelink.api.azureml.ms** and **privatelink.notebooks.azure.ms**, add a virtual network link.
+
+ Select the resource > **Virtual network links** > **Add**. For more information, see [Link the virtual network](../dns/private-dns-getstarted-portal.md).
+
+For more information, see:
+
+- [Network traffic flow when using a secured workspace](../machine-learning/concept-secure-network-traffic-flow.md)
+- [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](../machine-learning/how-to-network-security-overview.md)
+++
+After your deployment is complete, you can go back to the Microsoft Sentinel **Notebooks** page and launch notebooks from your new Azure ML workspace.
+
+If you have multiple notebooks, make sure to select a default AML workspace to use when launching your notebooks. For example:
+++
+## Launch a notebook in your Azure ML workspace
+
+After you've created an AML workspace, start launching your notebooks in your Azure ML workspace, from Microsoft Sentinel.
++
+1. From the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Notebooks**, where you can see notebooks that Microsoft Sentinel provides.
+1. Select a notebook to view its description, required data types, and data sources.
+
+ When you've found the notebook you want to use, select **Save notebook** to clone it into your own workspace.
+
+ Edit the name as needed. If the notebook already exists in your workspace, you can overwrite the existing notebook or create a new one.
+
+ :::image type="content" source="media/notebooks/save-notebook.png" alt-text="Save a notebook to clone it to your own workspace.":::
+
+1. After the notebook is saved, the **Save notebook** button changes to **Launch notebook**. Select **Launch notebook** to open it in your AML workspace.
+
+ For example:
+
+ :::image type="content" source="media/notebooks/sentinel-notebooks-on-machine-learning.png" alt-text="Launch your notebook in your AML workspace.":::
+
+1. At the top of the page, select a **Compute** instance to use for your notebook server.
+
+ If you don't have a compute instance, [create a new one](../machine-learning/how-to-create-manage-compute-instance.md?tabs=#use-the-script-in-the-studio). If your compute instance is stopped, make sure to start it. For more information, see [Run a notebook in the Azure Machine Learning studio](../machine-learning/how-to-run-jupyter-notebooks.md).
+
+ Only you can see and use the compute instances you create. Your user files are stored separately from the VM and are shared among all compute instances in the workspace.
+
+ If you are creating a new compute instance in order to test your notebooks, create your compute instance with the **General Purpose** category.
+
+ The kernel is also shown at the top right of your Azure ML window. If the kernel you need isn't selected, select a different version from the dropdown list.
+
+
+1. Once your notebook server is created and started, you can start running your notebook cells. In each cell, select the **Run** icon to run your notebook code.
+
+ For more information, see [Command mode shortcuts.](../machine-learning/how-to-run-jupyter-notebooks.md)
+
+1. If your notebook hangs or you want to start over, you can restart the kernel and rerun the notebook cells from the beginning. If you restart the kernel, variables and other state are deleted. Rerun any initialization and authentication cells after you restart.
+
+ To start over, select **Kernel operations** > **Restart kernel**. For example:
+
+ :::image type="content" source="media/notebooks/sentinel-notebooks-restart-kernel.png" alt-text="Restart a notebook kernel.":::
+
+## Run code in your notebook
+
+Always run notebook code cells in sequence. Skipping cells can result in errors.
+
+In a notebook:
+
+- **Markdown** cells have text, including HTML, and static images.
+- **Code** cells contain code. After you select a code cell, run the code in the cell by selecting the **Play** icon to the left of the cell, or by pressing **SHIFT+ENTER**.
+
+For example, run the following code cell in your notebook:
+
+```python
+# This is your first code cell. This cell contains basic Python code.
+
+# You can run a code cell by selecting it and then selecting
+# the Play button to the left of the cell, or by pressing SHIFT+ENTER.
+# Code output displays below the code.
+
+print("Congratulations, you just ran this code cell")
+
+y = 2 + 2
+
+print("2 + 2 =", y)
+
+```
+
+The sample code shown above produces this output:
+
+```output
+Congratulations, you just ran this code cell
+
+2 + 2 = 4
+```
+
+Variables set within a notebook code cell persist between cells, so you can chain cells together. For example, the following code cell uses the value of `y` from the previous cell:
+
+```python
+# Note that output from the last line of a cell is automatically
+# sent to the output cell, without needing the print() function.
+
+y + 2
+```
+
+The output is:
+
+```output
+6
+```
+
+## Download all Microsoft Sentinel notebooks
+
+This section describes how to use Git to download all the notebooks available in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/), from inside a Microsoft Sentinel notebook, directly to your Azure ML workspace.
+
+Having Microsoft Sentinel notebooks stored in your Azure ML workspace allows you to keep them updated easily.
+
+1. From a Microsoft Sentinel notebook, enter the following code into an empty cell, and then run the cell:
+
+ ```python
+ !git clone https://github.com/Azure/Azure-Sentinel-Notebooks.git azure-sentinel-nb
+ ```
+
+    A copy of the GitHub repository contents is created in the **azure-sentinel-nb** directory in your user folder in your Azure ML workspace.
+
+1. Copy the notebooks you want from this folder to your working directory.
+
+1. To update your notebooks with any recent changes from GitHub, run:
+
+ ```python
+ !cd azure-sentinel-nb && git pull
+ ```
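+
+    Optionally, to see which commits the pull brought in, you can list the recent history. This is a minimal example that assumes the repository was cloned into the `azure-sentinel-nb` directory as shown above:
+
+    ```python
+    # Show the five most recent commits in the cloned notebooks repository,
+    # so you can see what changed after running git pull.
+    !cd azure-sentinel-nb && git log --oneline -5
+    ```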
+
+## Next steps
+
+- [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)
+- [Integrate notebooks with Azure Synapse (Public preview)](notebooks-with-synapse.md)
+
+Other resources:
+- Use notebooks shared in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) as useful tools, illustrations, and code samples that you can use when developing your own notebooks.
+
+- Submit feedback, suggestions, requests for features, contributed notebooks, bug reports or improvements and additions to existing notebooks. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue or fork and upload a contribution.
+
+- Learn more about using notebooks in threat hunting and investigation by exploring some notebook templates, such as [Credential Scan on Azure Log Analytics](https://www.youtube.com/watch?v=OWjXee8o04M) and Guided Investigation - Process Alerts.
+
+ Find more notebook templates in the Microsoft Sentinel > **Notebooks** > **Templates** tab.
+
+- **Find more notebooks** in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks):
+
+ - The [`Sample-Notebooks`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/Sample-Notebooks) directory includes sample notebooks that are saved with data that you can use to show intended output.
+
+ - The [`HowTos`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/HowTos) directory includes notebooks that describe concepts such as setting your default Python version, creating Microsoft Sentinel bookmarks from a notebook, and more.
+
+For more information, see:
+
+- [Create your first Microsoft Sentinel notebook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/creating-your-first-microsoft-sentinel-notebook/ba-p/2977745) (Blog series)
+
+- [Tutorial: Microsoft Sentinel notebooks - Getting started](https://www.youtube.com/results?search_query=azazure+sentinel+notebooks) (Video)
+- [Tutorial: Edit and run Jupyter notebooks without leaving Azure ML studio](https://www.youtube.com/watch?v=AAj-Fz0uCNk) (Video)
+- [Webinar: Microsoft Sentinel notebooks fundamentals](https://www.youtube.com/watch?v=rewdNeX6H94)
+- [Proactively hunt for threats](hunting.md)
+- [Use bookmarks to save interesting information while hunting](bookmarks.md)
+- [Jupyter, msticpy, and Microsoft Sentinel](https://msticpy.readthedocs.io/en/latest/getting_started/JupyterAndAzureSentinel.html)
sentinel Notebooks Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-troubleshoot.md
+
+ Title: Troubleshoot Jupyter notebooks - Microsoft Sentinel
+description: Troubleshoot errors for Jupyter notebooks in Microsoft Sentinel.
++++ Last updated : 04/04/2022++
+# Troubleshoot Jupyter notebooks
+
+Usually, a notebook creates or attaches to a kernel seamlessly, and you don't need to make any manual changes. If you get errors, or the notebook doesn't seem to be running, you might need to check the version and state of the kernel.
+
+If you run into issues with your notebooks, see the [Azure Machine Learning notebook troubleshooting](../machine-learning/how-to-run-jupyter-notebooks.md#troubleshooting).
+
+## Force caching for user accounts and credentials between notebook runs
+
+By default, user accounts and credentials are not cached between notebook runs, even for the same session.
+
+**To force caching for the duration of your session**:
+
+1. Authenticate using Azure CLI. In an empty notebook cell, enter and run the following code:
+
+ ```python
+ !az login
+ ```
+
+ The following output appears:
+
+ ```python
+ To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the 9-digit device code to authenticate.
+ ```
+
+1. Select and copy the nine-character token from the output, and select the `devicelogin` URL to go to the indicated page.
+
+1. Paste the token into the dialog and continue with signing in as prompted.
+
+ When sign-in successfully completes, you see the following output:
+
+    ```python
+    Subscription <subscription ID> 'Sample subscription' can be accessed from tenants <tenant ID>(default) and <tenant ID>. To select a specific tenant when accessing this subscription, use 'az login --tenant TENANT_ID'.
+    ```
+
+> [!NOTE]
+> The following tenants don't contain accessible subscriptions. Use 'az login --allow-no-subscriptions' to have tenant level access.
+>
+> ```
+> <tenant ID> 'foo'
+> <tenant ID> 'bar'
+> [
+>   {
+>     "cloudName": "AzureApp",
+>     "homeTenantId": "<tenant ID>",
+>     "id": "<ID>",
+>     "isDefault": true,
+>     "managedByTenants": [
+>     ....
+> ```
+>
+## Error: *Runtime dependency of PyGObject is missing*
+
+If the *Runtime dependency of PyGObject is missing* error appears when you load a query provider, try troubleshooting using the following steps:
+
+1. Proceed to the cell with the following code and run it:
+
+ ```python
+ qry_prov = QueryProvider("AzureSentinel")
+ ```
+
+ A warning similar to the following message is displayed, indicating a missing Python dependency (`pygobject`):
+
+ ```output
+ Runtime dependency of PyGObject is missing.
+
+ Depends on your Linux distribution, you can install it by running code similar to the following:
+ sudo apt install python3-gi python3-gi-cairo gir1.2-secret-1
+
+ If necessary, see PyGObject's documentation: https://pygobject.readthedocs.io/en/latest/getting_started.html
+
+ Traceback (most recent call last):
+ File "/anaconda/envs/azureml_py36/lib/python3.6/site-packages/msal_extensions/libsecret.py", line 21, in <module>
+ import gi # https://github.com/AzureAD/microsoft-authentication-extensions-for-python/wiki/Encryption-on-Linux
+ ModuleNotFoundError: No module named 'gi'
+ ```
+
+1. Use the [aml-compute-setup.sh](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/HowTos/aml-compute-setup.sh) script, located in the Microsoft Sentinel Notebooks GitHub repository, to automatically install `pygobject` in all notebooks and Anaconda environments on the compute instance.
+
+> [!TIP]
+> You can also fix this warning by running the following code from a notebook:
+>
+> ```python
+> !conda install --yes -c conda-forge pygobject
+> ```
+>
+
+## Next steps
+
+We welcome feedback, suggestions, requests for features, contributed notebooks, bug reports or improvements and additions to existing notebooks. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue or fork and upload a contribution.
sentinel Notebooks With Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-with-synapse.md
Microsoft Sentinel provides the built-in, **Azure Synapse - Configure Azure ML a
1. After your notebook is deployed, select **Launch Notebook** to open it.
- The notebook opens in your Azure ML workspace, inside Microsoft Sentinel. For more information, see [Launch a notebook in your Azure ML workspace](notebooks.md#launch-a-notebook-in-your-azure-ml-workspace).
+ The notebook opens in your Azure ML workspace, inside Microsoft Sentinel. For more information, see [Launch a notebook in your Azure ML workspace](notebooks-hunt.md#launch-a-notebook-in-your-azure-ml-workspace).
1. Run the cells in the notebook's initial steps to load the required Python libraries and functions and to authenticate to Azure resources.
Microsoft Sentinel provides the built-in **Azure Synapse - Detect potential netw
1. After your notebook is deployed, select **Launch Notebook** to open it.
- The notebook opens in your Azure ML workspace, from inside Microsoft Sentinel. For more information, see [Launch a notebook in your Azure ML workspace](notebooks.md#launch-a-notebook-in-your-azure-ml-workspace).
+ The notebook opens in your Azure ML workspace, from inside Microsoft Sentinel. For more information, see [Launch a notebook in your Azure ML workspace](notebooks-hunt.md#launch-a-notebook-in-your-azure-ml-workspace).
1. Run the cells in the notebook's initial steps to load the required Python libraries and functions and to authenticate to Azure resources.
sentinel Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks.md
Title: Use notebooks with Microsoft Sentinel for security hunting
-description: This article describes how to use notebooks with the Microsoft Sentinel hunting capabilities.
--
+description: Learn about Jupyter notebooks with the Microsoft Sentinel hunting capabilities.
++ Previously updated : 11/14/2021 Last updated : 04/04/2022 # Use Jupyter notebooks to hunt for security threats
+Jupyter notebooks combine full programmability with a huge collection of libraries for machine learning, visualization, and data analysis. These attributes make Jupyter a compelling tool for security investigation and hunting.
+The foundation of Microsoft Sentinel is the data store; it combines high-performance querying with a dynamic schema, and it scales to massive data volumes. The Azure portal and all Microsoft Sentinel tools use a common API to access this data store. The same API is also available for external tools such as [Jupyter](https://jupyter.org/) notebooks and Python.
-The foundation of Microsoft Sentinel is the data store; it combines high-performance querying, dynamic schema, and scales to massive data volumes. The Azure portal and all Microsoft Sentinel tools use a common API to access this data store.
+## When to use Jupyter notebooks
-The same API is also available for external tools such as [Jupyter](https://jupyter.org/) notebooks and Python. While many common tasks can be carried out in the portal, Jupyter extends the scope of what you can do with this data. It combines full programmability with a huge collection of libraries for machine learning, visualization, and data analysis. These attributes make Jupyter a compelling tool for security investigation and hunting.
+While many common tasks can be carried out in the portal, Jupyter extends the scope of what you can do with this data.
For example, use notebooks to:
Several notebooks, developed by some of Microsoft's security analysts, are packa
- Some of these notebooks are built for a specific scenario and can be used as-is. - Others are intended as samples to illustrate techniques and features that you can copy or adapt for use in your own notebooks.
-Still other notebooks may also be imported from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/).
+Other notebooks may also be imported from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/).
-## Notebook components
+## How Jupyter notebooks work
Notebooks have two components: - **The browser-based interface**, where you enter and run queries and code, and where the results of the execution are displayed. - **A *kernel*** that is responsible for parsing and executing the code itself.
-The Microsoft Sentinel notebook's kernel runs on an Azure virtual machine (VM). Several licensing options exist to use more powerful virtual machines if your notebooks include complex machine learning models.
+The Microsoft Sentinel notebook's kernel runs on an Azure virtual machine (VM). The VM instance can support running many notebooks at once. If your notebooks include complex machine learning models, several licensing options exist to use more powerful virtual machines.
+
+## Understand Python packages
The Microsoft Sentinel notebooks use many popular Python libraries such as *pandas*, *matplotlib*, *bokeh*, and others. There are a great many other Python packages for you to choose from, covering areas such as:
MSTICPy tools are designed specifically to help with creating notebooks for hunt
- [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md) - [Advanced configurations for Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebooks-msticpy-advanced.md)
-The [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/) is the location for any future Microsoft Sentinel notebooks built by Microsoft or contributed from the community.
+## Find notebooks
+
+From the Azure portal, go to **Microsoft Sentinel** > **Threat management** > **Notebooks** to see the notebooks that Microsoft Sentinel provides. For more notebooks built by Microsoft or contributed from the community, go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/).
## Manage access to Microsoft Sentinel notebooks
While you can run Microsoft Sentinel notebooks in JupyterLab or Jupyter classic,
|**Microsoft Sentinel permissions** | Like other Microsoft Sentinel resources, to access notebooks on Microsoft Sentinel Notebooks blade, a Microsoft Sentinel Reader, Microsoft Sentinel Responder, or Microsoft Sentinel Contributor role is required. <br><br>For more information, see [Permissions in Microsoft Sentinel](roles.md).| |**Azure Machine Learning permissions** | An Azure Machine Learning workspace is an Azure resource. Like other Azure resources, when a new Azure Machine Learning workspace is created, it comes with default roles. You can add users to the workspace and assign them to one of these built-in roles. For more information, see [Azure Machine Learning default roles](../machine-learning/how-to-assign-roles.md) and [Azure built-in roles](../role-based-access-control/built-in-roles.md). <br><br> **Important**: Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a workspace may not have owner access to the resource group that contains the workspace. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md). <br><br>If you're an owner of an Azure ML workspace, you can add and remove roles for the workspace and assign roles to users. For more information, see:<br> - [Azure portal](../role-based-access-control/role-assignments-portal.md)<br> - [PowerShell](../role-based-access-control/role-assignments-powershell.md)<br> - [Azure CLI](../role-based-access-control/role-assignments-cli.md)<br> - [REST API](../role-based-access-control/role-assignments-rest.md)<br> - [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)<br> - [Azure Machine Learning CLI ](../machine-learning/how-to-assign-roles.md#manage-workspace-access)<br><br>If the built-in roles are insufficient, you can also create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that workspace. You can make the role available at a specific workspace level, a specific resource group level, or a specific subscription level. For more information, see [Create custom role](../machine-learning/how-to-assign-roles.md#create-custom-role). | -
-## Create an Azure ML workspace from Microsoft Sentinel
-
-This procedure describes how to create an Azure ML workspace from Microsoft Sentinel for your Microsoft Sentinel notebooks.
-
-**To create your workspace**:
-
-Select one of the following tabs, depending on whether you'll be using a public or private endpoint.
--- We recommend using a *public endpoint* if your Microsoft Sentinel workspace has one, to avoid potential issues in the network communication.-- If you want to use an Azure ML workspace in a virtual network, use a *private endpoint*.-
-# [Public endpoint](#tab/public-endpoint)
-
-1. From the Azure portal, go to **Microsoft Sentinel** > **Threat management** > **Notebooks** and then select **Create a new AML workspace**.
-
-1. Enter the following details, and then select **Next**.
-
- |Field|Description|
- |--|--|
- |**Subscription**|Select the Azure subscription that you want to use.|
- |**Resource group**|Use an existing resource group in your subscription or enter a name to create a new resource group. A resource group holds related resources for an Azure solution.|
- |**Workspace name**|Enter a unique name that identifies your workspace. Names must be unique across the resource group. Use a name that's easy to recall and to differentiate from workspaces created by others.|
- |**Region**|Select the location closest to your users and the data resources to create your workspace.|
- |**Storage account**| A storage account is used as the default datastore for the workspace. You may create a new Azure Storage resource or select an existing one in your subscription.|
- |**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.|
- |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.|
- |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
--
-1. On the **Networking** tab, select **Public endpoint (all networks)**.
-
- Define any relevant settings in the **Advanced** or **Tags** tabs, and then select **Review + create**.
-
-1. On the **Review + create** tab, review the information to verify that it's correct, and then select **Create** to start deploying your workspace. For example:
-
- :::image type="content" source="media/notebooks/machine-learning-create-last-step.png" alt-text="Review + create your Machine Learning workspace from Microsoft Sentinel.":::
-
- It can take several minutes to create your workspace in the cloud. During this time, the workspace **Overview** page shows the current deployment status, and updates when the deployment is complete.
--
-# [Private endpoint](#tab/private-endpoint)
-
-The steps in this procedure reference specific articles in the Azure Machine Learning documentation when relevant. For more information, see [How to create a secure Azure ML workspace](../machine-learning/tutorial-create-secure-workspace.md).
-
-1. Create a VM jump box within a VNet. Since the VNet restricts access from the public internet, the jump box is used as a way to connect to resources behind the VNet.
-
-1. Access the jump box, and then go to your Microsoft Sentinel workspace. We recommend using [Azure Bastion](../bastion/bastion-overview.md) to access the VM.
-
-1. In Microsoft Sentinel, select **Threat management** > **Notebooks** and then select **Create a new AML workspace**.
-
-1. Enter the following details, and then select **Next**.
-
- |Field|Description|
- |--|--|
- |**Subscription**|Select the Azure subscription that you want to use.|
- |**Resource group**|Use an existing resource group in your subscription or enter a name to create a new resource group. A resource group holds related resources for an Azure solution.|
- |**Workspace name**|Enter a unique name that identifies your workspace. Names must be unique across the resource group. Use a name that's easy to recall and to differentiate from workspaces created by others.|
- |**Region**|Select the location closest to your users and the data resources to create your workspace.|
- |**Storage account**| A storage account is used as the default datastore for the workspace. You may create a new Azure Storage resource or select an existing one in your subscription.|
- |**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.|
- |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.|
- |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
--
-1. On the **Networking** tab, select **Private endpoint**. Make sure to use the same VNet as you have in the VM jump box. For example:
-
- :::image type="content" source="media/notebooks/create-private-endpoint.png" alt-text="Screenshot of the Create private endpoint page in Microsoft Sentinel." lightbox="media/notebooks/create-private-endpoint.png":::
-
-1. Define any relevant settings in the **Advanced** or **Tags** tabs, and then select **Review + create**.
-
-1. On the **Review + create** tab, review the information to verify that it's correct, and then select **Create** to start deploying your workspace. For example:
-
- :::image type="content" source="media/notebooks/machine-learning-create-last-step.png" alt-text="Review + create your Machine Learning workspace from Microsoft Sentinel.":::
-
- It can take several minutes to create your workspace in the cloud. During this time, the workspace **Overview** page shows the current deployment status, and updates when the deployment is complete.
-
-1. In the Azure Machine Learning studio, on the **Compute** page, create a new compute. On the **Advanced Settings** tab, make sure to select the same VNet that you'd used for your VM jump box. For more information, see [Create and manage an Azure Machine Learning compute instance](../machine-learning/how-to-create-manage-compute-instance.md?tabs=python).
-
-1. Configure your network traffic to access Azure ML from behind a firewall. For more information, see [Configure inbound and outbound network traffic](../machine-learning/how-to-access-azureml-behind-firewall.md?tabs=ipaddress%2cpublic).
-
-Continue with one of the following sets of steps:
--- **If you have one private link only**: You can now access the notebooks via any of the following methods:-
- - Clone and launch notebooks from Microsoft Sentinel to Azure Machine Learning
- - Upload notebooks to Azure Machine Learning manually
- - Clone the [Microsoft Sentinel notebooks GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) on the Azure Machine learning terminal
--- **If you have another private link, that uses a different VNET**, do the following:-
- 1. In the Azure portal, go to the resource group of your Azure Machine Learning workspace, and then search for the **Private DNS zone** resources named **privatelink.api.azureml.ms** and **privatelink.notebooks.azure.ms**. For example:
-
- :::image type="content" source="media/notebooks/select-private-dns-zone.png" alt-text="Screenshot of a private DNS zone resource selected." lightbox="media/notebooks/select-private-dns-zone.png":::
-
- 1. For each resource, including both **privatelink.api.azureml.ms** and **privatelink.notebooks.azure.ms**, add a virtual network link.
-
- Select the resource > **Virtual network links** > **Add**. For more information, see [Link the virtual network](../dns/private-dns-getstarted-portal.md).
-
-For more information, see:
--- [Network traffic flow when using a secured workspace](../machine-learning/concept-secure-network-traffic-flow.md)-- [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](../machine-learning/how-to-network-security-overview.md)---
-After your deployment is complete, you can go back to the Microsoft Sentinel **Notebooks** and launch notebooks from your new Azure ML workspace.
-
-If you have multiple notebooks, make sure to select a default AML workspace to use when launching your notebooks. For example:
---
-## Launch a notebook in your Azure ML workspace
-
-After you've created an AML workspace, start launching your notebooks in your Azure ML workspace, from Microsoft Sentinel.
-
-> [!NOTE]
-> You can view a notebook as a static document, such as in the GitHub built-in static notebook renderer. However, to run code in a notebook, you must attach the notebook to a backend process called a Jupyter kernel. The kernel runs the code and holds all the variables and objects the code creates. The browser is the viewer for this data.
->
-> In Azure ML, the kernel runs on a virtual machine called an Azure ML Compute. The Compute instance can support running many notebooks at once.
->
-
-**To launch your notebook from Microsoft Sentinel**:
-
-1. From the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Notebooks**, where you can see notebooks that Microsoft Sentinel provides.
-
- > [!TIP]
- > At the top of the **Notebooks** page, select **Guides & Feedback** to show more resources and guidance in a pane on the right.
-
-1. Select a notebook to view its description, required data types, and data sources.
-
- When you've found the notebook you want to use, select **Save notebook** to clone it into your own workspace.
-
- Edit the name as needed. If the notebook already exists in your workspace, you can overwrite the existing notebook or create a new one.
-
- :::image type="content" source="media/notebooks/save-notebook.png" alt-text="Save a notebook to clone it to your own workspace.":::
-
-1. After the notebook is saved, the **Save notebook** button changes to **Launch notebook**. Select **Launch notebook** to open it in your AML workspace.
-
- For example:
-
- :::image type="content" source="media/notebooks/sentinel-notebooks-on-machine-learning.png" alt-text="Launch your notebook in your AML workspace.":::
-
-1. At the top of the page, select a **Compute** instance to use for your notebook server.
-
- If you don't have a compute instance, [create a new one](../machine-learning/how-to-create-manage-compute-instance.md?tabs=#use-the-script-in-the-studio). If your compute instance is stopped, make sure to start it. For more information, see [Run a notebook in the Azure Machine Learning studio](../machine-learning/how-to-run-jupyter-notebooks.md).
-
- Only you can see and use the compute instances you create. Your user files are stored separately from the VM and are shared among all compute instances in the workspace.
-
- > [!TIP]
- > If you are creating a new compute instance in order to test your notebooks, create your compute instance with the **General Purpose** category.
- >
- > The kernel is also shown at the top right of your Azure ML window. If the kernel you need isn't selected, select a different version from the dropdown list.
- >
-
-1. Once your notebook server is created and started, you can starting running your notebook cells. In each cell, select the **Run** icon to run your notebook code.
-
- For more information, see [Command mode shortcuts.](../machine-learning/how-to-run-jupyter-notebooks.md)
-
-1. If your notebook hangs or you want to start over, you can restart the kernel and rerun the notebook cells from the beginning. Select **Kernel operations** > **Restart kernel**. For example:
-
- :::image type="content" source="media/notebooks/sentinel-notebooks-restart-kernel.png" alt-text="Restart a notebook kernel.":::
-
- > [!IMPORTANT]
- > Restarting the kernel wipes all variables and other state. You need to rerun any initialization and authentication cells after restarting.
- >
-
-## Run code in your notebook
-
-In a notebook:
--- **Markdown** cells have text, including HTML, and static images.-- **Code** cells contain code. After you select a code cell, run the code in the cell by selecting the **Play** icon to the left of the cell, or by pressing **SHIFT+ENTER**.-
-> [!IMPORTANT]
-> Always run notebook code cells in sequence. Skipping cells can result in errors.
->
-
-For example, run the following code cell in your notebook:
-
-```python
-# This is your first code cell. This cell contains basic Python code.
-
-# You can run a code cell by selecting it and then selecting
-# the Play button to the left of the cell, or by pressing SHIFT+ENTER.
-# Code output displays below the code.
-
-print("Congratulations, you just ran this code cell")
-
-y = 2 + 2
-
-print("2 + 2 =", y)
-
-```
-
-The sample code shown above produces this output:
-
-```python
-Congratulations, you just ran this code cell
-
-2 + 2 = 4
-```
-
-Variables set within a notebook code cell persist between cells, so you can chain cells together. For example, the following code cell uses the value of `y` from the previous cell:
-
-```python
-# Note that output from the last line of a cell is automatically
-# sent to the output cell, without needing the print() function.
-
-y + 2
-```
-
-The output is:
-
-```output
-6
-```
-
-## Download all Microsoft Sentinel notebooks
-
-This section describes how to use Git to download all the notebooks available in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/), from inside a Microsoft Sentinel notebook, directly to your Azure ML workspace.
-
-Having Microsoft Sentinel notebooks stored in your Azure ML workspace allows you to keep them updated easily.
-
-1. From a Microsoft Sentinel notebook, enter the following code into an empty cell, and then run the cell:
-
- ```python
- !git clone https://github.com/Azure/Azure-Sentinel-Notebooks.git azure-sentinel-nb
- ```
-
- A copy of the GitHub repository contents is created in the **azure-Sentinel-nb** directory on your user folder in your Azure ML workspace.
-
-1. Copy the notebooks you want from this folder to your working directory.
-
-1. To update your notebooks with any recent changes from GitHub, run:
-
- ```python
- !cd azure-sentinel-nb && git pull
- ```
-
-## Troubleshooting
-
-Usually, a notebook creates or attaches to a kernel seamlessly, and you don't need to make any manual changes. If you get errors, or the notebook doesn't seem to be running, you might need to check the version and state of the kernel.
-
-If you run into issues with your notebooks, see the [Azure Machine Learning notebook troubleshooting](../machine-learning/how-to-run-jupyter-notebooks.md#troubleshooting).
-
-### Force caching for user accounts and credentials between notebook runs
-
-By default, user accounts and credentials are not cached between notebook runs, even for the same session.
-
-**To force caching for the duration of your session**:
-
-1. Authenticate using Azure CLI. In an empty notebook cell, enter and run the following code:
-
- ```python
- !az login
- ```
-
- The following output appears:
-
- ```python
- To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the 9-digit device code to authenticate.
- ```
-
-1. Select and copy the nine-character token from the output, and select the `devicelogin` URL to go to the indicated page.
-
-1. Paste the token into the dialog and continue with signing in as prompted.
-
- When sign-in successfully completes, you see the following output:
-
- ```python
- Subscription <subscription ID> 'Sample subscription' can be accessed from tenants <tenant ID>(default) and <tenant ID>. To select a specific tenant when accessing this subscription, use 'az login --tenant TENANT_ID'.
-
-> [!NOTE]
-> The following tenants don't contain accessible subscriptions. Use 'az login --allow-no-subscriptions' to have tenant level access.
->
-> ```
-> <tenant ID> 'foo'
-><tenant ID> 'bar'
->[
-> {
-> "cloudName": "AzureApp",
-> "homeTenantId": "<tenant ID>",
-> "id": "<ID>",
-> "isDefault": true,
-> "managedByTenants": [
-> ....
->```
->
-### Error: *Runtime dependency of PyGObject is missing*
-
-If the *Runtime dependency of PyGObject is missing* error appears when you load a query provider, try troubleshooting using the following steps:
-
-1. Proceed to the cell with the following code and run it:
-
- ```python
- qry_prov = QueryProvider("AzureSentinel")
- ```
-
- A warning similar to the following message is displayed, indicating a missing Python dependency (`pygobject`):
-
- ```output
- Runtime dependency of PyGObject is missing.
-
- Depends on your Linux distribution, you can install it by running code similar to the following:
- sudo apt install python3-gi python3-gi-cairo gir1.2-secret-1
-
- If necessary, see PyGObject's documentation: https://pygobject.readthedocs.io/en/latest/getting_started.html
-
- Traceback (most recent call last):
- File "/anaconda/envs/azureml_py36/lib/python3.6/site-packages/msal_extensions/libsecret.py", line 21, in <module>
- import gi # https://github.com/AzureAD/microsoft-authentication-extensions-for-python/wiki/Encryption-on-Linux
- ModuleNotFoundError: No module named 'gi'
- ```
-
-1. Use the [aml-compute-setup.sh](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/tutorials-and-examples/how-tos/aml-compute-setup.sh) script, located in the Microsoft Sentinel Notebooks GitHub repository, to automatically install the `pygobject` in all notebooks and Anaconda environments on the Compute instance.
-
-> [!TIP]
-> You can also fix this Warning by running the following code from a notebook:
->
-> ```python
-> !conda install --yes -c conda-forge pygobject
-> ```
->
-- ## Next steps
-Integrate your notebook experience with big data analytics in Azure Synapse. For more information, see [Integrate notebooks with Azure Synapse (Public preview)](notebooks-with-synapse.md).
+- [Hunt for security threats with Jupyter notebooks](notebooks-hunt.md)
+- [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)
+- [Integrate notebooks with Azure Synapse (Public preview)](notebooks-with-synapse.md)
-Other notebooks shared in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) are intended as useful tools, illustrations, and code samples that you can use when developing your own notebooks.
+Other resources:
+- Use notebooks shared in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) as useful tools, illustrations, and code samples that you can use when developing your own notebooks.
-We welcome feedback, suggestions, requests for features, contributed notebooks, bug reports or improvements and additions to existing notebooks. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue or fork and upload a contribution.
+- Submit feedback, suggestions, requests for features, contributed notebooks, bug reports or improvements and additions to existing notebooks. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue or fork and upload a contribution.
-- **Learn more** about using notebooks in threat hunting and investigation by exploring some notebook templates, such as [**Credential Scan on Azure Log Analytics**](https://www.youtube.com/watch?v=OWjXee8o04M) and **Guided Investigation - Process Alerts**.
+- Learn more about using notebooks in threat hunting and investigation by exploring some notebook templates, such as [Credential Scan on Azure Log Analytics](https://www.youtube.com/watch?v=OWjXee8o04M) and Guided Investigation - Process Alerts.
Find more notebook templates in the Microsoft Sentinel > **Notebooks** > **Templates** tab.
We welcome feedback, suggestions, requests for features, contributed notebooks,
For more information, see: - [Create your first Microsoft Sentinel notebook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/creating-your-first-microsoft-sentinel-notebook/ba-p/2977745) (Blog series)-- [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)+ - [Tutorial: Microsoft Sentinel notebooks - Getting started](https://www.youtube.com/results?search_query=azazure+sentinel+notebooks) (Video) - [Tutorial: Edit and run Jupyter notebooks without leaving Azure ML studio](https://www.youtube.com/watch?v=AAj-Fz0uCNk) (Video) - [Webinar: Microsoft Sentinel notebooks fundamentals](https://www.youtube.com/watch?v=rewdNeX6H94)
sentinel Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/prerequisites.md
Before deploying Microsoft Sentinel, make sure that your Azure tenant has the fo
- A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) is required to house all of the data that Microsoft Sentinel will be ingesting and using for its detections, analytics, and other features. For more information, see [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md). Microsoft Sentinel doesn't support Log Analytics workspaces with a resource lock applied.
-We recommend that when you set up your Microsoft Sentinel workspace, [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) that's dedicated to Microsoft Sentinel and the resources that Microsoft Sentinel users including the Log Analytics workspace, any playbooks, workbooks, and so on.
+We recommend that when you set up your Microsoft Sentinel workspace, [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) that's dedicated to Microsoft Sentinel and the resources that Microsoft Sentinel uses, including the Log Analytics workspace, any playbooks, workbooks, and so on.
A dedicated resource group allows for permissions to be assigned once, at the resource group level, with permissions automatically applied to any relevant resources. Managing access via a resource group helps to ensure that you're using Microsoft Sentinel efficiently without potentially issuing improper permissions. Without a resource group for Microsoft Sentinel, where resources are scattered among multiple resource groups, a user or service principal may find themselves unable to perform a required action or view data due to insufficient permissions. To implement more access control to resources by tiers, use extra resource groups to house the resources that should be accessed only by those groups. Using multiple tiers of resource groups enables you to separate access between those tiers.
sentinel Watchlist Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlist-schemas.md
Title: Schemas for Microsoft Sentinel watchlist templates | Microsoft Docs description: Learn about the schemas used in each built-in watchlist template in Microsoft Sentinel.--++ Last updated 11/09/2021
The Terminated Employees watchlist lists user accounts of employees that have be
| **User On-Prem Sid** | SID | `S-1-12-1-4141952679-1282074057-123` | Optional | | **User Principal Name** | UPN | `JeffL@seccxp.ninja` | Mandatory | | **UserState** | String <br><br>We recommend using either `Notified` or `Terminated` | `Terminated` | Mandatory |
-| **Notification date** | Timestamp - day | `01.12.20` | Optional |
-| **Termination date** | Timestamp - day | `01.01.21` | Mandatory |
+| **Notification date** | Timestamp - day <br><br>We recommend using the UTC format | `2020-12-1` | Optional |
+| **Termination date** | Timestamp - day <br><br>We recommend using the UTC format | `2021-01-01` | Mandatory |
| **Tags** | List | `["SAW user","Amba Wolfs team"]` | Optional |
service-fabric Service Fabric Cross Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cross-availability-zones.md
The Standard SKU load balancer and public IP introduce new abilities and differe
> Each node type in a Service Fabric cluster that uses a Standard SKU load balancer requires a rule allowing outbound traffic on port 443. This is necessary to complete cluster setup. Any deployment without this rule will fail.
-## 1. (Preview) Enable multiple Availability Zones in single virtual machine scale set
+## 1. Enable multiple Availability Zones in single virtual machine scale set
This solution allows users to span three Availability Zones in the same node type. This is the recommended deployment topology as it enables you to deploy across availability zones while maintaining a single virtual machine scale set.
-> [!NOTE]
-> Because this feature is currently in preview, it's not currently supported for production scenarios.
- A full sample template is available on [GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/15-VM-Windows-Multiple-AZ-Secure). ![Diagram of the Azure Service Fabric Availability Zone architecture.][sf-multi-az-arch]
service-fabric Service Fabric Stateless Node Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-stateless-node-types.md
To enable stateless node types, you should configure the underlying virtual mach
``` ## Configuring Stateless node types with multiple Availability Zones
-To configure Stateless node type spanning across multiple availability zones follow the documentation [here](./service-fabric-cross-availability-zones.md#1-preview-enable-multiple-availability-zones-in-single-virtual-machine-scale-set), along with the few changes as follows:
+To configure a stateless node type that spans multiple availability zones, follow the documentation [here](./service-fabric-cross-availability-zones.md#1-enable-multiple-availability-zones-in-single-virtual-machine-scale-set), along with the following changes:
* Set **singlePlacementGroup** : **false** if multiple placement groups need to be enabled. * Set **upgradePolicy** : **Rolling** and add Application Health Extension/Health Probes as mentioned above.
spring-cloud How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-build-service.md
In Azure Spring Cloud, the existing Standard tier already supports compiling use
Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool during or after creating a new service instance of Azure Spring Cloud using the **VMware Tanzu settings**. The Build Agent Pool scale set sizes available are:
spring-cloud How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-marketplace-offer.md
To see the offering and read a detailed description, see [Azure Spring Cloud Ent
To see the supported plans in your market, select **Plans + Pricing**. > [!NOTE] > If you see "No plans are available for market '\<Location>'", that means none of your Azure subscriptions can purchase the SaaS offering. For more information, see [No plans are available for market '\<Location>'](./troubleshoot.md#no-plans-are-available-for-market-location) in [Troubleshooting](./troubleshoot.md).
spring-cloud Quickstart Provision Service Instance Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-provision-service-instance-enterprise.md
Use the following steps to provision an Azure Spring Cloud service instance:
> [!NOTE] > All Tanzu components are enabled by default. Be sure to carefully consider which Tanzu components you want to use or enable during the provisioning phase. After provisioning the Azure Spring Cloud instance, you can't enable or disable Tanzu components.
- :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Azure portal screenshot of Azure Spring Cloud creation page with VMware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Azure portal screenshot of Azure Spring Cloud creation page with V M ware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
1. Select the **Application Insights** section, then select **Enable Application Insights**. You can also enable Application Insights after you provision the Azure Spring Cloud instance.
static-web-apps Bitbucket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/bitbucket.md
+
+ Title: "Tutorial: Deploy Bitbucket repositories on Azure Static Web Apps"
+description: Use Bitbucket with Azure Static Web Apps
++++ Last updated : 03/31/2021+++
+# Tutorial: Deploy Bitbucket repositories on Azure Static Web Apps
+
+Azure Static Web Apps has flexible deployment options that allow you to work with various providers. In this tutorial, you deploy a web application hosted in Bitbucket to Azure Static Web Apps using a Linux virtual machine.
+
+> [!NOTE]
+> The Static Web Apps pipeline task currently only works on Linux machines.
+
+In this tutorial, you learn to:
+
+> [!div class="checklist"]
+> * Import a repository to Bitbucket
+> * Create a static web app
+> * Configure the Bitbucket repo to deploy to Azure Static Web Apps
+
+## Prerequisites
+
+- [Bitbucket](https://bitbucket.org) account
+ - Ensure you have enabled [two-step verification](https://bitbucket.org/account/settings/two-step-verification/manage)
+- [Azure](https://portal.azure.com) account
+ - If you don't have an Azure subscription, [create a free trial account](https://azure.microsoft.com/free).
+
+## Create a repository
+
+This article uses a GitHub repository as the source to import code into a Bitbucket repository.
+
+1. Sign in to [Bitbucket](https://bitbucket.org).
+1. Navigate to [https://bitbucket.org/repo/import](https://bitbucket.org/repo/import) to begin the import process.
+1. Under the *Old repository* label, in the *URL* box, enter the repository URL for your choice of framework.
+
+ # [No Framework](#tab/vanilla-javascript)
+
+ [https://github.com/staticwebdev/vanilla-basic.git](https://github.com/staticwebdev/vanilla-basic.git)
+
+ # [Angular](#tab/angular)
+
+ [https://github.com/staticwebdev/angular-basic.git](https://github.com/staticwebdev/angular-basic.git)
+
+ # [Blazor](#tab/blazor)
+
+ [https://github.com/staticwebdev/blazor-basic.git](https://github.com/staticwebdev/blazor-basic.git)
+
+ # [React](#tab/react)
+
+ [https://github.com/staticwebdev/react-basic.git](https://github.com/staticwebdev/react-basic.git)
+
+ # [Vue](#tab/vue)
+
+ [https://github.com/staticwebdev/vue-basic.git](https://github.com/staticwebdev/vue-basic.git)
+
+
+
+1. Next to the *Project* label, select **Create new project**.
+1. Enter **MyStaticWebApp**.
+1. Select the **Import repository** button and wait a moment while the website creates your repository.
+
+### Set main branch
+
+From time to time, the template repository may have more than one branch. Use the following steps to ensure Bitbucket maps the *main* tag to the main branch in the repository.
+
+1. Select **Repository settings**.
+1. Expand the **Advanced** section.
+1. Under the *Main branch* label, ensure **main** is selected in the drop down.
+1. If you made a change, select **Save changes**.
+1. Select the **Back** button on the left.
+
+## Create a static web app
+
+Now that the repository is created, you can create a static web app from the Azure portal.
+
+1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Select **Create a Resource**.
+1. Search for **Static Web Apps**.
+1. Select **Static Web Apps**.
+1. Select **Create**.
+1. In the _Basics_ section, begin by configuring your new app.
+
+ | Setting | Value |
+ |--|--|
+ | Azure subscription | Select your Azure subscription. |
+ | Resource Group | Select the **Create new** link and enter **static-web-apps-bitbucket**. |
+ | Name | Enter **my-first-static-web-app**. |
+ | Plan type | Select **Free**. |
+ | Region for Azure Functions API and staging environments | Select the region closest to you. |
+ | Source | Select **Other**. |
+
+1. Select **Review + create**.
+1. Select **Create**.
+1. Select the **Go to resource** button.
+1. Select the **Manage deployment token** button.
+1. Copy the deployment token value and set it aside in an editor for later use.
+1. Select the **Close** button on the *Manage deployment token* window.
+
+## Create the pipeline task in Bitbucket
+
+1. Navigate to the repository in Bitbucket.
+1. Select the **Source** menu item.
+1. Ensure the **main** branch is selected in the branch drop down.
+1. Select **Pipelines**.
+1. Select text link **Create your first pipeline**.
+1. On the *Starter pipeline* card, select the **Select** button.
+1. Enter the following YAML into the configuration file.
+
+ # [No Framework](#tab/vanilla-javascript)
+
+ ```yml
+ pipelines:
+ branches:
+ main:
+ - step:
+ name: Deploy to test
+ deployment: test
+ script:
+ - pipe: microsoft/azure-static-web-apps-deploy:main
+ variables:
+ APP_LOCATION: '$BITBUCKET_CLONE_DIR/src'
+ OUTPUT_LOCATION: '$BITBUCKET_CLONE_DIR/src'
+            API_TOKEN: $deployment_token
+ ```
+
+ # [Angular](#tab/angular)
+
+ ```yml
+ pipelines:
+ branches:
+ main:
+ - step:
+ name: Deploy to test
+ deployment: test
+ script:
+ - pipe: microsoft/azure-static-web-apps-deploy:main
+ variables:
+ APP_LOCATION: '$BITBUCKET_CLONE_DIR'
+ OUTPUT_LOCATION: '$BITBUCKET_CLONE_DIR/dist/angular-basic'
+            API_TOKEN: $deployment_token
+ ```
+
+ # [Blazor](#tab/blazor)
+
+ ```yml
+ pipelines:
+ branches:
+ main:
+ - step:
+ name: Deploy to test
+ deployment: test
+ script:
+ - pipe: microsoft/azure-static-web-apps-deploy:main
+ variables:
+ APP_LOCATION: '$BITBUCKET_CLONE_DIR/Client'
+ OUTPUT_LOCATION: '$BITBUCKET_CLONE_DIR/wwwroot'
+            API_TOKEN: $deployment_token
+ ```
+
+ # [React](#tab/react)
+
+ ```yml
+ pipelines:
+ branches:
+ main:
+ - step:
+ name: Deploy to test
+ deployment: test
+ script:
+ - pipe: microsoft/azure-static-web-apps-deploy:main
+ variables:
+ APP_LOCATION: '$BITBUCKET_CLONE_DIR'
+ OUTPUT_LOCATION: '$BITBUCKET_CLONE_DIR/build'
+            API_TOKEN: $deployment_token
+ ```
+
+ # [Vue](#tab/vue)
+
+ ```yml
+ pipelines:
+ branches:
+ main:
+ - step:
+ name: Deploy to test
+ deployment: test
+ script:
+ - pipe: microsoft/azure-static-web-apps-deploy:main
+ variables:
+ APP_LOCATION: '$BITBUCKET_CLONE_DIR'
+ OUTPUT_LOCATION: '$BITBUCKET_CLONE_DIR/dist'
+            API_TOKEN: $deployment_token
+ ```
+
+
+
+ > [!NOTE]
+ > In this example the value for `pipe` is set to `microsoft/azure-static-web-apps-deploy:main`. Replace `main` with your desired branch name if you want your pipeline to work with a different branch.
+
+ The following configuration properties are used in the configuration file for your static web app.
+
+ The `$BITBUCKET_CLONE_DIR` variable maps to the repository's root folder location during the build process.
+
+ | Property | Description | Example | Required |
+ |--|--|--|--|
+ | `app_location` | Location of your application code. | Enter `/` if your application source code is at the root of the repository, or `/app` if your application code is in a directory named `app`. | Yes |
+    | `api_location` | Location of your Azure Functions code. | Enter `/api` if your API code is in a folder named `api`. If no Azure Functions app is detected in the folder, the build doesn't fail; the workflow assumes you don't want an API. | No |
+ | `output_location` | Location of the build output directory relative to the `app_location`. | If your application source code is located at `/app`, and the build script outputs files to the `/app/build` folder, then set build as the `output_location` value. | No |
+
+Next, define a value for the `API_TOKEN` variable.
+
+1. Select **Add variables**.
+1. In the *Name* box, enter **deployment_token**, which matches the name in the workflow.
+1. In the *Value* box, paste in the deployment token value you set aside in a previous step.
+1. Check the **Secured** checkbox.
+1. Select the **Add** button.
+1. Select **Commit file** and return to your pipelines tab.
+
+Wait a moment in the *Pipelines* window and you'll see your deployment status appear. Once the deployment is finished running, you can view the website in your browser.
+
+## View the website
+
+There are two aspects to deploying a static app. The first step creates the underlying Azure resources that make up your app. The second is a Bitbucket workflow that builds and publishes your application.
+
+Before you can navigate to your new static site, the deployment build must first finish running.
+
+The Static Web Apps overview window displays a series of links that help you interact with your web app.
+
+1. Return to your static web app in the Azure portal.
+1. Navigate to the **Overview** window.
+1. Select the link under the *URL* label. Your website will load in a new tab.
+
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance and all the associated services by removing the resource group.
+
+1. Select the **static-web-apps-bitbucket** resource group from the *Overview* section.
+1. Select the **Delete resource group** button at the top of the resource group *Overview*.
+1. Enter the resource group name **static-web-apps-bitbucket** in the *Are you sure you want to delete "static-web-apps-bitbucket"?* confirmation dialog.
+1. Select **Delete**.
+
+The process to delete the resource group may take a few minutes to complete.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Add an API](add-api.md)
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
Keep in mind the following points about Azure role assignments in Azure Storage:
- If the storage account is locked with an Azure Resource Manager read-only lock, then the lock prevents the assignment of Azure roles that are scoped to the storage account or a container. - If you have set the appropriate allow permissions to access data via Azure AD but are unable to access the data (for example, you're getting an "AuthorizationPermissionMismatch" error), be sure to allow enough time for the permissions changes you have made in Azure AD to replicate, and be sure that you don't have any deny assignments that block your access. For more information, see [Understand Azure deny assignments](../../role-based-access-control/deny-assignments.md).
+> [!NOTE]
+> You also can make your own Azure custom roles to access blob data. For more information, see [Azure custom roles](../../role-based-access-control/custom-roles.md).
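+
+Once an appropriate data role is in place, a client can authorize blob requests with Azure AD credentials instead of account keys. The following is a minimal, illustrative sketch using the Python SDK; the account URL is a placeholder and `DefaultAzureCredential` picks up whatever Azure AD identity is signed in:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.storage.blob import BlobServiceClient
+
+# Placeholder account URL - replace with your storage account.
+service = BlobServiceClient(
+    account_url="https://<storage-account>.blob.core.windows.net",
+    credential=DefaultAzureCredential(),
+)
+
+# Listing containers succeeds only if the signed-in identity holds a blob data
+# role, such as Storage Blob Data Reader, at an appropriate scope.
+for container in service.list_containers():
+    print(container.name)
+```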
+ ## Next steps - [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
The updated policy takes up to 24 hours to go into effect. Once the policy is in
When a blob is moved from one access tier to another, its last modification time doesn't change. If you manually rehydrate an archived blob to hot tier, it would be moved back to archive tier by the lifecycle management engine. Disable the rule that affects this blob temporarily to prevent it from being archived again. Re-enable the rule when the blob can be safely moved back to archive tier. You may also copy the blob to another location if it needs to stay in hot or cool tier permanently.
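+
+If you do need to change a blob's tier manually while you adjust the rule, you can do that from the Azure portal, the Azure CLI, or an SDK. The following is a minimal, illustrative sketch using the `azure-storage-blob` Python package; the connection string, container, and blob names are placeholders:
+
+```python
+from azure.storage.blob import BlobServiceClient
+
+# Placeholder values - replace with your own storage account details.
+service = BlobServiceClient.from_connection_string("<connection-string>")
+blob = service.get_blob_client(container="container1", blob="sub1/test.txt")
+
+# Set the blob's tier to hot (for an archived blob, this starts rehydration).
+# Disable the lifecycle rule that targets this blob first, or the lifecycle
+# engine may move it back to the archive tier.
+blob.set_standard_blob_tier("Hot")
+```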
+**The blob prefix match string did not apply your actions to the blobs that you expected it to**
+
+The blob prefix match field of a policy is a full or partial blob path, which is used to match the blobs you want the policy actions to apply to. The path must start with the blob container name. If no prefix match is specified, then the policy will apply to all the blobs in the storage account. The prefix match string format is [container name]/[blob name], where the container name or blob name can be a full or partial container name.
+Here are some common misconceptions about the prefix match string:
+- A prefix match string of container1/ applies to all blobs in the blob container named container1. A prefix match string of container1 (note that there is no trailing / character in the prefix string) applies to all blobs in all containers where the blob container name starts with the string container1. This includes blob containers named container11, container1234, container1ab, and so on.
+- A prefix match string of container1/sub1/ applies to all blobs in the container named container1 whose blob names start with the string sub1/, such as container1/sub1/test.txt or container1/sub1/sub2/test.txt.
+- Wildcard character * - This doesn't mean 'matches one or more occurrences of any character'. The asterisk character * is a valid character in a blob name in Azure Storage. If added in a rule, it means match the blobs with the asterisk in the blob name.
+- Wildcard character ? - This doesn't mean 'match a single occurrence of any character'. The question mark character ? is a valid character in a blob name in Azure Storage. If added in a rule, it means match the blobs with a question mark in the blob name.
+- prefixMatch with != - The prefixMatch rules only consider positive (=) logical comparisons. Therefore, negative (!=) logical comparisons are ignored.
++ ## Next steps - [Configure a lifecycle management policy](lifecycle-management-policy-configure.md)
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | |--|--|--|--|--| | Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
+<sup>2</sup> Feature is supported at the preview level.
+ ## Billing Object replication incurs additional costs on read and write transactions against the source and destination accounts, as well as egress charges for the replication of data from the source account to the destination account and read charges to process change feed.
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
This guide shows you how to use blobfuse, and mount a Blob storage container on
Blobfuse binaries are available on [the Microsoft software repositories for Linux](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software) for Ubuntu, Debian, SUSE, CentOS, Oracle Linux and RHEL distributions. To install blobfuse on those distributions, configure one of the repositories from the list. You can also build the binaries from source code following the [Azure Storage installation steps](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation#option-2build-from-source) if there are no binaries available for your distribution.
-Blobfuse is published in the Linux repo for Ubuntu versions: 16.04, 18.04, and 20.04, RHELversions: 7.5, 7.8, 8.0, 8.1, 8.2, CentOS versions: 7.0, 8.0, Debian versions: 9.0, 10.0, SUSE version: 15, OracleLinux 8.1 . Run this command to make sure that you have one of those versions deployed:
+Blobfuse is published in the Linux repo for Ubuntu versions: 16.04, 18.04, and 20.04, RHEL versions: 7.5, 7.8, 7.9, 8.0, 8.1, 8.2, CentOS versions: 7.0, 8.0, Debian versions: 9.0, 10.0, SUSE version: 15, Oracle Linux version: 8.1. Run this command to make sure that you have one of those versions deployed:
``` lsb_release -a
sudo chown <youruser> /mnt/ramdisk/blobfusetmp
### Use an SSD as a temporary path
-In Azure, you may use the ephemeral disks (SSD) available on your VMs to provide a low-latency buffer for blobfuse. In Ubuntu distributions, this ephemeral disk is mounted on '/mnt'. In Red Hat and CentOS distributions, the disk is mounted on '/mnt/resource/'.
+In Azure, you may use the ephemeral disks (SSD) available on your VMs to provide a low-latency buffer for blobfuse. Depending on the provisioning agent used, the ephemeral disk is mounted on '/mnt' for cloud-init provisioned VMs or on '/mnt/resource' for waagent provisioned VMs.
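+
+As a hedged sketch of how the pieces fit together, mounting a container with blobfuse and using the ephemeral SSD as the temporary path might look like the following; the account name, key, container, and paths are placeholders, and the temporary path depends on your provisioning agent as described above:
+
+```bash
+# Hedged sketch: placeholder account, key, container, and paths.
+cat > fuse_connection.cfg <<'EOF'
+accountName mystorageaccount
+accountKey myStorageAccountKey
+containerName mycontainer
+EOF
+chmod 600 fuse_connection.cfg
+
+# Mount the container, using the ephemeral SSD as the low-latency buffer.
+sudo mkdir -p /mnt/resource/blobfusetmp
+mkdir -p ~/mycontainer
+blobfuse ~/mycontainer \
+    --tmp-path=/mnt/resource/blobfusetmp \
+    --config-file=./fuse_connection.cfg \
+    -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120
+```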
Make sure your user has access to the temporary path:
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-performance-checklist.md
Each load-balancing operation may impact the latency of storage calls during the
You can follow some best practices to reduce the frequency of such operations. -- If possible, use blob or block sizes greater than 4 MiB for standard storage accounts and greater than 256 KiB for premium storage accounts. Larger blob or block sizes automatically activate high-throughput block blobs. High-throughput block blobs provide high-performance ingest that is not affected by partition naming.
+- If possible, use blob or block sizes greater than 256 KiB for standard and premium storage accounts. Larger blob or block sizes automatically activate high-throughput block blobs. High-throughput block blobs provide high-performance ingest that is not affected by partition naming. See the AzCopy sketch after this list for one way to control block size.
- Examine the naming convention you use for accounts, containers, blobs, tables, and queues. Consider prefixing account, container, or blob names with a three-digit hash using a hashing function that best suits your needs. - If you organize your data using timestamps or numerical identifiers, make sure that you are not using an append-only (or prepend-only) traffic pattern. These patterns are not suitable for a range-based partitioning system. These patterns may lead to all traffic going to a single partition and limiting the system from effectively load balancing.
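+
+As a hedged illustration, the following AzCopy sketch uploads a directory with an 8-MiB block size (well above the 256-KiB threshold); the account, container, and SAS token values are placeholders:
+
+```bash
+# Hedged sketch: upload with a larger block size using AzCopy. Placeholder values.
+ACCOUNT="mystorageaccount"
+CONTAINER="mycontainer"
+SAS_TOKEN="replace-with-a-valid-sas-token"
+azcopy copy "./data" \
+    "https://${ACCOUNT}.blob.core.windows.net/${CONTAINER}?${SAS_TOKEN}" \
+    --recursive \
+    --block-size-mb 8
+```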
While parallelism can be great for performance, be careful about using unbounded
For best performance, always use the latest client libraries and tools provided by Microsoft. Azure Storage client libraries are available for a variety of languages. Azure Storage also supports PowerShell and Azure CLI. Microsoft actively develops these client libraries and tools with performance in mind, keeps them up-to-date with the latest service versions, and ensures that they handle many of the proven performance practices internally.
+> [!TIP]
+> The [ABFS driver](data-lake-storage-abfs-driver.md) was designed to overcome the inherent deficiencies of WASB. Favor using the ABFS driver over the WASB driver, as the ABFS driver is optimized specifically for big data analytics.
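+
+For example, clients configured with the ABFS driver address paths with the `abfss://` URI scheme instead of `wasbs://`. The following is a hedged sketch only; the container and account names are placeholders, and it assumes a Hadoop-compatible environment already configured with credentials for the account:
+
+```bash
+# Hedged sketch: list a path through the ABFS driver from a Hadoop-enabled shell.
+hdfs dfs -ls "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/raw/"
+```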
+ ## Handle service errors Azure Storage returns an error when the service cannot process a request. Understanding the errors that may be returned by Azure Storage in a given scenario is helpful for optimizing performance.
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
EndpointSuffix=core.chinacloudapi.cn;
## Next steps -- [Use the Azurite emulator for local Azure Storage development](../common/storage-use-azurite.md)-- [Azure Storage explorers](storage-explorers.md)-- [Using Shared Access Signatures (SAS)](storage-sas-overview.md)
+- [Use the Azurite emulator for local Azure Storage development](storage-use-azurite.md)
+- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md)
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
description: Learn how to deploy Azure File Sync, from start to finish, using th
Previously updated : 04/15/2021 Last updated : 04/12/2022
If you'd like to configure your Azure File sync to work with firewall and virtua
![Configuring firewall and virtual network settings to work with Azure File sync](media/storage-sync-files-deployment-guide/firewall-and-vnet.png)
+## SMB over QUIC on a server endpoint
+Although the Azure file share (cloud endpoint) is a full SMB endpoint capable of direct access from the cloud or on-premises, customers who want to access the file share data cloud-side often deploy an additional Azure File Sync server endpoint on a Windows Server instance hosted on an Azure VM. The most common reason to use an additional server endpoint rather than accessing the Azure file share directly is that changes made directly on the Azure file share can take up to 24 hours or longer to be discovered by Azure File Sync, while changes made on a server endpoint are discovered nearly immediately and synced to all other server endpoints and the cloud endpoint.
+
+This configuration is extremely common in environments where a substantial portion of users are not on-premises, such as when users are working from home or from the road. Traditionally, accessing any file share with SMB over the public internet, including file shares hosted on a Windows file server or directly on Azure Files, is difficult because most organizations and ISPs block port 445. You can work around this limitation with [private endpoints and VPNs](file-sync-networking-overview.md#private-endpoints); however, Windows Server 2022 Azure Edition provides an additional access strategy: SMB over the QUIC transport protocol.
+
+SMB over QUIC communicates over port 443, which most organizations and ISPs leave open to support HTTPS traffic. Using SMB over QUIC greatly simplifies the networking required to access a file share hosted on an Azure File Sync server endpoint for clients running Windows 11 or later. To learn more about how to set up and configure SMB over QUIC on Windows Server Azure Edition, see [SMB over QUIC for Windows File Server](/windows-server/storage/file-server/smb-over-quic).
+ ## Onboarding with Azure File Sync The recommended steps to onboard on Azure File Sync for the first time with zero downtime while preserving full file fidelity and access control list (ACL) are as follows:
storage File Sync Storsimple Cost Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-storsimple-cost-comparison.md
+
+ Title: Comparing the costs of StorSimple to Azure File Sync | Microsoft Docs
+description: Learn how you can save money and modernize your storage infrastructure by migrating from StorSimple to Azure File Sync.
+++ Last updated : 4/18/2022++++
+# Comparing the costs of StorSimple to Azure File Sync
+StorSimple is a discontinued physical and virtual appliance product offered by Microsoft to help customers manage their on-premises storage footprint by tiering data to Azure. The [StorSimple 8000 series appliance](/lifecycle/products/azure-storsimple-8000-series) and the [StorSimple 1200 series appliance](/lifecycle/products/azure-storsimple-1200-series) will reach their end of life on December 31, 2022. It is imperative that you begin planning and executing your migration from StorSimple now.
+
+For most use cases of StorSimple, Azure File Sync is the recommended migration target for file shares being used with StorSimple. Azure File Sync supports similar capabilities to StorSimple, such as the ability to tier to the cloud. However, it provides additional features that StorSimple does not have, such as:
+
+- Storing data in a native file format accessible to administrators and users (Azure file shares) instead of a proprietary format only accessible through the StorSimple device
+- Multi-site sync
+- Integration with Azure services such as Azure Backup and Microsoft Defender for Storage
+
+To learn more about Azure File Sync, see [Introduction to Azure File Sync](file-sync-introduction.md). To learn how to seamlessly migrate to Azure File Sync from StorSimple, see [StorSimple 8100 and 8600 migration to Azure File Sync](../files/storage-files-migration-storsimple-8000.md) or [StorSimple 1200 migration to Azure File Sync](../files/storage-files-migration-storsimple-1200.md).
+
+Although Azure File Sync supports additional functionality not supported by StorSimple, administrators familiar with StorSimple may be concerned about how much Azure File Sync will cost relative to their current solution. This document covers how to compare the costs of StorSimple and Azure File Sync so you can correctly determine the costs of each. Although the cost situation varies depending on your usage and configuration of StorSimple, most customers will pay the same or less with Azure File Sync than they currently pay with StorSimple.
+
+## Cost comparison principles
+To ensure a fair comparison of StorSimple to Azure File Sync and other services, you must consider the following principles:
+
+- **All costs of the solutions are accounted for.** Both StorSimple and Azure File Sync have multiple cost components. To do a fair comparison, all cost components must be considered.
+
+- **Cost comparison doesn't include the cost of features StorSimple doesn't support.** Azure File Sync supports multiple features that StorSimple does not. Some of the features of Azure File Sync, like multi-site sync, might increase the total cost of ownership of an Azure File Sync solution. It is reasonable to take advantage of new features as part of a migration; however, this should be viewed as an upgrade benefit of moving to Azure File Sync. Therefore, you should compare the costs of StorSimple and Azure File Sync *before* considering adopting new capabilities of Azure File Sync that StorSimple doesn't have.
+
+- **Cost comparison considers as-is configuration of StorSimple.** StorSimple supports multiple configurations that might increase or decrease the price of a StorSimple solution. To perform a fair cost comparison to Azure File Sync, you should consider only your current configuration of StorSimple. For example:
+ - **Use the same redundancy settings when comparing StorSimple and Azure File Sync.** If your StorSimple solution uses locally redundant storage (LRS) for its storage usage in Azure Blob storage, you should compare it to the cost of locally redundant storage in Azure Files, even if you would like to switch to zonally redundant (ZRS) or geo-redundant (GRS) storage when you adopt Azure File Sync.
+
+ - **Use the Azure Blob storage pricing you are currently using.** Azure Blob storage supports a v1 and a v2 pricing model. Most StorSimple customers would save money if they adopted the v2 pricing; however, most StorSimple customers are currently using the v1 pricing. Because StorSimple is being retired, perform the comparison with the pricing model you are currently using.
+
+## StorSimple pricing components
+StorSimple has the following pricing components that you should consider in the cost comparison analysis:
+
+- **Capital and operational costs of servers fronting/running StorSimple.** Capital costs relate to the upfront cost of the physical, on-premises hardware, while operating costs relate to ongoing costs you must bear to run your solution, such as labor, maintenance, and power costs. Capital costs vary slightly depending on whether you have a StorSimple 8000 series appliance or a StorSimple 1200 series appliance:
+ - **StorSimple 8000 series.** StorSimple 8000 series appliances are physical appliances that provide an iSCSI target that must be fronted by a file server. Although you may have purchased and configured this file server a long time ago, you should consider the capital and operational costs of running this server, in addition to the operating costs of running the StorSimple appliance. If your file server is hosted as a virtual machine (VM) on an on-premises hypervisor that hosts other workloads, to capture the opportunity cost of running the file server instead of other workloads, you should consider the file server VM as a fractional cost of the capital expenditure and operating costs for the host, in addition to the operating costs of the file server VM. Finally, you should include the cost of any StorSimple 8000 series virtual appliances and other VMs you might have deployed in Azure.
+
+ - **StorSimple 1200 series.** StorSimple 1200 series appliances are virtual appliances that you can run on-premises in the hypervisor of your choice. StorSimple 1200 series appliances can be an iSCSI target for a file server or can directly be a file server without the need for an additional server. If you have the StorSimple 1200 series appliance configured as an iSCSI target, you should include both the cost of hosting the virtual appliance and the cost of the file server fronting it. Although your StorSimple 1200 series appliance may be hosted on a hypervisor that hosts other workloads, to capture the opportunity cost of running the StorSimple 1200 series appliance instead of other workloads, you should consider the virtual appliance as a fractional cost of the capital expenditure of the host, in addition to the operating costs of the virtual appliance.
+
+- **StorSimple service costs.** The StorSimple management service in Azure is a major component of most customers' Azure bill for StorSimple. There are two billing models for the StorSimple management service. Which one you are using likely depends on how and when you purchased your StorSimple appliance (consult your bill for more detail):
+ - **StorSimple management fee per GiB of storage.** The StorSimple management fee per GiB of storage is the older billing model, and the one that most customers are using. In this model, you are charged for every logical GiB stored in StorSimple. You can see the price of management fee per GiB of storage on [the StorSimple pricing page](https://azure.microsoft.com/pricing/details/storsimple/), beneath the first table in the text (described as the "old pricing model"). It is important to note that the pricing page commentary is incorrect - customers were not transitioned to the per device billing model in December 2021.
+
+ - **StorSimple management fee per device.** The StorSimple management fee per device is the newer model, but fewer customers are using it. In this model, you are charged a daily fee for each day you have your device active. The fee expense depends on whether you have a physical or virtual appliance, and which specific appliance you have. You can see the price of management fee per device on [the StorSimple pricing page](https://azure.microsoft.com/pricing/details/storsimple/) (first table).
+
+- **Azure Blob storage costs.** StorSimple stores all of the data in its proprietary format in Azure Blob storage. When considering your Azure Blob storage costs, you should consider the storage utilization, which may be less than or equal to the logical size of your data due to the deduplication and compression done as part of StorSimple's proprietary data format, and also the transactions on storage, which occur whenever files are changed or ranges are recalled to on-premises from the device. Depending on when you deployed your StorSimple appliance, you may be subject to one of two blob storage pricing models:
+ - **Blob storage pricing v1, available in general purpose version 1 storage accounts.** Based on the age of most StorSimple deployments, most StorSimple customers are using the v1 Azure Blob storage pricing. This pricing has higher per GiB prices and lower transaction prices than the v2 model, and lacks the storage tiers that the Blob storage v2 pricing has. To see the Blob storage v1 prices, visit the [Azure Blob storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) and select the *Other* tab.
+
+ - **Blob storage pricing v2, available in general purpose version 2 storage accounts.** Blob storage v2 has lower GiB prices and higher transaction prices than the v1 model. Although some StorSimple customers could save money by switching to the v2 pricing, most StorSimple customers are currently using the v1 pricing. Since StorSimple is reaching end of life, you should stay with the pricing model that you are currently using, rather than pricing out the cost comparison with the v2 pricing. To see the Blob storage v2 prices, visit the [Azure Blob storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) and select the **Recommended** tab (the default when you load the page).
+
+## Azure File Sync pricing components
+Azure File Sync has the following pricing components you should consider in the cost comparison analysis:
++
+### Translating quantities from StorSimple
+If you are trying to estimate the costs of Azure File Sync based on the expenses you see in StorSimple, be careful with the following items:
+
+- **Azure Files bills on logical size (standard file shares).** Unlike StorSimple, which encodes your data in the StorSimple proprietary format before storing it in Azure Blob storage, Azure Files stores the data from Azure File Sync in the same form as you see it on your Windows File Server. This means that if you are trying to figure out how much storage you will consume in Azure Files, you should look at the logical size of the data from StorSimple, rather than the amount stored in Azure Blob storage. Although this may look like it will cause you to pay more when using Azure File Sync, you need to do the complete analysis including all aspects of StorSimple costs to see the true comparison. Additionally, Azure Files offers capacity reservations that enable you to buy storage at up to a 36% discount over the list price. See [Capacity reservations in Azure Files](../files/understanding-billing.md#reserve-capacity).
+
+- **Don't assume a 1:1 ratio between transactions on StorSimple and transactions in Azure File Sync.** It might be tempting to look at the number of transactions done by StorSimple in Azure Blob storage and assume that number will be similar to the number of transactions that Azure File Sync will do on Azure Files. This number may overstate or understate the number of transactions Azure File Sync will do, so it's not a good way to estimate transaction costs. The best way to estimate transaction costs is to do a small proof-of-concept in Azure File Sync with a live file share similar to the file shares stored in StorSimple.
+
+## See also
+- [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/)
+- [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md)
+- [Create a file share](../files/storage-how-to-create-file-share.md?toc=/azure/storage/file-sync/toc.json) and [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md)
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
For encryption in transit, Azure provides a layer of encryption for all data in
- [Point-to-site (P2S) VPN](../../vpn-gateway/point-to-site-about.md) - [Site-to-Site](../../vpn-gateway/design.md#s2smulti) - [ExpressRoute](../../expressroute/expressroute-introduction.md)-- [A restricted public endpoint](storage-files-networking-overview.md#storage-account-firewall-settings)
+- [A restricted public endpoint](storage-files-networking-overview.md#public-endpoint-firewall-settings)
For more details on the available networking options, see [Azure Files networking considerations](storage-files-networking-overview.md).
storage Storage Files Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md
description: An overview of networking options for Azure Files.
Previously updated : 07/02/2021 Last updated : 04/12/2022
-# Azure Files networking considerations
-You can connect to an Azure file share in two ways:
+# Azure Files networking considerations
+You can access your Azure file shares over the internet-accessible public endpoint, over one or more private endpoints on your network(s), or by caching your Azure file share on-premises with Azure File Sync (SMB file shares only). This article focuses on how to configure Azure Files for direct access over the public and/or private endpoints. To learn more about how to cache your Azure file share on-premises with Azure File Sync, see [Introduction to Azure File Sync](../file-sync/file-sync-introduction.md).
-- Accessing the share directly via the Server Message Block (SMB), Network File System (NFS), or FileREST protocols. This access pattern is primarily employed when to eliminate as many on-premises servers as possible.-- Creating a cache of the Azure file share on an on-premises server (or on an Azure VM) with Azure File Sync, and accessing the file share's data from the on-premises server with your protocol of choice (SMB, NFS, FTPS, etc.) for your use case. This access pattern is handy because it combines the best of both on-premises performance and cloud scale and serverless attachable services, such as Azure Backup.
+We recommend reading [Planning for an Azure Files deployment](storage-files-planning.md) prior to reading this conceptual guide.
-This article focuses on how to configure networking for when your use case calls for accessing the Azure file share directly rather than using Azure File Sync. For more information about networking considerations for an Azure File Sync deployment, see [Azure File Sync networking considerations](../file-sync/file-sync-networking-overview.md).
+Directly accessing the Azure file share often requires additional thought with respect to networking:
-Networking configuration for Azure file shares is done on the Azure storage account. A storage account is a management construct that represents a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. Storage accounts expose multiple settings that help you secure network access to your file shares: network endpoints, storage account firewall settings, and encryption in transit.
+- SMB file shares communicate over port 445, which many organizations and internet service providers (ISPs) block for outbound (internet) traffic. This practice originates from legacy security guidance about deprecated and non-internet-safe versions of the SMB protocol. Although SMB 3.x is an internet-safe protocol, it may not be possible to change organizational or ISP policies. Therefore, mounting an SMB file share often requires additional networking configuration when used outside of Azure.
-> [!Important]
-> Azure Files supports multiple network routing options. The default option, Microsoft routing, works with all Azure Files configurations. The internet routing option does not support AD domain join scenarios or Azure File Sync.
+- NFS file shares rely on network-level authentication and are therefore only accessible via restricted networks. Using an NFS file share always requires some level of networking configuration.
-We recommend reading [Planning for an Azure Files deployment](storage-files-planning.md) prior to reading this conceptual guide.
+Configuration of the public and private endpoints for Azure Files is done on the top-level management object for Azure Files, the Azure storage account. A storage account is a management construct that represents a shared pool of storage in which you can deploy multiple Azure file shares, as well as the storage resources for other Azure storage services, such as blob containers or queues.
:::row::: :::column:::
We recommend reading [Planning for an Azure Files deployment](storage-files-plan
| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-## Accessing your Azure file shares
-SMB Azure file shares are immediately accessible via the storage account's public endpoint with SMB 3.1.1 and SMB 3.0. This means that authenticated requests, such as requests authorized by a user's logon identity, can originate securely from inside or outside of Azure. NFS Azure file shares are only accessible through the storage account's public endpoint if the public endpoint is restricted to Azure virtual networks.
+## Secure transfer
+By default, Azure storage accounts require secure transfer, regardless of whether data is accessed over the public or private endpoint. For Azure Files, the **require secure transfer** setting is enforced for all protocol access to the data stored on Azure file shares, including SMB, NFS, and FileREST. The **require secure transfer** setting may be disabled to allow unencrypted traffic. You may also see this setting mislabeled as "require secure transfer for REST API operations".
-For many environments, you may wish to apply additional network configuration to their Azure file shares:
+The SMB, NFS, and FileREST protocols have slightly different behavior with respect to the **require secure transfer** setting:
-- With respect to SMB file shares, many organizations and internet service providers (ISPs) block the port that SMB uses to communicate, port 445. This practice originates from legacy security guidance about deprecated and non-internet safe versions of the SMB protocol. Although SMB 3.x is an internet-safe protocol, organizational or ISP policies may not be possible to change.
+- When require secure transfer is enabled on a storage account, all SMB file shares in that storage account will require the SMB 3.x protocol with AES-128-CCM, AES-128-GCM, or AES-256-GCM encryption algorithms, depending on the available/required encryption negotiation between the SMB client and Azure Files. You can toggle which SMB encryption algorithms are allowed via the [SMB security settings](files-smb-protocol.md#smb-security-settings). Disabling the **require secure transfer** setting enables SMB 2.1 and SMB 3.x mounts without encryption.
-- With respect to NFS file shares, restricted public endpoint access restricts mounts to inside of Azure only.
+- NFS file shares do not support an encryption mechanism, so in order to use the NFS protocol to access an Azure file share, you must disable **require secure transfer** for the storage account.
-- Some organizations require traffic to Azure to follow a deterministic path.
+- When secure transfer is required, the FileREST protocol may only be used with HTTPS. FileREST is only supported on SMB file shares today.
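+
+As a hedged example, toggling the setting with the Azure CLI might look like the following sketch; the account and resource group names are placeholders, and disabling the setting is only appropriate when you need NFS or unencrypted SMB access:
+
+```bash
+# Hedged sketch: disable "require secure transfer" so an NFS share (or
+# unencrypted SMB 2.1/3.x) can be used. Placeholder names.
+STORAGE_ACCOUNT="mystorageaccount"
+RESOURCE_GROUP="myresourcegroup"
+az storage account update \
+    --name "$STORAGE_ACCOUNT" \
+    --resource-group "$RESOURCE_GROUP" \
+    --https-only false
+```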
-### Tunneling traffic over a virtual private network or ExpressRoute
-When you establish a network tunnel between your on-premises network and Azure, you are peering your on-premises network with one or more virtual networks in Azure. A [virtual network](../../virtual-network/virtual-networks-overview.md), or VNet, is similar to a traditional network that you'd operate on-premises. Like an Azure storage account or an Azure VM, a VNet is an Azure resource that is deployed in a resource group.
+## Public endpoint
+The public endpoint for the Azure file shares within a storage account is an internet-exposed endpoint. The public endpoint is the default endpoint for a storage account; however, it can be disabled if desired.
-Azure Files supports the following mechanisms to tunnel traffic between your on-premises workstations and servers and Azure SMB/NFS file shares:
+The SMB, NFS, and FileREST protocols can all use the public endpoint. However, each has slightly different rules for access:
-- [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md): A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an alternate location (such as on-premises) over the internet. An Azure VPN Gateway is an Azure resource that can be deployed in a resource group along side of a storage account or other Azure resources. VPN gateways expose two different types of connections:
- - [Point-to-Site (P2S) VPN](../../vpn-gateway/point-to-site-about.md) gateway connections, which are VPN connections between Azure and an individual client. This solution is primarily useful for devices that are not part of your organization's on-premises network, such as telecommuters who want to be able to mount their Azure file share from home, a coffee shop, or hotel while on the road. To use a P2S VPN connection with Azure Files, a P2S VPN connection will need to be configured for each client that wants to connect. To simplify the deployment of a P2S VPN connection, see [Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files](storage-files-configure-p2s-vpn-windows.md) and [Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files](storage-files-configure-p2s-vpn-linux.md).
- - [Site-to-Site (S2S) VPN](../../vpn-gateway/design.md#s2smulti), which are VPN connections between Azure and your organization's network. A S2S VPN connection enables you to configure a VPN connection once, for a VPN server or device hosted on your organization's network, rather than doing for every client device that needs to access your Azure file share. To simplify the deployment of a S2S VPN connection, see [Configure a Site-to-Site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md).
-- [ExpressRoute](../../expressroute/expressroute-introduction.md), which enables you to create a defined route between Azure and your on-premises network that doesn't traverse the internet. Because ExpressRoute provides a dedicated path between your on-premises datacenter and Azure, ExpressRoute may be useful when network performance is a consideration. ExpressRoute is also a good option when your organization's policy or regulatory requirements require a deterministic path to your resources in the cloud.
+- SMB file shares are accessible from anywhere in the world via the storage account's public endpoint with SMB 3.x with encryption. This means that authenticated requests, such as requests authorized by a user's logon identity, can originate securely from inside or outside of the Azure region (see the mount sketch after this list). If SMB 2.1 or SMB 3.x without encryption is desired, two conditions must be met:
+ 1. The storage account's **require secure transfer** setting must be disabled.
+ 2. The request must originate from inside of the Azure region. As previously mentioned, encrypted SMB requests are allowed from anywhere, inside or outside of the Azure region.
+
+- NFS file shares are accessible from the storage account's public endpoint if and only if the storage account's public endpoint is restricted to specific virtual networks using *service endpoints*. See [public endpoint firewall settings](#public-endpoint-firewall-settings) for additional information on *service endpoints*.
+
+- FileREST is accessible via the public endpoint. If secure transfer is required, only HTTPS requests are accepted. If secure transfer is disabled, HTTP requests are accepted by the public endpoint regardless of origin.
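+
+As referenced in the SMB item above, the following is a hedged sketch of mounting an SMB Azure file share over the public endpoint from a Linux client with SMB 3.1.1 and encryption; the account, share, and key values are placeholders, and it assumes outbound port 445 is open:
+
+```bash
+# Hedged sketch: mount an SMB Azure file share over the public endpoint.
+# Placeholder account, share, and key values.
+STORAGE_ACCOUNT="mystorageaccount"
+SHARE_NAME="myshare"
+STORAGE_KEY="myStorageAccountKey"
+sudo mkdir -p "/mnt/${SHARE_NAME}"
+sudo mount -t cifs "//${STORAGE_ACCOUNT}.file.core.windows.net/${SHARE_NAME}" "/mnt/${SHARE_NAME}" \
+    -o "vers=3.1.1,username=${STORAGE_ACCOUNT},password=${STORAGE_KEY},dir_mode=0777,file_mode=0777,serverino"
+```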
-Regardless of which tunneling method you use to access your Azure file shares, you need a mechanism to ensure the traffic to your storage account goes over the tunnel rather than your regular internet connection. It is technically possible to route to the public endpoint of the storage account, however this requires hard-coding all of the IP addresses for the Azure storage clusters in a region, since storage accounts may be moved between storage clusters at any time. This also requires constantly updating the IP address mappings since new clusters are added all the time.
+### Public endpoint firewall settings
+The storage account firewall restricts access to the public endpoint for a storage account. Using the storage account firewall, you can restrict access to certain IP addresses/IP address ranges, to specific virtual networks, or disable the public endpoint entirely.
-Rather than hard-coding the IP address of your storage accounts into your VPN routing rules, we recommend using private endpoints, which give your storage account an IP address from the address space of an Azure virtual network. Since creating a tunnel to Azure establishes peering between your on-premises network and one or more virtual network, this enables the correct routing in a durable way.
+When you restrict the traffic of the public endpoint to one or more virtual networks, you are using a capability of the virtual network called *service endpoints*. Requests directed to the service endpoint of Azure Files still go to the storage account's public IP address; however, the networking layer does additional verification of the request to validate that it is coming from an authorized virtual network. SMB, NFS, and FileREST all support service endpoints. However, unlike SMB and FileREST, NFS file shares can only be accessed via the public endpoint through the use of a service endpoint.
-### Private endpoints
+To learn more about how to configure the storage account firewall, see [configure Azure storage firewalls and virtual networks](storage-files-networking-endpoints.md#restrict-access-to-the-public-endpoint-to-specific-virtual-networks).
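+
+As a hedged sketch, restricting the public endpoint to a single subnet with the Azure CLI might look like the following; all names are placeholders:
+
+```bash
+# Hedged sketch: enable the Microsoft.Storage service endpoint on a subnet and
+# restrict the storage account public endpoint to that subnet. Placeholder names.
+RESOURCE_GROUP="myresourcegroup"
+STORAGE_ACCOUNT="mystorageaccount"
+VNET="myvnet"
+SUBNET="mysubnet"
+
+az network vnet subnet update \
+    --resource-group "$RESOURCE_GROUP" \
+    --vnet-name "$VNET" \
+    --name "$SUBNET" \
+    --service-endpoints Microsoft.Storage
+
+az storage account network-rule add \
+    --resource-group "$RESOURCE_GROUP" \
+    --account-name "$STORAGE_ACCOUNT" \
+    --vnet-name "$VNET" \
+    --subnet "$SUBNET"
+
+az storage account update \
+    --resource-group "$RESOURCE_GROUP" \
+    --name "$STORAGE_ACCOUNT" \
+    --default-action Deny
+```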
+
+### Public endpoint network routing
+Azure Files supports multiple network routing options. The default option, Microsoft routing, works with all Azure Files configurations. The internet routing option does not support AD domain join scenarios or Azure File Sync.
+
+## Private endpoints
In addition to the default public endpoint for a storage account, Azure Files provides the option to have one or more private endpoints. A private endpoint is an endpoint that is only accessible within an Azure virtual network. When you create a private endpoint for your storage account, your storage account gets a private IP address from within the address space of your virtual network, much like how an on-premises file server or NAS device receives an IP address within the dedicated address space of your on-premises network. An individual private endpoint is associated with a specific Azure virtual network subnet. A storage account may have private endpoints in more than one virtual network.
Using private endpoints with Azure Files enables you to:
- Secure your Azure file shares by configuring the storage account firewall to block all connections on the public endpoint. By default, creating a private endpoint does not block connections to the public endpoint. - Increase security for the virtual network by enabling you to block exfiltration of data from the virtual network (and peering boundaries).
-To create a private endpoint, see [Configuring private endpoints for Azure Files](storage-files-networking-endpoints.md).
+To create a private endpoint, see [Configuring private endpoints for Azure Files](storage-files-networking-endpoints.md#create-a-private-endpoint).
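+
+As a hedged sketch only, creating a private endpoint that targets the file service of a storage account with the Azure CLI might look like the following; all names are placeholders, and the DNS configuration described below is still required:
+
+```bash
+# Hedged sketch: create a private endpoint for the "file" sub-resource of a
+# storage account. Placeholder names; private DNS setup is handled separately.
+RESOURCE_GROUP="myresourcegroup"
+STORAGE_ACCOUNT="mystorageaccount"
+VNET="myvnet"
+SUBNET="mysubnet"
+
+STORAGE_ACCOUNT_ID=$(az storage account show \
+    --resource-group "$RESOURCE_GROUP" \
+    --name "$STORAGE_ACCOUNT" \
+    --query id --output tsv)
+
+az network private-endpoint create \
+    --resource-group "$RESOURCE_GROUP" \
+    --name "${STORAGE_ACCOUNT}-file-pe" \
+    --vnet-name "$VNET" \
+    --subnet "$SUBNET" \
+    --private-connection-resource-id "$STORAGE_ACCOUNT_ID" \
+    --group-id file \
+    --connection-name "${STORAGE_ACCOUNT}-file-connection"
+```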
-### Private endpoints and DNS
+### Tunneling traffic over a virtual private network or ExpressRoute
+To make use of private endpoints to access SMB or NFS file shares from on-premises, you must establish a network tunnel between your on-premises network and Azure. A [virtual network](../../virtual-network/virtual-networks-overview.md), or VNet, is similar to a traditional network that you'd operate on-premises. Like an Azure storage account or an Azure VM, a VNet is an Azure resource that is deployed in a resource group.
+
+Azure Files supports the following mechanisms to tunnel traffic between your on-premises workstations and servers and Azure SMB/NFS file shares:
+
+- [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md): A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an alternate location (such as on-premises) over the internet. An Azure VPN Gateway is an Azure resource that can be deployed in a resource group alongside a storage account or other Azure resources. VPN gateways expose two different types of connections:
+ - [Point-to-Site (P2S) VPN](../../vpn-gateway/point-to-site-about.md) gateway connections, which are VPN connections between Azure and an individual client. This solution is primarily useful for devices that are not part of your organization's on-premises network, such as telecommuters who want to be able to mount their Azure file share from home, a coffee shop, or hotel while on the road. To use a P2S VPN connection with Azure Files, a P2S VPN connection will need to be configured for each client that wants to connect. To simplify the deployment of a P2S VPN connection, see [Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files](storage-files-configure-p2s-vpn-windows.md) and [Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files](storage-files-configure-p2s-vpn-linux.md).
+ - [Site-to-Site (S2S) VPN](../../vpn-gateway/design.md#s2smulti), which are VPN connections between Azure and your organization's network. An S2S VPN connection enables you to configure a VPN connection once, for a VPN server or device hosted on your organization's network, rather than doing so for every client device that needs to access your Azure file share. To simplify the deployment of an S2S VPN connection, see [Configure a Site-to-Site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md).
+- [ExpressRoute](../../expressroute/expressroute-introduction.md), which enables you to create a defined route between Azure and your on-premises network that doesn't traverse the internet. Because ExpressRoute provides a dedicated path between your on-premises datacenter and Azure, ExpressRoute may be useful when network performance is a consideration. ExpressRoute is also a good option when your organization's policy or regulatory requirements require a deterministic path to your resources in the cloud.
+
+> [!Note]
+> Although we recommend using private endpoints to help extend your on-premises network into Azure, it is technically possible to route to the public endpoint over the VPN connection. However, this requires hard-coding the IP address of the public endpoint for the Azure storage cluster that serves your storage account. Because storage accounts may be moved between storage clusters at any time, and new clusters are added and removed all the time, this requires regularly hard-coding all possible Azure storage IP addresses into your routing rules.
+
+### DNS configuration
When you create a private endpoint, by default we also create a (or update an existing) private DNS zone corresponding to the `privatelink` subdomain. Strictly speaking, creating a private DNS zone is not required to use a private endpoint for your storage account, but it is highly recommended in general and explicitly required when mounting your Azure file share with an Active Directory user principal or accessing from the FileREST API. > [!Note]
This reflects the fact that the storage account can expose both the public endpo
- Modifying the hosts file on your clients to make `storageaccount.file.core.windows.net` resolve to the desired private endpoint's private IP address. This is strongly discouraged for production environments, since you will need make these changes to every client that wants to mount your Azure file shares and changes to the storage account or private endpoint will not be automatically handled. - Creating an A record for `storageaccount.file.core.windows.net` in your on-premises DNS servers. This has the advantage that clients in your on-premises environment will be able to automatically resolve the storage account without needing to configure each client, however this solution is similarly brittle to modifying the hosts file because changes are not reflected. Although this solution is brittle, it may be the best choice for some environments.-- Forward the `core.windows.net` zone from your on-premises DNS servers to your Azure private DNS zone. The Azure private DNS host can be reached through a special IP address (`168.63.129.16`) that is only accessible inside virtual networks that are linked to the Azure private DNS zone. To workaround this limitation, you can run additional DNS servers within your virtual network that will forward `core.windows.net` on to the Azure private DNS zone. To simplify this set up, we have provided PowerShell cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To learn how to set up DNS forwarding, see [Configuring DNS with Azure Files](storage-files-networking-dns.md).-
-## Storage account firewall settings
-A firewall is a network policy which controls which requests are allowed to access the public endpoint for a storage account. Using the storage account firewall, you can restrict access to the storage account's public endpoint to certain IP addresses or ranges or to a virtual network. In general, most firewall policies for a storage account will restrict networking access to one or more virtual networks.
-
-There are two approaches to restricting access to a storage account to a virtual network:
-- Create one or more private endpoints for the storage account and restrict all access to the public endpoint. This ensures that only traffic originating from within the desired virtual networks can access the Azure file shares within the storage account.-- Restrict the public endpoint to one or more virtual networks. This works by using a capability of the virtual network called *service endpoints*. When you restrict the traffic to a storage account via a service endpoint, you are still accessing the storage account via the public IP address.
+- Forward the `core.windows.net` zone from your on-premises DNS servers to your Azure private DNS zone. The Azure private DNS host can be reached through a special IP address (`168.63.129.16`) that is only accessible inside virtual networks that are linked to the Azure private DNS zone. To work around this limitation, you can run additional DNS servers within your virtual network that will forward `core.windows.net` on to the Azure private DNS zone. To simplify this set up, we have provided PowerShell cmdlets that will auto-deploy DNS servers in your Azure virtual network and configure them as desired. To learn how to set up DNS forwarding, see [Configuring DNS with Azure Files](storage-files-networking-dns.md).
-> [!NOTE]
-> NFS file shares can only access the storage account's public endpoint via virtual networks. NFS shares may freely access the storage account's private endpoints.
+## SMB over QUIC
+Windows Server 2022 Azure Edition supports a new transport protocol called QUIC for the SMB server provided by the File Server role. QUIC is a replacement for TCP that is built on top of UDP, offering numerous advantages over TCP while still providing a reliable transport mechanism. Although there are multiple advantages to QUIC as a transport protocol, one key advantage for the SMB protocol is that all transport is done over port 443, which is widely open outbound to support HTTPS. This effectively means that SMB over QUIC offers an "SMB VPN" for file sharing over the public internet. Windows 11 ships with an SMB over QUIC capable client.
-To learn more about how to configure the storage account firewall, see [configure Azure storage firewalls and virtual networks](../common/storage-network-security.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+Unfortunately, Azure Files does not directly support SMB over QUIC; however, you can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md) and [SMB over QUIC](/windows-server/storage/file-server/smb-over-quic).
## See also - [Azure Files overview](storage-files-introduction.md)
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
Title: Planning for an Azure Files deployment | Microsoft Docs
-description: Understand planning for an Azure Files deployment. You can either direct mount an Azure file share, or cache Azure file share on-premises with Azure File Sync.
+description: Understand planning for an Azure Files deployment. You can either direct mount an Azure file share, or cache Azure file shares on-premises with Azure File Sync.
Previously updated : 07/02/2021 Last updated : 04/12/2022
# Planning for an Azure Files deployment [Azure Files](storage-files-introduction.md) can be deployed in two main ways: by directly mounting the serverless Azure file shares or by caching Azure file shares on-premises using Azure File Sync. Which deployment option you choose changes the things you need to consider as you plan for your deployment. -- **Direct mount of an Azure file share**: Since Azure Files provides either Server Message Block (SMB) or Network File System (NFS) access, you can mount Azure file shares on-premises or in the cloud using the standard SMB or NFS clients available in your OS. Because Azure file shares are serverless, deploying for production scenarios does not require managing a file server or NAS device. This means you don't have to apply software patches or swap out physical disks.
+- **Direct mount of an Azure file share**: Because Azure Files provides either Server Message Block (SMB) or Network File System (NFS) access, you can mount Azure file shares on-premises or in the cloud using the standard SMB or NFS clients available in your OS. Because Azure file shares are serverless, deploying for production scenarios does not require managing a file server or NAS device. This means you don't have to apply software patches or swap out physical disks.
-- **Cache Azure file share on-premises with Azure File Sync**: Azure File Sync enables you to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms an on-premises (or cloud) Windows Server into a quick cache of your Azure SMB file share.
+- **Cache Azure file share on-premises with Azure File Sync**: [Azure File Sync](../file-sync/file-sync-introduction.md) enables you to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms an on-premises (or cloud) Windows Server into a quick cache of your SMB Azure file share.
This article primarily addresses deployment considerations for deploying an Azure file share to be directly mounted by an on-premises or cloud client. To plan for an Azure File Sync deployment, see [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md).
For customers migrating from on-premises file servers, or creating new file shar
If you intend to use the storage account key to access your Azure file shares, we recommend using service endpoints as described in the [Networking](#networking) section. ## Networking
-Azure file shares are accessible from anywhere via the storage account's public endpoint. This means that authenticated requests, such as requests authorized by a user's logon identity, can originate securely from inside or outside of Azure. In many customer environments, an initial mount of the Azure file share on your on-premises workstation will fail, even though mounts from Azure VMs succeed. The reason for this is that many organizations and internet service providers (ISPs) block the port that SMB uses to communicate, port 445. To see the summary of ISPs that allow or disallow access from port 445, go to [TechNet](https://social.technet.microsoft.com/wiki/contents/articles/32346.azure-summary-of-isps-that-allow-disallow-access-from-port-445.aspx).
+Directly mounting your Azure file share often requires some thought about networking configuration because:
-To unblock access to your Azure file share, you have two main options:
+- The port that SMB file shares use for communication, port 445, is frequently blocked by many organizations and internet service providers (ISPs) for outbound (internet) traffic.
+- NFS file shares rely on network-level authentication and are therefore only accessible via restricted networks. Using an NFS file share always requires some level of networking configuration.
-- Unblock port 445 for your organization's on-premises network. Azure file shares may only be externally accessed via the public endpoint using internet safe protocols such as SMB 3.x and the FileREST API. This is the easiest way to access your Azure file share from on-premises since it doesn't require advanced networking configuration beyond changing your organization's outbound port rules, however, we recommend you remove legacy and deprecated versions of the SMB protocol, namely SMB 1.0. To learn how to do this, see [Securing Windows/Windows Server](/windows-server/storage/file-server/troubleshoot/detect-enable-and-disable-smbv1-v2-v3) and [Securing Linux](files-remove-smb1-linux.md).
+To configure networking, Azure Files provides an internet-accessible public endpoint and integration with Azure networking features like *service endpoints*, which help restrict the public endpoint to specified virtual networks, and *private endpoints*, which give your storage account a private IP address from within a virtual network IP address space.
-- Access Azure file shares over an ExpressRoute or VPN connection. When you access your Azure file share via a network tunnel, you are able to mount your Azure file share like an on-premises file share since SMB traffic does not traverse your organizational boundary.
+From a practical perspective, this means you will need to consider the following network configurations:
-Although from a technical perspective it's considerably easier to mount your Azure file shares via the public endpoint, we expect most customers will opt to mount their Azure file shares over an ExpressRoute or VPN connection. Mounting with these options is possible with both SMB and NFS shares. To do this, you will need to configure the following for your environment:
+- If the required protocol is SMB, and all access over SMB is from clients in Azure, no special networking configuration is required.
+- If the required protocol is SMB, and the access is from clients on-premises, a VPN or ExpressRoute connection from on-premises to your Azure network is required, with Azure Files exposed on your internal network using private endpoints.
+- If the required protocol is NFS, you can use either service endpoints or private endpoints to restrict the network to specified virtual networks.
-- **Network tunneling using ExpressRoute, Site-to-Site, or Point-to-Site VPN**: Tunneling into a virtual network allows accessing Azure file shares from on-premises, even if port 445 is blocked.-- **Private endpoints**: Private endpoints give your storage account a dedicated IP address from within the address space of the virtual network. This enables network tunneling without needing to open on-premises networks up to all the of the IP address ranges owned by the Azure storage clusters. -- **DNS forwarding**: Configure your on-premises DNS to resolve the name of your storage account (`storageaccount.file.core.windows.net` for the public cloud regions) to resolve to the IP address of your private endpoints.
+To learn more about how to configure networking for Azure Files, see [Azure Files networking considerations](storage-files-networking-overview.md).
-> [!Important]
-> Azure Files supports multiple network routing options. The default option, Microsoft routing, works with all Azure Files configurations. The internet routing option does not support AD domain join scenarios or Azure File Sync.
-
-To plan for the networking associated with deploying an Azure file share, see [Azure Files networking considerations](storage-files-networking-overview.md).
+In addition to directly connecting to the file share using the public endpoint or using a VPN/ExpressRoute connection with a private endpoint, SMB provides an additional client access strategy: SMB over QUIC. SMB over QUIC offers a zero-config "SMB VPN" for SMB access over the QUIC transport protocol. Although Azure Files does not directly support SMB over QUIC, you can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see [SMB over QUIC with Azure File Sync](storage-files-networking-overview.md#smb-over-quic).
## Encryption
Azure Files supports two different types of encryption: encryption in transit, which relates to the encryption used when mounting/accessing the Azure file share, and encryption at rest, which relates to how the data is encrypted when it is stored on disk.
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
Title: Understand Azure Files billing | Microsoft Docs
description: Learn how to interpret the provisioned and pay-as-you-go billing models for Azure file shares. - Previously updated : 3/21/2022+ Last updated : 4/16/2022
For Azure Files pricing information, see [Azure Files pricing page](https://azur
| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-## Storage units
-Azure Files uses base-2 units of measurement to represent storage capacity: KiB, MiB, GiB, and TiB. Your operating system may or may not use the same unit of measurement or counting system.
+## Storage units
+Azure Files uses base-2 units of measurement to represent storage capacity: KiB, MiB, GiB, and TiB.
-### Windows
-Both the Windows operating system and Azure Files measure storage capacity using the base-2 counting system, but there is a difference when labeling units. Azure Files labels its storage capacity with base-2 units of measurement while Windows labels its storage capacity in base-10 units of measurement. When reporting storage capacity, Windows doesn't convert its storage capacity from base-2 to base-10.
+| Acronym | Definition | Unit |
+|-|-|-|
+| KiB | 1,024 bytes | kibibyte |
+| MiB | 1,024 KiB (1,048,576 bytes) | mebibyte |
+| GiB | 1,024 MiB (1,073,741,824 bytes) | gibibyte |
+| TiB | 1,024 GiB (1,099,511,627,776 bytes) | tebibyte |
-| Acronym | Definition | Unit | Windows displays as |
-|||-||
-| KiB | 1,024 bytes | kibibyte | KB (kilobyte) |
-| MiB | 1,024 KiB (1,048,576 bytes) | mebibyte | MB (megabyte) |
-| GiB | 1024 MiB (1,073,741,824 bytes) | gibibyte | GB (gigabyte) |
-| TiB | 1024 GiB (1,099,511,627,776 bytes) | tebibyte | TB (terabyte) |
+Although these are the units commonly used by most operating systems and tools, they are frequently mislabeled as the base-10 units you may be more familiar with: KB, MB, GB, and TB. Although the rationale varies, the most common reason operating systems like Windows mislabel the storage units is that many operating systems began using these acronyms before the IEC, BIPM, and NIST standardized them.
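As a quick illustration of this labeling mismatch, PowerShell's built-in byte-quantity suffixes carry the base-10 labels (KB, GB, TB) but evaluate to the base-2 values shown in the table above:

```powershell
# The numeric suffixes are labeled like base-10 units but are base-2 quantities.
1KB   # 1024 (1 KiB)
1GB   # 1073741824 (1 GiB)
1TB   # 1099511627776 (1 TiB)
```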
-### macOS
-See [How iOS and macOS report storage capacity](https://support.apple.com/HT201402) on Apple's website to determine which counting system is used.
+The following table shows how common operating systems measure and label storage:
-### Linux
-A different counting system could be used by each operating system or individual piece of software. See their documentation to determine how they report storage capacity.
+| Operating system | Measurement system | Labeling |
+|-|-|-|
+| Windows | Base-2 | Consistently mislabels as base-10. |
+| Linux distributions | Commonly base-2, some software may use base-10 | Inconsistent labeling, alignment between measurement and labeling depends on the software package. |
+| macOS, iOS, and iPad OS | Base-10 | [Consistently labels as base-10](https://support.apple.com/HT201402). |
+
+Check with your operating system vendor if your operating system is not listed.
+
+## File share total cost of ownership checklist
+If you are migrating to Azure Files from on-premises or comparing Azure Files to other cloud storage solutions, you should consider the following factors to ensure a fair, apples-to-apples comparison:
+
+- **How do you pay for storage, IOPS, and bandwidth?** With Azure Files, the billing model you use depends on whether you are deploying [premium](#provisioned-model) or [standard](#pay-as-you-go-model) file shares. Most cloud solutions have models that align with the principles of either provisioned storage (price determinism, simplicity) or pay-as-you-go storage (pay only for what you actually use). Of particular interest for provisioned models are minimum provisioned share size, the provisioning unit, and the ability to increase and decrease provisioning.
+
+- **Are there any methods to optimize storage costs?** With Azure Files, you can use [capacity reservations](#reserve-capacity) to achieve up to a 36% discount on storage. Other solutions may employ storage efficiency strategies like deduplication or compression to optionally optimize storage costs, but remember that these optimization strategies often have non-monetary costs, such as reduced performance. Azure Files capacity reservations have no side effects on performance.
+
+- **How do you achieve storage resiliency and redundancy?** With Azure Files, storage resiliency and redundancy are baked into the product offering. All tiers and redundancy levels ensure that data is highly available and at least three copies of your data are accessible. When considering other file storage options, consider whether storage resiliency and redundancy is built-in or something you must assemble yourself.
+
+- **What do you need to manage?** With Azure Files, the basic unit of management is a storage account. Other solutions may require additional management, such as operating system updates or virtual resource management (VMs, disks, network IP addresses, etc.).
+
+- **What are the costs of value-added products, like backup, security, etc.?** Azure Files supports integrations with multiple first- and third-party [value-added services](#value-added-services), such as Azure Backup, Azure File Sync, and Microsoft Defender for Storage, which provide backup, replication and caching, and additional security functionality for Azure Files. Value-added solutions on-premises or with other cloud storage solutions have their own licensing and product costs, and should be considered consistently as part of the total cost of ownership for file storage.
## Reserve capacity
Azure Files supports storage capacity reservations, which enable you to achieve a discount on storage by pre-committing to storage utilization. You should consider purchasing reserved instances for any production workload, or dev/test workloads with consistent footprints. When you purchase reserved capacity, your reservation must specify the following dimensions:
Similarly, if you put a highly accessed workload in the cool tier, you will pay
Your workload and activity level will determine the most cost efficient tier for your standard file share. In practice, the best way to pick the most cost efficient tier involves looking at the actual resource consumption of the share (data stored, write transactions, etc.).
-### Logical size versus physical size
-The data at-rest capacity charge for Azure Files is billed based on the logical size, often colloquially called "size" or "content length", of the file. The logical size of the file is distinct from the physical size of the file on disk, often called "size on disk" or "used size". The physical size of the file may be large or smaller than the logical size of the file.
+### Choosing a tier
+Regardless of how you migrate existing data into Azure Files, we recommend initially creating the file share in the transaction optimized tier due to the large number of transactions incurred during migration. After your migration is complete and you've operated for a few days or weeks with regular usage, you can plug your transaction counts into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to figure out which tier is best suited for your workload.
+
+Because standard file shares only show transaction information at the storage account level, using the storage metrics to estimate which tier is cheaper at the file share level is an imperfect science. If possible, we recommend deploying only one file share in each storage account to ensure full visibility into billing.
+
+To see previous transactions:
+
+1. Go to your storage account and select **Metrics** in the left navigation bar.
+2. Select **Scope** as your storage account name, **Metric Namespace** as "File", **Metric** as "Transactions", and **Aggregation** as "Sum".
+3. Select **Apply Splitting**.
+4. Select **Values** as "API Name". Select your desired **Limit** and **Sort**.
+5. Select your desired time period.
+
+> [!Note]
+> Make sure you view transactions over a period of time to get a better idea of the average number of transactions. Ensure that the chosen time period does not overlap with initial provisioning. Multiply the average number of transactions during this period by the number of such periods in a month to estimate the transactions for an entire month.
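If you prefer to pull the same numbers programmatically rather than through the portal, the following Azure PowerShell sketch queries the Transactions metric for the file service of a storage account. The resource names are placeholders and the Az.Monitor and Az.Storage modules are assumed.

```powershell
# Placeholder names; replace with your own resource group and storage account.
$account = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"

# File metrics are scoped to the storage account's file service.
$fileServiceId = "$($account.Id)/fileServices/default"

Get-AzMetric -ResourceId $fileServiceId `
    -MetricName "Transactions" `
    -AggregationType Total `
    -StartTime (Get-Date).AddDays(-30) `
    -EndTime (Get-Date) `
    -TimeGrain 01:00:00   # hourly buckets; sum them to estimate a monthly total
```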
### What are transactions?
Transactions are operations or requests against Azure Files to upload, download, or otherwise manipulate the contents of the file share. Every action taken on a file share translates to one or more transactions, and on standard shares that use the pay-as-you-go billing model, that translates to transaction costs.
There are five basic transaction categories: write, list, read, other, and delet
> [!Note]
> NFS 4.1 is only available for premium file shares, which use the provisioned billing model, so transactions do not affect billing for premium file shares.
-## Value-add services
+## Provisioned/quota, logical size, and physical size
+Azure Files tracks three distinct quantities with respect to share capacity:
+
+- **Provisioned size or quota**: With both premium and standard file shares, you specify the maximum size that the file share is allowed to grow to. In premium file shares, this value is called the provisioned size, and whatever amount you provision is what you pay for, regardless of how much you actually use. In standard file shares, this value is called quota and does not directly affect your bill. Provisioned size is a required field for premium file shares. For standard file shares, if not directly specified, the quota defaults to the maximum value supported by the storage account, either 5 TiB or 100 TiB, depending on the storage account type and settings. A short example of setting this value follows this list.
+
+- **Logical size**: The logical size of a file share or of a particular file relates to how big the file is without considering how the file is actually stored, where additional optimizations may be applied. One way to think about this is that the logical size of the file is how many KiB/MiB/GiB will be transferred over the wire if you copy it to a different location. In both premium and standard file shares, the total logical size of the file share is what is used for enforcement against provisioned size/quota. In standard file shares, the logical size is the quantity used for the data at-rest usage billing. Logical size is referred to as "size" in the Windows properties dialog for a file/folder and as "content length" by Azure Files metrics.
+
+- **Physical size**: The physical size of the file relates to the size of the file as encoded on disk. This may align with the file's logical size, or it may be smaller, depending on how the file has been written to by the operating system. A common reason for the logical size and physical size to be different is through the use of [sparse files](/windows/win32/fileio/sparse-files). The physical size of the files in the share is used for snapshot billing, although allocated ranges are shared between snapshots if they are unchanged (differential storage). To learn more about how snapshots are billed in Azure Files, see [Snapshots](#snapshots).
+
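As a minimal example of the provisioned size/quota value described in the first bullet above, the following Azure PowerShell sketch creates a share with a 1 TiB quota and later increases it. The resource names are placeholders and the Az.Storage module is assumed; on a premium (FileStorage) account, the same value is the provisioned size.

```powershell
# Placeholder names; on a premium (FileStorage) account, -QuotaGiB is the provisioned size.
New-AzRmStorageShare `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -Name "myshare" `
    -QuotaGiB 1024

# Increase the quota/provisioned size later as the share grows.
Update-AzRmStorageShare `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -Name "myshare" `
    -QuotaGiB 2048
```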
+## Snapshots
+Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server. Snapshots are always differential from the live share and from each other, meaning that you are always paying only for what's different in each snapshot. For more information on share snapshots, see [Overview of snapshots for Azure Files](storage-snapshots-files.md).
+
+Snapshots do not count against file share size limits, although you are limited to a specific number of snapshots. To see the current snapshot limits, see [Azure file share scale targets](storage-files-scale-targets.md#azure-file-share-scale-targets).
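As an illustration, one way to take a share snapshot with Azure PowerShell is sketched below. The resource names are placeholders, the Az.Storage module is assumed, and the exact property used can vary by module version, so treat this as a sketch rather than the canonical method.

```powershell
# Placeholder names; requires the Az.Storage module.
$ctx = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").Context
$share = Get-AzStorageShare -Context $ctx -Name "myshare"

# Take a snapshot of the live share; only data that later changes adds to your bill.
$snapshot = $share.CloudFileShare.Snapshot()
$snapshot.SnapshotTime
```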
+
+Snapshots are always billed based on the differential storage utilization of each snapshot; however, this looks slightly different between premium file shares and standard file shares:
+
+- In premium file shares, snapshots are billed against their own snapshot meter, which has a reduced price over the provisioned storage price. This means that you will see a separate line item on your bill representing snapshots for premium file shares for each FileStorage storage account on your bill.
+
+- In standard file shares, snapshots are billed as part of the normal used storage meter, although you are still only billed for the differential cost of the snapshot. This means that you will not see a separate line item on your bill representing snapshots for each standard storage account containing Azure file shares. This also means that differential snapshot usage counts against capacity reservations that are purchased for standard file shares.
+
+Value-added services for Azure Files may use snapshots as part of their value proposition. See [value-added services for Azure Files](#value-added-services) for more information on how snapshots are used.
+
+## Value-added services
+Like on-premises storage solutions which offer first- and third-party features/product integrations to bring additional value to the hosted file shares, Azure Files provides integration points for first- and third-party products to integrate with customer-owned file shares. Although these solutions may provide considerable extra value to Azure Files, you should consider the additional costs that these services add to the total cost of an Azure Files solution.
+
+Costs are generally broken down into three buckets:
+
+- **Licensing costs for the value-added service.** These may come in the form of a fixed cost per customer, end-user (sometimes referred to as a "head cost"), Azure file share or storage account, or in units of storage utilization, such as a fixed cost for every 500 GiB chunk of data in the file share.
+
+- **Transaction costs for the value-added service.** Some value-added services have their own concept of transactions, distinct from what Azure Files views as a transaction. These transactions will show up on your bill under the value-added service's charges; however, they relate directly to how you use the value-added service with your file share.
+
+- **Azure Files costs for using a value-added service.** Azure Files does not directly charge customers for adding value-added services, but as part of adding value to the Azure file share, the value-added service might increase the costs that you see on your Azure file share. This is easiest to see with standard file shares, because standard file shares have a pay-as-you-go model with transaction charges. If the value-added service performs transactions against the file share on your behalf, they will show up on your Azure Files transaction bill even though you didn't directly do those transactions yourself. This applies to premium file shares as well, although it may be less noticeable. Additional transactions against premium file shares from value-added services count against your provisioned IOPS numbers, meaning that value-added services may require provisioning additional storage to have enough IOPS or throughput available for your workload.
+
+When computing the total cost of ownership for your file share, you should consider the costs of Azure Files and of all value-added services that you would like to use with Azure Files.
+
+There are multiple value-added first- and third-party services. This document covers a subset of the common first-party services customers use with Azure file shares. You can learn more about services not listed here by reading the pricing page for that service.
### Azure File Sync
-If you are thinking about using Azure File Sync, consider the following when evaluating cost:
+Azure File Sync is a value-added service for Azure Files that synchronizes one or more on-premises Windows file shares with an Azure file share. Because the cloud Azure file share has a complete copy of the data in a synchronized file share that is available on-premises, you can transform your on-premises Windows File Server into a cache of the Azure file share to reduce your on-premises footprint. Learn more by reading [Introduction to Azure File Sync](../file-sync/file-sync-introduction.md).
-#### Server fee
-For each server that you have connected to a sync group, there is an additional $5 fee. This is independent of the number of server endpoints. For example, if you had one server that contained three different server endpoints, you would only have one $5 charge. One sync server is free per Storage Sync Service.
+When evaluating the total cost of ownership for a solution deployed using Azure File Sync, consider the following cost aspects:
-#### Data cost
-The cost of data at rest depends on the billing tier you choose. This is the cost of storing data in the Azure file share in the cloud including snapshot storage.
-#### Cloud enumeration scans cost
-Azure File Sync enumerates the Azure File Share in the cloud once per day to discover changes that were made directly to the share so that they can sync down to the server endpoints. This scan generates transactions which are billed to the storage account at a rate of one LIST transaction per directory per day. You can put this number into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the scan cost.
+To optimize costs for Azure Files with Azure File Sync, you should consider the tier of your file share. For more information on how to pick the tier for each file share, see [choosing a file share tier](#choosing-a-tier).
-> [!Tip]
-> If you don't know how many folders you have, check out the TreeSize tool from JAM Software GmbH.
+If you are migrating to Azure File Sync from StorSimple, see [Comparing the costs of StorSimple to Azure File Sync](../file-sync/file-sync-storsimple-cost-comparison.md).
-#### Churn and tiering costs
-As files change on server endpoints, the changes are uploaded to the cloud share, which generates transactions. When cloud tiering is enabled, additional transactions are generated for managing tiered files, including I/O happening on tiered files, in addition to egress costs. The quantity and type of transactions is difficult to predict due to churn rates and cache efficiency, but you can use your previous transaction patterns to predict future costs if you only have one file share in your storage account. See [Choosing a billing tier](#choosing-a-billing-tier) for details on how to view previous transactions.
+### Azure Backup
+Azure Backup provides a serverless backup solution for Azure Files that seamlessly integrates with your file shares, as well as other value-added services such as Azure File Sync. Azure Backup for Azure Files is a snapshot-based backup solution, meaning that Azure Backup provides a scheduling mechanism for automatically taking snapshots on an administrator-defined schedule and a user-friendly interface for restoring deleted files/folders or the entire share to a particular point in time. To learn more about Azure Backup for Azure Files, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json).
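A hedged sketch of enabling this protection for a single share with Azure PowerShell follows. The vault, policy, and share names are placeholders, and the Az.RecoveryServices module plus an existing Recovery Services vault and Azure Files backup policy are assumed.

```powershell
# Placeholder names; assumes a Recovery Services vault and an Azure Files backup policy already exist.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "myResourceGroup" -Name "myVault"
Set-AzRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "dailyafs"

# Enable snapshot-based backup for one file share in the storage account.
Enable-AzRecoveryServicesBackupProtection `
    -StorageAccountName "mystorageaccount" `
    -Name "myshare" `
    -Policy $policy
```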
-#### Choosing a billing tier
-For Azure File Sync customers, we recommend choosing standard file shares over premium file shares. This is because with Azure File Sync, customers get that low latency on-premises that they always had, so the higher performance provided by premium file shares isn't necessary. When first migrating to Azure Files via Azure File Sync, we recommend the Transaction Optimized tier due to the large number of transactions incurred during migration. Once migration is complete, you can plug in your previous transactions into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to figure out which tier is best suited for your workload.
+When considering the costs of using Azure Backup to back up your Azure file shares, consider the following:
-To see previous transactions:
-1. Go to your storage account and select **Metrics** in the left navigation bar.
-2. Select **Scope** as your storage account name, **Metric Namespace** as "File", **Metric** as "Transactions", and **Aggregation** as "Sum".
-3. Select **Apply Splitting**.
-4. Select **Values** as "API Name". Select your desired **Limit** and **Sort**.
-5. Select your desired time period.
+- **Protected instance licensing cost for Azure file share data.** Azure Backup charges a protected instance licensing cost per storage account containing backed up Azure file shares. A protected instance is defined as 250 GiB of Azure file share storage. Storage accounts containing less than 250 GiB of Azure file share storage are subject to a fractional protected instance cost. See [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/) for more information (note that you must select *Azure Files* from the list of services Azure Backup can protect).
-> [!Note]
-> Make sure you view transactions over a period of time to get a better idea of average number of transactions. Ensure that the chosen time period does not overlap with initial provisioning. Multiply the average number of transactions during this time period to get the estimated transactions for an entire month.
+- **Azure Files costs.** Azure Backup increases the costs of Azure Files in the following ways:
+ - **Differential costs from Azure file share snapshots.** Azure Backup automates taking Azure file share snapshots on an administrator-defined schedule. Snapshots are always differential; however, the additional cost added to the total bill depends on the length of time snapshots are kept and the amount of churn on the file share during that time, because that dictates how different the snapshot is from the live file share and therefore how much additional data is stored by Azure Files.
-## File storage comparison checklist
-To correctly evaluate the cost of Azure Files compared to other file storage options, consider the following questions:
+ - **Transaction costs from restore operations.** Restore operations from the snapshot to the live share will cause transactions. For standard file shares, this means that reads from snapshots/writes from restores will be billed as normal file share transactions. For premium file shares, these operations are counted against the provisioned IOPS for the file share.
-- **How do you pay for storage, IOPS, and bandwidth?**
- With Azure Files, the billing model you use depends on whether you are deploying [premium](#provisioned-model) or [standard](#pay-as-you-go-model) file shares. Most cloud solutions have models that align with the principles of either provisioned storage (price determinism, simplicity) or pay-as-you-go storage (pay only for you actually use). Of particular interest for provisioned models is minimum provisioned share size, the provisioning unit, and the ability to increase and decrease the provisioning.
+### Microsoft Defender for Storage
+Microsoft Defender provides support for Azure Files as part of its Microsoft Defender for Storage product. Microsoft Defender for Storage detects unusual and potentially harmful attempts to access or exploit your Azure file shares over SMB or FileREST. Microsoft Defender for Storage is enabled on the subscription level for all file shares in storage accounts in that subscription.
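For reference, a minimal sketch of enabling (or checking) the subscription-level plan with the Az.Security PowerShell module is shown below; because the setting applies to every storage account in the subscription, weigh the per-transaction cost described later in this section first.

```powershell
# Requires the Az.Security module; applies to the currently selected subscription.
Set-AzSecurityPricing -Name "StorageAccounts" -PricingTier "Standard"

# Verify the current plan setting.
Get-AzSecurityPricing -Name "StorageAccounts"
```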
-- **How do you achieve storage resiliency and redundancy?**
- With Azure Files, storage resiliency and redundancy are baked into the product offering. All tiers and redundancy levels ensure that data is highly available and at least three copies of your data are accessible. When considering other file storage options, consider whether storage resiliency and redundancy is built-in or something you must assemble yourself.
+Microsoft Defender for Storage does not support antivirus capabilities for Azure file shares.
-- **What do you need to manage?**
- With Azure Files, the basic unit of management is a storage account. Other solutions may require additional management, such as operating system updates or virtual resource management (VMs, disks, network IP addresses, etc.).
+The main cost from Microsoft Defender for Storage is an additional set of transaction costs that the product levies on top of the transactions performed against the Azure file share. Although these costs are based on the transactions incurred in Azure Files, they are not part of the billing for Azure Files, but rather are part of the Microsoft Defender pricing. Microsoft Defender for Storage charges a transaction rate even on premium file shares, where Azure Files includes transactions as part of IOPS provisioning. The current transaction rate can be found on the [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) under the *Microsoft Defender for Storage* table row.
-- **What are the backup costs?**
- With Azure Files, Azure Backup integration is easily enabled and is backup storage is billed as part of the cost share (backups are stored as differential snapshots). Other solutions may require backup software licensing and additional backup storage costs.
+Transaction-heavy file shares will incur significant costs using Microsoft Defender for Storage. Based on these costs, you may wish to opt out of Microsoft Defender for Storage for specific storage accounts. For more information, see [Exclude a storage account from Microsoft Defender for Storage protections](../../defender-for-cloud/defender-for-storage-exclude.md).
## See also
- [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
storsimple Storsimple 8000 Support Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-support-options.md
description: Describes support options for StorSimple 8000 series enterprise sto
Previously updated : 08/13/2019 Last updated : 04/15/2022
In order to receive StorSimple support, customer must purchase Standard or Premi
Upon the purchase of StorSimple 8000 Series Storage Arrays, support is provided through the next EA anniversary. Customer must renew StorSimple support at EA anniversary. StorSimple support plan orders are coterminous. Customers are notified via e-mail about impending support expiry for StorSimple 8000 Series Storage Arrays and are expected to follow up with the Microsoft account/sales teams or their Microsoft Licensing Solution Partner (LSP) to renew StorSimple support.
-Standard Azure support does not cover StorSimple hardware support. If you are covered under Premier or Unified Microsoft support, you must still purchase Standard StorSimple support renewal. StorSimple support renewal can be aligned with EA anniversary date by acquiring the required support SKU with the license quantity equal to the number of the appliances and the unit quantity ordered being the remaining number of months of support needed until the EA anniversary date if all the units have the same support contract expiration date. If the units have different support contract expiration dates, each appliance must be covered with one support SKU with the unit quantity ordered being the remaining number of months of support needed until the EA anniversary date per each appliance.
+Standard Azure support does not cover StorSimple hardware support. If you are covered under Premier or Unified Microsoft support, you must still purchase a Standard StorSimple support renewal. StorSimple support renewal can be aligned with the EA anniversary date by acquiring the required support SKU with a license quantity equal to the number of appliances and a unit quantity equal to the remaining number of months of support needed until the EA anniversary date, provided all the units have the same support contract expiration date. If the units have different support contract expiration dates, each appliance must be covered with one support SKU, with the unit quantity equal to the remaining number of months of support needed until the EA anniversary date for that appliance.
+
+> [!NOTE]
+> StorSimple 8000 series reaches its end-of-life in December 2022. Purchase hardware support for only the months you need, not the full year. Any support purchased after December 2022 will not be used and is not eligible for refund.
StorSimple 8000 Series Storage Arrays support is provided based on how the StorSimple array was purchased.
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md
NA Previously updated : 11/02/2021 Last updated : 04/15/2022
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
After you save all changes, you can select **Create release** to manually create
In this section, you'll learn how to create GitHub workflows by using GitHub Actions for Azure Synapse workspace deployment.
-You can use the [GitHub Action for Azure Resource Manager template](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying an ARM template to Azure for the workspace and compute pools.
+You can use the [GitHub Actions for Azure Resource Manager template](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template) to automate deploying an ARM template to Azure for the workspace and compute pools.
### Workflow file
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Previously updated : 03/11/2022 Last updated : 04/15/2022 # Previous monthly updates in Azure Synapse Analytics This article describes previous month updates to Azure Synapse Analytics. For the most current month's release, check out [Azure Synapse Analytics latest updates](whats-new.md). Each update links to the Azure Synapse Analytics blog and an article that provides more information.
+## Feb 2022 update
+
+The following updates are new to Azure Synapse Analytics this month.
+
+### SQL
+
+* Serverless SQL Pools now support more consistent query execution times. [Learn how Serverless SQL pools automatically detect spikes in read latency and support consistent query execution time.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_2)
+
+* [The `OPENJSON` function makes it easy to get array element indexes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_3). To learn more, see how the OPENJSON function in a serverless SQL pool allows you to [parse nested arrays and return one row for each JSON array element with the index of each element](/sql/t-sql/functions/openjson-transact-sql?view=azure-sqldw-latest&preserve-view=true#array-element-identity).
+
+### Data integration
+
+* [Upserting data is now supported by the copy activity](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_5). See how you can natively load data into a temporary table and then merge that data into a sink table with [upsert.](../data-factory/connector-azure-sql-database.md?tabs=data-factory#upsert-data)
+
+* [Transform Dynamics Data Visually in Synapse Data Flows.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_6) Learn more on how to use a [Dynamics dataset or an inline dataset as source and sink types to transform data at scale.](../data-factory/connector-dynamics-crm-office-365.md?tabs=data-factory#mapping-data-flow-properties)
+
+* [Connect to your SQL sources in data flows using Always Encrypted](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_7). To learn more, see [how to securely connect to your SQL databases from Synapse data flows using Always Encrypted.](../data-factory/connector-azure-sql-database.md?tabs=data-factory)
+
+* [Capture descriptions from asserts in Data Flows](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_8) To learn more, see [how to define your own dynamic descriptive messages](../data-factory/data-flow-expressions-usage.md#assertErrorMessages) in the assert data flow transformation at the row or column level.
+
+* [Easily define schemas for complex type fields.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_9) To learn more, see how you can have the engine [automatically detect the schema of an embedded complex field inside a string column](../data-factory/data-flow-parse.md).
+ ## Jan 2022 update The following updates are new to Azure Synapse Analytics this month.
Improvements to the Synapse Machine Learning library v0.9.5 (previously called M
* The Azure Synapse Analytics security overview - A whitepaper that covers the five layers of security. The security layers include authentication, access control, data protection, network security, and threat protection. [Understand each security feature in detailed](./guidance/security-white-paper-introduction.md) to implement an industry-standard security baseline and protect your data on the cloud.
-* TLS 1.2 is now required for newly created Synapse Workspaces. To learn more, see how [TLS 1.2 provides enhanced security using this article](./security/connectivity-settings.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_6). Login attempts to a newly created Synapse workspace from connections using a TLS versions lower than 1.2 will fail.
+* TLS 1.2 is now required for newly created Synapse Workspaces. To learn more, see how [TLS 1.2 provides enhanced security using this article](./security/connectivity-settings.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_6). Login attempts to a newly created Synapse workspace from connections using TLS versions lower than 1.2 will fail.
### Data Integration
-* Data quality validation rules using Assert transformation - You can now easily add data quality, data validation, and schema validation to your Synapse ETL jobs by leveraging Assert transformation in Synapse data flows. To learn more, see the [Assert transformation in mapping data flow article](../data-factory/data-flow-assert.md) or [the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_8).
+* Data quality validation rules using Assert transformation - You can now easily add data quality, data validation, and schema validation to your Synapse ETL jobs by using Assert transformation in Synapse data flows. To learn more, see the [Assert transformation in mapping data flow article](../data-factory/data-flow-assert.md) or [the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_8).
* Native data flow connector for Dynamics - Synapse data flows can now read and write data directly to Dynamics through the new data flow Dynamics connector. Learn more on how to [Create data sets in data flows to read, transform, aggregate, join, etc. using this article](../data-factory/connector-dynamics-crm-office-365.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_9). You can then write the data back into Dynamics using the built-in Synapse Spark compute.
Improvements to the Synapse Machine Learning library v0.9.5 (previously called M
* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how GitHub leveraged this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927).
-* Serverless SQL pools now support the HASHBYTES function. HASHBYTES is a T-SQL function which hashes values. Learn how to use [hash values in distributing data using this article](/sql/t-sql/functions/hashbytes-transact-sql) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_13).
+* Serverless SQL pools now support the HASHBYTES function. HASHBYTES is a T-SQL function, which hashes values. Learn how to use [hash values in distributing data using this article](/sql/t-sql/functions/hashbytes-transact-sql) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_13).
## December 2021 update
The following updates are new to Azure Synapse Analytics this month.
* Pipeline Fail activity [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1827125525) [article](../data-factory/control-flow-fail-activity.md) * Mapping Data Flow gets new native connectors [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-717833003) [article](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754)
-* Additional notebook export formats: HTML, Python, and LaTeX [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF3)
+* More notebook export formats: HTML, Python, and LaTeX [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF3)
* Three new chart types in notebook view: box plot, histogram, and pivot table [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF4) * Reconnect to lost notebook session [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF5)
The following updates are new to Azure Synapse Analytics this month.
* Delta Lake support for serverless SQL is generally available [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-564486367) [article](./sql/query-delta-lake-format.md) * Query multiple file paths using OPENROWSET in serverless SQL [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1242968096) [article](./sql/query-single-csv-file.md)
-* Serverless SQL queries can now return up to 200GB of results [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1110860013) [article](./sql/resources-self-help-sql-on-demand.md)
+* Serverless SQL queries can now return up to 200 GB of results [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1110860013) [article](./sql/resources-self-help-sql-on-demand.md)
* Handling invalid rows with OPENROWSET in serverless SQL [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--696594450) [article](./sql/develop-openrowset.md) ### Apache Spark for Synapse
The following updates are new to Azure Synapse Analytics this month.
### Security * All Synapse RBAC roles are now generally available for use in production [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac) [article](./security/synapse-workspace-synapse-rbac-roles.md)
-* Leverage User-Assigned Managed Identities for Double Encryption [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#user-assigned-managed-identities) [article](./security/workspaces-encryption.md)
+* Apply User-Assigned Managed Identities for Double Encryption [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#user-assigned-managed-identities) [article](./security/workspaces-encryption.md)
* Synapse Administrators now have elevated access to dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#elevated-access) [article](./security/synapse-workspace-access-control-overview.md) ### Governance
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Previously updated : 03/11/2022 Last updated : 04/15/2022 # What's new in Azure Synapse Analytics?
-This article lists updates to Azure Synapse Analytics that are published in Feb 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
+This article lists updates to Azure Synapse Analytics that are published in Mar 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
The following updates are new to Azure Synapse Analytics this month.
+## Developer Experience
+
+* Code cells in Synapse notebooks that result in an exception will now show standard output along with the exception message. This feature is supported for Python and Scala languages. To learn more, see the [example output when a code statement fails](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).
+
+* Synapse notebooks now support partial output when running code cells. To learn more, see the [examples at this blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1)
+
+* You can now dynamically control Spark session configuration for the notebook activity with pipeline parameters. To learn more, see [parameterized session configuration from a pipeline](./spark/apache-spark-development-using-notebooks.md?tabs=classical#parameterized-session-configuration-from-pipeline).
+
+* You can now reuse and manage notebook sessions without having to start a new one. You can easily connect a selected notebook to an active session in the list started from another notebook. You can detach a session from a notebook, stop the session, and monitor it. To learn more, see [how to manage your active notebook sessions.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3)
+
+* Synapse notebooks now capture anything written through the Python logging module, in addition to the driver logs. To learn more, see [support for Python logging.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_4)
+ ## SQL
-* Serverless SQL Pools now support more consistent query execution times. [Learn how Serverless SQL pools automatically detect spikes in read latency and support consistent query execution time.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_2)
+* Column Level Encryption for Azure Synapse dedicated SQL Pools is now Generally Available. With column level encryption, you can use different protection keys for each column, with each key having its own access permissions. The data in CLE-enforced columns is encrypted on disk and remains encrypted in memory until the DECRYPTBYKEY function is used to decrypt it. To learn more, see [how to encrypt a data column](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true).
+
+* Serverless SQL pools now support better performance for CETAS (Create External Table as Select) and subsequent SELECT queries. The performance improvements include a parallel execution plan, resulting in faster CETAS execution and output to multiple files. To learn more, see the [CETAS with Synapse SQL](./sql/develop-tables-cetas.md) article and the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7).
+
+## Apache Spark for Synapse
+
+* Synapse Spark Common Data Model (CDM) Connector is now Generally Available. The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md).
+
+* Synapse Spark Dedicated SQL Pool (DW) Connector now supports improved performance. The new architecture eliminates redundant data movement and uses COPY-INTO instead of PolyBase. You can authenticate through SQL basic authentication or opt into the Azure Active Directory (Azure AD) based authentication method. It now offers approximately 5x performance improvements over the previous version. To learn more, see [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md).
+
+* Synapse Spark Dedicated SQL Pool (DW) Connector now supports all Spark Dataframe SaveMode choices: Append, Overwrite, ErrorIfExists, and Ignore. The Append and Overwrite modes are critical for managing data ingestion at scale. To learn more, see [DataFrame write SaveMode support](./spark/synapse-spark-sql-pool-import-export.md#dataframe-write-savemode-support).
+
+* Accelerate Spark execution speed using the new Intelligent Cache feature. This feature is currently in public preview. Intelligent Cache automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12)
+
+## Security
-* [The `OPENJSON` function makes it easy to get array element indexes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_3). To learn more, see how the OPENJSON function in a serverless SQL pool allows you to [parse nested arrays and return one row for each JSON array element with the index of each element](/sql/t-sql/functions/openjson-transact-sql?view=azure-sqldw-latest&preserve-view=true#array-element-identity).
+* Azure Synapse Analytics now supports Azure Active Directory (Azure AD) authentication. You can turn on Azure AD authentication during the workspace creation or after the workspace is created. To learn more, see [how to use Azure AD authentication with Synapse SQL](./sql/active-directory-authentication.md).
-## Data integration
+* API support to raise or lower minimal TLS version for workspace managed SQL Server Dedicated SQL. To learn more, see [how to update the minimum TLS setting](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) or read the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_15) for more details.
-* [Upserting data is now supported by the copy activity](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_5). See how you can natively load data into a temporary table and then merge that data into a sink table with [upsert.](../data-factory/connector-azure-sql-database.md?tabs=data-factory#upsert-data)
+## Data Integration
-* [Transform Dynamics Data Visually in Synapse Data Flows.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_6) Learn more on how to use a [Dynamics dataset or an inline dataset as source and sink types to transform data at scale.](../data-factory/connector-dynamics-crm-office-365.md?tabs=data-factory#mapping-data-flow-properties)
+* Flowlets and CDC Connectors are now Generally Available. Flowlets in Synapse Data Flows allow for reusable and composable ETL logic. To learn more, see [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md) or see the [blog post.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_17)
-* [Connect to your SQL sources in data flows using Always Encrypted](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_7). To learn more, see [how to securely connect to your SQL databases from Synapse data flows using Always Encrypted.](../data-factory/connector-azure-sql-database.md?tabs=data-factory)
+* SFTP connector for Synapse data flows. You can read and write data from SFTP while transforming it using the visual low-code data flows interface in Synapse. To learn more, see [source transformation](../data-factory/connector-sftp.md#source-transformation).
-* [Capture descriptions from asserts in Data Flows](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_8) To learn more, see [how to define your own dynamic descriptive messages](../data-factory/data-flow-expressions-usage.md#assertErrorMessages) in the assert data flow transformation at the row or column level.
+* Data flow improvements to Data Preview. To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng)
-* [Easily define schemas for complex type fields.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_9) To learn more, see how you can make the engine to [automatically detect the schema of an embedded complex field inside a string column](../data-factory/data-flow-parse.md).
+* Pipeline script activity. The Script Activity enables data engineers to build powerful data integration pipelines that can read from and write to Synapse databases, and other database types. To learn more, see [Transform data by using the Script activity in Azure Data Factory or Synapse Analytics](../data-factory/transform-data-using-script.md)
## Next steps
virtual-machines Agent Dependency Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-dependency-windows.md
The following JSON shows the schema for the Azure VM Dependency agent extension
"properties": { "publisher": "Microsoft.Azure.Monitoring.DependencyAgent", "type": "DependencyAgentWindows",
- "typeHandlerVersion": "9.5",
+ "typeHandlerVersion": "9.10",
"autoUpgradeMinorVersion": true } }
The following JSON shows the schema for the Azure VM Dependency agent extension
| apiVersion | 2015-01-01 | | publisher | Microsoft.Azure.Monitoring.DependencyAgent | | type | DependencyAgentWindows |
-| typeHandlerVersion | 9.5 |
+| typeHandlerVersion | 9.10 |
## Template deployment
The following example assumes the Dependency agent extension is nested inside th
"properties": { "publisher": "Microsoft.Azure.Monitoring.DependencyAgent", "type": "DependencyAgentWindows",
- "typeHandlerVersion": "9.5",
+ "typeHandlerVersion": "9.10",
"autoUpgradeMinorVersion": true } }
When you place the extension JSON at the root of the template, the resource name
"properties": { "publisher": "Microsoft.Azure.Monitoring.DependencyAgent", "type": "DependencyAgentWindows",
- "typeHandlerVersion": "9.5",
+ "typeHandlerVersion": "9.10",
"autoUpgradeMinorVersion": true } }
Set-AzVMExtension -ExtensionName "Microsoft.Azure.Monitoring.DependencyAgent" `
-VMName "myVM" ` -Publisher "Microsoft.Azure.Monitoring.DependencyAgent" ` -ExtensionType "DependencyAgentWindows" `
- -TypeHandlerVersion 9.5 `
+ -TypeHandlerVersion 9.10 `
-Location WestUS ```
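To confirm which handler version actually landed on a VM after deployment, a quick check with Azure PowerShell (placeholder resource names, Az.Compute module assumed) might look like the following:

```powershell
# Placeholder names; lists the Dependency agent extension and its handler version.
Get-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" `
    -Name "Microsoft.Azure.Monitoring.DependencyAgent" |
    Select-Object Name, TypeHandlerVersion, ProvisioningState
```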
virtual-machines Hbv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-HBv3-series VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7003-series (Milan) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth, up to 32 MB of L3 cache per core, up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.675 GHz.
+HBv3-series VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7V73X (Milan-X) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth (amplified up to 630 GB/s), up to 96 MB of L3 cache per core (1.536 GB total per VM), up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.5 GHz.
All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. The HDR InfiniBand fabric also supports Adaptive Routing and the Dynamic Connected Transport (DCT, in addition to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage is strongly recommended.
All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to
|Size |vCPU |Processor |Memory (GiB) |Memory bandwidth GB/s |Base CPU frequency (GHz) |All-cores frequency (GHz, peak) |Single-core frequency (GHz, peak) |RDMA performance (Gb/s) |MPI support |Temp storage (GiB) |Max data disks |Max Ethernet vNICs | |-|-|-|-|-|-|-|-|-|-|-|-|-|
-|Standard_HB120rs_v3 |120 |AMD EPYC 7V13 |448 |350 |2.45 |3.1 |3.675 |200 |All |2 * 960 |32 |8 |
-|Standard_HB120-96rs_v3 |96 |AMD EPYC 7V13 |448 |350 |2.45 |3.1 |3.675 |200 |All |2 * 960 |32 |8 |
-|Standard_HB120-64rs_v3 |64 |AMD EPYC 7V13 |448 |350 |2.45 |3.1 |3.675 |200 |All |2 * 960 |32 |8 |
-|Standard_HB120-32rs_v3 |32 |AMD EPYC 7V13 |448 |350 |2.45 |3.1 |3.675 |200 |All |2 * 960 |32 |8 |
-|Standard_HB120-16rs_v3 |16 |AMD EPYC 7V13 |448 |350 |2.45 |3.1 |3.675 |200 |All |2 * 960 |32 |8 |
+|Standard_HB120rs_v3 |120 |AMD EPYC 7V73X |448 |350 |1.9 |3.0 |3.5 |200 |All |2 * 960 |32 |8 |
+|Standard_HB120-96rs_v3 |96 |AMD EPYC 7V73X |448 |350 |1.9 |3.0 |3.5 |200 |All |2 * 960 |32 |8 |
+|Standard_HB120-64rs_v3 |64 |AMD EPYC 7V73X |448 |350 |1.9 |3.0 |3.5 |200 |All |2 * 960 |32 |8 |
+|Standard_HB120-32rs_v3 |32 |AMD EPYC 7V73X |448 |350 |1.9 |3.0 |3.5 |200 |All |2 * 960 |32 |8 |
+|Standard_HB120-16rs_v3 |16 |AMD EPYC 7V73X |448 |350 |1.9 |3.0 |3.5 |200 |All |2 * 960 |32 |8 |
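If it helps to confirm which of the sizes in this table are actually offered in a target region, a quick Azure CLI query along these lines can be used; the region name is only an example.
```
# Sketch: list the HB120*_v3 sizes available in a region, with vCPU and memory columns.
# "eastus" is an example region - substitute your own.
az vm list-sizes --location eastus --output table \
  --query "[?contains(name, 'HB120') && contains(name, '_v3')].{Name:name, vCPUs:numberOfCores, MemoryMB:memoryInMb}"
```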
Learn more about the: - [Architecture and VM topology](./workloads/hpc/hbv3-series-overview.md)
virtual-machines Large Instance Os Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/large-instance-os-backup.md
This article walks through the steps to do an operating system (OS) file-level b
## OS backup and restore for Type II SKUs of Revision 3 stamps
-The information below describes the steps to do an operating system file-level backup and restore for **Type II SKUs** of HANA Large Instances Revision 3.
-
->[!Important]
-> **This article does not apply to Type II SKU deployments in Revision 4 HANA Large Instance stamps.** Boot LUNs of Type II HANA Large Instances deployed in Revision 4 HANA Large Instance stamps can be backed up with storage snapshots, as is already the case for Type I SKUs in Revision 3 stamps.
--
->[!NOTE]
->The OS backup scripts use the ReaR software, which is pre-installed on the server.
-
-After provisioning is complete by the Microsoft Service Management team, by default, the server is configured with two schedules to back up the file-system level of the OS. You can check the schedules of the backup jobs by using the following command:
-
-```
-#crontab -l
-```
-You can change the backup schedule anytime by using the following command:
-```
-#crontab -e
-```
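For illustration only, a ReaR backup entry in the crontab might look like the following; the schedules actually pre-configured by the Microsoft Service Management team will differ.
```
# Purely illustrative cron entry: run a ReaR file-level backup nightly at 01:30.
# The pre-configured schedules and paths on the server may differ.
30 1 * * * /usr/sbin/rear -v mkbackup >> /var/log/rear-cron.log 2>&1
```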
-### Take a manual backup
-
-The OS file system backup is scheduled using a **cron job** already. However, you can do the operating system file-level backup manually as well. To do a manual backup, run the following command:
-
-```
-#rear -v mkbackup
-```
-The following screenshot shows a sample manual backup:
-
-![how](media/HowToHLI/OSBackupTypeIISKUs/HowtoTakeManualBackup.PNG)
--
-### Restore a backup
-
-You can restore a full backup or an individual file from a backup. To restore, use the following command:
-
-```
-#tar -xvf <backup file> [Optional <file to restore>]
-```
-After the restore, the file is recovered in the current working directory.
-
-The following command shows the restore of the file */etc/fstab* from the backup file *backup.tar.gz*:
-```
-#tar -xvf /osbackups/hostname/backup.tar.gz etc/fstab
-```
->[!NOTE]
->You need to copy the file to the desired location after it is restored from the backup.
-
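Before extracting, it can help to list the archive contents and confirm the exact path of the file you want back; this is standard `tar` usage rather than a step from the original article.
```
#tar -tvf /osbackups/hostname/backup.tar.gz | grep fstab
```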
-The following screenshot shows the restore of a complete backup.
-
-![Screenshot shows a command prompt window with the restore.](media/HowToHLI/OSBackupTypeIISKUs/HowtoRestoreaBackup.PNG)
-
-### Install the ReaR tool and change the configuration
-
-The Relax-and-Recover (ReaR) packages are **pre-installed** in the **Type II SKUs** of HANA Large Instances. No action is needed from you. You can directly start using the ReaR tool for the operating system backup.
-
-However, in circumstances where you need to install the packages on your own, you can use the following steps to install and configure the ReaR tool.
-
-To install the **ReaR** backup packages, use the following commands:
-
-For the **SLES** operating system, use the following command:
-```
-#zypper install <rear rpm package>
-```
-For the **RHEL** operating system, use the following command:
-
-```
-#yum install rear -y
-```
-To configure the ReaR tool, you need to update the parameters **OUTPUT_URL** and **BACKUP_URL** in the file */etc/rear/local.conf*.
-
-```
-OUTPUT=ISO
-ISO_MKISOFS_BIN=/usr/bin/ebiso
-BACKUP=NETFS
-OUTPUT_URL="nfs://nfsip/nfspath/"
-BACKUP_URL="nfs://nfsip/nfspath/"
-BACKUP_OPTIONS="nfsvers=4,nolock"
-NETFS_KEEP_OLD_BACKUP_COPY=
-EXCLUDE_VG=( vgHANA-data-HC2 vgHANA-data-HC3 vgHANA-log-HC2 vgHANA-log-HC3 vgHANA-shared-HC2 vgHANA-shared-HC3 )
-BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/media' '/var/tmp/*' '/var/crash' '/hana' '/usr/sap' '/proc')
-```
-
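After editing */etc/rear/local.conf*, ReaR can print the configuration it will actually use; `rear dump` is a standard ReaR subcommand and is shown here only as an optional sanity check.
```
#rear dump
```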
-The following screenshot shows the ReaR tool configuration:
-![Screenshot shows a command prompt window with the restore using the ReaR tool.](media/HowToHLI/OSBackupTypeIISKUs/RearToolConfiguration.PNG)
+For these steps, see [OS backup and restore for Type II SKUs of Revision 3 stamps](./os-backup-hli-type-ii-skus.md).
## OS backup and restore for all other SKUs
virtual-wan High Availability Vpn Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/high-availability-vpn-client.md
+
+ Title: 'Configure High Availability connections for P2S User VPN clients'
+
+description: Learn how to configure High Availability connections for Virtual WAN P2S User VPN clients.
+ Last updated: 04/18/2022
+# Configure High Availability connections for Virtual WAN P2S User VPN clients
+
+This article helps you configure and connect using the High Availability setting for Virtual WAN point-to-site (P2S) User VPN clients. This feature is only available for P2S clients connecting to Virtual WAN VPN gateways using the OpenVPN protocol.
+
+By default, every Virtual WAN VPN gateway consists of two instances in an active-active configuration. If anything happens to the gateway instance that the VPN client is connected to, the tunnel will be disconnected. P2S VPN clients must then initiate a connection to the new active instance.
+
+When **High Availability** is configured for the Azure VPN Client, if a failover occurs, the client connection isn't interrupted.
+
+> [!NOTE]
+> High Availability is supported for OpenVPN® protocol connections only and requires the Azure VPN Client.
+
+## <a name = "windows"></a>Windows
+
+### <a name = "download"></a>Download the Azure VPN Client
+
+To use this feature, you must install version **2.1901.41.0** or later of the Azure VPN Client.
++
+### <a name = "import"></a>Configure VPN client settings
+
+1. Use the [Point-to-site VPN for Azure AD authentication](virtual-wan-point-to-site-azure-ad.md#download-profile) article as a general guideline to generate client profile files. The OpenVPN® tunnel type is required for High Availability (a command-line check for the tunnel type is sketched after these steps). If the generated client profile files don't contain an **OpenVPN** folder, your point-to-site User VPN configuration settings need to be modified to use the OpenVPN tunnel type.
+
+1. Configure the Azure VPN Client using the steps in the [Configure the Azure VPN Client](virtual-wan-point-to-site-azure-ad.md#configure-client) article as a guideline.
+
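To confirm from the command line that the User VPN (server) configuration already uses the OpenVPN tunnel type, a query along the following lines should work; the configuration and resource group names are placeholders, and availability of the `vpn-server-config` command group depends on your Azure CLI version.
```
# Sketch: check the tunnel protocols on a Virtual WAN User VPN (server) configuration.
# "myVpnServerConfig" and "myResourceGroup" are placeholder names.
az network vpn-server-config show \
  --name myVpnServerConfig \
  --resource-group myResourceGroup \
  --query vpnProtocols
```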
+### <a name = "HA"></a>Configure High Availability settings
+
+1. Open the Azure VPN Client and go to **Settings**.
+
+ :::image type="content" source="./media/high-availability-vpn-client/settings.png" alt-text="Screenshot shows VPN client with settings selected." lightbox="./media/high-availability-vpn-client/settings-expand.png":::
+
+1. On the **Settings** page, select **Enable High Availability**.
+
+ :::image type="content" source="./media/high-availability-vpn-client/enable.png" alt-text="Screenshot shows High Availability checkbox." lightbox="./media/high-availability-vpn-client/enable-expand.png":::
+
+1. On the home page for the client, save your settings.
+
+1. Connect to the VPN. After connecting, you'll see **Connected (HA)** in the left pane. You can also see the connection in the **Status logs**.
+
+ :::image type="content" source="./media/high-availability-vpn-client/ha-logs.png" alt-text="Screenshot shows High Availability in left pane and in status logs." lightbox="./media/high-availability-vpn-client/ha-logs-expand.png":::
+
+1. If you later decide that you don't want to use HA, deselect the **Enable High Availability** checkbox on the Azure VPN Client and reconnect to the VPN.
+
+## <a name = "macOS"></a>macOS
+
+1. Use the steps in the [Azure AD - macOS](openvpn-azure-ad-client-mac.md) article as a configuration guideline. The settings you configure may be different than the configuration example in the article, depending on what type of authentication you're using. Configure the Azure VPN Client with the settings specified in the VPN client profile.
+
+1. Open the **Azure VPN Client** and click **Settings** at the bottom of the page.
+
+    :::image type="content" source="./media/high-availability-vpn-client/mac-settings.png" alt-text="Screenshot that shows the Settings button." lightbox="./media/high-availability-vpn-client/mac-settings.png":::
+
+1. On the **Settings** page, select **Enable High Availability**. Settings are automatically saved.
+
+ :::image type="content" source="./media/high-availability-vpn-client/mac-ha-settings.png" alt-text="Screenshot shows Enable High Availability." lightbox="./media/high-availability-vpn-client/mac-ha-settings-expand.png":::
+
+1. Click **Connect**. Once you're connected, you can view the connection status in the left pane and in the **Status logs**.
+
+    :::image type="content" source="./media/high-availability-vpn-client/mac-connected.png" alt-text="Screenshot that shows macOS status logs and the H A connection status." lightbox="./media/high-availability-vpn-client/mac-connected-expand.png":::
+
+## Next steps
+
+For VPN client profile information, see [Global and hub-based profiles](global-hub-profile.md).