Updates from: 03/21/2023 02:13:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
Define the OpenId Connect identity provider by adding it to the **ClaimsProvider
<OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" /> <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" /> <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="oid"/>
</OutputClaims> <OutputClaimsTransformations> <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName"/>
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Previously updated : 02/17/2023 Last updated : 03/20/2023
zone_pivot_groups: b2c-policy-type
# Configure SAML identity provider options with Azure Active Directory B2C
-Azure Active Directory B2C (Azure AD B2C) supports federation with SAML 2.0 identity providers. This article describes the configuration options that are available when enabling sign-in with a SAML identity provider.
+Azure Active Directory B2C (Azure AD B2C) supports federation with SAML 2.0 identity providers. This article describes how to parse the security assertions, and the configuration options that are available when enabling sign-in with a SAML identity provider.
+ [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
Azure Active Directory B2C (Azure AD B2C) supports federation with SAML 2.0 iden
::: zone pivot="b2c-custom-policy" + ## Claims mapping The **OutputClaims** element contains a list of claims returned by the SAML identity provider. You need to map the name of the claim defined in your policy to the name defined in the identity provider. Check your identity provider for the list of claims (assertions). You can also check the content of the SAML response your identity provider returns. For more information, see [Debug the SAML messages](#debug-saml-protocol). To add a claim, first [define a claim](claimsschema.md), then add the claim to the output claims collection.
-You can also include claims that aren't returned by the identity provider, as long as you set the `DefaultValue` attribute. The default value can be static or dynamic, using [context claims](#enable-use-of-context-claims).
+You can also include claims that aren't returned by the identity provider, as long as you set the `DefaultValue` attribute. The default value can be static or dynamic, using [context claims](#enable-use-of-context-claims).
The output claim element contains the following attributes: - **ClaimTypeReferenceId** is the reference to a claim type. - **PartnerClaimType** is the name of the property that appears in the SAML assertion. -- **DefaultValue** is a predefined default value. If the claim is empty, the default value will be used. You can also use a [claim resolvers](claim-resolver-overview.md) with a contextual value, such as the correlation ID, or the user IP address.
+- **DefaultValue** is a predefined default value. If the claim is empty, the default value is used. You can also use a [claim resolver](claim-resolver-overview.md) with a contextual value, such as the correlation ID, or the user IP address.
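Putting these attributes together, a minimal sketch of a claims mapping might look like the following (the `displayName` mapping and the static default value are illustrative, not from the article):

```xml
<OutputClaims>
  <!-- Maps the policy claim to the assertion name returned by the identity provider -->
  <OutputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
  <!-- Not returned by the identity provider; populated from a static default value -->
  <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="contoso.com" />
</OutputClaims>
```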
### Subject name
-To read the SAML assertion **NameId** in the **Subject** as a normalized claim, set the claim **PartnerClaimType** to the value of the `SPNameQualifier` attribute. If the `SPNameQualifier`attribute is not presented, set the claim **PartnerClaimType** to value of the `NameQualifier` attribute.
+To read the SAML assertion **NameId** in the **Subject** as a normalized claim, set the claim **PartnerClaimType** to the value of the `SPNameQualifier` attribute. If the `SPNameQualifier` attribute isn't present, set the claim **PartnerClaimType** to the value of the `NameQualifier` attribute.
SAML assertion:
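A sketch of such an assertion subject, where the `SPNameQualifier` value matches the output claim that follows (the values are illustrative):

```xml
<saml:Subject>
  <saml:NameID SPNameQualifier="http://your-idp.com/unique-identifier"
               Format="urn:oasis:names:tc:SAML:2.0:nameid-format:unspecified">david@contoso.com</saml:NameID>
</saml:Subject>
```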
Output claim:
<OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="http://your-idp.com/unique-identifier" /> ```
-If both `SPNameQualifier` or `NameQualifier` attributes are not presented in the SAML assertion, set the claim **PartnerClaimType** to `assertionSubjectName`. Make sure the **NameId** is the first value in assertion XML. When you define more than one assertion, Azure AD B2C picks the subject value from the last assertion.
+If neither the `SPNameQualifier` nor the `NameQualifier` attribute is present in the SAML assertion, set the claim **PartnerClaimType** to `assertionSubjectName`, as shown in the sketch that follows. Make sure the **NameId** is the first value in the assertion XML. When you define more than one assertion, Azure AD B2C picks the subject value from the last assertion.
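For that case, the output claim might look like the following sketch:

```xml
<OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="assertionSubjectName" />
```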
## Configure SAML protocol bindings The SAML requests are sent to the identity provider as specified in the identity provider's metadata `SingleSignOnService` element. Most of the identity providers' authorization requests are carried directly in the URL query string of an HTTP GET request (as the messages are relatively short). Refer to your identity provider documentation for how to configure the bindings for both SAML requests.
-The following is an example of an Azure AD metadata single sign-on service with two bindings. The `HTTP-Redirect` takes precedence over the `HTTP-POST` because it appears first in the SAML identity provider metadata.
+The following XML is an example of an Azure AD metadata single sign-on service with two bindings. The `HTTP-Redirect` takes precedence over the `HTTP-POST` because it appears first in the SAML identity provider metadata.
```xml <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
The following is an example of an Azure AD metadata single sign-on service with
### Assertion consumer service
-The Assertion Consumer Service (or ACS) is where the identity provider SAML responses can be sent and received by Azure AD B2C. SAML responses are transmitted to Azure AD B2C via HTTP POST binding. The ACS location points to your relying party's base policy. For example, if the relying policy is *B2C_1A_signup_signin*, the ACS is the base policy of the *B2C_1A_signup_signin*, such as *B2C_1A_TrustFrameworkBase*.
+The Assertion Consumer Service (or ACS) is where the identity provider SAML responses are sent and received by Azure AD B2C. SAML responses are transmitted to Azure AD B2C via HTTP POST binding. The ACS location points to your relying party's base policy. For example, if the relying party policy is *B2C_1A_signup_signin*, the ACS is the base policy of *B2C_1A_signup_signin*, such as *B2C_1A_TrustFrameworkBase*.
-The following is an example of an Azure AD B2C policy metadata assertion consumer service element.
+The following XML is an example of an Azure AD B2C policy metadata assertion consumer service element.
```xml <SPSSODescriptor AuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
Azure AD B2C uses `Sha1` to sign the SAML request. Use the **XmlSignatureAlgorit
### Include key info
-When the identity provider indicates that Azure AD B2C binding is set to `HTTP-POST`, Azure AD B2C includes the signature and the algorithm in the body of the SAML request. You can also configure Azure AD to include the public key of the certificate when the binding is set to `HTTP-POST`. Use the **IncludeKeyInfo** metadata to `true`, or `false`. In the following example, Azure AD will not include the public key of the certificate.
+When the identity provider indicates that Azure AD B2C binding is set to `HTTP-POST`, Azure AD B2C includes the signature and the algorithm in the body of the SAML request. You can also configure Azure AD to include the public key of the certificate when the binding is set to `HTTP-POST`. Set the **IncludeKeyInfo** metadata to `true` or `false`. In the following example, Azure AD doesn't include the public key of the certificate.
```xml <Metadata>
You can use [context claims](claim-resolver-overview.md), such as `{OIDC:LoginHi
### Name ID policy format
-By default, the SAML authorization request specifies the `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified` policy. This indicates that any type of identifier supported by the identity provider for the requested subject can be used.
+By default, the SAML authorization request specifies the `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified` policy. This name ID indicates that any type of identifier supported by the identity provider for the requested subject can be used.
To change this behavior, refer to your identity provider's documentation for guidance about which name ID policies are supported. Then add the `NameIdPolicyFormat` metadata in the corresponding policy format. For example:
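A sketch of that metadata item, assuming the identity provider supports the persistent format:

```xml
<Metadata>
  <Item Key="NameIdPolicyFormat">urn:oasis:names:tc:SAML:2.0:nameid-format:persistent</Item>
</Metadata>
```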
The following example demonstrates an authorization request with **AllowCreate**
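As a sketch (the name ID format assumes the persistent format configured above; other request attributes are elided):

```xml
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" ... >
  <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" AllowCreate="true" />
</samlp:AuthnRequest>
```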
You can force the external SAML IDP to prompt the user for authentication by passing the `ForceAuthN` property in the SAML authentication request. Your identity provider must also support this property.
-The `ForceAuthN` property is a Boolean `true` or `false` value. By default, Azure AD B2C sets the ForceAuthN value to `false`. If the session is then reset (for example by using the `prompt=login` in OIDC) then the ForceAuthN value will be set to `true`. Setting the metadata item as shown below will force the value for all requests to the external IDP.
+The `ForceAuthN` property is a Boolean `true` or `false` value. By default, Azure AD B2C sets the ForceAuthN value to `false`. If the session is then reset (for example by using the `prompt=login` in OIDC), then the `ForceAuthN` value is set to `true`. Setting the `ForceAuthN` metadata to `true` forces the value for all requests to the external IDP.
The following example shows the `ForceAuthN` property set to `true`:
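A sketch of that metadata item:

```xml
<Metadata>
  <Item Key="ForceAuthN">true</Item>
</Metadata>
```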
The following example shows the `ForceAuthN` property in an authorization reques
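A sketch of the resulting request attribute (other attributes and elements are elided):

```xml
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    ID="_11111111-0000-0000-0000-000000000000"
    Version="2.0"
    ForceAuthn="true" ... >
</samlp:AuthnRequest>
```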
### Provider name
-You can optionally include the `ProviderName` attribute in the SAML authorization request. Set the metadata item as shown below to include the provider name for all requests to the external SAML IDP. The following example shows the `ProviderName` property set to `Contoso app`:
+You can optionally include the `ProviderName` attribute in the SAML authorization request. Set the `ProviderName` metadata to include the provider name for all requests to the external SAML IDP. The following example shows the `ProviderName` property set to `Contoso app`:
```xml <Metadata>
The following example illustrates the use of extension data:
``` > [!NOTE]
-> Per the SAML specification, the extension data must be namespace-qualified XML (for example, 'urn:ext:custom' shown in the sample above), and it must not be one of the SAML-specific namespaces.
+> Per the SAML specification, the extension data must be namespace-qualified XML (for example, 'urn:ext:custom' shown in the sample), and it must not be one of the SAML-specific namespaces.
-When using the SAML protocol message extension, the SAML response will look like the following example:
+With the SAML protocol message extension, the SAML request looks like the following example:
```xml <samlp:AuthnRequest ... >
When using the SAML protocol message extension, the SAML response will look like
## Require signed SAML responses
-Azure AD B2C requires all incoming assertions to be signed. You can remove this requirement by setting the **WantsSignedAssertions** to `false`. The identity provider shouldn't sign the assertions in this case, but even if it does, Azure AD B2C won't validate the signature.
+Azure AD B2C requires all incoming assertions to be signed. You can remove this requirement by setting the **WantsSignedAssertions** to `false`. The identity provider shouldn't sign the assertions in this case, but even if it does, Azure AD B2C doesn't validate the signature.
The **WantsSignedAssertions** metadata controls the SAML metadata flag **WantAssertionsSigned**, which is included in the metadata of the Azure AD B2C technical profile that is shared with the identity provider.
The **WantsSignedAssertions** metadata controls the SAML metadata flag **WantAss
<SPSSODescriptor AuthnRequestsSigned="true" WantAssertionsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> ```
-If you disable the assertions validation, you might also want to disable the response message signature validation. Set the **ResponsesSigned** metadata to `false`. The identity provider shouldn't sign the SAML response message in this case, but even if it does, Azure AD B2C won't validate the signature.
+If you disable the assertions validation, you might also want to disable the response message signature validation. Set the **ResponsesSigned** metadata to `false`. The identity provider shouldn't sign the SAML response message in this case, but even if it does, Azure AD B2C doesn't validate the signature.
The following example removes both the message and the assertion signature:
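A sketch of both metadata items, using the names described above:

```xml
<Metadata>
  <Item Key="WantsSignedAssertions">false</Item>
  <Item Key="ResponsesSigned">false</Item>
</Metadata>
```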
To encrypt the SAML response assertion:
## Enable use of context claims
-In the input and output claims collection, you can include claims that aren't returned by the identity provider as long as you set the `DefaultValue` attribute. You can also use [context claims](claim-resolver-overview.md) to be included in the technical profile. To use a context claim:
+In the input and output claims collection, you can include claims that aren't returned by the identity provider, as long as you set the `DefaultValue` attribute. You can also include [context claims](claim-resolver-overview.md) in the technical profile. To use a context claim:
1. Add a claim type to the [ClaimsSchema](claimsschema.md) element within [BuildingBlocks](buildingblocks.md). 2. Add an output claim to the input or output collection. In the following example, the first claim sets the value of the identity provider. The second claim uses the user IP address [context claims](claim-resolver-overview.md).
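A sketch of those two output claims (the `ipAddress` claim name is illustrative and must first be defined in the [ClaimsSchema](claimsschema.md); `{Context:IPAddress}` is a context claim resolver):

```xml
<OutputClaims>
  <!-- Sets a static default value because the identity provider doesn't return this claim -->
  <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="contoso.com" />
  <!-- Resolves the user's IP address with a context claim resolver -->
  <OutputClaim ClaimTypeReferenceId="ipAddress" AlwaysUseDefaultValue="true" DefaultValue="{Context:IPAddress}" />
</OutputClaims>
```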
Upon an application sign-out request, Azure AD B2C attempts to sign out from you
## Debug SAML protocol
-To help configure and debug federation with a SAML identity provider, you can use a browser extension for the SAML protocol, such as [SAML DevTools extension](https://chrome.google.com/webstore/detail/saml-devtools-extension/jndllhgbinhiiddokbeoeepbppdnhhio) for Chrome, [SAML-tracer](https://addons.mozilla.org/es/firefox/addon/saml-tracer/) for FireFox, or [Edge or IE Developer tools](https://techcommunity.microsoft.com/t5/microsoft-sharepoint-blog/gathering-a-saml-token-using-edge-or-ie-developer-tools/ba-p/320957).
+To help configure and debug federation with a SAML identity provider, you can use a browser extension for the SAML protocol, such as [SAML DevTools extension](https://chrome.google.com/webstore/detail/saml-devtools-extension/jndllhgbinhiiddokbeoeepbppdnhhio) for Chrome, [SAML-tracer](https://addons.mozilla.org/es/firefox/addon/saml-tracer/) for Firefox, or [Microsoft Edge or IE Developer tools](https://techcommunity.microsoft.com/t5/microsoft-sharepoint-blog/gathering-a-saml-token-using-edge-or-ie-developer-tools/ba-p/320957).
Using these tools, you can check the integration between Azure AD B2C and your SAML identity provider. For example:
-* Check if the SAML request contains a signature and determine what algorithm is used to sign in the authorization request.
-* Get the claims (assertions) under the `AttributeStatement` section.
-* Check if the identity provider returns an error message.
-* Check if the assertion section is encrypted.
+- Check if the SAML request contains a signature and determine what algorithm is used to sign the authorization request.
+- Get the claims (assertions) under the `AttributeStatement` section.
+- Check if the identity provider returns an error message.
+- Check if the assertion section is encrypted.
+
+## SAML request and response samples
+
+Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between an identity provider and a service provider. When Azure AD B2C federates with a SAML identity provider, it acts as a **service provider**, sending a SAML request to the SAML **identity provider** and waiting for a SAML response.
+
+A successful SAML response contains security **assertions**, which are statements made by the external SAML identity provider. Azure AD B2C parses and [maps the assertions](#claims-mapping) into claims.
+
+### Authorization request
+
+To request user authentication, Azure AD B2C sends an `AuthnRequest` element to the external SAML identity provider. A sample SAML 2.0 `AuthnRequest` could look like the following example:
+
+```xml
+<samlp:AuthnRequest
+ xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+ ID="_11111111-0000-0000-0000-000000000000"
+ Version="2.0"
+ IssueInstant="2023-03-20T07:10:00.0000000Z"
+ Destination="https://fabrikam.com/saml2"
+ ForceAuthn="false"
+ IsPassive="false"
+ ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
+ AssertionConsumerServiceURL="https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_TrustFrameworkBase/samlp/sso/assertionconsumer"
+ ProviderName="https://fabrikam.com"
+ xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
+ <saml:Issuer
+ Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_TrustFrameworkBase
+ </saml:Issuer>
+</samlp:AuthnRequest>
+```
+
+### Response
+
+When a requested sign-on completes successfully, the external SAML identity provider posts a response to the Azure AD B2C [assertion consumer service](#assertion-consumer-service) endpoint. A response to a successful sign-on attempt looks like the following sample:
+
+```xml
+<samlp:Response
+ ID="_98765432-0000-0000-0000-000000000000"
+ Version="2.0"
+ IssueInstant="2023-03-20T07:11:30.0000000Z"
+ Destination="https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_TrustFrameworkBase/samlp/sso/assertionconsumer"
+ InResponseTo="_11111111-0000-0000-0000-000000000000"
+ xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
+ <Issuer
+ xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://fabrikam.com/
+ </Issuer>
+ <samlp:Status>
+ <samlp:StatusCode
+ Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
+ </samlp:Status>
+ <Assertion
+ ID="_55555555-0000-0000-0000-000000000000"
+ IssueInstant="2023-03-20T07:40:45.505Z"
+ Version="2.0"
+ xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
+ <Issuer>https://fabrikam.com/</Issuer>
+ <Signature
+ xmlns="http://www.w3.org/2000/09/xmldsig#">
+ ...
+ </Signature>
+ <Subject>
+ <NameID
+ Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent">ABCDEFG
+ </NameID>
+ ...
+ </Subject>
+ <AttributeStatement>
+ <Attribute Name="uid">
+ <AttributeValue>12345</AttributeValue>
+ </Attribute>
+ <Attribute Name="displayname">
+ <AttributeValue>David</AttributeValue>
+ </Attribute>
+ <Attribute Name="email">
+ <AttributeValue>david@contoso.com</AttributeValue>
+ </Attribute>
+ ....
+ </AttributeStatement>
+ <AuthnStatement
+ AuthnInstant="2023-03-20T07:40:45.505Z"
+ SessionIndex="_55555555-0000-0000-0000-000000000000">
+ <AuthnContext>
+ <AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</AuthnContextClassRef>
+ </AuthnContext>
+ </AuthnStatement>
+ </Assertion>
+</samlp:Response>
+```
+
+### Logout request
+
+Upon an application sign-out request, Azure AD B2C attempts to sign out from your SAML identity provider. Azure AD B2C sends a `LogoutRequest` message to the external IDP to indicate that a session has been terminated. The following excerpt shows a sample `LogoutRequest` element.
+
+The value of the `NameID` element matches the `NameID` of the user that is being signed out. The `SessionIndex` element matches the `SessionIndex` attribute of `AuthnStatement` in the sign-in SAML response.
+
+```xml
+<samlp:LogoutRequest
+ xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ ID="_22222222-0000-0000-0000-000000000000"
+ Version="2.0"
+ IssueInstant="2023-03-20T08:21:07.3679354Z"
+ Destination="https://fabrikam.com/saml2/logout"
+ xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
+ <saml:Issuer
+ Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_TrustFrameworkBase
+ </saml:Issuer>
+ <saml:NameID>ABCDEFG</saml:NameID>
+ <samlp:SessionIndex>_55555555-0000-0000-0000-000000000000</samlp:SessionIndex>
+</samlp:LogoutRequest>
+```
## Next steps
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 10/06/2022 Last updated : 03/20/2023
Microsoft Azure AD provides support for user provisioning to third-party SaaS applications such as Salesforce, G Suite and others. If you enable user provisioning for a third-party SaaS application, the Azure portal controls its attribute values through attribute-mappings.
-Before you get started, make sure you are familiar with app management and **single sign-on (SSO)** concepts. Check out the following links:
+Before you get started, make sure you're familiar with app management and **single sign-on (SSO)** concepts. Check out the following links:
- [Quickstart Series on App Management in Azure AD](../manage-apps/view-applications-portal.md) - [What is single sign-on (SSO)?](../manage-apps/what-is-single-sign-on.md)
Along with this property, attribute-mappings also support the following attribut
- **Source attribute** - The user attribute from the source system (example: Azure Active Directory). - **Target attribute** ΓÇô The user attribute in the target system (example: ServiceNow).-- **Default value if null (optional)** - The value that will be passed to the target system if the source attribute is null. This value will only be provisioned when a user is created. The "default value when null" will not be provisioned when updating an existing user. If for example, you want to provision all existing users in the target system with a particular Job Title (when it is null in the source system), you can use the following [expression](../app-provisioning/functions-for-customizing-application-data.md): Switch(IsPresent([jobTitle]), "DefaultValue", "True", [jobTitle]). Make sure to replace the "Default Value" with what you would like to provision when null in the source system.
+- **Default value if null (optional)** - The value that will be passed to the target system if the source attribute is null. This value will only be provisioned when a user is created. The "default value when null" won't be provisioned when updating an existing user. If, for example, you want to provision all existing users in the target system with a particular Job Title (when it's null in the source system), you can use the following [expression](../app-provisioning/functions-for-customizing-application-data.md): Switch(IsPresent([jobTitle]), "DefaultValue", "True", [jobTitle]). Make sure to replace the "Default Value" with what you would like to provision when null in the source system.
- **Match objects using this attribute** ΓÇô Whether this mapping should be used to uniquely identify users between the source and target systems. It's typically set on the userPrincipalName or mail attribute in Azure AD, which is typically mapped to a username field in a target application.-- **Matching precedence** ΓÇô Multiple matching attributes can be set. When there are multiple, they're evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated. While you can set as many matching attributes as you would like, consider whether the attributes you are using as matching attributes are truly unique and need to be matching attributes. Generally customers have 1 or 2 matching attributes in their configuration.
+- **Matching precedence** ΓÇô Multiple matching attributes can be set. When there are multiple, they're evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated. While you can set as many matching attributes as you would like, consider whether the attributes you're using as matching attributes are truly unique and need to be matching attributes. Generally customers have 1 or 2 matching attributes in their configuration.
- **Apply this mapping** - **Always** ΓÇô Apply this mapping on both user creation and update actions. - **Only during creation** - Apply this mapping only on user creation actions. ## Matching users in the source and target systems
-The Azure AD provisioning service can be deployed in both "green field" scenarios (where users do not exist in the target system) and "brownfield" scenarios (where users already exist in the target system). To support both scenarios, the provisioning service uses the concept of matching attributes. Matching attributes allow you to determine how to uniquely identify a user in the source and match the user in the target. As part of planning your deployment, identify the attribute that can be used to uniquely identify a user in the source and target systems. Things to note:
+The Azure AD provisioning service can be deployed in both "green field" scenarios (where users don't exist in the target system) and "brownfield" scenarios (where users already exist in the target system). To support both scenarios, the provisioning service uses the concept of matching attributes. Matching attributes allow you to determine how to uniquely identify a user in the source and match the user in the target. As part of planning your deployment, identify the attribute that can be used to uniquely identify a user in the source and target systems. Things to note:
- **Matching attributes should be unique:** Customers often use attributes such as userPrincipalName, mail, or object ID as the matching attribute. - **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they are evaluated (defined as matching precedence in the UI). If for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service will not evaluate the third attribute. The service will evaluate matching attributes in the order specified and stop evaluating when a match is found.
active-directory Scim Validator Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/scim-validator-tutorial.md
Previously updated : 03/17/2023 Last updated : 03/20/2023
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
Previously updated : 03/17/2023 Last updated : 03/20/2023
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
description: Learn how to use system-preferred multifactor authentication
Previously updated : 03/16/2023 Last updated : 03/20/2023
Content-Type: application/json
### How does system-preferred MFA determine the most secure method?
-When a user signs in, the authentication process checks which authentication methods are registered for the user. The user is prompted to sign-in with the most secure method according to the following order. The order of authentication methods is dynamic. It's updated as the security landscape changes, and as better authentication methods emerge.
-
-1. Temporary Access Pass
-1. Certificate-based authentication
-1. FIDO2 security key
-1. Microsoft Authenticator notification
-1. Companion app notification
-1. Microsoft Authenticator time-based one-time password (TOTP)
-1. Companion app TOTP
-1. Hardware token based TOTP
-1. Software token based TOTP
-1. SMS over mobile
-1. OnewayVoiceMobileOTP
-1. OnewayVoiceAlternateMobileOTP
-1. OnewayVoiceOfficeOTP
-1. TwowayVoiceMobile
-1. TwowayVoiceAlternateMobile
-1. TwowayVoiceOffice
-1. TwowaySMSOverMobile
+When a user signs in, the authentication process checks which authentication methods are registered for the user. The user is prompted to sign in with the most secure method according to the following order. The order of authentication methods is dynamic. It's updated as the security landscape changes, and as better authentication methods emerge. Select a link for more information about each method.
+
+1. [Temporary Access Pass](howto-authentication-temporary-access-pass.md)
+1. [Certificate-based authentication](concept-certificate-based-authentication.md)
+1. [FIDO2 security key](concept-authentication-passwordless.md#fido2-security-keys)
+1. [Time-based one-time password (TOTP)](concept-authentication-oath-tokens.md)<sup>1</sup>
+1. [Telephony](concept-authentication-phone-options.md)<sup>2</sup>
+
+<sup>1</sup> Includes hardware or software TOTP from Microsoft Authenticator, Authenticator Lite, or third-party applications.
+<sup>2</sup> Includes SMS and voice calls.
+ ### How does system-preferred MFA affect AD FS or NPS extension?
active-directory Msal Error Handling Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-js.md
The following error types are available:
- `InteractionRequiredAuthError`: Error class, extends `ServerError` to represent server errors, which require an interactive call. This error is thrown by `acquireTokenSilent` if the user is required to interact with the server to provide credentials or consent for authentication/authorization. Error codes include `"interaction_required"`, `"login_required"`, and `"consent_required"`.
-For error handling in authentication flows with redirect methods (`loginRedirect`, `acquireTokenRedirect`), you'll need to register the callback, which is called with success or failure after the redirect using `handleRedirectCallback()` method as follows:
+For error handling in authentication flows with redirect methods (`loginRedirect`, `acquireTokenRedirect`), you'll need to handle the redirect promise returned by the `handleRedirectPromise()` method, which resolves on success or rejects on failure after the redirect, as follows:
```javascript
-function authCallback(error, response) {
- //handle redirect response
-}
-
-var myMSALObj = new Msal.UserAgentApplication(msalConfig);
+const msal = require('@azure/msal-browser');
+const myMSALObj = new msal.PublicClientApplication(msalConfig);
// Register Callbacks for redirect flow
-myMSALObj.handleRedirectCallback(authCallback);
+myMSALObj.handleRedirectPromise()
+ .then(function (response) {
+ //success response
+ })
+ .catch((error) => {
+ console.log(error);
+ })
myMSALObj.acquireTokenRedirect(request); ```
myMSALObj.acquireTokenSilent(request).then(function (response) {
// call API }).catch( function (error) { // call acquireTokenPopup in case of acquireTokenSilent failure
- // due to consent or interaction required
- if (error.errorCode === "consent_required"
- || error.errorCode === "interaction_required"
- || error.errorCode === "login_required") {
+ // due to interaction required
+ if (error instanceof InteractionRequiredAuthError) {
myMSALObj.acquireTokenPopup(request).then( function (response) { // call API
myMSALObj.acquireTokenSilent(accessTokenRequest).then(function(accessTokenRespon
}).catch(function(error) { if (error instanceof InteractionRequiredAuthError) {
- // extract, if exists, claims from error message
- if (error.ErrorMessage.claims) {
- accessTokenRequest.claimsRequest = JSON.stringify(error.ErrorMessage.claims);
- }
+ // extract, if exists, claims from the error object
+ if (error.claims) {
+    accessTokenRequest.claims = error.claims;
// call acquireTokenPopup in case of InteractionRequiredAuthError failure myMSALObj.acquireTokenPopup(accessTokenRequest).then(function(accessTokenResponse) {
myMSALObj.acquireTokenSilent(accessTokenRequest).then(function(accessTokenRespon
Interactively acquiring the token prompts the user and gives them the opportunity to satisfy the required Conditional Access policy.
-When calling an API requiring Conditional Access, you can receive a claims challenge in the error from the API. In this case, you can pass the claims returned in the error to the `claimsRequest` field of the `AuthenticationParameters.ts` class to satisfy the appropriate policy.
-
-See [Requesting Additional Claims](active-directory-optional-claims.md) for more detail.
+When calling an API requiring Conditional Access, you can receive a claims challenge in the error from the API. In this case, you can pass the claims returned in the error to the `claims` parameter in the [access token request object](https://learn.microsoft.com/azure/active-directory/develop/msal-js-pass-custom-state-authentication-request) to satisfy the appropriate policy.
+See [How to use Continuous Access Evaluation enabled APIs in your applications](./app-resilience-continuous-access-evaluation.md) for more detail.
[!INCLUDE [Active directory error handling retries](../../../includes/active-directory-develop-error-handling-retries.md)] ## Next steps
-Consider enabling [Logging in MSAL.js](msal-logging-js.md) to help you diagnose and debug issues.
+Consider enabling [Logging in MSAL.js](msal-logging-js.md) to help you diagnose and debug issues.
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly.
+## February 2023
+
+### General Availability - Filter and transform group names in token claims configuration using regular expression
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+Filter and transform group names in token claims configuration using regular expression. Many application configurations on ADFS and other IdPs rely on the ability to create authorization claims based on the content of Group Names using regular expression functions in the claim rules. Azure AD now has the capability to use a regular expression match and replace function to create claim content based on Group **onpremisesSAMAccount** names. This functionality will allow those applications to be moved to Azure AD for authentication using the same group management patterns. For more information, see: [Configure group claims for applications by using Azure Active Directory](../hybrid/how-to-connect-fed-group-claims.md).
+++
+### General Availability - Filter groups in tokens using a substring match
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+Azure AD now has the capability to filter the groups included in the token using substring match on the display name or **onPremisesSAMAccountName** attributes of the group object. Only groups the user is a member of will be included in the token. This was a blocker for some of our customers to migrate their apps from ADFS to Azure AD. This feature will unblock those challenges.
+
+For more information, see:
+- [Group Filter](../develop/reference-claims-mapping-policy-type.md#group-filter).
+- [Configure group claims for applications by using Azure Active Directory](../hybrid/how-to-connect-fed-group-claims.md).
+++++
+### General Availability - New SSO claims transformation features
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+Azure AD now supports claims transformations on multi-valued attributes and can emit multi-valued claims. More functions are available to allow match and string operations on claims processing, enabling apps to be migrated from other IdPs to Azure AD. These include: match on Empty(), NotEmpty(), Prefix(), Suffix(), and extract substring operators. For more information, see: [Claims mapping policy type](../develop/reference-claims-mapping-policy-type.md).
+++
+### General Availability - New Detection for Service Principal Behavior Anomalies
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Security & Protection
+
+Post-authentication anomalous activity detection for workload identities. This detection focuses specifically on detection of post-authentication anomalous behavior performed by a workload identity (service principal). Post-authentication behavior will be assessed for anomalies based on an action and/or sequence of actions occurring for the account. Based on the scoring of anomalies identified, the offline detection may score the account as low, medium, or high risk. The risk allocation from the offline detection will be available within the Risky workload identities reporting blade. A new detection type identified as Anomalous service principal activity will appear in filter options. For more information, see: [Securing workload identities](../identity-protection/concept-workload-identity-risk.md).
+++
+### General Availability - Microsoft cloud settings for Azure AD B2B
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds:
+
+- Microsoft Azure commercial and Microsoft Azure Government
+- Microsoft Azure commercial and Microsoft Azure China 21Vianet
+
+For more information about Microsoft cloud settings for B2B collaboration, see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
+++
+### Public Preview - Support for Directory Extensions using Azure AD cloud sync
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Azure AD Connect Cloud Sync
+
+Hybrid IT Admins now can sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure AD, allowing customers to map the needed attributes using Cloud Sync's attribute mapping experience.
+
+For more information on how to enable this feature, see: [Cloud Sync directory extensions and custom attribute mapping](../cloud-sync/custom-attribute-mapping.md)
++++
+### General Availability - On-premises application provisioning
+
+**Type:** Changed feature
+**Service category:** Provisioning
+**Product capability:** Outbound to On-premises Applications
+
+Azure AD supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](../app-provisioning/on-premises-scim-provisioning.md) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](../app-provisioning/on-premises-ldap-connector-configure.md) user store, or a [SQL](../app-provisioning/tutorial-ecma-sql-connector.md) database, Azure AD can support those as well.
++ ## January 2023
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
Required permissions | For permissions required to apply an update, see [Azure A
> > The following versions will retire on 15 March 2023: >
-> - 2.0.91.0
> - 2.0.89.0 > - 2.0.88.0 > - 2.0.28.0
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-sso-deployment.md
Previously updated : 12/07/2022 Last updated : 03/20/2023
This article provides information that you can use to plan your [single sign-on (SSO)](what-is-single-sign-on.md) deployment in Azure Active Directory (Azure AD). When you plan your SSO deployment with your applications in Azure AD, you need to consider the following questions: - What are the administrative roles required for managing the application?-- Does the certificate need to be renewed?
+- Does the Security Assertion Markup Language (SAML) application certificate need to be renewed?
- Who needs to be notified of changes related to the implementation of SSO? - What licenses are needed to ensure effective management of the application?-- Are shared user accounts used to access the application?
+- Are shared and guest user accounts used to access the application?
- Do I understand the options for SSO deployment? ## Administrative Roles
Always use the role with the fewest permissions available to accomplish the requ
| Persona | Roles | Azure AD role (if necessary) | | - | -- | |
-| Help desk admin | Tier 1 support | None |
-| Identity admin | Configure and debug when issues involve Azure AD | Global Administrator |
+| Help desk admin | Tier 1 support; view the sign-in logs to resolve issues. | None |
+| Identity admin | Configure and debug when issues involve Azure AD | Cloud Application Administrator |
| Application admin | User attestation in application, configuration on users with permissions | None |
-| Infrastructure admins | Certificate rollover owner | Global Administrator |
+| Infrastructure admins | Certificate rollover owner | Cloud Application Administrator |
| Business owner/stakeholder | User attestation in application, configuration on users with permissions | None | To learn more about Azure AD administrative roles, see [Azure AD built-in roles](../users-groups-roles/directory-assign-admin-roles.md). ## Certificates
-When you enable federated SSO for your application, Azure AD creates a certificate that is by default valid for three years. You can customize the expiration date for that certificate if needed. Ensure that you have processes in place to renew certificates prior to their expiration.
+When you enable federation on a SAML application, Azure AD creates a certificate that is by default valid for three years. You can customize the expiration date for that certificate if needed. Ensure that you have processes in place to renew certificates prior to their expiration.
You change that certificate duration in the Azure portal. Make sure to document the expiration and know how you'll manage your certificate renewal. It's important to identify the right roles and email distribution lists involved with managing the lifecycle of the signing certificate. The following roles are recommended:
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
Title: Configure ServiceNow for automatic user provisioning with Azure Active Di
description: Learn how to automatically provision and deprovision user accounts from Azure AD to ServiceNow. --
+writer: twimmers
+
+ms.assetid: 5f03d8b7-c3a0-443e-91af-99cc3956fa18
-+ Last updated 3/10/2023
To configure automatic user provisioning for ServiceNow in Azure AD:
1. Set **Provisioning Mode** to **Automatic**. 1. In the **Admin Credentials** section, enter your ServiceNow tenant URL, Client ID, Client Secret and Authorization Endpoint. Select **Test Connection** to ensure that Azure AD can connect to ServiceNow. [This ServiceNow documentation](https://docs.servicenow.com/bundle/utah-platform-security/page/administer/security/task/t_CreateEndpointforExternalClients.html) outlines how to generate these values.
+ ![Screenshot that shows the Service Provisioning page, where you can enter admin credentials.](./media/servicenow-provisioning-tutorial/servicenow-provisioning.png)
-- Tenant URL: https://**InsertInstanceName**.service-now.com/api/now/scim-- Authorization Endpoint: https://**InsertInstanceName**.service-now.com/oauth_auth.do?response_type=code&client_id=**InsertClientID**&state=1&scope=useraccount&redirect_uri=https%3A%2F%2Fportal.azure.com%2FTokenAuthorize-- Token Endoint: https://**InsertInstanceName**.service-now.com/api/now/scim-
-![Screenshot that shows the Service Provisioning page, where you can enter admin credentials.](./media/servicenow-provisioning-tutorial/servicenow-provisioning.png)
-
+ > [!NOTE]
+ > - Tenant URL: https://**InsertInstanceName**.service-now.com/api/now/scim
+ > - Authorization Endpoint: https://**InsertInstanceName**.service-now.com/oauth_auth.do?response_type=code&client_id=**InsertClientID**&state=1&scope=useraccount&redirect_uri=https%3A%2F%2Fportal.azure.com%2FTokenAuthorize
+ > - Token Endpoint: https://**InsertInstanceName**.service-now.com/api/now/scim
+
1. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Then select the **Send an email notification when a failure occurs** check box. 1. Select **Save**.
active-directory Workload Identities Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identities-faqs.md
intended to expand.
## Do these licenses require individual workload identities assignment?
-No, license assignment isn't required. One license in the tenant unlocks features for workload identities.
+No, license assignment isn't required.
## Can I get a free trial of Workload Identities Premium?
Yes, it's available.
## Is it possible to have a mix of Azure AD Premium P1, Azure AD Premium P2 and Workload Identities Premium licenses in one tenant?
-Yes, customers can have a mixture of license plans in one tenant.
+Yes, customers can have a mixture of license plans in one tenant.
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
## Create an AKS cluster 1. Sign in to the [Azure portal](https://portal.azure.com).-
-2. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-
-3. Select **Containers** > **Kubernetes Service**.
-
-4. On the **Basics** page, configure the following options:
+1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
+1. Select **Containers** > **Kubernetes Service**.
+1. On the **Basics** page, configure the following options:
- **Project details**: * Select an Azure **Subscription**. * Select or create an Azure **Resource group**, such as *myResourceGroup*. - **Cluster details**:
- * Ensure the the **Preset configuration** is *Standard ($$)*. For more details on preset configurations, see [Cluster configuration presets in the Azure portal][preset-config].
+ * Ensure the **Preset configuration** is *Standard ($$)*. For more details on preset configurations, see [Cluster configuration presets in the Azure portal][preset-config].
* Enter a **Kubernetes cluster name**, such as *myAKSCluster*. * Select a **Region** for the AKS cluster, and leave the default value selected for **Kubernetes version**. * Select **99.5%** for **API server availability**.
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
> You can change the preset configuration when creating your cluster by selecting *Learn more and compare presets* and choosing a different option. > :::image type="content" source="media/quick-kubernetes-deploy-portal/cluster-preset-options.png" alt-text="Screenshot of Create AKS cluster - portal preset options.":::
-5. Select **Next: Node pools** when complete.
-
-6. Keep the default **Node pools** options. At the bottom of the screen, click **Next: Access**.
-
-7. On the **Access** page, configure the following options:
+1. Select **Next: Node pools** when complete.
+1. Keep the default **Node pools** options. At the bottom of the screen, click **Next: Access**.
+1. On the **Access** page, configure the following options:
- The default value for **Resource identity** is **System-assigned managed identity**. Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. For more details about managed identities, see [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md). - The Kubernetes role-based access control (RBAC) option is the default value to provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster. By default, *Basic* networking is used, and [Container insights](../../azure-monitor/containers/container-insights-overview.md) is enabled.
-8. Click **Review + create**. When you navigate to the **Review + create** tab, Azure runs validation on the settings that you have chosen. If validation passes, you can proceed to create the AKS cluster by selecting **Create**. If validation fails, then it indicates which settings need to be modified.
-
-9. It takes a few minutes to create the AKS cluster. When your deployment is complete, navigate to your resource by either:
+1. Select **Next: Networking** when complete.
+1. Keep the default **Networking** options. At the bottom of the screen, click **Next: Integrations**.
+1. On the **Integrations** page, if you want to enable the [recommended out-of-the-box alerts](../../azure-monitor/alerts/alerts-overview.md#recommended-alert-rules) for AKS clusters, select **Enable recommended alert rules**. You can see the list of alerts that are automatically enabled if you select this option.
+1. Click **Review + create**. When you navigate to the **Review + create** tab, Azure runs validation on the settings that you have chosen. If validation passes, you can proceed to create the AKS cluster by selecting **Create**. If validation fails, then it indicates which settings need to be modified.
+1. It takes a few minutes to create the AKS cluster. When your deployment is complete, navigate to your resource by either:
* Selecting **Go to resource**, or * Browsing to the AKS cluster resource group and selecting the AKS resource. In this example you browse for *myResourceGroup* and select the resource *myAKSCluster*.
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
Title: Managed NAT Gateway
+ Title: Create a managed or user-assigned NAT gateway
-description: Learn how to create an AKS cluster with managed NAT integration
+description: Learn how to create an AKS cluster with managed NAT integration and user-assigned NAT gateway.
Last updated 10/26/2021
-# Managed NAT Gateway
+# Create a managed or user-assigned NAT gateway
While you can route egress traffic through an Azure Load Balancer, there are limitations on the amount of outbound flows of traffic you can have. Azure NAT Gateway allows up to 64,512 outbound UDP and TCP traffic flows per IP address with a maximum of 16 IP addresses.
-This article shows you how to create an AKS cluster with a Managed NAT Gateway for egress traffic and how to disable OutboundNAT on Windows.
+This article shows you how to create an AKS cluster with a managed NAT gateway and a user-assigned NAT gateway for egress traffic and how to disable OutboundNAT on Windows.
## Before you begin
This article shows you how to create an AKS cluster with a Managed NAT Gateway f
* Make sure you're using Kubernetes version 1.20.x or above. * Managed NAT Gateway is incompatible with custom virtual networks.
-## Create an AKS cluster with a Managed NAT Gateway
+## Create an AKS cluster with a managed NAT gateway
-To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` when running `az aks create`. If you want the NAT gateway to be able to operate out of availability zones, specify the zones using `--zones`.
+To create an AKS cluster with a new managed NAT gateway, use `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` when running `az aks create`. If you want the NAT gateway to be able to operate out of availability zones, specify the zones using `--zones`.
The following example creates a *myResourceGroup* resource group, then creates a *natCluster* AKS cluster in *myResourceGroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 30 seconds.
az aks update \
--nat-gateway-managed-outbound-ip-count 5 ```
-## Create an AKS cluster with a user-assigned NAT Gateway
+## Create an AKS cluster with a user-assigned NAT gateway
-To create an AKS cluster with a user-assigned NAT Gateway, use `--outbound-type userAssignedNATGateway` when running `az aks create`. This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-kubenet] or [Azure CNI][byo-vnet-azure-cni]) and that the NAT Gateway is preconfigured on the subnet. The following commands create the required resources for this scenario. Make sure to run them all in the same session so that the values stored to variables are still available for the `az aks create` command.
+To create an AKS cluster with a user-assigned NAT gateway, use `--outbound-type userAssignedNATGateway` when running `az aks create`. This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-kubenet] or [Azure CNI][byo-vnet-azure-cni]) and that the NAT Gateway is preconfigured on the subnet. The following commands create the required resources for this scenario. Make sure to run them all in the same session so that the values stored to variables are still available for the `az aks create` command.
1. Create the resource group.
aks Trusted Access Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/trusted-access-feature.md
Title: Enable Azure resources to access Azure Kubernetes Service (AKS) clusters
description: Learn how to use the Trusted Access feature to enable Azure resources to access Azure Kubernetes Service (AKS) clusters. Previously updated : 03/03/2023 Last updated : 03/20/2023
Trusted Access enables you to give explicit consent to your system-assigned MSI
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Resource types that support [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md).
-* Pre-defined Roles with appropriate [AKS permissions](concepts-identity.md).
- * To learn about what Roles to use in various scenarios, see [AzureML access to AKS clusters with special configurations](https://github.com/Azure/AML-Kubernetes/blob/master/docs/azureml-aks-ta-support.md).
-* If you're using Azure CLI, the **aks-preview** extension version **0.5.74 or later** is required.
+* If you're using Azure CLI, the **aks-preview** extension version **0.5.74 or later** is required.
+* To learn about what Roles to use in various scenarios, see:
+ * [AzureML access to AKS clusters with special configurations](https://github.com/Azure/AML-Kubernetes/blob/master/docs/azureml-aks-ta-support.md).
+ * [AKS backup using Azure Backup][aks-azure-backup]
+ First, install the aks-preview extension by running the following command:
For more information on AKS, see:
[az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show [az-provider-register]: /cli/azure/provider#az-provider-register
+[aks-azure-backup]: ../backup/azure-kubernetes-service-backup-overview.md
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 03/17/2023 Last updated : 03/20/2023
To demonstrate the cost saving opportunity for this scenario, use the [pricing c
When you migrate this App Service Environment using the migration feature, your new App Service Environment v3 has 1 I2v2 instance, which means you have four cores and 16-GB RAM. If you don't change anything, your monthly payment is the following.
-[1(I2v2) = $563.56](https://azure.com/e/0a042f33d87548bfb966bdff74e35715)
+[1(I2v2) = $563.56](https://azure.com/e/17946ea2c4db483d882526ba515a6771)
Your monthly cost is reduced, but you don't need that much compute and capacity. You scale down your instance to I1v2 and your monthly cost is reduced even further.
-[1(I1v2) = $281.78](https://azure.com/e/c400e2c91ed44cadbf849923b902dded)
+[1(I1v2) = $281.78](https://azure.com/e/9d481c3af3cd407d975017c2b8158bbd)
### Break even point
To demonstrate this scenario, you have an App Service Environment v2 with a sing
If you migrate this environment to App Service Environment v3, your monthly cost is:
-[1(I1v2) = **$281.78**](https://azure.com/e/c2cfb6f810374f31b563e2f8a2c877e7)
+[1(I1v2) = **$281.78**](https://azure.com/e/9d481c3af3cd407d975017c2b8158bbd)
This change is a significant cost reduction, but you're over-provisioned since you have double the cores and RAM, which you may not need. This excess isn't an issue for this scenario since the new environment is cheaper. However, when you increase your I1 instances in a single App Service Environment, you see how migrating to App Service Environment v3 can increase your monthly cost.
For this scenario, your App Service Environment v2 has 14 I1 instances. Your mon
When you migrate this environment to App Service Environment v3, your monthly cost is:
-[14(I1v2) = **$3,944.92**](https://azure.com/e/a7b6240644824273bebd358c5919ae4f)
+[14(I1v2) = **$3,944.92**](https://azure.com/e/e0f1ebacf937479ba073a9c32cb2452f)
Your App Service Environment v3 is now more expensive than your App Service Environment v2. As you start adding more I1 instances, and therefore need more I1v2 instances when you migrate, the difference in price becomes more significant. If this scenario is a requirement for your environment, you may need to plan for an increase in your monthly cost. The following graph visually depicts the point where App Service Environment v3 becomes more expensive than App Service Environment v2 for this specific scenario.
For more scenarios on cost changes and savings opportunities with App Service En
- **What if my App Service Environment is zone pinned?** Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. To migrate to App Service Environment v3, see the [manual migration options](migration-alternatives.md). - **What properties of my App Service Environment will change?**
- You're on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address change. Note for ELB App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+ You're on App Service Environment v3, so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address change. Note that for ELB App Service Environment, there was previously a single IP for both inbound and outbound; in App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md).
- **What happens if migration fails or there is an unexpected issue during the migration?** If there's an unexpected issue, support teams are on hand. It's recommended to migrate dev environments before touching any production environments. - **What happens to my old App Service Environment?**
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 03/01/2023 Last updated : 03/20/2023
App Service Environment v3 is available in the following regions:
| Germany North | ✅ | | ✅ | | Germany West Central | ✅ | ✅ | ✅ | | Japan East | ✅ | ✅ | ✅ |
-| Japan West | | | ✅ |
+| Japan West | ✅ | | ✅ |
| Jio India West | | | ✅ | | Korea Central | ✅ | ✅ | ✅ | | Korea South | ✅ | | ✅ |
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
Title: 'App Service Environment version comparison' description: This article provides an overview of the App Service Environment versions and feature differences between them. Previously updated : 3/16/2023 Last updated : 3/20/2023
App Service Environment v3 runs on the latest [Virtual Machine Scale Sets](../..
|Front-end scaling management |[Manual](app-service-web-scale-a-web-app-in-an-app-service-environment.md) |[Manual](using-an-ase.md#front-end-scaling) |Managed by platform | |Scaling operations |Blocks other scaling operations |Blocks other scaling operations |Doesn't block other scale operations |
-### Pricing
-
-App Service Environment v3 is often cheaper than previous versions due to the removal of the stamp fee and larger instance sizes. For information and example scenarios on how migrating to App Service Environment v3 can affect your costs, see the [migration pricing samples](migrate.md#pricing) and [Estimate your cost savings by migrating to App Service Environment v3](https://azure.github.io/AppService/2023/03/02/App-service-environment-v3-pricing.html).
-
-|Feature |[App Service Environment v1](app-service-app-service-environment-intro.md) |[App Service Environment v2](intro.md) |[App Service Environment v3](overview.md) |
-|||||
-|Pricing |Pay for each vCPU |Stamp fee plus cost per Isolated instance, reservations are available for the stamp fee |No stamp fee and the Isolated v2 rate has 1-3 year reserved instance pricing. Azure Savings Plans for Compute are also available. |
- ### Certificates and domains |Feature |[App Service Environment v1](app-service-app-service-environment-intro.md) |[App Service Environment v2](intro.md) |[App Service Environment v3](overview.md) |
App Service Environment v3 is often cheaper than previous versions due to the re
||||| |Perform a backup and restore operation on a storage account behind a firewall |Yes |Yes |No |
+### Logging and monitoring
+
+|Feature |[App Service Environment v1](app-service-app-service-environment-intro.md) |[App Service Environment v2](intro.md) |[App Service Environment v3](overview.md) |
+|||||
+|Application logging to storage account over virtual network |Yes |Yes |No. The recommendation is to use [diagnostics logging](../overview-diagnostics.md) instead. If you need to use a firewall for the logging storage account, the storage account must be in a different region and use the outbound public addresses of the App Service Environment in the rules. For more information, see [networking considerations](../troubleshoot-diagnostic-logs.md#networking-considerations). |
+|Azure Policy integration|Yes |Yes |Yes |
+|Azure Advisor integration|Yes |Yes |Yes |
+
+### Pricing
+
+App Service Environment v3 is often cheaper than previous versions due to the removal of the stamp fee and larger instance sizes. For information and example scenarios on how migrating to App Service Environment v3 can affect your costs, see the [migration pricing samples](migrate.md#pricing) and [Estimate your cost savings by migrating to App Service Environment v3](https://azure.github.io/AppService/2023/03/02/App-service-environment-v3-pricing.html).
+
+|Feature |[App Service Environment v1](app-service-app-service-environment-intro.md) |[App Service Environment v2](intro.md) |[App Service Environment v3](overview.md) |
+|||||
+|Pricing |Pay for each vCPU |Stamp fee plus cost per Isolated instance, reservations are available for the stamp fee |No stamp fee and the Isolated v2 rate has 1-3 year reserved instance pricing. Azure Savings Plans for Compute are also available. |
+ ## Frequently asked questions - [What SKUs are available on App Service Environment v1, v2, and v3?](#what-skus-are-available-on-app-service-environment-v1-v2-and-v3)
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
description: Create a Python Django or Flask web app with a PostgreSQL database
ms.devlang: python Last updated 02/28/2023-+
+zone_pivot_groups: deploy-python-web-app-postgressql
# Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
In this tutorial, you'll deploy a data-driven Python web app (**[Django](https:/
* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python). * Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/) + ## Sample application Sample Python applications using the Flask and Django framework are provided to help you follow along with this tutorial. To deploy them without running them locally, skip this part.
When you're finished, you can delete all of the resources from your Azure subscr
:::column-end::: :::row-end:::
+## Troubleshooting
+
+Listed below are issues you may encounter while trying to work through this tutorial and steps to resolve them.
+
+#### I can't connect to the SSH session
+
+If you can't connect to the SSH session, then the app itself has failed to start. Check the [diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you see an error like `KeyError: 'AZURE_POSTGRESQL_CONNECTIONSTRING'`, it may mean that the environment variable is missing (you may have removed the app setting).
+
+#### I get an error when running database migrations
+
+If you encounter any errors related to connecting to the database, check if the app settings (`AZURE_POSTGRESQL_CONNECTIONSTRING`) have been changed. Without that connection string, the migrate command can't communicate with the database.
+++
+## Provision and deploy using the Azure Developer CLI
+
+Sample Python application templates using the Flask and Django framework are provided for this tutorial. The [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) greatly streamlines the process of provisioning application resources and deploying code on Azure. For a more step-by-step approach using the Azure portal and other tools, toggle to the **Azure portal** approach at the top of the page.
+
+The Azure Developer CLI (azd) provides end-to-end support for project initialization, provisioning, deploying, monitoring and scaffolding a CI/CD pipeline to run against real Azure resources. You can use `azd` to provision and deploy the resources for the sample application in an automated and streamlined way.
+
+Follow the steps below to set up the Azure Developer CLI and provision and deploy the sample application:
+
+1. Install the Azure Developer CLI. For a full list of supported installation options and tools, visit the [installation guide](/azure/developer/azure-developer-cli/install-azd).
+
+ ### [Windows](#tab/windows)
+
+ ```azdeveloper
+ powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' | Invoke-Expression"
+ ```
+
+ ### [Linux/MacOS](#tab/linuxmac)
+
+ ```azdeveloper
+ curl -fsSL https://aka.ms/install-azd.sh | bash
+ ```
+
+
+
+1. Run the `azd up` command to clone, provision and deploy the app resources. Provide the name of the template you wish to use for the `--template` parameter. The `azd up` command will also prompt you to log in to Azure and provide a name and location for the app.
+
+ ### [Flask](#tab/flask)
+
+ ```bash
+ azd up --template msdocs-flask-postgresql-sample-app
+ ```
+
+ ### [Django](#tab/django)
+
+ ```bash
+ azd up --template msdocs-django-postgresql-sample-app
+ ```
+
+1. When the `azd up` command finishes running, the URL for your deployed web app is printed in the console. Click, or copy and paste, the web app URL into your browser to explore the running app and verify that it's working correctly. All of the Azure resources and application code were set up for you by the `azd up` command.
+
+ The name of the resource group that was created is also displayed in the console output. Locate the resource group in the Azure portal to see all of the provisioned resources.
+
+ :::image type="content" border="False" source="./media/tutorial-python-postgresql-app/azd-resources-small.png" lightbox="./media/tutorial-python-postgresql-app/azd-resources.png" alt-text="A screenshot showing the resources deployed by the Azure Developer CLI.":::
+
+The Azure Developer CLI also enables you to configure your application to use a CI/CD pipeline for deployments, set up monitoring functionality, and even remove the provisioned resources if you want to tear everything down. For more information about these additional workflows, visit the project [README](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md).
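As a sketch of those follow-on workflows (these are azd commands; the exact prompts and output depend on your project):

```bash
azd pipeline config   # scaffold a CI/CD pipeline for the project
azd monitor           # open monitoring dashboards for the provisioned resources
azd down              # remove all of the provisioned Azure resources
```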
+
+## Explore the completed azd project template workflow
+
+The sections ahead review the steps that `azd` handled for you in more depth. You can explore this workflow to better understand the requirements for deploying your own apps to Azure. When you ran `azd up`, the Azure Developer CLI completed the following steps:
+
+> [!NOTE]
+> You can also use the steps outlined in the **Azure portal** version of this flow to gain additional insights into the tasks that `azd` completed for you.
+
+### 1. Cloned and initialized the project
+
+The `azd up` command cloned the sample app project template to your machine. The project template includes the following components:
+
+* **Source code**: The code and assets for a Flask or Django web app that can be used for local development or deployed to Azure.
+* **Bicep files**: Infrastructure as code (IaC) files that are used by `azd` to create the necessary resources in Azure.
+* **Configuration files**: Essential configuration files such as `azure.yaml` that are used by `azd` to provision, deploy and wire resources together to produce a fully-fledged application.
+
+### 2. Provisioned the Azure resources
+
+The `azd up` command created all of the resources for the sample application in Azure using the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project template. [Bicep](/azure/azure-resource-manager/bicep/overview?tabs=bicep) is a declarative language used to manage Infrastructure as Code in Azure. Some of the key resources and configurations created by the template include:
+
+* **Resource group**: A resource group was created to hold all of the other provisioned Azure resources. The resource group keeps your resources well organized and easier to manage. The name of the resource group is based on the environment name you specified during the `azd up` initialization process.
+* **Azure Virtual Network**: A virtual network was created to enable the provisioned resources to securely connect and communicate with one another. Related configurations such as setting up a private DNS zone link were also applied.
+* **Azure App Service plan**: An App Service plan was created to host App Service instances. App Service plans define what compute resources are available for one or more web apps.
+* **Azure App Service**: An App Service instance was created in the new App Service plan to host and run the deployed application. In this case a Linux instance was created and configured to run Python apps. Additional configurations were also applied to the app service, such as setting the Postgres connection string and secret keys.
+* **Azure Database for PostgreSQL**: A PostgreSQL database and server were created for the app hosted on App Service to connect to. The required admin user, network and connection settings were also configured.
+* **Azure Application Insights**: Application Insights was set up and configured for the app hosted on the App Service. This service enables detailed telemetry and monitoring for your application.
+
+You can inspect the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project to understand in more detail how each of these resources was provisioned. The `resources.bicep` file defines most of the different services created in Azure. For example, the App Service plan and App Service web app instance were created and connected using the following Bicep code:
+
+### [Flask](#tab/flask)
+
+```bicep
+resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' = {
+ name: '${prefix}-service-plan'
+ location: location
+ tags: tags
+ sku: {
+ name: 'B1'
+ }
+ properties: {
+ reserved: true
+ }
+}
+
+resource web 'Microsoft.Web/sites@2022-03-01' = {
+ name: '${prefix}-app-service'
+ location: location
+ tags: union(tags, { 'azd-service-name': 'web' })
+ kind: 'app,linux'
+ properties: {
+ serverFarmId: appServicePlan.id
+ siteConfig: {
+ alwaysOn: true
+ linuxFxVersion: 'PYTHON|3.10'
+ ftpsState: 'Disabled'
+ appCommandLine: 'startup.sh'
+ }
+ httpsOnly: true
+ }
+ identity: {
+ type: 'SystemAssigned'
+ }
+}
+```
+
+### [Django](#tab/django)
+
+```bicep
+resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' = {
+ name: '${prefix}-service-plan'
+ location: location
+ tags: tags
+ sku: {
+ name: 'B1'
+ }
+ properties: {
+ reserved: true
+ }
+}
+
+resource web 'Microsoft.Web/sites@2022-03-01' = {
+ name: '${prefix}-app-service'
+ location: location
+ tags: union(tags, { 'azd-service-name': 'web' })
+ kind: 'app,linux'
+ properties: {
+ serverFarmId: appServicePlan.id
+ siteConfig: {
+ alwaysOn: true
+ linuxFxVersion: 'PYTHON|3.10'
+ ftpsState: 'Disabled'
+ appCommandLine: 'startup.sh'
+ }
+ httpsOnly: true
+ }
+ identity: {
+ type: 'SystemAssigned'
+ }
+}
+```
+++
+The Azure Database for PostgreSQL was also created using the following Bicep:
+
+```bicep
+resource postgresServer 'Microsoft.DBforPostgreSQL/flexibleServers@2022-01-20-preview' = {
+ location: location
+ tags: tags
+ name: pgServerName
+ sku: {
+ name: 'Standard_B1ms'
+ tier: 'Burstable'
+ }
+ properties: {
+ version: '12'
+ administratorLogin: 'postgresadmin'
+ administratorLoginPassword: databasePassword
+ storage: {
+ storageSizeGB: 128
+ }
+ backup: {
+ backupRetentionDays: 7
+ geoRedundantBackup: 'Disabled'
+ }
+ network: {
+ delegatedSubnetResourceId: virtualNetwork::databaseSubnet.id
+ privateDnsZoneArmResourceId: privateDnsZone.id
+ }
+ highAvailability: {
+ mode: 'Disabled'
+ }
+ maintenanceWindow: {
+ customWindow: 'Disabled'
+ dayOfWeek: 0
+ startHour: 0
+ startMinute: 0
+ }
+ }
+
+ dependsOn: [
+ privateDnsZoneLink
+ ]
+}
+```
+
+### 3. Deployed the application
+
+The `azd up` command also deployed the sample application code to the provisioned Azure resources. The Developer CLI understands how to deploy different parts of your application code to different services in Azure using the `azure.yaml` file at the root of the project. The `azure.yaml` file specifies the app source code location, the type of app, and the Azure Service that should host that app.
+
+Consider the following `azure.yaml` file. These configurations tell the Azure Developer CLI that the Python code that lives at the root of the project should be deployed to the created App Service.
+
+### [Flask](#tab/flask)
+
+```yml
+name: flask-postgresql-sample-app
+metadata:
+ template: flask-postgresql-sample-app@0.0.1-beta
+
+services:
+  web:
+ project: .
+ language: py
+ host: appservice
+```
+
+### [Django](#tab/django)
+
+```yml
+name: django-postgresql-sample-app
+metadata:
+ template: django-postgresql-sample-app@0.0.1-beta
+
+services:
+  web:
+ project: .
+ language: py
+ host: appservice
+```
+++
+## Remove the resources
+
+Once you are finished experimenting with your sample application, you can run the `azd down` command to remove the app from Azure. Removing resources helps to avoid unintended costs or unused services in your Azure subscription.
+
+```bash
+azd down
+```
++ ## Frequently asked questions - [How much does this setup cost?](#how-much-does-this-setup-cost) - [How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-postgresql-server-thats-secured-behind-the-virtual-network-with-other-tools) - [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions) - [How is the Django sample configured to run on Azure App Service?](#how-is-the-django-sample-configured-to-run-on-azure-app-service)-- [I can't connect to the SSH session](#i-cant-connect-to-the-ssh-session)-- [I get an error when running database migrations](#i-get-an-error-when-running-database-migrations) #### How much does this setup cost?
The [Django sample application](https://github.com/Azure-Samples/msdocs-django-p
For more information, see [Production settings for Django apps](configure-language-python.md#production-settings-for-django-apps).
-#### I can't connect to the SSH session
-
-If you can't connect to the SSH session, then the app itself has failed to start. Check the [diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you see an error like `KeyError: 'AZURE_POSTGRESQL_CONNECTIONSTRING'`, it may mean that the environment variable is missing (you may have removed the app setting).
-
-#### I get an error when running database migrations
-
-If you encounter any errors related to connecting to the database, check if the app settings (`AZURE_POSTGRESQL_CONNECTIONSTRING`) have been changed. Without that connection string, the migrate command can't communicate with the database.
- ## Next steps Advance to the next tutorial to learn how to secure your app with a custom domain and certificate.
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Previously updated : 02/08/2023 Last updated : 03/20/2023 recommendations: false
http {
2. The following code sample is a self-contained `docker compose` example to run Form Recognizer Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration.
- ```yml
- version: '3.3'
-
- nginx:
- image: nginx:alpine
- container_name: reverseproxy
- volumes:
- - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
- ports:
- - "5000"
- rabbitmq:
- container_name: ${RABBITMQ_HOSTNAME}
- image: rabbitmq:3
- expose:
- - "5672"
- layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
- depends_on:
- - rabbitmq
- environment:
- eula: accept
- key: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
- Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
- Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- expose:
- - "5000"
-
- custom-api:
- container_name: azure-cognitive-service-custom-api
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api
- restart: always
- depends_on:
- - rabbitmq
- environment:
- eula: accept
- key: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- Logging:Console:LogLevel:Default: Information
- Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
- Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- expose:
- - "5000"
-
- custom-supervised:
- container_name: azure-cognitive-service-custom-supervised
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised
- restart: always
- depends_on:
- - rabbitmq
- environment:
- eula: accept
- key: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- CustomFormRecognizer:ContainerPhase: All
- CustomFormRecognizer:LayoutAnalyzeUri: http://azure-cognitive-service-layout:5000/formrecognizer/v2.1/layout/analyze
- Logging:Console:LogLevel:Default: Information
- Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
- Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- ```
+ ```yml
+ version: '3.3'
+
+ nginx:
+ image: nginx:alpine
+ container_name: reverseproxy
+ volumes:
+ - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
+ ports:
+ - "5000:5000"
+ rabbitmq:
+ container_name: ${RABBITMQ_HOSTNAME}
+ image: rabbitmq:3
+ expose:
+ - "5672"
+ layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ custom-api:
+ container_name: azure-cognitive-service-custom-api
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api
+ restart: always
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
+
+ custom-supervised:
+ container_name: azure-cognitive-service-custom-supervised
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised
+ restart: always
+ depends_on:
+ - rabbitmq
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ CustomFormRecognizer:ContainerPhase: All
+ CustomFormRecognizer:LayoutAnalyzeUri: http://azure-cognitive-service-layout:5000/formrecognizer/v2.1/layout/analyze
+ Logging:Console:LogLevel:Default: Information
+ Queue:RabbitMQ:HostName: ${RABBITMQ_HOSTNAME}
+ Queue:RabbitMQ:Port: ${RABBITMQ_PORT}
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ ```
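For example, assuming the YAML above is saved as `docker-compose.yml` and the variables it references are exported in the shell (the values below are placeholders), you can start all of the services with a single command:

```bash
# Export the variables referenced by the compose file (placeholder values).
export NGINX_CONF_FILE="./nginx.conf" RABBITMQ_HOSTNAME="rabbitmq" RABBITMQ_PORT="5672"
export SHARED_MOUNT_PATH="./shared" OUTPUT_MOUNT_PATH="./logs"
export FORM_RECOGNIZER_KEY="<your-key>" FORM_RECOGNIZER_ENDPOINT_URI="<your-endpoint>"

# Create and start all services defined in docker-compose.yml.
docker-compose up --detach
```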
### Ensure the service is running
automanage Reference Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/reference-sdk.md
Azure Automanage currently supports the following SDKs:
- [Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/automanage/azure-resourcemanager-automanage) - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/automanage/arm-automanage) - [CSharp](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/automanage/Azure.ResourceManager.Automanage)-- PowerShell (pending)-- Azure CLI (pending)-- Terraform (pending)
+- [PowerShell](https://github.com/Azure/azure-powershell/blob/main/src/Automanage/help/Az.Automanage.md)
+- [Azure CLI](https://github.com/Azure/azure-cli-extensions/tree/main/src/automanage)
Here's a list of a few of the primary operations the SDKs provide:
Here's a list of a few of the primary operations the SDKs provide:
- Create Best Practices profile assignments - Create custom profile assignments - Remove assignments
+- Delete profiles
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md
This article shows you how to add a user-assigned managed identity for an Azure Automation account and how to use it to access other resources. For more information on how managed identities work with Azure Automation, see [Managed identities](automation-security-overview.md#managed-identities). > [!NOTE]
-> User-assigned managed identities are supported for Azure jobs only.
+> **User-assigned managed identities (UAMIs) are generally supported for Azure jobs only.** The one scenario in which a UAMI runs successfully on a Hybrid Worker is when only the Hybrid Worker VM has a UAMI assigned. The Automation account itself can't have any UAMI assigned; otherwise, authentication with the VM's UAMI fails.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
description: This article provides an overview of Azure availability zones and r
keywords: automation availability zones. Previously updated : 06/29/2022 Last updated : 03/16/2023
In the event when a zone is down, there's no action required by you to recover f
See [Regions and Availability Zones in Azure](../reliability/availability-zones-service-support.md) for the Azure regions that have availability zones. Automation accounts currently support the following regions:
+- Australia East
+- Brazil South
+- Canada Central
+- Central US
- China North 3-- Qatar Central-- West US 2-- East US 2 - East US-- North Europe-- West Europe
+- East US 2
- France Central
+- Germany West Central
- Japan East
+- North Europe
+- Qatar Central
+- South Africa North
+- South Central US
+- Southeast Asia
- UK South-- Southeast Asia-- Australia East-- Central US-- Brazil South-- Germany West Central
+- West Europe
+- West US 2
- West US 3 + ## Create a zone redundant Automation account You can create a zone redundant Automation account using: - [Azure portal](./automation-create-standalone-account.md?tabs=azureportal)
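In addition to the portal option above, as a rough sketch, an Automation account created with the Azure CLI `automation` extension is zone redundant when it lands in one of the supported regions listed earlier (the command and parameter names here are assumptions based on that extension; the account name, resource group, and region are placeholders):

```azurecli
# Assumed commands from the Azure CLI automation extension (placeholders throughout).
az extension add --name automation
az automation account create \
    --automation-account-name myAutomationAccount \
    --resource-group myResourceGroup \
    --location eastus2
```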
azure-app-configuration Enable Dynamic Configuration Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md
A *sentinel key* is a key that you update after you complete the change of all o
```
-You've set up your app to use the [options pattern in ASP.NET Core](/aspnet/core/fundamentals/configuration/options) during the quickstart. When the underlying configuration of your app is updated from App Configuration, your strongly typed `Settings` object obtained via `IOptionsSnapshot<T>` is updated automatically.
+You've set up your app to use the [options pattern in ASP.NET Core](/aspnet/core/fundamentals/configuration/options) during the quickstart. When the underlying configuration of your app is updated from App Configuration, your strongly typed `Settings` object obtained via `IOptionsSnapshot<T>` is updated automatically. Note that you shouldn't use `IOptions<T>` if you want dynamic configuration updates, because it doesn't read configuration data after the app has started.
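For example, a sentinel key update that triggers the refresh might look like the following sketch with the Azure CLI (the store name and key are placeholders; the key must match the sentinel key registered for refresh):

```azurecli
# After updating the other keys, bump the sentinel key so clients reload.
az appconfig kv set --name <app-config-store> --key TestApp:Settings:Sentinel --value 2 --yes
```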
## Request-driven configuration refresh
azure-arc Limitations Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md
This article describes limitations of Azure Arc-enabled SQL Managed Instance.
-## Backup and restore
+## Back up and restore
### Automated backups
This article describes limitations of Azure Arc-enabled SQL Managed Instance.
### Point-in-time restore (PITR) - Doesn't support restore from one Azure Arc-enabled SQL Managed Instance to another Azure Arc-enabled SQL Managed Instance. The database can only be restored to the same Arc-enabled SQL Managed Instance where the backups were created.-- Renaming of a databases is currently not supported, for point in time restore purposes.
+- Renaming databases during point-in-time restore is currently not supported.
- No support for restoring a TDE enabled database currently. - A deleted database cannot be restored currently. ## Other limitations -- Transactional replication is currently not supported.-- Log shipping is currently blocked.
+- Transactional replication is currently not supported.
+- Log shipping is currently blocked.
+- Creating a database using SQL Server Management Studio does not work currently. Use the T-SQL command `CREATE DATABASE` to create databases.
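For example, a database can instead be created over a regular connection with `sqlcmd` (a sketch; the endpoint and credentials are placeholders):

```bash
# Create a database with T-SQL, since creating one from SSMS is blocked.
sqlcmd -S <sqlmi-endpoint>,1433 -U <admin-user> -P '<password>' -Q "CREATE DATABASE mydb;"
```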
## Roles and responsibilities
The roles and responsibilities between Microsoft and its customers differ betwee
### Frequently asked questions
-The table below summarizes answers to frequently asked questions regarding support roles and responsibilities.
+This table summarizes answers to frequently asked questions regarding support roles and responsibilities.
| Question | Azure Platform As A Service (PaaS) | Azure Arc hybrid services | |:-|::|::|
The table below summarizes answers to frequently asked questions regarding suppo
\* Azure services
-__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Because Microsoft does not own the infrastructure and does not operate it. Customers do.
+__Why doesn't Microsoft provide SLAs on Azure Arc hybrid services?__ Customers and their partners own and operate the infrastructure that Azure Arc hybrid services run on, so Microsoft can't provide an SLA.
## Next steps
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
Title: "GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes" description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 02/07/2023 Last updated : 03/20/2023 -+ # GitOps Flux v2 configurations with AKS and Azure Arc-enabled Kubernetes
-Azure provides configuration management capability using GitOps in Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. You can easily enable and use GitOps in these clusters.
+Azure provides configuration management capability using GitOps in Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters.
With GitOps, you declare the desired state of your Kubernetes clusters in files in Git repositories. The Git repositories may contain the following files:
GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](h
:::image type="content" source="media/gitops/flux2-extension-install-aks.png" alt-text="Diagram showing the installation of the Flux extension for Azure Kubernetes Service cluster." lightbox="media/gitops/flux2-extension-install-aks.png":::
-GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` [cluster extension](./conceptual-extensions.md) resource. The `microsoft.flux` extension must be installed in the cluster before one or more `fluxConfigurations` can be created. The extension will be installed automatically when you create the first `Microsoft.KubernetesConfiguration/fluxConfigurations` in a cluster, or you can install it manually using the portal, the Azure CLI (*az k8s-extension create --extensionType=microsoft.flux*), ARM template, or REST API.
+GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` [cluster extension](./conceptual-extensions.md) resource. The `microsoft.flux` extension must be installed in the cluster before one or more `fluxConfigurations` can be created. The extension is installed automatically when you create the first `Microsoft.KubernetesConfiguration/fluxConfigurations` in a cluster, or you can install it manually using the portal, the Azure CLI (`az k8s-extension create --extensionType=microsoft.flux`), ARM template, or REST API.
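For example, a manual installation with the Azure CLI might look like the following sketch (the resource names are placeholders; use `--cluster-type managedClusters` for AKS clusters):

```azurecli
az k8s-extension create \
    --resource-group <resource-group> \
    --cluster-name <cluster-name> \
    --cluster-type connectedClusters \
    --name flux \
    --extension-type microsoft.flux
```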
### Version support
The most recent version of the Flux v2 extension and the two previous versions (
### Controllers
-The `microsoft.flux` extension installs by default the [Flux controllers](https://fluxcd.io/docs/components/) (Source, Kustomize, Helm, Notification) and the FluxConfig CRD, fluxconfig-agent, and fluxconfig-controller. You can control which of these controllers is installed and can optionally install the Flux image-automation and image-reflector controllers, which provide functionality around updating and retrieving Docker images.
+By default, the `microsoft.flux` extension installs the [Flux controllers](https://fluxcd.io/docs/components/) (Source, Kustomize, Helm, Notification) and the FluxConfig CRD, fluxconfig-agent, and fluxconfig-controller. You can control which of these controllers is installed. Optionally, you can also install the Flux image-automation and image-reflector controllers, which provide functionality for updating and retrieving Docker images.
-* [Flux Source controller](https://toolkit.fluxcd.io/components/source/controller/): Watches the source.toolkit.fluxcd.io custom resources. Handles the synchronization between the Git repositories, Helm repositories, Buckets and Azure Blob storage. Handles authorization with the source for private Git, Helm repos and Azure blob storage accounts. Surfaces the latest changes to the source through a tar archive file.
+* [Flux Source controller](https://toolkit.fluxcd.io/components/source/controller/): Watches the `source.toolkit.fluxcd.io` custom resources. Handles synchronization between the Git repositories, Helm repositories, Buckets and Azure Blob storage. Handles authorization with the source for private Git, Helm repos and Azure blob storage accounts. Surfaces the latest changes to the source through a tar archive file.
* [Flux Kustomize controller](https://toolkit.fluxcd.io/components/kustomize/controller/): Watches the `kustomization.toolkit.fluxcd.io` custom resources. Applies Kustomize or raw YAML files from the source onto the cluster. * [Flux Helm controller](https://toolkit.fluxcd.io/components/helm/controller/): Watches the `helm.toolkit.fluxcd.io` custom resources. Retrieves the associated chart from the Helm Repository source surfaced by the Source controller. Creates the `HelmChart` custom resource and applies the `HelmRelease` with given version, name, and customer-defined values to the cluster. * [Flux Notification controller](https://toolkit.fluxcd.io/components/notification/controller/): Watches the `notification.toolkit.fluxcd.io` custom resources. Receives notifications from all Flux controllers. Pushes notifications to user-defined webhook endpoints.
The `microsoft.flux` extension installs by default the [Flux controllers](https:
* `fluxconfigs.clusterconfig.azure.com` * FluxConfig CRD: Custom Resource Definition for `fluxconfigs.clusterconfig.azure.com` custom resources that define `FluxConfig` Kubernetes objects.
-* fluxconfig-agent: Responsible for watching Azure for new or updated `fluxConfigurations` resources, and for starting the associated Flux configuration in the cluster. Also, is responsible for pushing Flux status changes in the cluster back to Azure for each `fluxConfigurations` resource.
+* fluxconfig-agent: Responsible for watching Azure for new or updated `fluxConfigurations` resources, and for starting the associated Flux configuration in the cluster. Also responsible for pushing Flux status changes in the cluster back to Azure for each `fluxConfigurations` resource.
* fluxconfig-controller: Watches the `fluxconfigs.clusterconfig.azure.com` custom resources and responds to changes with new or updated configuration of GitOps machinery in the cluster. > [!NOTE]
-> The `microsoft.flux` extension is installed in the `flux-system` namespace and has cluster-wide scope. The option to install this extension at the namespace scope is not available, and attempt to install at namespace scope will fail with 400 error.
+> The `microsoft.flux` extension is installed in the `flux-system` namespace and has [cluster-wide scope](conceptual-extensions.md#extension-scope). The option to install this extension at the namespace scope is not available, and attempts to install at namespace scope will fail with a 400 error.
## Flux configurations You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos, Bucket sources or Azure Blob Storage. When you create a `fluxConfigurations` resource, the values you supply for the [parameters](#parameters), such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service. The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `microsoft.flux` extension, manage the GitOps configuration process.
-`fluxconfig-agent` is responsible for:
+`fluxconfig-agent` is responsible for the following tasks:
* Polls the Kubernetes Configuration data plane service for new or updated `fluxConfigurations` resources. * Creates or updates `FluxConfig` custom resources in the cluster with the configuration information. * Watches `FluxConfig` custom resources and pushes status changes back to the associated Azure fluxConfiguration resources.
-`fluxconfig-controller` is responsible for:
+`fluxconfig-controller` is responsible for the following tasks:
* Watches status updates to the Flux custom resources created by the managed `fluxConfigurations`. * Creates private/public key pair that exists for the lifetime of the `fluxConfigurations`. This key is used for authentication if the URL is SSH based and if the user doesn't provide their own private key during creation of the configuration.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `m
* Sets up RBAC (service account provisioned, role binding created/assigned, role created/assigned). * Creates `GitRepository` or `Bucket` custom resource and `Kustomization` custom resources from the information in the `FluxConfig` custom resource.
-Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes cluster with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources. When you create a `fluxConfigurations` resource, you'll specify, among other information, the URL to the source (Git repository, Bucket or Azure Blob storage) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. Also, you can create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams.
+Each `fluxConfigurations` resource in Azure is associated with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources in a Kubernetes cluster. When you create a `fluxConfigurations` resource, you specify the URL to the source (Git repository, Bucket or Azure Blob storage) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. You can also create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams.
> [!NOTE]
-> The `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent is unable to connect to Azure, there will be a delay in making the changes in the cluster until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time-out, and the changes will need to be re-applied in Azure.
+> The `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent is unable to connect to Azure, there will be a delay in making changes in the cluster until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time-out, and the changes will need to be reapplied in Azure.
> > Sensitive customer inputs like private key and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, make sure that your clusters connect with Azure within 48 hours. ## GitOps with Private Link
-If you've added support for [private link to an Azure Arc-enabled Kubernetes cluster](private-link.md), then the `microsoft.flux` extension works out-of-the-box with communication back to Azure. For connections to your Git repository, Helm repository, or any other endpoints that are needed to deploy your Kubernetes manifests, you will need to provision these endpoints behind your firewall or list them on your firewall so that the Flux Source controller can successfully reach them.
+If you've added support for [private link to an Azure Arc-enabled Kubernetes cluster](private-link.md), then the `microsoft.flux` extension works out-of-the-box with communication back to Azure. For connections to your Git repository, Helm repository, or any other endpoints that are needed to deploy your Kubernetes manifests, you must provision these endpoints behind your firewall, or list them on your firewall, so that the Flux Source controller can successfully reach them.
## Data residency
The Azure GitOps service (Azure Kubernetes Configuration Management) stores/proc
## Apply Flux configurations at scale
-Because Azure Resource Manager manages your configurations, you can automate creating the same configuration across all Azure Kubernetes Service and Azure Arc-enabled Kubernetes resources using Azure Policy, within the scope of a subscription or a resource group. This at-scale enforcement ensures that specific configurations will be applied consistently across entire groups of clusters.
+Because Azure Resource Manager manages your configurations, you can automate creating the same configuration across all Azure Kubernetes Service and Azure Arc-enabled Kubernetes resources using Azure Policy, within the scope of a subscription or a resource group. This at-scale enforcement ensures that specific configurations are applied consistently across entire groups of clusters.
[Learn how to use the built-in policies for Flux v2](./use-azure-policy-flux-2.md).
Because Azure Resource Manager manages your configurations, you can automate cre
For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation. You can see the full list of parameters that the `k8s-configuration flux` Azure CLI command supports by using the `-h` parameter:-
```azurecli az k8s-configuration flux -h
Just like private keys, you can provide your `known_hosts` content directly or i
### Bucket source arguments
-If you use a `bucket` source instead of a `git` source, here are the bucket-specific command arguments.
+If you use a `bucket` source, here are the bucket-specific command arguments.
| Parameter | Format | Notes | | - | - | - |
-| `--url` `-u` | URL String | The URL for the `bucket`. Formats supported: http://, https://. |
+| `--url` `-u` | URL String | The URL for the `bucket`. Formats supported: `http://`, `https://`. |
| `--bucket-name` | String | Name of the `bucket` to sync. | | `--bucket-access-key` | String | Access Key ID used to authenticate with the `bucket`. | | `--bucket-secret-key` | String | Secret Key used to authenticate with the `bucket`. |
If you use a `bucket` source instead of a `git` source, here are the bucket-spec
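For example, a configuration with a `bucket` source might be created like the following sketch, using the arguments above (all names, the URL, and the keys are placeholders):

```azurecli
az k8s-configuration flux create \
    --resource-group <resource-group> \
    --cluster-name <cluster-name> \
    --cluster-type connectedClusters \
    --name bucket-config \
    --kind bucket \
    --url https://bucket-provider.example.com \
    --bucket-name flux-manifests \
    --bucket-access-key <access-key-id> \
    --bucket-secret-key <secret-key> \
    --kustomization name=apps path=./apps prune=true
```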
### Azure Blob Storage Account source arguments
-If you use a `azblob` source, here are the blob-specific command arguments.
+If you use an `azblob` source, here are the blob-specific command arguments.
| Parameter | Format | Notes | | - | - | - |
If you use a `azblob` source, here are the blob-specific command arguments.
| `--sas_token` | String | The Azure Blob SAS Token for authentication | | `--mi_client_id` | String | The client ID of the managed identity for authentication with Azure Blob |
+> [!IMPORTANT]
+> When using managed identity authentication for AKS clusters and `azblob` source, the managed identity must be assigned at minimum the [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader) role. Authentication using a managed identity is not yet available for Azure Arc-enabled Kubernetes clusters.
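As a sketch, the minimum role assignment might look like this (all IDs are placeholders):

```azurecli
az role assignment create \
    --assignee <managed-identity-client-id> \
    --role "Storage Blob Data Reader" \
    --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
```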
+ ### Local secret for authentication with source You can use a local Kubernetes secret for authentication with a `git`, `bucket` or `azBlob` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration.
Learn more about using a local Kubernetes secret with these authentication metho
* [Bucket static authentication](https://fluxcd.io/docs/components/source/buckets/#static-authentication) > [!NOTE]
-> If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli-connect-using-an-outbound-proxy-server).
+> If you need Flux to access the source through your proxy, you must update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli-connect-using-an-outbound-proxy-server).
### Git implementation
-To support various repository providers that implement Git, Flux can be configured to use one of two Git libraries: `go-git` or `libgit2`. See the [Flux documentation](https://fluxcd.io/docs/components/source/gitrepositories/#git-implementation) for details.
+To support various repository providers that implement Git, Flux can be configured to use one of two Git libraries: `go-git` or `libgit2`. For details, see the [Flux documentation](https://fluxcd.io/docs/components/source/gitrepositories/#git-implementation).
The GitOps implementation of Flux v2 automatically determines which library to use for public cloud repositories:
For on-premises repositories, Flux uses `libgit2`.
### Kustomization
-By using `az k8s-configuration flux create`, you can create one or more kustomizations during the configuration.
+By using `az k8s-configuration flux kustomization create`, you can create one or more kustomizations for a configuration.
| Parameter | Format | Notes | | - | - | - |
By using `az k8s-configuration flux create`, you can create one or more kustomiz
| `name` | String | Unique name for this kustomization. | | `path` | String | Path within the Git repository to reconcile with the cluster. Default is the top level of the branch. | | `prune` | Boolean | Default is `false`. Set `prune=true` to ensure that the objects that Flux deployed to the cluster will be cleaned up if they're removed from the repository or if the Flux configuration or kustomizations are deleted. Using `prune=true` is important for environments where users don't have access to the clusters and can make changes only through the Git repository. |
-| `depends_on` | String | Name of one or more kustomizations (within this configuration) that must reconcile before this kustomization can reconcile. For example: `depends_on=["kustomization1","kustomization2"]`. Note that if you remove a kustomization that has dependent kustomizations, the dependent kustomizations will get a `DependencyNotReady` state and reconciliation will halt.|
+| `depends_on` | String | Name of one or more kustomizations (within this configuration) that must reconcile before this kustomization can reconcile. For example: `depends_on=["kustomization1","kustomization2"]`. If you remove a kustomization that has dependent kustomizations, the state of dependent kustomizations becomes `DependencyNotReady`, and reconciliation will halt.|
| `timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. | | `sync_interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. | | `retry_interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Default: `10m`. |
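For example, two kustomizations with a dependency between them might be declared when creating a configuration, as in the following sketch (names, URL, and paths are placeholders):

```azurecli
az k8s-configuration flux create \
    --resource-group <resource-group> \
    --cluster-name <cluster-name> \
    --cluster-type connectedClusters \
    --name my-config \
    --url https://github.com/<org>/<repo> \
    --branch main \
    --kustomization name=infra path=./infrastructure prune=true \
    --kustomization name=apps path=./apps prune=true depends_on='["infra"]'
```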
Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy)
### Update manifests for multi-tenancy
-Let's say you deploy a `fluxConfiguration` to one of our Kubernetes clusters in the **cluster-config** namespace with cluster scope. You configure the source to sync the `https://github.com/fluxcd/flux2-kustomize-helm-example` repo. This is the same sample Git repo used in the [Deploy applications using GitOps with Flux v2 tutorial](tutorial-use-gitops-flux2.md). After Flux syncs the repo, it will deploy the resources described in the manifests (YAML files). Two of the manifests describe HelmRelease and HelmRepository objects.
+Let's say you deploy a `fluxConfiguration` to one of your Kubernetes clusters in the **cluster-config** namespace with cluster scope. You configure the source to sync the `https://github.com/fluxcd/flux2-kustomize-helm-example` repo. This is the same sample Git repo used in the [Deploy applications using GitOps with Flux v2 tutorial](tutorial-use-gitops-flux2.md). After Flux syncs the repo, it deploys the resources described in the manifests (YAML files). Two of the manifests describe HelmRelease and HelmRepository objects.
```yaml apiVersion: helm.toolkit.fluxcd.io/v2beta1
spec:
By default, the Flux extension will deploy the `fluxConfigurations` by impersonating the **flux-applier** service account that is deployed only in the **cluster-config** namespace. Using the above manifests, when multi-tenancy is enabled the HelmRelease would be blocked. This is because the HelmRelease is in the **nginx** namespace and is referencing a HelmRepository in the **flux-system** namespace. Also, the Flux helm-controller cannot apply the HelmRelease, because there is no **flux-applier** service account in the **nginx** namespace.
-To work with multi-tenancy, the correct approach is to deploy all Flux objects into the same namespace as the `fluxConfigurations`. This avoids the cross-namespace reference issue, and allows the Flux controllers to get the permissions to apply the objects. Thus, for a GitOps configuration created in the **cluster-config** namespace, the above manifests would change to these:
+To work with multi-tenancy, the correct approach is to deploy all Flux objects into the same namespace as the `fluxConfigurations`. This approach avoids the cross-namespace reference issue, and allows the Flux controllers to get the permissions to apply the objects. Thus, for a GitOps configuration created in the **cluster-config** namespace, these example manifests would change as follows:
```yaml apiVersion: helm.toolkit.fluxcd.io/v2beta1
az k8s-extension update --configuration-settings multiTenancy.enforce=false -c C
If you are still using Flux v1, we recommend migrating to Flux v2 as soon as possible.
-To migrate to using Flux v2 in the same clusters where you've been using Flux v1, you first need to delete all Flux v1 `sourceControlConfigurations` from the clusters. Because Flux v2 has a fundamentally different architecture, the `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources in a cluster.
+To migrate to using Flux v2 in the same clusters where you've been using Flux v1, you must first delete all Flux v1 `sourceControlConfigurations` from the clusters. Because Flux v2 has a fundamentally different architecture, the `microsoft.flux` cluster extension won't install if there are Flux v1 `sourceControlConfigurations` resources in a cluster. The process of removing Flux v1 configurations and deploying Flux v2 configurations should not take more than 30 minutes.
-Removing Flux v1 `sourceControlConfigurations` will not stop any applications that are running on the clusters. However, during the period when Flux v1 configuration is removed and Flux v2 extension is not yet fully deployed, expect that:
+Removing Flux v1 `sourceControlConfigurations` doesn't stop any applications that are running on the clusters. However, during the period when Flux v1 configuration is removed and Flux v2 extension is not yet fully deployed:
-* If there are new changes in the application manifests stored in a Git repository, these will not be pulled during the migration, and the application version deployed on the cluster will be stale.
-* If there are unintended changes in the cluster state and it deviates from the desired state specified in source Git repository, the cluster will not be able to self-heal.
+* If there are new changes in the application manifests stored in a Git repository, they aren't pulled during the migration, and the application version deployed on the cluster becomes stale.
+* If there are unintended changes in the cluster state that deviate from the desired state specified in the source Git repository, the cluster won't be able to self-heal.
-We recommend testing your migration scenario in a development environment before migrating your production environment. The process of removing Flux v1 configurations and deploying Flux v2 configurations should not take more than 30 minutes.
+We recommend testing your migration scenario in a development environment before migrating your production environment.
### View and delete Flux v1 configurations
Key new features introduced in the GitOps extension for Flux v2:
* Flux v1 is a monolithic do-it-all operator. Flux v2 separates the functionalities into [specialized controllers](#controllers) (Source controller, Kustomize controller, Helm controller, and Notification controller).
* Supports synchronization with multiple source repositories.
-* Supports [multi-tenancy](#multi-tenancy), like applying each source repository with its own set of permissions
+* Supports [multi-tenancy](#multi-tenancy), like applying each source repository with its own set of permissions.
* Provides operational insights through health checks, events, and alerts.
* Supports Git branches, pinning on commits and tags, and following SemVer tag ranges.
* Credentials configuration per GitRepository resource: SSH private key, HTTP/S username/password/token, and OpenPGP public keys (see the sketch below).
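For instance, a sketch of per-repository SSH credentials (names are placeholders; the referenced secret must already exist in the cluster):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: app-repo
  namespace: cluster-config
spec:
  interval: 1m
  url: ssh://git@github.com/example-org/example-repo
  ref:
    branch: main
  secretRef:
    name: ssh-credentials   # Kubernetes Secret holding the SSH private key
```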
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/plan-at-scale-deployment.md
- Title: How to plan and deploy Azure Arc-enabled Kubernetes Previously updated : 04/12/2021-
-description: Onboard large number of clusters to Azure Arc-enabled Kubernetes for configuration management
--
-# Plan and deploy Azure Arc-enabled Kubernetes
-
-Deployment of an IT infrastructure service or business application is a challenge for any company. To prevent unwelcome surprises or unplanned costs, you need to plan thoroughly to ensure you're as ready as possible. Such a plan should identify the design and deployment criteria that need to be met to complete the tasks.
-
-For the deployment to continue smoothly, your plan should establish a clear understanding of:
-
-* Roles and responsibilities.
-* An inventory of all Kubernetes clusters.
-* Networking requirements.
-* The skill set and training required to enable successful deployment and ongoing management.
-* Acceptance criteria and how you track success.
-* Tools or methods to be used to automate the deployments.
-* Identified risks and mitigation plans to avoid delays and disruptions.
-* How to avoid disruption during deployment.
-* The escalation path when a significant issue occurs.
-
-The purpose of this article is to ensure you're prepared for a successful deployment of Azure Arc-enabled Kubernetes across multiple production clusters in your environment.
-
-## Prerequisites
-
-* An existing Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:
- - [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/)
- - Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes)
- - Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
-
-* Your machines have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server. More details can be found under [network prerequisites](network-requirements.md).
-
-* A `kubeconfig` file pointing to the cluster you want to connect to Azure Arc.
-* 'Read' and 'Write' permissions for the user or service principal creating the Azure Arc-enabled Kubernetes resource type of `Microsoft.Kubernetes/connectedClusters`.
-
-## Pilot
-
-Before deploying to all production clusters, evaluate the deployment process in a pilot before adopting it broadly in your environment. For a pilot, identify a representative sampling of clusters that aren't critical to your company's ability to conduct business. You'll want to allow enough time to run the pilot and assess its impact: we recommend approximately 30 days.
-
-Establish a formal plan describing the scope and details of the pilot. The following sample plan should help you get started.
-
-* **Goals** - Describes the business and technical drivers that led to the decision that a pilot is necessary.
-* **Selection criteria** - Specifies the criteria used to select which aspects of the solution will be demonstrated via a pilot.
-* **Scope** - Covers solution components, expected schedule, duration of the pilot, and number of clusters to target.
-* **Success criteria and metrics** - Define the pilot's success criteria and specific measures used to determine level of success.
-* **Training plan** - Describes the plan for training system engineers, administrators, etc. who are new to Azure and its services during the pilot.
-* **Transition plan** - Describes the strategy and criteria used to guide transition from pilot to production.
-* **Rollback** - Describes the procedures for rolling back a pilot to pre-deployment state.
-* **Risks** - List all identified risks for conducting the pilot and associated with production deployment.
-
-## Phase 1: Build a foundation
-
-In this phase, system engineers or administrators perform core activities such as creating resource groups, tags, and role assignments so that the Azure Arc-enabled Kubernetes resources can then be created and operated.
-
-|Task |Detail |Duration |
-|--|--|--|
-| [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) | A dedicated resource group to include only Azure Arc-enabled Kubernetes resources and centralize management and monitoring of these resources. | One hour |
-| Apply [Tags](../../azure-resource-manager/management/tag-resources.md) to help organize machines. | Evaluate and develop an IT-aligned [tagging strategy](/azure/cloud-adoption-framework/decision-guides/resource-tagging/). This can help reduce the complexity of managing your Azure Arc-enabled Kubernetes resources and simplify making management decisions. | One day |
-| Identify [configurations](tutorial-use-gitops-connected-cluster.md) for GitOps | Identify the application or baseline configurations such as `PodSecurityPolicy`, `NetworkPolicy` that you want to deploy to your clusters | One day |
-| [Develop an Azure Policy](../../governance/policy/overview.md) governance plan | Determine how you'll implement governance of Azure Arc-enabled Kubernetes clusters at the subscription or resource group scope with Azure Policy. | One day |
-| Configure [Role based access control](../../role-based-access-control/overview.md) (RBAC) | Develop an access plan to identify who has read/write/all permissions on your clusters | One day |
-
-## Phase 2: Deploy Azure Arc-enabled Kubernetes
-
-In this phase, we connect your Kubernetes clusters to Azure:
-
-|Task |Detail |Duration |
-|--|--|--|
-| [Connect your first Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md) | As part of connecting your first cluster to Azure Arc, set up your onboarding environment with all the required tools such as Azure CLI, Helm, and `connectedk8s` extension for Azure CLI. | 15 minutes |
-
-## Phase 3: Manage and operate
-
-In this phase, we deploy applications and baseline configurations to your Kubernetes clusters.
-
-|Task |Detail |Duration |
-|--|--|--|
-|[Create configurations](tutorial-use-gitops-connected-cluster.md) on your clusters | Create configurations for deploying your applications on your Azure Arc-enabled Kubernetes resource. | 15 minutes |
-|[Use Azure Policy](use-azure-policy.md) for at-scale enforcement of configurations | Create policy assignments to automate the deployment of baseline configurations across all your clusters under a subscription or resource group scope. | 15 minutes |
-| [Upgrade Azure Arc agents](agent-upgrade.md) | If you have disabled auto-upgrade of agents on your clusters, update your agents manually to the latest version to make sure you have the most recent security and bug fixes. | 15 minutes |
-
-## Next steps
-
-* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
-* [Create configurations](./tutorial-use-gitops-connected-cluster.md) on your Azure Arc-enabled Kubernetes cluster.
-* [Use Azure Policy to apply configurations at scale](./use-azure-policy.md).
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 03/10/2023 Last updated : 03/15/2023 # Tutorial: Deploy applications using GitOps with Flux v2
-GitOps with Flux v2 can be enabled as a [cluster extension](conceptual-extensions.md) in Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment.
-
-> [!NOTE]
-> Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
-
-This tutorial describes how to use GitOps in a Kubernetes cluster. Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
+This tutorial describes how to use GitOps in a Kubernetes cluster. GitOps with Flux v2 is enabled as a [cluster extension](conceptual-extensions.md) in Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment.
In this tutorial, we use an example GitOps configuration with two kustomizations, so that you can see how one kustomization can have a dependency on another. You can add more kustomizations and dependencies as needed, depending on your scenario.
+Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
+ > [!TIP]
+> While the source in this tutorial is a Git repository, Flux also provides support for other common file sources such as Helm repositories, Buckets, and Azure Blob Storage.
+>
> You can also create Flux configurations by using Bicep, ARM templates, or Terraform AzAPI provider. For more information, see [Microsoft.KubernetesConfiguration fluxConfigurations](/azure/templates/microsoft.kubernetesconfiguration/fluxconfigurations).

> [!IMPORTANT]
In this tutorial, we use an example GitOps configuration with two kustomizations
> [!TIP]
> When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
+> [!NOTE]
+> Eventually Azure will stop supporting GitOps with Flux v1, so we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
+ ## Prerequisites

To deploy applications using GitOps with Flux v2, you need the following:
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
Last updated 09/23/2022
The Start/Stop VMs v2 feature starts or stops Azure Virtual Machines instances across multiple subscriptions. It starts or stops virtual machines on user-defined schedules, provides insights through [Azure Application Insights](../../azure-monitor/app/app-insights-overview.md), and sends optional notifications by using [action groups](../../azure-monitor/alerts/action-groups.md). For most scenarios, Start/Stop VMs can manage virtual machines deployed and managed both by Azure Resource Manager and by Azure Service Manager (classic), which is [deprecated](../../virtual-machines/classic-vm-deprecation.md).
-This new version of Start/Stop VMs v2 provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it's designed to take advantage of newer technology in Azure.
+This new version of Start/Stop VMs v2 provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it's designed to take advantage of newer technology in Azure. Start/Stop VMs v2 relies on multiple Azure services, and you're charged based on the services that are deployed and consumed.
## Important Start/Stop VMs v2 Updates
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Azure Maps uses a key-based authentication scheme. When you create your account,
> [!NOTE]
> Azure Maps shares customer-provided address/location queries with third-party TomTom for mapping functionality purposes. These queries aren't linked to any customer or end user when shared with TomTom and can't be used to identify individuals.
-
-Microsoft is currently in the process of adding TomTom and AccuWeather to the Online Services Subcontractor List.
+>
+> TomTom is a subprocessor that is authorized to subprocess Azure Maps customer data. For more information, see the Microsoft Online Services [Subprocessor List] located in the [Microsoft Trust Center].
## Supported regions
Stay up to date on Azure Maps:
[Azure Maps account]: https://azure.microsoft.com/services/azure-maps/
[TilesetID]: /rest/api/maps/render-v2/get-map-tile#tilesetid
[Azure Maps blog]: https://azure.microsoft.com/blog/topics/azure-maps/
+[Microsoft Trust Center]: https://www.microsoft.com/trust-center/privacy
+[Subprocessor List]: https://servicetrust.microsoft.com/DocumentPage/aead9e68-1190-4d90-ad93-36418de5c594
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md
-# Secure an Azure Maps account with a SAS token
+# Secure an Azure Maps account with a SAS token (preview)
This article describes how to create an Azure Maps account with a securely stored SAS token you can use to call the Azure Maps REST API.
Find the API usage metrics for your Azure Maps account:
Explore samples that show how to integrate Azure AD with Azure Maps:

> [!div class="nextstepaction"]
-> [Azure Maps samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
+> [Azure Maps samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Then you define these elements for the resulting alert actions by using:
1. Select **Apply**.
1. Select **Next: Condition** at the bottom of the page.
-1. On the **Select a signal** pane, you can search for the signal name or you can filter the list of signals by:
+1. On the **Condition** tab, when you select the **Signal name** field, the most commonly used signals are displayed in the drop-down list. Select one of these popular signals, or select **See all signals** if you want to choose a different signal for the condition.
+1. (Optional) If you selected **See all signals** in the previous step, use the **Select a signal** pane to search for the signal name or filter the list of signals. Filter by:
    - **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
    - **Signal source**: The service sending the signal. The list is pre-populated based on the type of alert rule you selected.
Then you define these elements for the resulting alert actions by using:
|Resource health|Resource health|The service that provides the resource-level health status. |
|Service health|Service health|The service that provides the subscription-level health status. |
-1. Select the **Signal name** and **Apply**.
+ Select the **Signal name** and **Apply**.
1. Follow the steps in the tab that corresponds to the type of alert you're creating.

### [Metric alert](#tab/metric)
- 1. On the **Configure signal logic** pane, you can preview the results of the selected metric signal. Select values for the following fields.
+ 1. Preview the results of the selected metric signal in the **Preview** section. Select values for the following fields.
|Field |Description |
|||
- |Select time series|Select the time series to include in the results. |
- |Chart period|Select the time span to include in the results. Can be from the last six hours to the last week.|
-
- 1. (Optional) Depending on the signal type, you might see the **Split by dimensions** section.
-
- Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
-
- If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs.
-
- |Field |Description |
- |||
- |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the **Azure Resource ID** column makes the specified resource into the alert target. If detected, the **ResourceID** column is selected automatically and changes the context of the fired alert to the record's resource. |
- |Operator|The operator used on the dimension name and value. |
- |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
- |Include all future values| Select this field to include any future values added to the selected dimension. |
+ |Time range|The time range to include in the results. Can be from the last six hours to the last week.|
+ |Time series|The time series to include in the results.|
1. In the **Alert logic** section:
Then you define these elements for the resulting alert actions by using:
|Threshold sensitivity| If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern that's required to trigger an alert. <br> - **High**: Thresholds are tight and close to the metric series pattern. An alert rule is triggered on the smallest deviation, resulting in more alerts. <br> - **Medium**: Thresholds are less tight and more balanced. There will be fewer alerts than with high sensitivity (default). <br> - **Low**: Thresholds are loose, allowing greater deviation from the metric series pattern. Alert rules are only triggered on large deviations, resulting in fewer alerts. |
|Aggregation granularity| Select the interval that's used to group the data points by using the aggregation type function. Choose an **Aggregation granularity** (period) that's greater than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation period of an added time series.|
|Frequency of evaluation|Select how often the alert rule is to be run. Select a frequency that's smaller than the aggregation granularity to generate a sliding window for the evaluation.|
+
+ 1. (Optional) Depending on the signal type, you might see the **Split by dimensions** section.
+
+ Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
+
+ If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs.
+
+ |Field |Description |
+ |||
+ |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the **Azure Resource ID** column makes the specified resource into the alert target. If detected, the **ResourceID** column is selected automatically and changes the context of the fired alert to the record's resource. |
+ |Operator|The operator used on the dimension name and value. |
+ |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
+ |Include all future values| Select this field to include any future values added to the selected dimension. |
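+
+For example, the storage-transactions scenario above can be expressed with a dimension filter in the Azure CLI. This is a sketch: the alert name, resource group, scope, and threshold are placeholders, not values from this article.
+
+```azurecli
+# Fire when GetBlob transactions exceed 1000 over a 15-minute window
+az monitor metrics alert create --name HighGetBlobTransactions \
+  --resource-group MyResourceGroup \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
+  --condition "total transactions > 1000 where ApiName includes GetBlob" \
+  --window-size 15m --evaluation-frequency 5m
+```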
1. (Optional) In the **When to evaluate** section:
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
To enable recommended alert rules:
1. On the **Alerts** page, select **Enable recommended alert rules**. The **Enable recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
1. In the **Alert me if** section, select all of the rules you want to enable. The rules are populated with the default values for the rule condition, such as the percentage of CPU usage that you want to trigger an alert. You can change the default values if you would like.
1. In the **Notify me by** section, select the way you want to be notified if an alert is fired.
+1. Select **Use an existing action group**, and enter the details of the existing action group if you want to use an action group that already exists.
1. Select **Enable**.

   :::image type="content" source="media/alerts-managing-alert-instances/alerts-enable-recommended-alert-rule-pane.png" alt-text="Screenshot of recommended alert rules pane.":::
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
This table provides a brief description of each alert type. For more information
If you don't have alert rules defined for the selected resource, you can [enable recommended out-of-the-box alert rules in the Azure portal](alerts-manage-alert-rules.md#enable-recommended-alert-rules-in-the-azure-portal).
+The system compiles a list of recommended alert rules based on:
+
+- The resource provider's knowledge of important signals and thresholds for monitoring the resource.
+- Telemetry that tells us what customers commonly alert on for this resource.
+ > [!NOTE]
-> The alert rule recommendations feature is currently in preview and is only enabled for:
+> Recommended alert rules are enabled for:
> - Virtual machines
> - AKS resources
> - Log Analytics workspaces
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based resources:
> [!div class="checklist"]
> - Support full integration between Application Insights and [Log Analytics](../logs/log-analytics-overview.md).
> - Send Application Insights telemetry to a common [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
->
+> - Allow you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
> - Enable common [Azure role-based access control](../../role-based-access-control/overview.md) across your resources.
> - Eliminate the need for cross-app/workspace queries.
> - Are available in all commercial regions and [Azure US Government](../../azure-government/index.yml).
Legacy table: traces
* [Explore metrics](../essentials/metrics-charts.md)
* [Write Log Analytics queries](../logs/log-query-overview.md)
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
Use the following procedure if you're not using managed identity authentication.
## Limitations

- Enabling managed identity authentication (preview) isn't currently supported by using Terraform or Azure Policy.
-- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-name\>-\<cluster-region\>*. Currently, this name can't be modified.
+- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-region\>-\<cluster-name\>*. Currently, this name can't be modified.
## Next steps
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|MaxAllowedResourceCount |Yes |Maximum allowed entities count |Count |Maximum |Maximum allowed entities count |No Dimensions |
|PipelineCancelledRuns |Yes |Cancelled pipeline runs metrics |Count |Total |Cancelled pipeline runs metrics |FailureType, CancelledBy, Name |
|PipelineElapsedTimeRuns |Yes |Elapsed Time Pipeline Runs Metrics |Count |Total |Elapsed Time Pipeline Runs Metrics |RunId, Name |
-|PipelineFailedRuns |Yes |Failed pipeline runs metrics |Count |Total |Failed pipeline runs metrics |FailureType, Name |
-|PipelineSucceededRuns |Yes |Succeeded pipeline runs metrics |Count |Total |Succeeded pipeline runs metrics |FailureType, Name |
+|PipelineFailedRuns |Yes |Failed pipeline runs metrics |Count |Total |Failed pipeline runs metrics |FailureType, Pipeline |
+|PipelineSucceededRuns |Yes |Succeeded pipeline runs metrics |Count |Total |Succeeded pipeline runs metrics |FailureType, Pipeline |
|ResourceCount |Yes |Total entities count |Count |Maximum |Total entities count |No Dimensions |
|SSISIntegrationRuntimeStartCancel |Yes |Cancelled SSIS integration runtime start metrics |Count |Total |Cancelled SSIS integration runtime start metrics |IntegrationRuntimeName |
|SSISIntegrationRuntimeStartFailed |Yes |Failed SSIS integration runtime start metrics |Count |Total |Failed SSIS integration runtime start metrics |IntegrationRuntimeName |
This latest update adds a new column and reorders the metrics to be alphabetical
|SSISPackageExecutionCancel |Yes |Cancelled SSIS package execution metrics |Count |Total |Cancelled SSIS package execution metrics |IntegrationRuntimeName |
|SSISPackageExecutionFailed |Yes |Failed SSIS package execution metrics |Count |Total |Failed SSIS package execution metrics |IntegrationRuntimeName |
|SSISPackageExecutionSucceeded |Yes |Succeeded SSIS package execution metrics |Count |Total |Succeeded SSIS package execution metrics |IntegrationRuntimeName |
-|TriggerCancelledRuns |Yes |Cancelled trigger runs metrics |Count |Total |Cancelled trigger runs metrics |Name, FailureType |
-|TriggerFailedRuns |Yes |Failed trigger runs metrics |Count |Total |Failed trigger runs metrics |Name, FailureType |
-|TriggerSucceededRuns |Yes |Succeeded trigger runs metrics |Count |Total |Succeeded trigger runs metrics |Name, FailureType |
+|TriggerCancelledRuns |Yes |Cancelled trigger runs metrics |Count |Total |Cancelled trigger runs metrics |Pipeline, FailureType |
+|TriggerFailedRuns |Yes |Failed trigger runs metrics |Count |Total |Failed trigger runs metrics |Pipeline, FailureType |
+|TriggerSucceededRuns |Yes |Succeeded trigger runs metrics |Count |Total |Succeeded trigger runs metrics |Pipeline, FailureType |
## Microsoft.DataLakeAnalytics/accounts
This latest update adds a new column and reorders the metrics to be alphabetical
- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
-<!--Gen Date: Sun Mar 12 2023 11:30:35 GMT+0200 (Israel Standard Time)-->
+<!--Gen Date: Sun Mar 12 2023 11:30:35 GMT+0200 (Israel Standard Time)-->
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Deleting a linked workspace is permitted while linked to cluster. If you decide
- Error messages
- **Cluster Create**
- - 400 -- Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.
- - 400 -- The body of the request is null or in bad format.
- - 400 -- "SKU" name is invalid. Set "SKU" name to capacityReservation.
- - 400 -- Capacity was provided but "SKU" is not capacityReservation. Set "SKU" name to capacityReservation.
- - 400 -- Missing Capacity in "SKU". Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
- - 400 -- Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
- - 400 -- No "SKU" was set. Set the "SKU" name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
- - 400 -- Identity is null or empty. Set Identity with systemAssigned type.
- - 400 -- KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
- - 400 -- Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
- **Cluster Update**
  - 400 -- Cluster is in deleting state. Async operation is in progress. Cluster must complete its operation before any update operation is performed.
  - 400 -- KeyVaultProperties is not empty but has a bad format. See [key identifier update](#update-cluster-with-key-identifier-details).
Deleting a linked workspace is permitted while linked to cluster. If you decide
  - 400 -- Cluster is in deleting state. Wait for the Async operation to complete and try again.

 **Cluster Get**
- - 404 -- Cluster not found, the cluster may have been deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it, or use another name to create a new cluster.
-
- **Cluster Delete**
- - 409 -- Can't delete a cluster while in provisioning state. Wait for the Async operation to complete and try again.
-
- **Workspace link**
- - 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.
- - 409 -- Workspace link or unlink operation in process.
- - 400 -- Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
+ - 404--Cluster not found; the cluster might have been deleted. If you try to create a cluster with that name and get a conflict, the cluster is in the deletion process.
- **Workspace unlink**
- - 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.
- - 409 -- Workspace link or unlink operation in process.
## Next steps

- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)
azure-monitor Log Analytics Workspace Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-health.md
To view your Log Analytics workspace health and set up health status alerts:
:::image type="content" source="media/data-ingestion-time/log-analytics-workspace-latency.png" lightbox="media/data-ingestion-time/log-analytics-workspace-latency.png" alt-text="Screenshot that shows the Resource health screen for a Log Analytics workspace.":::
-1. To set up health status alerts:
- 1. Select **Add resource health alert**.
+1. To set up health status alerts, you can either [enable recommended out-of-the-box alert](../alerts/alerts-overview.md#recommended-alert-rules) rules, or manually create new alert rules.
+ - To enable the recommended alert rules:
+    1. Select **Alerts**, then select **Enable recommended alert rules**. The **Enable recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
+    1. In the **Alert me if** section, select all of the rules you want to enable. The rules are populated with the default values for the rule condition. You can change the default values if you'd like.
+ 1. In the **Notify me by** section, select the way you want to be notified if an alert is fired.
+ 1. Select **Use an existing action group**, and enter the details of the existing action group if you want to use an action group that already exists.
+ 1. Select **Enable**.
+
+ :::image type="content" source="../alerts/media/alerts-managing-alert-instances/alerts-enable-recommended-alert-rule-pane.png" alt-text="Screenshot of recommended alert rules pane.":::
+
+ - To create a new alert rule:
+ 1. Select **Add resource health alert**.
+
+       The **Create alert rule** wizard opens, with the **Scope** and **Condition** panes pre-populated. By default, the rule triggers alerts on all status changes in all Log Analytics workspaces in the subscription. If necessary, you can edit and modify the scope and condition at this stage.
- The **Create alert rule** wizard opens, with the **Scope** and **Condition** panes pre-populated. By default, the rule triggers alerts all status changes in all Log Analytics workspaces in the subscription. If necessary, you can edit and modify the scope and condition at this stage.
-
- :::image type="content" source="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" lightbox="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" alt-text="Screenshot that shows the Create alert rule wizard for Log Analytics workspace latency issues.":::
-
- 1. Follow the rest of the steps in [Create a new alert rule in the Azure portal](../alerts/alerts-create-new-alert-rule.md#create-a-new-alert-rule-in-the-azure-portal).
+ :::image type="content" source="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" lightbox="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" alt-text="Screenshot that shows the Create alert rule wizard for Log Analytics workspace latency issues.":::
+ 1. Follow the rest of the steps in [Create a new alert rule in the Azure portal](../alerts/alerts-create-new-alert-rule.md#create-a-new-alert-rule-in-the-azure-portal).
## Investigate Log Analytics workspace health issues

To investigate Log Analytics workspace health issues:
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Provide the following properties when creating a new dedicated cluster:
After you create your cluster resource, you can edit properties such as *sku*, *keyVaultProperties*, or *billingType*. See more details below.
-You can have up to five active clusters per subscription per region. If the cluster is deleted, it's still reserved for 14 days. You can have up to seven clusters per subscription and region, five active, plus two deleted in past 14 days.
+Deleted clusters take two weeks to be completely removed. You can have up to seven clusters per subscription and region: five active, plus two deleted in the past two weeks.
> [!NOTE] > Cluster creation triggers resource allocation and provisioning. This operation can take a few hours to complete.
N/A
You need to have *write* permissions on the cluster resource.
-When deleting a cluster, you're losing access to all data in cluster, which was ingested from workspaces that were linked to it. This operation isn't reversible.
-The cluster's billing stops when deleted, regardless of the 30-days commitment tier in cluster.
+When you delete a cluster, you lose access to all data that was ingested from workspaces linked to it. This operation isn't reversible.
+The cluster's billing stops when the cluster is deleted, regardless of the 30-day commitment tier defined in the cluster.
-If you delete your cluster while workspaces are linked, Workspaces get automatically unlinked from the cluster before the cluster delete, and new data sent to workspaces gets ingested to Log Analytics store instead. If the retention of data in workspaces older than the period it was linked to the cluster, you can query workspace for the time range before the link to cluster and after the unlink, and the service performs cross-cluster queries seamlessly.
+If you delete your cluster while workspaces are linked, the workspaces are automatically unlinked from the cluster before the cluster is deleted, and new data sent to the workspaces is ingested into Log Analytics clusters instead. You can query a workspace for the time range before it was linked to the cluster and after the unlink, and the service performs cross-cluster queries seamlessly.
> [!NOTE]
-> - There is a limit of seven clusters per subscription and region, five active, plus two deleted in past 14 days.
+> - There is a limit of seven clusters per subscription and region: five active, plus two that were deleted in the past two weeks.
> - A cluster's name remains reserved for 14 days after deletion and can't be used for creating a new cluster.

Use the following commands to delete a cluster:
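For example, with the Azure CLI (a sketch; the resource group and cluster name are placeholders):

```azurecli
az monitor log-analytics cluster delete --resource-group MyResourceGroup --name MyCluster
```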
Authorization: Bearer <token>
- A maximum of five active clusters can be created in each region and subscription.
-- A maximum of seven clusters allowed per subscription and region, five active, plus two deleted in past 14 days.
+- A maximum of seven clusters is allowed per subscription and region: five active, plus two that were deleted in the past two weeks.
- A maximum of 1,000 Log Analytics workspaces can be linked to a cluster.
Authorization: Bearer <token>
- If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
- Double encryption setting can't be changed after the cluster has been created.
-- Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster.
+- Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) a workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster.
## Troubleshooting

-- If you get conflict error when creating a cluster, it may be that you've deleted your cluster in the last 14 days and it's in a soft-delete state. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
+- If you get a conflict error when creating a cluster, a cluster with that name might have been deleted in the past two weeks and still be in the deletion process. The cluster name remains reserved during the two-week deletion period, and you can't create a new cluster with that name.
- If you update your cluster while the cluster is in a provisioning or updating state, the update will fail.
Authorization: Bearer <token>
### Cluster Get
+ - 404--Cluster not found; the cluster might have been deleted. If you try to create a cluster with that name and get a conflict, the cluster is in the deletion process.
### Cluster Delete
Authorization: Bearer <token>
- 404--Workspace not found. The workspace you specified doesn't exist or was deleted.
- 409--Workspace link or unlink operation in process.
-- 400--Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
+- 400--Cluster not found, the cluster you specified doesn't exist or was deleted.
### Workspace unlink

- 404--Workspace not found. The workspace you specified doesn't exist or was deleted.
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
Instead of directly configuring the schema of the table, you can upload a file w
```kusto
source
| extend TimeGenerated = todatetime(Time)
- | parse RawData with
+ | parse kind = regex RawData with *
+ ':"'
ClientIP:string
- ' ' *
- ' ' *
- ' [' * '] "' RequestType:string
- " " Resource:string
+ " - -" * '"'
+ RequestType:string
+ ' '
+ Resource:string
" " * '" ' ResponseCode:int " " *
- | where ResponseCode != 200
- | project-away Time, RawData
```

1. Select **Run** to view the results.
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-collector-release-notes.md
A point release to address user-reported bugs.
### Bug fixes

- Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)
- Fix [ArgumentException: telemetryProcessorTypedoes not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19)
-<br>Snapshot Collector used via SDK is not supported when Interop feature is enabled. [See more not supported scenarios.](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md#not-supported-scenarios)
+<br>Snapshot Collector used via SDK is not supported when Interop feature is enabled. [See more not supported scenarios.](snapshot-debugger-troubleshoot.md#not-supported-scenarios)
## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2) A point release to address a user-reported bug.
A point release to revert a breaking change introduced in 1.4.0.
- Fix [Method not found in WebJobs](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/15)

## [1.4.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.0)
-Address multiple improvements and added support for Azure Active Directory (AAD) authentication for Application Insights ingestion.
+This release addresses multiple improvements and adds support for Azure Active Directory (Azure AD) authentication for Application Insights ingestion.
### Changes

- Snapshot Collector package size reduced by 60%. From 10.34 MB to 4.11 MB.
- Target netstandard2.0 only in Snapshot Collector.
A point release to address a couple of high-impact issues.
## [1.3.1](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.1)

- Remove support for collecting snapshots with the RtlCloneUserProcess API and only support PssCaptureSnapshots API.
-- Increase the default limit on how many snapshot can be captured in 10 minutes from 1 to 3.
+- Increase the default limit on how many snapshots can be captured in 10 minutes from 1 to 3.
- Allow SnapshotUploader.exe to negotiate TLS 1.1 and 1.2
- Report additional telemetry when SnapshotUploader logs a warning or an error
- Stop taking snapshots when the backend service reports the daily quota was reached (50 snapshots per day)
Augmented usage telemetry
## [1.1.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.1.0)

### Changes

- Added host memory protection. This feature reduces the impact on the host machine's memory.
-- Improve the Azure portal snapshot viewing experience.
+- Improve the Azure portal snapshot viewing experience.
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
If you're running a different type of Azure service, here are instructions for e
* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json)
* [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
* [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and Virtual Machine Scale Sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)

## Enable Snapshot Debugger for other clouds
For User-Assigned Identity:
|App Setting | Value |
||-|
-|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AD;ClientId={Client id of the User-Assigned Identity} |
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AD;ClientID={Client ID of the User-Assigned Identity} |
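For example (a sketch; the app name, resource group, and client ID are placeholders), you can set this app setting with the Azure CLI:

```azurecli
az webapp config appsettings set --name MyWebApp --resource-group MyResourceGroup \
  --settings "APPLICATIONINSIGHTS_AUTHENTICATION_STRING=Authorization=AD;ClientID=<client-id-of-user-assigned-identity>"
```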
## Disable Snapshot Debugger
Below you can find scenarios where Snapshot Collector isn't supported:
* Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
* See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
-* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md).
+* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md).
[Enablement UI]: ./media/snapshot-debugger/enablement-ui.png
[snapshot-debugger-app-setting]:./media/snapshot-debugger/snapshot-debugger-app-setting.png
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-function-app.md
We recommend that you have Snapshot Debugger enabled on all your apps to ease di
* Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
* [View snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.
* Customize Snapshot Debugger configuration based on your use-case on your Function app. For more information, see [snapshot configuration in host.json](../../azure-functions/functions-host-json.md#applicationinsightssnapshotconfiguration).
-* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md).
+* For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
+
+ Title: Troubleshoot Azure Application Insights Snapshot Debugger
+description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Snapshot Debugger.
+++
+reviewer: cweining
+ Last updated : 03/20/2023+++
+# <a id="troubleshooting"></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots
+
+If you enabled Application Insights Snapshot Debugger for your application, but aren't seeing snapshots for exceptions, you can use these instructions to troubleshoot.
+
+There can be many different reasons why snapshots aren't generated. You can start by running the snapshot health check to identify some of the possible common causes.
+
+## Not Supported Scenarios
+
+Below you can find scenarios where Snapshot Collector isn't supported:
+
+|Scenario | Side Effects | Recommendation |
+|--|--|--|
+|When using the Snapshot Collector SDK in your application directly (*.csproj*) and you have enabled the advanced option "Interop".| The local Application Insights SDK (including Snapshot Collector telemetry) is lost; therefore, no snapshots are available. <br/> Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor`. <br/> For more information about the Application Insights "Interop" feature, see the [documentation](../app/azure-web-apps-net-core.md#troubleshooting). | If you're using the advanced option "Interop", use the codeless Snapshot Collector injection (enabled through the Azure portal UX). |
+
+## Make sure you're using the appropriate Snapshot Debugger Endpoint
+
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+
+For App Service and applications using the Application Insights SDK, you have to update the connection string using the supported overrides for Snapshot Debugger as defined below:
+
+|Connection String Property | US Government Cloud | China Cloud |
+|--|--|--|
+|SnapshotEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+For more information about other connection overrides, see [Application Insights documentation](../app/sdk-connection-string.md?tabs=net#connection-string-with-explicit-endpoint-overrides).
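+
+For example, a US Government Cloud connection string with the Snapshot Debugger override might look like the following sketch (the instrumentation key is a placeholder, and the `EndpointSuffix` is assumed to match your cloud):
+
+```
+InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=applicationinsights.us;SnapshotEndpoint=https://snapshot.monitor.azure.us
+```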
+
+For Function App, you have to update the `host.json` using the supported overrides below:
+
+|Property | US Government Cloud | China Cloud |
+|--|--|--|
+|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
+
+Below is an example of the `host.json` updated with the US Government Cloud agent endpoint:
+
+```json
+{
+ "version": "2.0",
+ "logging": {
+ "applicationInsights": {
+ "samplingExcludedTypes": "Request",
+ "samplingSettings": {
+ "isEnabled": true
+ },
+ "snapshotConfiguration": {
+ "isEnabled": true,
+ "agentEndpoint": "https://snapshot.monitor.azure.us"
+ }
+ }
+ }
+}
+```
+
+## Use the snapshot health check
+
+Several common problems result in the Open Debug Snapshot not showing up. For example, you might be using an outdated Snapshot Collector, you might have reached the daily upload limit, or the snapshot might simply be taking a long time to upload. Use the Snapshot Health Check to troubleshoot these common problems.
+
+There's a link in the exception pane of the end-to-end trace view that takes you to the Snapshot Health Check.
++
+The interactive, chat-like interface looks for common problems and guides you to fix them.
++
+If that doesn't solve the problem, then refer to the following manual troubleshooting steps.
+
+## Verify the instrumentation key
+
+Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the *ApplicationInsights.config* file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal.
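+
+For reference, here's a sketch of where the key lives in *ApplicationInsights.config* (the key shown is a placeholder):
+
+```xml
+<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
+  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
+  <!-- ... telemetry modules, processors, and other settings ... -->
+</ApplicationInsights>
+```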
++
+## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
+
+If you have an ASP.NET application that's hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
+
+[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md?toc=/azure/azure-monitor/toc.json). The set of SSL security protocols is one of the quirks enabled by the `httpRuntime targetFramework` value in the `system.web` section of `web.config`.
+If the `httpRuntime targetFramework` is 4.5.2 or lower, then TLS 1.2 isn't included by default.
+
+> [!NOTE]
+> The `httpRuntime targetFramework` value is independent of the target framework used when building your application.
+To check the setting, open your *web.config* file and find the system.web section. Ensure that the `targetFramework` for `httpRuntime` is set to 4.6 or above.
+
+ ```xml
+ <system.web>
+ ...
+ <httpRuntime targetFramework="4.7.2" />
+ ...
+ </system.web>
+ ```
+
+> [!NOTE]
+> Modifying the `httpRuntime targetFramework` value changes the runtime quirks applied to your application and can cause other, subtle behavior changes. Be sure to test your application thoroughly after making this change. For a full list of compatibility changes, see [Re-targeting changes](/dotnet/framework/migration-guide/application-compatibility#retargeting-changes).
+> [!NOTE]
+> If the `targetFramework` is 4.7 or above then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you're using your own virtual machine, you may need to enable TLS 1.2 in the OS.
+## Preview Versions of .NET Core
+
+If you're using a preview version of .NET Core or your application references Application Insights SDK, directly or indirectly via a dependent assembly, follow the instructions for [Enable Snapshot Debugger for other environments](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json).
+
+## Check the Diagnostic Services site extension's Status Page
+
+If Snapshot Debugger was enabled through the [Application Insights pane](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json) in the portal, it was enabled by the Diagnostic Services site extension.
+
+> [!NOTE]
+> Codeless installation of Application Insights Snapshot Debugger follows the .NET Core support policy.
+> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+You can check the Status Page of this extension by going to the following URL:
+`https://{site-name}.scm.azurewebsites.net/DiagnosticServices`
+
+> [!NOTE]
+> The domain of the Status Page link varies depending on the cloud.
+> This domain is the same as the Kudu management site for App Service.
+
+This Status Page shows the installation state of the Profiler and Snapshot Collector agents. If there's an unexpected error, the page displays it along with steps to fix it.
+
+You can use the Kudu management site for App Service to get the base URL of this Status Page:
+
+1. Open your App Service application in the Azure portal.
+1. Select **Advanced Tools**, or search for **Kudu**.
+1. Select **Go**.
+1. On the Kudu management site, **append `/DiagnosticServices` to the URL and press Enter**.
+   The URL ends like this: `https://<kudu-url>/DiagnosticServices`
+
+## Upgrade to the latest version of the NuGet package
+
+Based on how Snapshot Debugger was enabled, see the following options:
+
+* If Snapshot Debugger was enabled through the [Application Insights pane in the portal](snapshot-debugger-app-service.md?toc=/azure/azure-monitor/toc.json), then your application should already be running the latest NuGet package.
+
+* If Snapshot Debugger was enabled by including the [Microsoft.ApplicationInsights.SnapshotCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) NuGet package, use Visual Studio's NuGet Package Manager to make sure you're using the latest version of `Microsoft.ApplicationInsights.SnapshotCollector`.
+
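+If you manage the package through your project file, the reference looks something like the following sketch; the floating version shown is illustrative, so pin to the latest stable release for your project:
+
+ ```xml
+ <ItemGroup>
+   <!-- Illustrative floating version; check NuGet.org for the latest stable release. -->
+   <PackageReference Include="Microsoft.ApplicationInsights.SnapshotCollector" Version="1.4.*" />
+ </ItemGroup>
+ ```
+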
+For the latest updates and bug fixes, [consult the release notes](./snapshot-collector-release-notes.md).
+
+## Check the uploader logs
+
+After a snapshot is created, a separate uploader process creates a minidump file (*.dmp*) on disk and uploads it, along with any associated PDBs, to Application Insights Snapshot Debugger storage. After the minidump has uploaded successfully, it's deleted from disk. The log files for the uploader process are kept on disk. In an App Service environment, you can find these logs in `D:\Home\LogFiles`. Use the Kudu management site for App Service to find these log files.
+
+1. Open your App Service application in the Azure portal.
+1. Select **Advanced Tools**, or search for **Kudu**.
+1. Select **Go**.
+1. In the **Debug console** drop-down list, select **CMD**.
+1. Select **LogFiles**.
+
+You should see at least one file with a name that begins with `Uploader_` or `SnapshotUploader_` and a `.log` extension. Select the appropriate icon to download any log files or open them in a browser.
+The file name includes a unique suffix that identifies the App Service instance. If your App Service instance is hosted on more than one machine, there are separate log files for each machine. When the uploader detects a new minidump file, it's recorded in the log file. Here's an example of a successful snapshot and upload:
+
+```
+SnapshotUploader.exe Information: 0 : Received Fork request ID 139e411a23934dc0b9ea08a626db16c5 from process 6368 (Low pri)
+ DateTime=2018-03-09T01:42:41.8571711Z
+SnapshotUploader.exe Information: 0 : Creating minidump from Fork request ID 139e411a23934dc0b9ea08a626db16c5 from process 6368 (Low pri)
+ DateTime=2018-03-09T01:42:41.8571711Z
+SnapshotUploader.exe Information: 0 : Dump placeholder file created: 139e411a23934dc0b9ea08a626db16c5.dm_
+ DateTime=2018-03-09T01:42:41.8728496Z
+SnapshotUploader.exe Information: 0 : Dump available 139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:45.7525022Z
+SnapshotUploader.exe Information: 0 : Successfully wrote minidump to D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:45.7681360Z
+SnapshotUploader.exe Information: 0 : Uploading D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp, 214.42 MB (uncompressed)
+ DateTime=2018-03-09T01:42:45.7681360Z
+SnapshotUploader.exe Information: 0 : Upload successful. Compressed size 86.56 MB
+ DateTime=2018-03-09T01:42:59.6184651Z
+SnapshotUploader.exe Information: 0 : Extracting PDB info from D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp.
+ DateTime=2018-03-09T01:42:59.6184651Z
+SnapshotUploader.exe Information: 0 : Matched 2 PDB(s) with local files.
+ DateTime=2018-03-09T01:42:59.6809606Z
+SnapshotUploader.exe Information: 0 : Stamp does not want any of our matched PDBs.
+ DateTime=2018-03-09T01:42:59.8059929Z
+SnapshotUploader.exe Information: 0 : Deleted D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\139e411a23934dc0b9ea08a626db16c5.dmp
+ DateTime=2018-03-09T01:42:59.8530649Z
+```
+
+> [!NOTE]
+> The example above is from version 1.2.0 of the `Microsoft.ApplicationInsights.SnapshotCollector` NuGet package. In earlier versions, the uploader process is called `MinidumpUploader.exe` and the log is less detailed.
+
+In the previous example, the instrumentation key is `c12a605e73c44346a984e00000000000`. This value should match the instrumentation key for your application.
+The minidump is associated with a snapshot with the ID `139e411a23934dc0b9ea08a626db16c5`. You can use this ID later to locate the associated exception record in Application Insights Analytics.
+
+The uploader scans for new PDBs about once every 15 minutes. Here's an example:
+
+```
+SnapshotUploader.exe Information: 0 : PDB rescan requested.
+ DateTime=2018-03-09T01:47:19.4457768Z
+SnapshotUploader.exe Information: 0 : Scanning D:\home\site\wwwroot for local PDBs.
+ DateTime=2018-03-09T01:47:19.4457768Z
+SnapshotUploader.exe Information: 0 : Local PDB scan complete. Found 2 PDB(s).
+ DateTime=2018-03-09T01:47:19.4614027Z
+SnapshotUploader.exe Information: 0 : Deleted PDB scan marker : D:\local\Temp\Dumps\c12a605e73c44346a984e00000000000\6368.pdbscan
+ DateTime=2018-03-09T01:47:19.4614027Z
+```
+
+For applications that *aren't* hosted in App Service, the uploader logs are in the same folder as the minidumps: `%TEMP%\Dumps\<ikey>` (where `<ikey>` is your instrumentation key).
+
+## Troubleshooting Cloud Services
+
+In Cloud Services, the default temporary folder could be too small to hold the minidump files, leading to lost snapshots.
+
+The space needed depends on the total working set of your application and the number of concurrent snapshots.
+
+The working set of a 32-bit ASP.NET web role is typically between 200 MB and 500 MB. Allow for at least two concurrent snapshots.
+
+For example, if your application uses 1 GB of total working set, you should make sure there is at least 2 GB of disk space to store snapshots.
+
+Follow these steps to configure your Cloud Service role with a dedicated local resource for snapshots.
+
+1. Add a new local resource to your Cloud Service by editing the Cloud Service definition (.csdef) file. The following example defines a resource called `SnapshotStore` with a size of 5 GB.
+
+ ```xml
+ <LocalResources>
+ <LocalStorage name="SnapshotStore" cleanOnRoleRecycle="false" sizeInMB="5120" />
+ </LocalResources>
+ ```
+
+1. Modify your role's startup code to add an environment variable that points to the `SnapshotStore` local resource. For Worker Roles, the code should be added to your role's `OnStart` method:
+
+ ```csharp
+ public override bool OnStart()
+ {
+ Environment.SetEnvironmentVariable("SNAPSHOTSTORE", RoleEnvironment.GetLocalResource("SnapshotStore").RootPath);
+ return base.OnStart();
+ }
+ ```
+
+ For Web Roles (ASP.NET), the code should be added to your web application's `Application_Start` method:
+
+ ```csharp
+ using Microsoft.WindowsAzure.ServiceRuntime;
+ using System;
+ namespace MyWebRoleApp
+ {
+ public class MyMvcApplication : System.Web.HttpApplication
+ {
+ protected void Application_Start()
+ {
+ Environment.SetEnvironmentVariable("SNAPSHOTSTORE", RoleEnvironment.GetLocalResource("SnapshotStore").RootPath);
+ // TODO: The rest of your application startup code
+ }
+ }
+ }
+ ```
+
+1. Update your role's *ApplicationInsights.config* file to override the temporary folder location used by `SnapshotCollector`:
+
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- Use the SnapshotStore local resource for snapshots -->
+ <TempFolder>%SNAPSHOTSTORE%</TempFolder>
+ <!-- Other SnapshotCollector configuration options -->
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+## Override the Shadow Copy folder
+
+When the Snapshot Collector starts up, it tries to find a folder on disk that is suitable for running the Snapshot Uploader process. The chosen folder is known as the Shadow Copy folder.
+
+The Snapshot Collector checks a few well-known locations, making sure it has permissions to copy the Snapshot Uploader binaries. The following environment variables are used:
+
+* `Fabric_Folder_App_Temp`
+* `LOCALAPPDATA`
+* `APPDATA`
+* `TEMP`
+
+If a suitable folder can't be found, Snapshot Collector reports an error saying *"Couldn't find a suitable shadow copy folder."*
+
+If the copy fails, Snapshot Collector reports a `ShadowCopyFailed` error.
+
+If the uploader can't be launched, Snapshot Collector reports an `UploaderCannotStartFromShadowCopy` error. The body of the message often contains `System.UnauthorizedAccessException`. This error usually occurs because the application is running under an account with reduced permissions. The account has permission to write to the shadow copy folder, but it doesn't have permission to execute code.
+
+Because these errors usually happen during startup, they're typically followed by an `ExceptionDuringConnect` error saying *Uploader failed to start*.
+
+To work around these errors, you can specify the shadow copy folder manually via the `ShadowCopyFolder` configuration option. For example, using *ApplicationInsights.config*:
+
+ ```xml
+ <TelemetryProcessors>
+ <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
+ <!-- Override the default shadow copy folder. -->
+ <ShadowCopyFolder>D:\SnapshotUploader</ShadowCopyFolder>
+ <!-- Other SnapshotCollector configuration options -->
+ </Add>
+ </TelemetryProcessors>
+ ```
+
+Or, if you're using *appsettings.json* with a .NET Core application:
+
+ ```json
+ {
+ "ApplicationInsights": {
+ "InstrumentationKey": "<your instrumentation key>"
+ },
+ "SnapshotCollectorConfiguration": {
+ "ShadowCopyFolder": "D:\\SnapshotUploader"
+ }
+ }
+ ```
+
+## Use Application Insights search to find exceptions with snapshots
+
+When a snapshot is created, the throwing exception is tagged with a snapshot ID. That snapshot ID is included as a custom property when the exception is reported to Application Insights. Using **Search** in Application Insights, you can find all records with the `ai.snapshot.id` custom property.
+
+1. Browse to your Application Insights resource in the Azure portal.
+1. Select **Search**.
+1. Type `ai.snapshot.id` in the Search text box and press Enter.
+If this search returns no results, then no snapshots were reported to Application Insights in the selected time range.
+
+To search for a specific snapshot ID from the Uploader logs, type that ID in the Search box. If you can't find records for a snapshot that you know was uploaded, follow these steps:
+
+1. Double-check that you're looking at the right Application Insights resource by verifying the instrumentation key.
+
+1. Using the timestamp from the Uploader log, adjust the Time Range filter of the search to cover that time range.
+
+If you still don't see an exception with that snapshot ID, then the exception record wasn't reported to Application Insights. This situation can happen if your application crashed after it took the snapshot but before it reported the exception record. In this case, check the App Service logs under **Diagnose and solve problems** to see if there were unexpected restarts or unhandled exceptions.
+
+## Edit network proxy or firewall rules
+
+If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Snapshot Debugger service.
+
+The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
If your application runs in Azure Service Fabric, Cloud Service, Virtual Machine
- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance. - See [snapshots](snapshot-debugger.md?toc=/azure/azure-monitor/toc.json#view-snapshots-in-the-portal) in the Azure portal.-- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md).
+- For help with troubleshooting Snapshot Debugger issues, see [Snapshot Debugger troubleshooting](snapshot-debugger-troubleshoot.md).
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
The following environments are supported:
* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json) * [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running OS family 4 or later * [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running on Windows Server 2012 R2 or later
-* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later
+* [Azure Virtual Machines and Virtual Machine Scale Sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later
* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) running Windows Server 2012 R2 or later or Windows 8.1 or later > [!NOTE] > Client applications (for example, WPF, Windows Forms or UWP) aren't supported.
-If you've enabled Snapshot Debugger but aren't seeing snapshots, check our [Troubleshooting guide](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md).
+If you've enabled Snapshot Debugger but aren't seeing snapshots, check our [Troubleshooting guide](snapshot-debugger-troubleshoot.md).
## Grant permissions
If there's a match, then a snapshot of the running process is created. The snaps
The main process continues to run and serve traffic to users with little interruption. Meanwhile, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader creates a minidump and uploads it to Application Insights along with any relevant symbol (*.pdb*) files. > [!TIP]- > * A process snapshot is a suspended clone of the running process. > * Creating the snapshot takes about 10 to 20 milliseconds. > * The default value for `ThresholdForSnapshotting` is 1. This is also the minimum value. Therefore, your app has to trigger the same exception **twice** before a snapshot is created.
Enable Application Insights Snapshot Debugger for your application:
* [Azure Function](snapshot-debugger-function-app.md?toc=/azure/azure-monitor/toc.json) * [Azure Cloud Services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) * [Azure Service Fabric services](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines and virtual machine scale sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and Virtual Machine Scale Sets](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json)
* [On-premises virtual or physical machines](snapshot-debugger-vm.md?toc=/azure/azure-monitor/toc.json) Beyond Application Insights Snapshot Debugger:
azure-relay Relay Hybrid Connections Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-protocol.md
properties at this time:
```json { "accept" : {
- "address" : "wss://dc-node.servicebus.windows.net:443/$hc/{path}?..."
+ "address" : "wss://dc-node.servicebus.windows.net:443/$hc/{path}?...",
"id" : "4cb542c3-047a-4d40-a19f-bdc66441e736", "connectHeaders" : { "Host" : "...",
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMware Solution.
Previously updated : 3/17/2023 Last updated : 3/20/2023 # Known issues: Azure VMware Solution
Refer to the table below to find details about resolution dates or possible workarounds.
|Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- | | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) |
+| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/index.html), the NSX-T Manager **DNS Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md); the alarm is raised because NSX-T Manager can't access the configured CloudFlare DNS server. | February 2023 |
-In this article, you learned about the currently known issues with the Azure VMware Solution. For more information about the Azure VMware Solution, see:
+In this article, you learned about the current known issues with the Azure VMware Solution.
->[!div class="nextstepaction"]
->[About Azure VMware Solution](introduction.md)
+For more information, see [About Azure VMware Solution](introduction.md).
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
You'll run the `New-LDAPIdentitySource` cmdlet to add AD over LDAP as an external identity source.
1. Check **Notifications** or the **Run Execution Status** pane to see the progress. ## Add existing AD group to cloudadmin group
+> [!IMPORTANT]
+> Nested groups are not supported, and their use may cause loss of access.
You'll run the `Add-GroupToCloudAdmins` cmdlet to add an existing AD group to a cloudadmin group. Users in the cloudadmin group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter Server SSO.
azure-vmware Enable Vmware Cds With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-vmware-cds-with-azure.md
The following diagram shows typical architecture for Cloud Director services wit
VMware Cloud Director supports multi-tenancy by using organizations. A single organization can have multiple organization virtual data centers (VDC). Each organization's VDC can have its own dedicated Tier-1 router (Edge Gateway), which is further connected with the provider's managed shared Tier-0 router.
-[Learn more about CDs on Azure VMware Solutions refernce architecture](https://cloudsolutions.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/cloud-director-service-reference-architecture-for-azure-vmware-solution.pdf)
+[Learn more about CDs on Azure VMware Solutions reference architecture](https://cloudsolutions.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/cloud-director-service-reference-architecture-for-azure-vmware-solution.pdf)
## Connect tenants and their organization virtual datacenters to Azure vNet based resources
azure-vmware Migrate Sql Server Always On Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-always-on-cluster.md
Title: Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution
description: Learn how to migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution. Previously updated : 3/17/2023 Last updated : 3/20/2023 # Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution
The table below indicates the estimated downtime for each Microsoft SQL Server topology.
| **Scenario** | **Downtime expected** | **Notes** | |:|:--|:--|
-| **Standalone instance** | LOW | Migrate with VMware vMotion, the DB is available during migration, but it is not recommended to commit any critical data during it. |
-| **Always-On Availability Group** | LOW | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **Failover Cluster Instance** | HIGH | All nodes of the cluster are shutdown and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+| **Standalone instance** | Low | Migrate with VMware vMotion; the DB is available during migration, but it isn't recommended to commit any critical data during it. |
+| **Always-On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
+| **Failover Cluster Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
## Windows Server Failover Cluster quorum considerations
For details about configuring and managing the quorum, see [Failover Clustering
## Migrate Microsoft SQL Server Always-On cluster 1. Access your Always-On cluster with SQL Server Management Studio using administration credentials.
- 1. Select your primary replica and open **Availability Group** **Properties**.
--
+ - Select your primary replica and open **Availability Group** **Properties**.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-1.png" alt-text="Diagram showing Always On Availability Group properties." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-1.png":::-
- 1. Change **Availability Mode** to **Asynchronous commit** only for the replica to be migrated.
- 1. Change **Failover Mode** to **Manual** for every member of the availability group.
+ - Change **Availability Mode** to **Asynchronous commit** only for the replica to be migrated.
+ - Change **Failover Mode** to **Manual** for every member of the availability group.
1. Access the on-premises vCenter Server and proceed to HCX area. 1. Under **Services** select **Migration** > **Migrate**.
- 1. Select one virtual machine running the secondary replica of the database the is going to be migrated.
- 1. Set the vSphere cluster in the remote private cloud to run the migrated SQL cluster as the **Compute Container**.
- 1. Select the **vSAN Datastore** as remote storage.
- 1. Select a folder. This not mandatory, but is recommended to separate the different workloads in your Azure VMware Solution private cloud.
- 1. Keep **Same format as source**.
- 1. Select **vMotion** as **Migration profile**.
- 1. In **Extended Options** select **Migrate Custom Attributes**.
- 1. Verify that on-premises network segments have the correct remote stretched segment in Azure.
- 1. Select **Validate** and ensure that all checks are completed with pass status. The most common error is related to the storage configuration. Verify again that there are no virtual SCSI controllers have the physical sharing setting.
- 1. Click **Go** to start the migration.
+ - Select one virtual machine running the secondary replica of the database that is going to be migrated.
+ - Set the vSphere cluster in the remote private cloud to run the migrated SQL cluster as the **Compute Container**.
+ - Select the **vSAN Datastore** as remote storage.
+ - Select a folder. This isn't mandatory, but it's recommended to separate the different workloads in your Azure VMware Solution private cloud.
+ - Keep **Same format as source**.
+ - Select **vMotion** as **Migration profile**.
+ - In **Extended Options** select **Migrate Custom Attributes**.
+ - Verify that on-premises network segments have the correct remote stretched segment in Azure.
+ - Select **Validate** and ensure that all checks are completed with pass status. The most common error is related to the storage configuration. Verify again that no virtual SCSI controllers have the physical sharing setting.
+ - Select **Go** to start the migration.
1. Once the migration has been completed, access the migrated replica and verify connectivity with the rest of the members in the availability group. 1. In SQL Server Management Studio, open the **Availability Group Dashboard** and verify that the replica appears as **Online**. :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-2.png" alt-text="Diagram showing Always On Availability Group Dashboard." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-2.png":::
- 1. **Data Loss** status in the **Failover Readiness** column is expected since the replica has been out-of-sync with the primary during the migration.
+ - **Data Loss** status in the **Failover Readiness** column is expected since the replica has been out-of-sync with the primary during the migration.
1. Edit the **Availability Group** **Properties** again and set **Availability Mode** back to **Synchronous commit**.
- 1. The secondary replica starts to synchronize back all the changes made to the primary replica during the migration. Wait until it appears in Synchronized state.
+ - The secondary replica starts to synchronize back all the changes made to the primary replica during the migration. Wait until it appears in Synchronized state.
1. From the **Availability Group Dashboard** in SSMS click on **Start Failover Wizard**. 1. Select the migrated replica and click **Next**.
For details about configuring and managing the quorum, see [Failover Clustering
>[!Note] > Migrate one replica at a time and verify that all changes are synchronized back to the replica after each migration. Do not migrate all the replicas at the same time using **HCX Bulk Migration**. 1. After the migration of all the replicas is completed, access your Always-On availability group with **SQL Server Management Studio**.
- 1. Open the Dashboard and verify there is no data loss in any of the replicas and that all are in a **Synchronized** state.
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-7.png" alt-text="Diagram showing availability Group Dashboard with new primary replica and all migrated secondary replicas in synchronized state." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-7.png":::
- 1. Edit the **Properties** of the availability group and set **Failover Mode** to **Automatic** in all replicas.
+ - Open the Dashboard and verify there is no data loss in any of the replicas and that all are in a **Synchronized** state.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-7.png" alt-text="Diagram showing availability Group Dashboard with new primary replica and all migrated secondary replicas in synchronized state." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-7.png":::
+ - Edit the **Properties** of the availability group and set **Failover Mode** to **Automatic** in all replicas.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-8.png" alt-text="Diagram showing a setting for failover back to Automatic for all replicas." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-8.png"::: ## Next steps
-[Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
-
-[Create a placement policy in Azure VMware Solution](create-placement-policy.md)
-
-[Windows Server Failover Clustering Documentation](https://learn.microsoft.com/windows-server/failover-clustering/failover-clustering-overview)
-
-[Microsoft SQL Server 2019 Documentation](https://learn.microsoft.com/sql/sql-server/)
-
-[Microsoft SQL Server 2022 Documentation](https://learn.microsoft.com/sql/sql-server/)
-
-[Windows Server Technical Documentation](https://learn.microsoft.com/windows-server/)
-
-[Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)
-
-[Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)
-
-[VMware KB 1002951 – Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)
-
-[Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf)
-
-[Architecting Microsoft SQL Server on VMware vSphere – Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf)
-
-[Setup for Windows Server Failover Cluster in VMware vSphere 7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-vcenter-server-703-setup-wsfc.pdf)
+- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
+- [Create a placement policy in Azure VMware Solution](create-placement-policy.md)
+- [Windows Server Failover Clustering Documentation](https://learn.microsoft.com/windows-server/failover-clustering/failover-clustering-overview)
+- [Microsoft SQL Server 2019 Documentation](https://learn.microsoft.com/sql/sql-server/)
+- [Microsoft SQL Server 2022 Documentation](https://learn.microsoft.com/sql/sql-server/)
+- [Windows Server Technical Documentation](https://learn.microsoft.com/windows-server/)
+- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)
+- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)
+- [VMware KB 1002951 – Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)
+- [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf)
+- [Architecting Microsoft SQL Server on VMware vSphere – Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf)
+- [Setup for Windows Server Failover Cluster in VMware vSphere 7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-vcenter-server-703-setup-wsfc.pdf)
azure-vmware Migrate Sql Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-failover-cluster.md
Title: Migrate SQL Server failover cluster to Azure VMware Solution
description: Learn how to migrate SQL Server failover cluster to Azure VMware Solution Previously updated : 3/17/2023 Last updated : 3/20/2023
The table below indicates the downtime for each Microsoft SQL Server topology.
| **Scenario** | **Downtime expected** | **Notes** | |:|:--|:--|
-| **Standalone instance** | LOW | Migration will be done using vMotion, the DB will be available during migration time, but it isn't recommended to commit any critical data during it. |
-| **Always-On Availability Group** | LOW | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **Failover Cluster Instance** | HIGH | All nodes of the cluster will be shut down and migrated using VMware HCX Cold Migration. Downtime duration will depend upon database size and private network speed to Azure cloud. |
+| **Standalone instance** | Low | Migration will be done using vMotion; the DB will be available during migration time, but it isn't recommended to commit any critical data during it. |
+| **Always-On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
+| **Failover Cluster Instance** | High | All nodes of the cluster will be shut down and migrated using VMware HCX Cold Migration. Downtime duration will depend upon database size and private network speed to Azure cloud. |
## Windows Server Failover Cluster quorum considerations
For illustration purposes, in this document we're using a two-node cluster with
1. From vSphere Client shutdown the second node of the cluster. 1. Access the first node of the cluster and open **Failover Cluster Manager**.
- 1. Verify that the second node is in **Offline** state and that all clustered services and storage are under the control of the first node.
-
- :::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-1.png" alt-text="Diagram showing Windows Server Failover Cluster Manager cluster storage verification." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-1.png":::
-
- 1. Shut down the cluster.
+ - Verify that the second node is in **Offline** state and that all clustered services and storage are under the control of the first node.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-1.png" alt-text="Diagram showing Windows Server Failover Cluster Manager cluster storage verification." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-1.png":::
+ - Shut down the cluster.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-2.png" alt-text="Diagram showing a shut down cluster using Windows Server Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-2.png":::
- 1. Check that all cluster services are successfully stopped without errors.
+ - Check that all cluster services are successfully stopped without errors.
1. Shut down first node of the cluster. 1. From the **vSphere Client**, edit the settings of the second node of the cluster.
- 1. Remove all shared disks from the virtual machine configuration.
- 1. Ensure that the **Delete files from datastore** checkbox isn't selected as this will permanently delete the disk from the datastore, and you'll need to recover the cluster from a previous backup.
- 1. Set **SCSI Bus Sharing** from **Physical** to **None** in the virtual SCSI controllers used for the shared storage. Usually, these controllers are of VMware Paravirtual type.
+ - Remove all shared disks from the virtual machine configuration.
+ - Ensure that the **Delete files from datastore** checkbox isn't selected as this will permanently delete the disk from the datastore, and you'll need to recover the cluster from a previous backup.
+ - Set **SCSI Bus Sharing** from **Physical** to **None** in the virtual SCSI controllers used for the shared storage. Usually, these controllers are of VMware Paravirtual type.
1. Edit the first node virtual machine settings. Set **SCSI Bus Sharing** from **Physical** to **None** in the SCSI controllers. 1. From the **vSphere Client**, go to the HCX plugin area. Under **Services**, select **Migration** > **Migrate**.
- 1. Select the second node virtual machine.
- 1. Set the vSphere cluster in the remote private cloud that will run the migrated SQL cluster as the **Compute Container**.
- 1. Select the **vSAN Datastore** as remote storage.
- 1. Select a folder if you want to place the virtual machines in specific folder, this not mandatory but is recommended to separate the different workloads in your Azure VMware Solution private cloud.
- 1. Keep **Same format as source**.
- 1. Select **Cold migration** as **Migration profile**.
- 1. In **Extended** **Options** select **Migrate Custom Attributes**.
- 1. Verify that on-premises network segments have the correct remote stretched segment in Azure.
- 1. Select **Validate** and ensure that all checks are completed with pass status. The most common error here will be one related to the storage configuration. Verify again that there are no SCSI controllers with physical sharing setting.
- 1. Select **Go** and the migration will initiate.
+ - Select the second node virtual machine.
+ - Set the vSphere cluster in the remote private cloud that will run the migrated SQL cluster as the **Compute Container**.
+ - Select the **vSAN Datastore** as remote storage.
+ - Select a folder if you want to place the virtual machines in a specific folder. This isn't mandatory, but it's recommended to separate the different workloads in your Azure VMware Solution private cloud.
+ - Keep **Same format as source**.
+ - Select **Cold migration** as **Migration profile**.
+ - In **Extended** **Options** select **Migrate Custom Attributes**.
+ - Verify that on-premises network segments have the correct remote stretched segment in Azure.
+ - Select **Validate** and ensure that all checks are completed with pass status. The most common error here is related to the storage configuration. Verify again that there are no SCSI controllers with the physical sharing setting.
+ - Select **Go** to initiate the migration.
1. Repeat the same process for the first node. 1. Access **Azure VMware Solution vSphere Client** and edit the first node settings and set back to physical SCSI Bus sharing the SCSI controller(s) managing the shared disks. 1. Edit node 2 settings in **vSphere Client**.
- 1. Set SCSI Bus sharing back to physical in the SCSI controller managing shared storage.
- 1. Add the cluster shared disks to the node as additional storage. Assign them to the second SCSI controller.
- 1. Ensure that all the storage configuration is the same as the one recorded before the migration.
+ - Set SCSI Bus sharing back to physical in the SCSI controller managing shared storage.
+ - Add the cluster shared disks to the node as additional storage. Assign them to the second SCSI controller.
+ - Ensure that all the storage configuration is the same as the one recorded before the migration.
1. Power on the first node virtual machine. 1. Access the first node VM with **VMware Remote Console**.
- 1. Verify virtual machine network configuration and ensure it can reach on-premises and Azure resources.
- 1. Open **Failover Cluster Manager** and verify cluster services.
+ - Verify virtual machine network configuration and ensure it can reach on-premises and Azure resources.
+ - Open **Failover Cluster Manager** and verify cluster services.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-3.png" alt-text="Diagram showing a cluster summary in Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-3.png"::: 1. Power on the second node virtual machine. 1. Access the second node VM from the **VMware Remote Console**.
- 1. Verify that Windows Server can reach the storage.
- 1. In the **Failover Cluster Manager** review that the second node appears as **Online** status.
+ - Verify that Windows Server can reach the storage.
+ - In the **Failover Cluster Manager** review that the second node appears as **Online** status.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-4.png" alt-text="Diagram showing a cluster node status in Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-4.png"::: 1. Using the **SQL Server Management Studio** connect to the SQL Server cluster resource network name. Check the database is online and accessible.
azure-vmware Migrate Sql Server Standalone Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-standalone-cluster.md
Title: Migrate Microsoft SQL Server Standalone to Azure VMware Solution
description: Learn how to migrate Microsoft SQL Server Standalone to Azure VMware Solution. Previously updated : 3/17/2023 Last updated : 3/20/2023
This table indicates the estimated downtime for each Microsoft SQL Server topology.
| **Scenario** | **Downtime expected** | **Notes** | |:|:--|:--|
+| **Standalone instance** | Low | Migration is done using VMware vMotion; the DB is available during migration time, but it isn't recommended to commit any critical data during it. |
-| **Always-On Availability Group** | LOW | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
-| **Failover Cluster Instance** | HIGH | All nodes of the cluster are shutdown and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
+| **Standalone instance** | Low | Migration is done using VMware vMotion, the DB is available during migration time, but it isn't recommended to commit any critical data during it. |
+| **Always-On Availability Group** | Low | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
+| **Failover Cluster Instance** | High | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to Azure cloud. |
## Migrate Microsoft SQL Server standalone 1. Log into your on-premises **vCenter Server** and access the VMware HCX plugin. 1. Under **Services** select **Migration** > **Migrate**.
- a. Select the Microsoft SQL Server virtual machine.
- a. Set the vSphere cluster in the remote private cloud of the migrated SQL cluster as the **Compute Container**.
- a. Select the vSAN Datastore as remote storage.
- a. Select a folder. This isn't mandatory, but we recommended separating the different workloads in your Azure VMware Solution private cloud.
- a. Keep **Same format as source**.
- a. Select **vMotion** as Migration profile.
- a. In **Extended Options** select **Migrate Custom Attributes**.
- a. Verify that on-premises network segments have the correct remote stretched segment in Azure VMware Solution.
- a. Select **Validate** and ensure that all checks are completed with pass status.
- a. Select **Go** to start the migration.
+ - Select the Microsoft SQL Server virtual machine.
+ - Set the vSphere cluster in the remote private cloud of the migrated SQL cluster as the **Compute Container**.
+ - Select the vSAN Datastore as remote storage.
+ - Select a folder. This isn't mandatory, but we recommend separating the different workloads in your Azure VMware Solution private cloud.
+ - Keep **Same format as source**.
+ - Select **vMotion** as Migration profile.
+ - In **Extended Options** select **Migrate Custom Attributes**.
+ - Verify that on-premises network segments have the correct remote stretched segment in Azure VMware Solution.
+ - Select **Validate** and ensure that all checks are completed with pass status.
+ - Select **Go** to start the migration.
1. After the migration has completed, access the virtual machine using VMware Remote Console in the vSphere Client.
- a. Verify the network configuration and check connectivity both with on-premises and Azure VMware Solution resources.
- a. Using SQL Server Management Studio verify you can access the database.
+ - Verify the network configuration and check connectivity both with on-premises and Azure VMware Solution resources.
+ - Using SQL Server Management Studio verify you can access the database.
:::image type="content" source="media/sql-server-hybrid-benefit/sql-standalone-1.png" alt-text="Diagram showing a SQL Server Management Studio connection to the migrated database." border="false" lightbox="media/sql-server-hybrid-benefit/sql-standalone-1.png":::
azure-vmware Sql Server Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/sql-server-hybrid-benefit.md
Title: Azure Hybrid benefit for Windows server, SQL server, or Linux subscriptions
description: Learn about Azure Hybrid benefit for Windows server, SQL server, or Linux subscriptions. Previously updated : 3/16/2023 Last updated : 3/20/2023
Azure Hybrid benefit is a cost-saving offering from Microsoft you can use to save costs.
## Microsoft SQL server
-Microsoft SQL Server is a core component of many business-critical applications currently running on VMware vSphere and is one of the most widely used database platforms in the market with customers running hundreds of SQL Server instances with VMware vSphere on-premises.
+Microsoft SQL server is a core component of many business-critical applications currently running on VMware vSphere and is one of the most widely used database platforms in the market with customers running hundreds of SQL Server instances with VMware vSphere on-premises.
Azure VMware Solution is an ideal solution for customers looking to migrate and modernize their vSphere-based applications to the cloud, including their Microsoft SQL databases.
Now that you've covered Azure Hybrid benefit, you may want to learn about:
- [Migrate Microsoft SQL Server Standalone to Azure VMware Solution](migrate-sql-server-standalone-cluster.md) - [Migrate SQL Server failover cluster to Azure VMware Solution](migrate-sql-server-failover-cluster.md) - [Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution](migrate-sql-server-always-on-cluster.md)
+- [Enable SQL Azure hybrid benefit for Azure VMware Solution](migrate-sql-server-standalone-cluster.md)
+- [Configure Windows Server Failover Cluster on Azure VMware Solution vSAN](configure-windows-server-failover-cluster.md)
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-network-checklist.md
Example `/22` CIDR network address block: `10.10.0.0/22`
The subnets:
-| Network usage | Subnet | Example |
-| -- | | - |
-| Private cloud management | `/26` | `10.10.0.0/26` |
-| HCX Mgmt Migrations | `/26` | `10.10.0.64/26` |
-| Global Reach Reserved | `/26` | `10.10.0.128/26` |
-| NSX-T Data Center DNS Service | `/32` | `10.10.0.192/32` |
-| Reserved | `/32` | `10.10.0.193/32` |
-| Reserved | `/32` | `10.10.0.194/32` |
-| Reserved | `/32` | `10.10.0.195/32` |
-| Reserved | `/30` | `10.10.0.196/30` |
-| Reserved | `/29` | `10.10.0.200/29` |
-| Reserved | `/28` | `10.10.0.208/28` |
-| ExpressRoute peering | `/27` | `10.10.0.224/27` |
-| ESXi Management | `/25` | `10.10.1.0/25` |
-| vMotion Network | `/25` | `10.10.1.128/25` |
-| Replication Network | `/25` | `10.10.2.0/25` |
-| vSAN | `/25` | `10.10.2.128/25` |
-| HCX Uplink | `/26` | `10.10.3.0/26` |
-| Reserved | `/26` | `10.10.3.64/26` |
-| Reserved | `/26` | `10.10.3.128/26` |
-| Reserved | `/26` | `10.10.3.192/26` |
+| Network usage | Description | Subnet | Example |
+| -- | - | | - |
+| Private cloud management | Management Network (for example, vCenter Server and NSX-T) | `/26` | `10.10.0.0/26` |
+| HCX Mgmt Migrations | Local connectivity for HCX appliances (downlinks) | `/26` | `10.10.0.64/26` |
+| Global Reach Reserved | Outbound interface for ExpressRoute | `/26` | `10.10.0.128/26` |
+| NSX-T Data Center DNS Service | Built-in NSX-T DNS Service | `/32` | `10.10.0.192/32` |
+| Reserved | Reserved | `/32` | `10.10.0.193/32` |
+| Reserved | Reserved | `/32` | `10.10.0.194/32` |
+| Reserved | Reserved | `/32` | `10.10.0.195/32` |
+| Reserved | Reserved | `/30` | `10.10.0.196/30` |
+| Reserved | Reserved | `/29` | `10.10.0.200/29` |
+| Reserved | Reserved | `/28` | `10.10.0.208/28` |
+| ExpressRoute peering | ExpressRoute Peering | `/27` | `10.10.0.224/27` |
+| ESXi Management | ESXi management VMkernel interfaces | `/25` | `10.10.1.0/25` |
+| vMotion Network | vMotion VMkernel interfaces | `/25` | `10.10.1.128/25` |
+| Replication Network | vSphere Replication interfaces | `/25` | `10.10.2.0/25` |
+| vSAN | vSAN VMkernel interfaces and node communication | `/25` | `10.10.2.128/25` |
+| HCX Uplink | Uplinks for HCX IX and NE appliances to remote peers | `/26` | `10.10.3.0/26` |
+| Reserved | Reserved | `/26` | `10.10.3.64/26` |
+| Reserved | Reserved | `/26` | `10.10.3.128/26` |
+| Reserved | Reserved | `/26` | `10.10.3.192/26` |
The subnets:
| Private Cloud management network | On-premises Active Directory | TCP | 389/636 | These ports are open to allow communications for Azure VMware Solutions vCenter Server to communicate to any on-premises Active Directory/LDAP server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. Port 636 is recommended for security purposes. | | Private Cloud management network | On-premises Active Directory Global Catalog | TCP | 3268/3269 | These ports are open to allow communications for Azure VMware Solutions vCenter Server to communicate to any on-premises Active Directory/LDAP global catalog server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter Server. Port 3269 is recommended for security purposes. | | On-premises network | Private Cloud vCenter Server | TCP (HTTPS) | 443 | This port allows you to access vCenter Server from an on-premises network. The default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. |
-| On-premises network | HCX Manager | TCP (HTTPS) | 9443 | Hybrid Cloud Manager Virtual Appliance Management Interface for Hybrid Cloud Manager system configuration. |
-| Admin Network | Hybrid Cloud Manager | SSH | 22 | Administrator SSH access to Hybrid Cloud Manager. |
+| On-premises network | HCX Cloud Manager | TCP (HTTPS) | 9443 | HCX Cloud Manager virtual appliance management interface for HCX system configuration. |
+| On-premises Admin Network | HCX Cloud Manager | SSH | 22 | Administrator SSH access to HCX Cloud Manager virtual appliance. |
| HCX Manager | Interconnect (HCX-IX) | TCP (HTTPS) | 8123 | HCX Bulk Migration Control | | HCX Manager | Interconnect (HCX-IX), Network Extension (HCX-NE) | HTTP TCP (HTTPS) | 9443 | Send management instructions to the local HCX Interconnect using the REST API. | | Interconnect (HCX-IX)| L2C | TCP (HTTPS) | 443 | Send management instructions from Interconnect to L2C when L2C uses the same path as the Interconnect. |
-| HCX Manager, Interconnect (HCX-IX) | ESXi Hosts | TCP | 80,902 | Management and OVF deployment. |
-| HCX NE, Interconnect (HCX-IX) at Source| HCX NE, Interconnect (HCX-IX) at Destination)| UDP | 4500 | Required for IPSEC<br> Internet key exchange (IKEv2) to encapsulate workloads for the bidirectional tunnel. Network Address Translation-Traversal (NAT-T) is also supported. |
-| Interconnect (HCX-IX) local | Interconnect (HCX-IX) (remote) | UDP | 500 | Required for IPSEC<br> Internet key exchange (ISAKMP) for the bidirectional tunnel. |
-| On-premises vCenter Server network | Private Cloud management network | TCP | 8000 | vMotion of VMs from on-premises vCenter Server to Private Cloud vCenter Server |
+| HCX Manager, Interconnect (HCX-IX) | ESXi Hosts | TCP | 80,443,902 | Management and OVF deployment. |
+| Interconnect (HCX-IX), Network Extension (HCX-NE) at Source| Interconnect (HCX-IX), Network Extension (HCX-NE) at Destination| UDP | 4500 | Required for IPSEC<br> Internet key exchange (IKEv2) to encapsulate workloads for the bidirectional tunnel. Network Address Translation-Traversal (NAT-T) is also supported. |
+| On-premises Interconnect (HCX-IX) | Cloud Interconnect (HCX-IX) | UDP | 500 | Required for IPSEC<br> Internet key exchange (ISAKMP) for the bidirectional tunnel. |
+| On-premises vCenter Server network | Private Cloud management network | TCP | 8000 | vMotion of VMs from on-premises vCenter Server to Private Cloud vCenter Server |
+| HCX Connector | connect.hcx.vmware.com<br> hybridity.depot.vmware.com | TCP | 443 | `connect` is needed to validate license key.<br> `hybridity` is needed for updates. |
-[For a full list of HCX port requirements](https://ports.esp.vmware.com/home/VMware-HCX)
+There can be more items to consider for firewall rules; this table is intended to give common rules for common scenarios. When a source or destination says "on-premises," the rule matters only if you have a firewall that inspects flows within your datacenter. If you don't have a firewall that inspects traffic between on-premises components, you can ignore those rules because they aren't needed.
+
+[Full list of HCX port requirements](https://ports.esp.vmware.com/home/VMware-HCX)
## DHCP and DNS resolution considerations
backup Azure Kubernetes Service Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md
Title: Azure Kubernetes Service backup - Overview
description: This article gives you an understanding about Azure Kubernetes Service (AKS) backup, the cloud-native process to back up and restore the containerized applications and data running in AKS clusters. Previously updated : 03/14/2023 Last updated : 03/20/2023
[Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) backup is a simple, cloud-native process to back up and restore the containerized applications and data running in AKS clusters. You can configure scheduled backup for cluster state and application data (persistent volumes - CSI driver-based Azure Disks). The solution provides granular control to choose a specific namespace or an entire cluster to back up or restore by storing backups locally in a blob container and as disk snapshots. With AKS backup, you can unlock end-to-end scenarios - operational recovery, cloning developer/test environments, or cluster upgrade scenarios.
-AKS backup integrates with Backup center (with other backup management capabilities) to provide a single pane of glass that helps you govern, monitor, operate, and analyze backups at scale.
+AKS backup integrates with Backup center, providing a single pane of glass that can help you govern, monitor, operate, and analyze backups at scale. Your backups are also available in the *AKS portal* under the **Settings** section.
## How does AKS backup work?
-AKS backup enables you to back up your Kubernetes workloads and persistent volumes deployed in AKS clusters. The solution requires a [**Backup Extension**](../azure-arc/kubernetes/conceptual-extensions.md) to be installed in the AKS cluster. Backup vault communicates to the Backup Extension to perform backup and restore related operations. You can configure scheduled backups for your clusters as per your backup policy and can restore the backups to the original or an alternate cluster within the same subscription and region. The extension also allows you to enable granular controls to choose a specific namespace or an entire cluster as a backup/restore configuration while performing the specific operation.
+AKS backup enables you to back up your Kubernetes workloads and persistent volumes deployed in AKS clusters. The solution requires the [**Backup Extension**](/azure/azure-arc/kubernetes/conceptual-extensions) to be installed inside the AKS cluster; the Backup vault communicates with the extension to perform backup and restore operations. Installing the Backup Extension is mandatory to enable backup and restore. As part of the installation, you provide a storage account and a blob container where the backups are stored.
+
+Along with the Backup Extension, a *user identity* (called the extension identity) is created in the AKS cluster's managed resource group. The extension identity is assigned the *Storage Account Contributor* role on the storage account where backups are stored in a blob container.
+
+To support public, private, and authorized IP-based clusters, AKS backup requires *Trusted Access* to be enabled between the Backup vault and the AKS cluster. Trusted Access allows the Backup vault to access the AKS cluster with specific permissions assigned to it for backup operations. For more information on AKS Trusted Access, see [Enable Azure resources to access Azure Kubernetes Service (AKS) clusters using Trusted Access](../aks/trusted-access-feature.md).
>[!Note]
->- You must install Backup Extension in the AKS cluster to enable backups and restores. With the extension installation, a User Identity is created in the AKS cluster's managed resource group (Extension Identity), which gets assigned a set of permissions to access the storage account with the backups stored in the blob container.
->
->- An AKS cluster can have only one Backup Extension installed at a time.
->
->- Currently, AKS backup allows storing backups in Operational Tier. Operational Tier is a local data store and backups aren't moved to a vault but are stored in your own tenant. However, the Backup vault still serves as the unit for managing backups.
+>AKS backup currently allows storing backups in *Operational Tier*. Operational Tier is a local data store and backups aren't moved to a vault, but are stored in your own tenant. However, the Backup vault still serves as the unit for managing backups.
-The backup solution enables backups for your Kubernetes workloads deployed in the cluster and the data stored in the persistent volume. Currently, the solution only supports persistent volumes of CSI driver-based Azure Disks. During backups, other *PV* types (such as File Share and Blobs) are skipped by the solution. The Kubernetes workloads are stored in a blob container and the Disk-based persistent volumes are backed up as Disk snapshots.
+Once the *Backup Extension* is installed and *Trusted Access* is enabled, you can configure scheduled backups for the cluster as per your backup policy, and restore the backups to the original or an alternate cluster in the same subscription and region. AKS backup gives you granular control to choose a specific *namespace* or an *entire cluster* as the backup/restore configuration while performing the operation.
+
+The *backup solution* enables backup operations for your Kubernetes workloads deployed in the cluster and for the data stored in *Persistent Volumes*. The Kubernetes workloads are stored in a blob container, and the *Disk-based Persistent Volumes* are backed up as *Disk Snapshots* in a snapshot resource group.
+
+>[!Note]
+>Currently, the solution only supports Persistent Volumes of CSI Driver-based Azure Disks. During backups, other Persistent Volume types (File Share, Blobs) are skipped by the solution.
## Backup To configure backup for an AKS cluster, first you need to create a *Backup vault*. The vault gives you a consolidated view of the backups configured across different workloads. AKS backup supports only Operational Tier backup.
-Note: Copying backups to the Vault Tier is currently not supported. So, the Backup vault storage redundancy setting (LRS/GRS) doesn't apply to the backups stored in Operational Tier.
+
+>[!Note]
+>- The Backup vault and the AKS cluster to be backed up or restored should be in the same region and subscription.
+>- Copying backups to the *Vault Tier* is currently not supported. So, the *Backup vault storage redundancy* setting (LRS/GRS) doesn't apply to the backups stored in Operational Tier.
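+
+As an illustrative sketch (names and region are placeholders), a Backup vault can be created with the `az dataprotection backup-vault create` command:
+
```azurecli-interactive
# Sketch only: create a Backup vault in the same region and subscription as
# the AKS cluster. Names and location are placeholders.
az dataprotection backup-vault create \
    --resource-group <backupvaultrg> \
    --vault-name <backupvaultname> \
    --location eastus \
    --type SystemAssigned \
    --storage-settings datastore-type="VaultStore" type="LocallyRedundant"
```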
AKS backup automatically triggers a scheduled backup job that copies the cluster resources to a blob container and creates an incremental snapshot of the disk-based persistent volumes as per the backup frequency. Older backups are deleted as per the retention duration specified by the backup policy. >[!Note]
->AKS backup allows creating multiple backup instances for a single AKS cluster. You can create multiple backup Instances with different backup configurations, as required. However, each backup instance of an AKS cluster should be created with a different backup policy, either in the same or in a different Backup vault.
+>AKS backup allows creating multiple backup instances for a single AKS cluster with different backup configurations, as required. However, each backup instance of an AKS cluster should be created either in a different Backup vault or with a different backup policy in the same Backup vault.
## Backup management
backup Azure Kubernetes Service Cluster Backup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md
Title: Azure Kubernetes Service (AKS) backup using Azure Backup prerequisites
description: This article explains the prerequisites for Azure Kubernetes Service (AKS) backup. Previously updated : 03/14/2023 Last updated : 03/20/2023
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
- You need to install Backup Extension on both the source cluster to be backed up and the target cluster where the restore will happen.
+- Backup Extension can be installed in the cluster from the *AKS portal* blade on the **Backup** tab under **Settings**. You can also use the Azure CLI commands to [manage the installation and other operations on the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#manage-operations).
+
+- Before you install an extension in an AKS cluster, you must register the `Microsoft.KubernetesConfiguration` resource provider at the subscription level. Learn how to [register the resource provider](azure-kubernetes-service-cluster-manage-backups.md#register-the-resource-provider).
+ Learn [how to manage the Backup Extension installation using Azure CLI](azure-kubernetes-service-cluster-manage-backups.md#manage-operations). ## Trusted Access
Your Azure resources access AKS clusters through the AKS regional gateway using
For AKS backup, the Backup vault accesses your AKS clusters via Trusted Access to configure backups and restores. The Backup vault is assigned a pre-defined role **Microsoft.DataProtection/backupVaults/backup-operator** in the AKS cluster, allowing it to only perform specific backup operations.
+Before you enable Trusted Access between a Backup vault and an AKS cluster, [enable a *feature flag* on the cluster's subscription](azure-kubernetes-service-cluster-manage-backups.md#enable-the-feature-flag).
+ Learn [how to enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#enable-trusted-access). ## AKS Cluster
To enable backup for an AKS cluster, see the following prerequisites:
- The Backup Extension during installation fetches Container Images stored in Microsoft Container Registry (MCR). If you enable a firewall on the AKS cluster, the extension installation process might fail due to access issues on the Registry. Learn [how to allow MCR access from the firewall](../container-registry/container-registry-firewall-access-rules.md#configure-client-firewall-rules-for-mcr).
+- Install Backup Extension on the AKS clusters following the [required FQDN/application rules](../aks/limit-egress-traffic.md#required-fqdn--application-rules-6).
+
+- If you have any previous installation of *Velero* in the AKS cluster, you need to delete it before installing the Backup Extension.
++ ## Required roles and permissions To perform AKS backup and restore operations as a user, you need to have specific roles on the AKS cluster, Backup vault, Storage account, and Snapshot resource group.
Also, as part of the backup and restore operations, the following roles are assi
| | | | | | Reader | Backup vault | AKS cluster | Allows the Backup vault to perform *List* and *Read* operations on AKS cluster. | | Reader | Backup vault | Snapshot resource group | Allows the Backup vault to perform *List* and *Read* operations on snapshot resource group. |
-| Disk Snapshot Contributor | AKS cluster | Snapshot resource group | Allows AKS cluster to store persistent volume snapshots in the resource group. |
+| Contributor | AKS cluster | Snapshot resource group | Allows AKS cluster to store persistent volume snapshots in the resource group. |
| Storage Account Contributor | Extension Identity | Storage account | Allows Backup Extension to store cluster resource backups in the blob container. | >[!Note]
backup Azure Kubernetes Service Cluster Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md
Title: Azure Kubernetes Service (AKS) backup support matrix description: This article provides a summary of support settings and limitations of Azure Kubernetes Service (AKS) backup. Previously updated : 03/03/2023 Last updated : 03/20/2023
You can use [Azure Backup](./backup-overview.md) to protect Azure Kubernetes Ser
## Supported regions
-AKS backup is available in all the Azure public cloud regions.
+AKS backup is available in the following Azure public cloud regions: East US, North Europe, West Europe, South East Asia, West US 2, East US 2, West US, North Central US, Central US, France Central, Korea Central, Australia East, UK South, East Asia, West Central US, Japan East, South Central US, West US 3, Canada Central, Canada East, Australia South East, Central India, Norway East, Germany West Central, Switzerland North, Sweden Central, Japan West, UK West, Korea South, South Africa North, South India, France South, Brazil South, and UAE North.
## Limitations
backup Azure Kubernetes Service Cluster Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md
Title: Back up Azure Kubernetes Service (AKS) using Azure Backup
description: This article explains how to back up Azure Kubernetes Service (AKS) using Azure Backup. Previously updated : 03/15/2023 Last updated : 03/20/2023
To configure backups for AKS cluster, follow these steps:
5. Select **Install/Fix Extension** to install the **Backup Extension** on the cluster.
-6. In the *context* pane, provide the *storage account* and *blob container* where you need to store the backup, and then select **Generate Command**.
+6. In the *context* pane, provide the *storage account* and *blob container* where you need to store the backup, and then select **Install Extension**.
- >[!Note]
- >Before you install the AKS Backup Extension via *Azure CLI*, you must enable the `Microsoft.KubernetesConfiguration` resource provider on the subscription.
- >
- >To register the resource provider before the extension installation (don't initiate extension installation before registering resource provider), run the following commands:
- >
- >1. Register the resource provider.
- > `az provider register --namespace Microsoft.KubernetesConfiguration`
- >2. Monitor the registration process. The registration may take up to *10 minutes*.
- > `az provider show -n Microsoft.KubernetesConfiguration -o table`
-
-7. Open the PowerShell console, and then upgrade the CLI to version *2.24.0* or later using the command `az upgrade`.
-
- Sign in to the Azure portal (using the command `az login`), and then copy and run the generated commands.
-
- The commands install the *Backup Extension* and *Assign Extension* managed identity permissions on the storage account.
-
- Once done, select **Revalidate**.
-
- >[!Note]
- >We're using the Extension managed identity attached to the underlying compute of the AKS cluster. After running the `az role assignment` command, it may take some time (up to *1 hour*) to propagate permission to the AKS cluster (due to caching issue). If revalidation fails, try again after some time.
-
-8. To enable *Trusted Access* and *other role permissions*, select **Grant Permission** > **Next**.
+7. To enable *Trusted Access* and *other role permissions*, select **Grant Permission** > **Next**.
-9. Select the backup policy that defines the schedule and retention policy for AKS backup, and then select **Next**.
+8. Select the backup policy that defines the schedule and retention policy for AKS backup, and then select **Next**.
-10. Select **Add/Edit** to define the *backup instance configuration*.
+9. Select **Add/Edit** to define the *backup instance configuration*.
-11. In the *context* pane, enter the *cluster resources* that you want to back up.
+10. In the *context* pane, enter the *cluster resources* that you want to back up.
Learn about the [backup configurations](#backup-configurations).
-12. Select the *snapshot resource group* where *persistent volume (Azure Disk) snapshots* need to be stored, and then select **Validate**.
+11. Select the *snapshot resource group* where *persistent volume (Azure Disk) snapshots* need to be stored, and then select **Validate**.
After validation, if the appropriate roles aren't assigned to the vault over snapshot resource group, the error **Role assignment not done** appears.
-14. To resolve the error, select the *checkbox* corresponding to the *Datasource*, and then select **Assign Missing Role**.
+12. To resolve the error, select the *checkbox* corresponding to the *Datasource*, and then select **Assign Missing Role**.
-15. Once the *role assignment* is successful, select **Next**.
+13. Once the *role assignment* is successful, select **Next**.
-16. Select **Configure Backup**.
+14. Select **Configure Backup**.
Once the configuration is complete, the **Backup Instance** gets created.
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
Title: Manage Azure Kubernetes Service (AKS) backups using Azure Backup
description: This article explains how to manage Azure Kubernetes Service (AKS) backups using Azure Backup. Previously updated : 03/15/2023 Last updated : 03/20/2023
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
This section provides the set of Azure CLI commands to perform create, update, and delete operations on the Backup Extension. You can use the *update* command to change the blob container where backups are stored, along with compute limits for the underlying Backup Extension pods.
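+A hedged sketch of such an update call (the configuration-setting keys are assumptions drawn from the extension's documented inputs; verify them before use):
+
```azurecli-interactive
# Sketch only: move backups to a different blob container and raise the CPU
# limit for the extension pods. Setting names are illustrative assumptions.
az k8s-extension update --name azure-aks-backup \
    --cluster-type managedClusters \
    --cluster-name <aksclustername> --resource-group <aksclusterrg> \
    --release-train stable \
    --configuration-settings blobContainer=<newcontainername> cpuLimit=1
```
+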
+## Register the resource provider
+
+To register the resource provider, run the following command:
+
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.KubernetesConfiguration
+ ```
+
+>[!Note]
+>Don't initiate the extension installation before registering the resource provider.
+
+### Monitor the registration process
+
+The registration may take up to *10 minutes*. To monitor the registration process, run the following command:
+
+ ```azurecli-interactive
+ az provider show -n Microsoft.KubernetesConfiguration -o table
+ ```
+ ### Install Backup Extension To install the Backup Extension, use the following command:
To stop the Backup Extension install operation, use the following command:
az k8s-extension delete --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg ```
+### Grant permission on storage account
+
+To provide *Storage Account Contributor* permission to the Extension Identity on the storage account, run the following command:
+
+ ```azurecli-interactive
+ az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name aksclustername --resource-group aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname
+ ```
+ ### View Backup Extension installation status To view the progress of Backup Extension installation, use the following command:
To view the progress of Backup Extension installation, use the following command
az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg ```
+## Enable the feature flag
+
+To enable the feature flag, follow these steps:
+
+1. To install the *aks-preview* extension, run the following command:
+
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+1. To update to the latest version of the extension released, run the following command:
+
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
+
+1. To register the *TrustedAccessPreview* feature flag, run the `az feature register` command.
+
+ **Example**
+
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
+ ```
+
+ It takes a few minutes for the status to show **Registered**.
+
+1. To verify the registration status, run the `az feature show` command.
+
+ **Example**
+
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
+ ```
+
+1. When the status shows as **Registered**, run the `az provider register` command to refresh the `Microsoft.ContainerService` resource provider registration.
+
+ **Example**
+
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
+
+>[!Note]
+>Don't initiate backup configuration before enabling the feature flag.
+ ## Enable Trusted Access To enable Trusted Access between Backup vault and AKS cluster, use the following Azure CLI command:
To enable Trusted Access between Backup vault and AKS cluster, use the following
```
->[!Note]
->AKS backup experience via Azure portal allows you to perform both Backup Extension installation and Trusted Access enablement, required to make the AKS cluster ready for backup and restore operations.
+Learn more about [other commands related to Trusted Access](../aks/trusted-access-feature.md#trusted-access-feature-overview).
## Next steps
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 03/15/2023 Last updated : 03/20/2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary - March 2023
+ - [Immutable vault for Azure Backup is now generally available](#immutable-vault-for-azure-backup-is-now-generally-available)
- [Support for selective disk backup with enhanced policy for Azure VM (preview)](#support-for-selective-disk-backup-with-enhanced-policy-for-azure-vm-preview) - [Azure Kubernetes Service backup (preview)](#azure-kubernetes-service-backup-preview) - [Azure Blob vaulted backups (preview)](#azure-blob-vaulted-backups-preview)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Immutable vault for Azure Backup is now generally available
+
+Azure Backup now supports immutable vaults that help you ensure that recovery points, once created, can't be deleted before their expiry as per the backup policy (that is, the policy in effect at the time the recovery point was created). You can also choose to make the immutability irreversible to offer maximum protection to your backup data, helping you protect your data better against various threats, including ransomware attacks and malicious actors.
+
+For more information, see the [concept of Immutable vault for Azure Backup](backup-azure-immutable-vault-concept.md).
+ ## Support for selective disk backup with enhanced policy for Azure VM (preview) Azure Backup now provides *Selective Disk backup and restore* capability to Enhanced policy. Using this capability, you can selectively back up a subset of the data disks that are attached to your VM, and then restore a subset of the disks that are available in a recovery point, both from instant restore and vault tier.
bastion Bastion Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-nsg.md
Azure Bastion is deployed specifically to ***AzureBastionSubnet***.
* **Egress Traffic to target VMs:** Azure Bastion will reach the target VMs over private IP. The NSGs need to allow egress traffic to other target VM subnets for port 3389 and 22. If you are using the custom port feature as part of Standard SKU, the NSGs will instead need to allow egress traffic to other target VM subnets for the custom value(s) you have opened on your target VMs. * **Egress Traffic to Azure Bastion data plane:** For data plane communication between the underlying components of Azure Bastion, enable ports 8080, 5701 outbound from the **VirtualNetwork** service tag to the **VirtualNetwork** service tag. This enables the components of Azure Bastion to talk to each other. * **Egress Traffic to other public endpoints in Azure:** Azure Bastion needs to be able to connect to various public endpoints within Azure (for example, for storing diagnostics logs and metering logs). For this reason, Azure Bastion needs outbound to 443 to **AzureCloud** service tag.
- * **Egress Traffic to Internet:** Azure Bastion needs to be able to communicate with the Internet for session and certificate validation. For this reason, we recommend enabling port 80 outbound to the **Internet.**
+ * **Egress Traffic to Internet:** Azure Bastion needs to be able to communicate with the Internet for session, Bastion Shareable Link, and certificate validation. For this reason, we recommend enabling port 80 outbound to the **Internet.**
:::image type="content" source="./media/bastion-nsg/outbound.png" alt-text="Screenshot shows outbound security rules for Azure Bastion connectivity." lightbox="./media/bastion-nsg/outbound.png":::
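+As a hedged example (the NSG name, rule names, priorities, and resource group are placeholders), the recommended outbound rules could be created with Azure CLI as follows:
+
```azurecli-interactive
# Sketch only: allow outbound 443 to AzureCloud and outbound 80 to the
# Internet from AzureBastionSubnet. All names and priorities are placeholders.
az network nsg rule create --resource-group <rg> --nsg-name <bastion-nsg> \
    --name AllowAzureCloudOutbound --priority 120 --direction Outbound \
    --access Allow --protocol Tcp --destination-address-prefixes AzureCloud \
    --destination-port-ranges 443
az network nsg rule create --resource-group <rg> --nsg-name <bastion-nsg> \
    --name AllowGetSessionInformation --priority 130 --direction Outbound \
    --access Allow --protocol "*" --destination-address-prefixes Internet \
    --destination-port-ranges 80
```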
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-apis-tools.md
Title: APIs and tools for developers description: Learn about the APIs and tools available for developing solutions with the Azure Batch service. Previously updated : 06/11/2021 Last updated : 03/20/2023
Your applications and services can issue direct REST API calls or use one or mor
## Batch Management APIs
-The Azure Resource Manager APIs for Batch provide programmatic access to Batch accounts. Using these APIs, you can programmatically manage Batch accounts, quotas, application packages, and other resources through the Microsoft.Batch provider.
+The Azure Resource Manager APIs for Batch provide programmatic access to Batch accounts. Using these APIs, you can programmatically manage Batch accounts, quotas, application packages, and other resources through the Microsoft.Batch provider.
| API | API reference | Download | Tutorial | Code samples | | | | | | | | **Batch Management REST** |[Azure REST API - Docs](/rest/api/batchmanagement/) |- |- |[GitHub](https://github.com/Azure-Samples/batch-dotnet-manage-batch-accounts) | | **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch/management/management-batch(deprecated)) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) | | **Batch Management Python** |[Azure SDK for Python - Docs](/samples/azure-samples/azure-samples-python-management/batch/) |[PyPI](https://pypi.org/project/azure-mgmt-batch/) |- |- |
-| **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/arm-batch-readme) |[npm](https://www.npmjs.com/package/@azure/arm-batch) |- |- |
+| **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/arm-batch-readme) |[npm](https://www.npmjs.com/package/@azure/arm-batch) |- |- |
| **Batch Management Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch/management) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |- | ## Batch command-line tools
-These command-line tools provide the same functionality as the Batch service and Batch Management APIs:
+These command-line tools provide the same functionality as the Batch service and Batch Management APIs:
- [Batch PowerShell cmdlets](/powershell/module/az.batch/): The Azure Batch cmdlets in the [Azure PowerShell](/powershell/azure/) module enable you to manage Batch resources with PowerShell. - [Azure CLI](/cli/azure): The Azure CLI is a cross-platform toolset that provides shell commands for interacting with many Azure services, including the Batch service and Batch Management service. For more information, see [Manage Batch resources with Azure CLI](batch-cli-get-started.md).
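+For instance, a minimal sketch of managing Batch resources with the Azure CLI (account, resource group, and region are placeholders):
+
```azurecli-interactive
# Sketch only: create a Batch account, authenticate with shared-key auth,
# then list its pools. All names are illustrative placeholders.
az batch account create --name <mybatchaccount> --resource-group <myrg> --location eastus
az batch account login --name <mybatchaccount> --resource-group <myrg> --shared-key-auth
az batch pool list
```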
These additional tools may be helpful for building and debugging your Batch appl
- [Azure portal](https://portal.azure.com/): You can create, monitor, and delete Batch pools, jobs, and tasks in the Azure portal. You can view status information for these and other resources while you run your jobs, and even download files from the compute nodes in your pools. For example, you can download a failed task's `stderr.txt` while troubleshooting. You can also download Remote Desktop (RDP) files that you can use to log in to compute nodes. - [Azure Batch Explorer](https://azure.github.io/BatchExplorer/): Batch Explorer is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. Download an [installation package](https://azure.github.io/BatchExplorer/) for Mac, Linux, or Windows.-- [Azure Batch Shipyard](https://github.com/Azure/batch-shipyard): Batch Shipyard is a tool to help provision, execute, and monitor container-based batch processing and HPC workloads on Azure Batch. - [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/): While not strictly an Azure Batch tool, the Storage Explorer can be helpful when developing and debugging your Batch solutions. ## Additional resources
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Title: Container workloads description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Previously updated : 08/18/2021 Last updated : 03/20/2023 ms.devlang: csharp, python
ImageReference imageReference = new ImageReference(
ContainerRegistry containerRegistry = new ContainerRegistry( registryServer: "https://hub.docker.com", userName: "UserName",
- password: "YourPassword"
+ password: "YourPassword"
); // Specify container configuration, prefetching Docker images
containerTask.ContainerSettings = cmdContainerSettings;
## Next steps -- For easy deployment of container workloads on Azure Batch through [Shipyard recipes](https://github.com/Azure/batch-shipyard/tree/master/recipes), see the [Batch Shipyard](https://github.com/Azure/batch-shipyard) toolkit. - For information on installing and using Docker CE on Linux, see the [Docker](https://docs.docker.com/engine/installation/) documentation. - Learn how to [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md). - Learn more about the [Moby project](https://mobyproject.org/), a framework for creating container-based systems.
batch Batch Pool Compute Intensive Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-compute-intensive-sizes.md
Title: Use compute-intensive Azure VMs with Batch description: How to take advantage of HPC and GPU virtual machine sizes in Azure Batch pools. Learn about OS dependencies and see several scenario examples. Previously updated : 12/17/2018 Last updated : 03/20/2023 # Use RDMA or GPU instances in Batch pools To run certain Batch jobs, you can take advantage of Azure VM sizes designed for large-scale computation. For example:
-* To run multi-instance [MPI workloads](batch-mpi.md), choose H-series or other sizes that have a network interface for Remote Direct Memory Access (RDMA). These sizes connect to an InfiniBand network for inter-node communication, which can accelerate MPI applications.
+* To run multi-instance [MPI workloads](batch-mpi.md), choose H-series or other sizes that have a network interface for Remote Direct Memory Access (RDMA). These sizes connect to an InfiniBand network for inter-node communication, which can accelerate MPI applications.
* For CUDA applications, choose N-series sizes that include NVIDIA Tesla graphics processing unit (GPU) cards. This article provides guidance and examples to use some of Azure's specialized sizes in Batch pools. For specs and background, see:
-* High performance compute VM sizes ([Linux](../virtual-machines/sizes-hpc.md), [Windows](../virtual-machines/sizes-hpc.md))
+* High performance compute VM sizes ([Linux](../virtual-machines/sizes-hpc.md), [Windows](../virtual-machines/sizes-hpc.md))
-* GPU-enabled VM sizes ([Linux](../virtual-machines/sizes-gpu.md), [Windows](../virtual-machines/sizes-gpu.md))
+* GPU-enabled VM sizes ([Linux](../virtual-machines/sizes-gpu.md), [Windows](../virtual-machines/sizes-gpu.md))
> [!NOTE] > Certain VM sizes might not be available in the regions where you create your Batch accounts. To check that a size is available, see [Products available by region](https://azure.microsoft.com/regions/services/) and [Choose a VM size for a Batch pool](batch-pool-vm-sizes.md).
The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o
| Size | Capability | Operating systems | Required software | Pool settings | | -- | -- | -- | -- | -- | | [H16r, H16mr, A8, A9](../virtual-machines/sizes-hpc.md)<br/>[NC24r, NC24rs_v2, NC24rs_v3, ND24rs<sup>*</sup>](../virtual-machines/linux/n-series-driver-setup.md#rdma-network-connectivity) | RDMA | Ubuntu 16.04 LTS, or<br/>CentOS-based HPC<br/>(Azure Marketplace) | Intel MPI 5<br/><br/>Linux RDMA drivers | Enable inter-node communication, disable concurrent task execution |
-| [NC, NCv2, NCv3, NDv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Ubuntu 16.04 LTS, or<br/>CentOS 7.3 or 7.4<br/>(Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers | N/A |
+| [NC, NCv2, NCv3, NDv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Ubuntu 16.04 LTS, or<br/>CentOS 7.3 or 7.4<br/>(Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers | N/A |
| [NV, NVv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla M60 GPU | Ubuntu 16.04 LTS, or<br/>CentOS 7.3<br/>(Azure Marketplace) | NVIDIA GRID drivers | N/A | <sup>*</sup>RDMA-capable N-series sizes also include NVIDIA Tesla GPUs
The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o
| Size | Capability | Operating systems | Required software | Pool settings | | -- | | -- | -- | -- | | [H16r, H16mr, A8, A9](../virtual-machines/sizes-hpc.md)<br/>[NC24r, NC24rs_v2, NC24rs_v3, ND24rs<sup>*</sup>](../virtual-machines/windows/n-series-driver-setup.md#rdma-network-connectivity) | RDMA | Windows Server 2016, 2012 R2, or<br/>2012 (Azure Marketplace) | Microsoft MPI 2012 R2 or later, or<br/> Intel MPI 5<br/><br/>Windows RDMA drivers | Enable inter-node communication, disable concurrent task execution |
-| [NC, NCv2, NCv3, ND, NDv2 series](../virtual-machines/windows/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Windows Server 2016 or <br/>2012 R2 (Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers| N/A |
+| [NC, NCv2, NCv3, ND, NDv2 series](../virtual-machines/windows/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Windows Server 2016 or <br/>2012 R2 (Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers| N/A |
| [NV, NVv2 series](../virtual-machines/windows/n-series-driver-setup.md) | NVIDIA Tesla M60 GPU | Windows Server 2016 or<br/>2012 R2 (Azure Marketplace) | NVIDIA GRID drivers | N/A | <sup>*</sup>RDMA-capable N-series sizes also include NVIDIA Tesla GPUs
The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o
To configure a specialized VM size for your Batch pool, you have several options to install required software or drivers:
-* For pools in the virtual machine configuration, choose a preconfigured [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/) VM image that has drivers and software preinstalled. Examples:
+* For pools in the virtual machine configuration, choose a preconfigured [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/) VM image that has drivers and software preinstalled. Examples:
* [CentOS-based 7.4 HPC](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview) - includes RDMA drivers and Intel MPI 5.1
To configure a specialized VM size for your Batch pool, you have several options
* [Ubuntu Server (with GPU and RDMA drivers) for Azure Batch container pools](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-azure-batch.ubuntu-server-container-rdma?tab=Overview)
-* Create a [custom Windows or Linux VM image](batch-sig-images.md) on which you have installed drivers, software, or other settings required for the VM size.
+* Create a [custom Windows or Linux VM image](batch-sig-images.md) on which you have installed drivers, software, or other settings required for the VM size.
* Create a Batch [application package](batch-application-packages.md) from a zipped driver or application installer, and configure Batch to deploy the package to pool nodes and install once when each node is created. For example, if the application package is an installer, create a [start task](jobs-and-tasks.md#start-task) command line to silently install the app on all pool nodes. Consider using an application package and a pool start task if your workload depends on a particular driver version.
- > [!NOTE]
+ > [!NOTE]
> The start task must run with elevated (admin) permissions, and it must wait for success. Long-running tasks will increase the time to provision a Batch pool. >
-* [Batch Shipyard](https://github.com/Azure/batch-shipyard) automatically configures the GPU and RDMA drivers to work transparently with containerized workloads on Azure Batch. Batch Shipyard is entirely driven with configuration files. There are many sample recipe configurations available that enable GPU and RDMA workloads such as the [CNTK GPU Recipe](https://github.com/Azure/batch-shipyard/tree/master/recipes/CNTK-GPU-OpenMPI) which preconfigures GPU drivers on N-series VMs and loads Microsoft Cognitive Toolkit software as a Docker image.
-- ## Example: NVIDIA GPU drivers on Windows NC VM pool To run CUDA applications on a pool of Windows NC nodes, you need to install NVIDIA GPU drivers. The following sample steps use an application package to install the NVIDIA GPU drivers. You might choose this option if your workload depends on a specific GPU driver version.
To run CUDA applications on a pool of Windows NC nodes, you need to install NVDI
1. Using the Batch APIs or Azure portal, create a pool in the virtual machine configuration with the desired number of nodes and scale. The following table shows sample settings to install the NVIDIA GPU drivers silently using a start task: | Setting | Value |
-| - | -- |
+| - | -- |
| **Image Type** | Marketplace (Linux/Windows) | | **Publisher** | MicrosoftWindowsServer | | **Offer** | WindowsServer |
To run CUDA applications on a pool of Linux NC nodes, you need to install necess
To run Windows MPI applications on a pool of Azure H16r VM nodes, you need to configure the HpcVmDrivers extension and install [Microsoft MPI](/message-passing-interface/microsoft-mpi). Here are sample steps to deploy a custom Windows Server 2016 image with the necessary drivers and software:
-1. Deploy an Azure H16r VM running Windows Server 2016. For example, create the VM in the US West region.
-2. Add the HpcVmDrivers extension to the VM by [running an Azure PowerShell command](../virtual-machines/sizes-hpc.md) from a client computer that connects to your Azure subscription, or using Azure Cloud Shell.
+1. Deploy an Azure H16r VM running Windows Server 2016. For example, create the VM in the US West region.
+2. Add the HpcVmDrivers extension to the VM by [running an Azure PowerShell command](../virtual-machines/sizes-hpc.md) from a client computer that connects to your Azure subscription, or using Azure Cloud Shell.
1. Make a Remote Desktop connection to the VM. 1. Download the [setup package](https://www.microsoft.com/download/details.aspx?id=57467) (MSMpiSetup.exe) for the latest version of Microsoft MPI, and install Microsoft MPI. 1. Follow the steps to create an [Azure Compute Gallery image](batch-sig-images.md) for Batch.
Using the Batch APIs or Azure portal, create a pool using this image and with th
## Next steps * To run MPI jobs on an Azure Batch pool, see the [Windows](batch-mpi.md) or [Linux](/archive/blogs/windowshpc/introducing-mpi-support-for-linux-on-azure-batch) examples.-
-* For examples of GPU workloads on Batch, see the [Batch Shipyard](https://github.com/Azure/batch-shipyard/) recipes.
batch High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/high-availability-disaster-recovery.md
- Title: High availability and disaster recovery
-description: Learn how to design your Batch application for a regional outage.
- Previously updated : 09/08/2021--
-# Design your Batch application for high availability
-
-Azure Batch is available in all Azure regions, but when a Batch account is created it must be associated with one specific region. All operations for the Batch account then apply to that region. For example, pools and associated virtual machines (VMs) are created in the same region as the Batch account.
-
-When designing an application that uses Batch, you must consider the possibility of Batch not being available in a region. It's possible to encounter a rare situation where there is a problem with the region as a whole, the entire Batch service in the region, or your specific Batch account.
-
-If the application or solution using Batch always needs to be available, then it should be designed to either failover to another region or always have the workload split between two or more regions. Both approaches require at least two Batch accounts, with each account located in a different region.
-
-## Multiple Batch accounts in multiple regions
-
-Using multiple Batch accounts in various regions lets your application continue running if a Batch account in one region becomes unavailable. If your application needs to be highly available, having multiple accounts is especially important.
-
-In some cases, applications may be designed to intentionally use two or more regions. For example, when you need a considerable amount of capacity, using multiple regions may be needed to handle either a large-scale application or cater for future growth. These applications will also require multiple Batch accounts (one per region used).
-
-## Design considerations for providing failover
-
-When providing the ability to failover to an alternate region, all components in a solution need to be considered; it is not sufficient to simply have a second Batch account. For example, in most Batch applications, an Azure storage account is required, with the storage account and Batch account needing to be in the same region for acceptable performance.
-
-Consider the following points when designing a solution that can failover:
--- Pre-create all required services in each region, such as the Batch account and storage account. There is often no charge for having accounts created, and charges accrue only when the account is used or when data is stored.-- Make sure the appropriate [quotas](batch-quota-limit.md) are set on all subscriptions ahead of time, so you can allocate the required number of cores using the Batch account.-- Use templates and/or scripts to automate the deployment of the application in a region.-- Keep application binaries and reference data up-to-date in all regions. Staying up-to-date will ensure the region can be brought online quickly without having to wait for the upload and deployment of files. For example, if a custom application to install on pool nodes is stored and referenced using Batch application packages, then when a new version of the application is produced, it should be uploaded to each Batch account and referenced by the pool configuration (or make the new version the default version).-- In the application calling Batch, storage, and any other services, make it easy to switch over clients or the load to different regions.-- When applicable, consider [creating pools across Availability Zones](create-pool-availability-zones.md).-- Consider frequently switching over to an alternate region as part of normal operation. For example, with two deployments in separate regions, switch over to the alternate region every month.-
-## Next steps
--- Learn more about creating Batch accounts with the [Azure portal](batch-account-create-portal.md), the [Azure CLI](./scripts/batch-cli-sample-create-account.md), [PowerShell](batch-powershell-cmdlets-get-started.md), or the [Batch management API](batch-management-dotnet.md).-- Learn about the [default quotas associated with a Batch account](batch-quota-limit.md) and how quotas can be increased.
batch Pool File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/pool-file-shares.md
Title: Azure file share for Azure Batch pools description: How to mount an Azure Files share from compute nodes in a Linux or Windows pool in Azure Batch. Previously updated : 08/23/2021 Last updated : 03/20/2023 # Use an Azure file share with a Batch pool
For details on how to mount an Azure file share on a pool, see [Mount a virtual
## Next steps - To learn about other options to read and write data in Batch, see [Persist job and task output](batch-task-output.md).-- Explore the [Batch Shipyard](https://github.com/Azure/batch-shipyard) toolkit, which includes [Shipyard recipes](https://github.com/Azure/batch-shipyard/tree/master/recipes) to deploy file systems for Batch container workloads.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
The [Image Retrieval APIs](./how-to/image-retrieval.md), part of the Image Analy
As part of the Image Analysis 4.0 API, the [Background removal API](./concept-background-removal.md) lets you remove the background of an image. This operation can either output an image of the detected foreground object with a transparent background, or a grayscale alpha matte image showing the opacity of the detected foreground object.
+### Computer Vision 3.0 & 3.1 previews deprecation
+
+The preview versions of the Computer Vision 3.0 and 3.1 APIs are scheduled to be retired on September 30, 2023. Customers won't be able to make any calls to these APIs past this date. Customers are encouraged to migrate their workloads to the generally available (GA) 3.2 API instead. Note the following changes when migrating from the preview versions to the 3.2 API:
+- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they will use the latest model.
+- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
+- Computer Vision 3.2 API uses a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
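+
+As a hedged illustration (the endpoint, key, and image URL are placeholders), a 3.2 Analyze Image call that pins the model version looks like this:
+
```bash
# Sketch only: call the Image Analysis 3.2 endpoint with an explicit
# model-version query parameter. Endpoint, key, and URL are placeholders.
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=Description&model-version=latest" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/image.jpg"}'
```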
+ ## October 2022 ### Computer Vision Image Analysis 4.0 (public preview)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
Azure OpenAI Service provides REST API access to OpenAI's powerful language mode
| Feature | Azure OpenAI | | | | | Models available | GPT-3 base series <br>**New ChatGPT (gpt-35-turbo)**<br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* Currently unavailable.|
-| Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* Currently unavailable. \*\*East US Fine-tuning is currently unavailable to new customers. Please use US South Central for US based training|
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* Currently unavailable. \*\*East US and West Europe Fine-tuning is currently unavailable to new customers. Please use US South Central for US based training|
| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | | Virtual network support & private link support | Yes | | Managed Identity| Yes, via Azure Active Directory |
container-apps Azure Arc Create Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-create-container-app.md
Previously updated : 11/29/2022 Last updated : 3/20/2023
Next, add the required Azure CLI extensions.
```azurecli-interactive az extension add --upgrade --yes --name customlocation az extension remove --name containerapp
-az extension add -s https://download.microsoft.com/download/5/c/2/5c2ec3fc-bd2a-4615-a574-a1b7c8e22f40/containerapp-0.0.1-py2.py3-none-any.whl --yes
+az extension add -s https://aka.ms/acaarccli/containerapp-latest-py2.py3-none-any.whl --yes
``` ## Create a resource group
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
Previously updated : 12/16/2022 Last updated : 3/20/2023
az extension add --name connectedk8s --upgrade --yes
az extension add --name k8s-extension --upgrade --yes az extension add --name customlocation --upgrade --yes az extension remove --name containerapp
-az extension add --source https://download.microsoft.com/download/5/c/2/5c2ec3fc-bd2a-4615-a574-a1b7c8e22f40/containerapp-0.0.1-py2.py3-none-any.whl --yes
+az extension add --source https://aka.ms/acaarccli/containerapp-latest-py2.py3-none-any.whl --yes
``` # [PowerShell](#tab/azure-powershell)
az extension add --source https://download.microsoft.com/download/5/c/2/5c2ec3fc
az extension add --name connectedk8s --upgrade --yes az extension add --name k8s-extension --upgrade --yes az extension add --name customlocation --upgrade --yes
-az extension az extension remove --name containerapp
-az extension add --source https://download.microsoft.com/download/5/c/2/5c2ec3fc-bd2a-4615-a574-a1b7c8e22f40/containerapp-0.0.1-py2.py3-none-any.whl --yes
+az extension remove --name containerapp
+az extension add --source https://aka.ms/acaarccli/containerapp-latest-py2.py3-none-any.whl --yes
```
container-apps Azure Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md
Previously updated : 11/29/2021 Last updated : 03/20/2023
The following table describes the role of each revision created for you:
|-|-|-|-|-| | `<extensionName>-k8se-activator` | Used as part of the scaling pipeline | 2 | 100 millicpu | 500 MB | | `<extensionName>-k8se-billing` | Billing record generation - Azure Container Apps on Azure Arc enabled Kubernetes is Free of Charge during preview | 3 | 100 millicpu | 100 MB |
-| `<extensionName>-k8se-containerapp-controller` | The core operator pod that creates resources on the cluster and maintains the state of components. | 2 | 100 millicpu | 500 MB |
+| `<extensionName>-k8se-containerapp-controller` | The core operator pod that creates resources on the cluster and maintains the state of components. | 2 | 100 millicpu | 1 GB |
| `<extensionName>-k8se-envoy` | A front-end proxy layer for all data-plane http requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB |
-| `<extensionName>-k8se-envoy-controller` | Operator, which generates Envoy configuration | 2 | 100 millicpu | 500 MB |
+| `<extensionName>-k8se-envoy-controller` | Operator, which generates Envoy configuration | 2 | 200 millicpu | 500 MB |
| `<extensionName>-k8se-event-processor` | An alternative routing destination to help with apps that have scaled to zero while the system gets the first instance available. | 2 | 100 millicpu | 500 MB | | `<extensionName>-k8se-http-scaler` | Monitors inbound request volume in order to provide scaling information to [KEDA](https://keda.sh). | 1 | 100 millicpu | 500 MB | | `<extensionName>-k8se-keda-cosmosdb-scaler` | Keda Cosmos DB Scaler | 1 | 10 m | 128 MB |
The following table describes the role of each revision created for you:
| `<extensionName>-k8se-local-envoy` | A front-end proxy layer for all data-plane tcp requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB | | `<extensionName>-k8se-log-processor` | Gathers logs from apps and other components and sends them to Log Analytics. | 2 | 200 millicpu | 500 MB | | `<extensionName>-k8se-mdm` | Metrics and Logs Agent | 2 | 500 millicpu | 500 MB |
+| dapr-metrics | Dapr metrics pod | 1 | 100 millicpu | 500 MB |
| dapr-operator | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | | dapr-placement-server | Used for Actors only - creates mapping tables that map actor instances to pods | 1 | 100 millicpu | 500 MB | | dapr-sentry | Manages mTLS between services and acts as a CA | 2 | 800 millicpu | 200 MB |
ARM64 based clusters aren't supported at this time.
- Initial public preview release of Container apps extension
+### Container Apps extension v1.0.47 (January 2023)
+
+- Upgrade of Envoy to 1.0.24
+
+### Container Apps extension v1.0.48 (February 2023)
+
+- Add probes to EasyAuth container(s)
+- Increased memory limit for dapr-operator
+- Added prevention of platform header overwriting
+
+### Container Apps extension v1.0.49 (February 2023)
+
+ - Upgrade of KEDA to 2.9.1
+ - Upgrade of Dapr to 1.9.5
+ - Increase Envoy Controller resource limits to 200 m CPU
+ - Increase Container App Controller resource limits to 1 GB memory
+ - Reduce EasyAuth sidecar resource limits to 50 m CPU
+ - Resolve KEDA error logging for missing metric values
+
+### Container Apps extension v1.0.50 (March 2023)
+ - Updated logging images in sync with Public Cloud
+ ## Next steps [Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md)
container-apps Containerapp Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containerapp-up.md
The command can build and push a container image to an Azure Container Registry
If you need to customize the Container Apps environment, first create the environment using the `az containerapp env create` command. If you don't provide an existing environment, the `up` command looks for one in your resource group and, if found, uses that environment. If not found, it creates an environment with a Log Analytics workspace.
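+As a brief hedged sketch (names and location are placeholders), you can create a customized environment and then target it with `up`:
+
```azurecli-interactive
# Sketch only: create the environment first, then deploy local source into it.
# Resource names and location are illustrative placeholders.
az containerapp env create --name <my-environment> --resource-group <my-rg> --location eastus
az containerapp up --name <my-app> --resource-group <my-rg> --environment <my-environment> --source .
```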
-To learn more about the `az containerapp up` command and its options, see [`az containerapp up`](/cli/azure/containerapp#az_containerapp_up).
+To learn more about the `az containerapp up` command and its options, see [`az containerapp up`](/cli/azure/containerapp#az-containerapp-up).
## Prerequisites
Because the `up` command creates a GitHub Actions workflow, rerunning it to depl
## Next steps > [!div class="nextstepaction"]
-> [Quickstart: Deploy your code to Azure Container Apps](quickstart-code-to-cloud.md)
+> [Quickstart: Deploy your code to Azure Container Apps](quickstart-code-to-cloud.md)
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
Next, query for the infrastructure subnet ID.
# [Bash](#tab/bash) ```bash
-INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VNET_NAME> --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'`
+INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VNET_NAME> --name infrastructure --query "id" -o tsv | tr -d '[:space:]'`
``` # [Azure PowerShell](#tab/azure-powershell)
container-apps Microservices Dapr Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-pubsub.md
Title: "Microservices communication using Dapr Pub/sub messaging"
+ Title: "Microservices communication using Dapr Publish and Subscribe"
description: Enable two sample Dapr applications to send and receive messages and leverage Azure Container Apps.
zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json
zone_pivot_groups: dapr-languages-set
-# Microservices communication using Dapr Pub/sub messaging
+# Microservices communication using Dapr Publish and Subscribe
In this tutorial, you'll: > [!div class="checklist"]
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
Title: Quickstart - Use Java to create a document database using Azure Cosmos DB
-description: This quickstart presents a Java code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
+ Title: "Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data"
+description: Use a Java code sample from GitHub to learn how to build an app to connect to and query Azure Cosmos DB for NoSQL.
ms.devlang: java Previously updated : 08/26/2021 Last updated : 03/16/2023
> * [Go](quickstart-go.md) >
+This quickstart guide explains how to build a Java app to manage an Azure Cosmos DB for NoSQL account. You create the Java app using the SQL Java SDK, and add resources to your Azure Cosmos DB account by using the Java application.
-In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB for NoSQL account using the Azure portal, or without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb), then create a Java app using the SQL Java SDK, and then add resources to your Azure Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+First, create an Azure Cosmos DB for NoSQL account using the Azure portal. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. You can [try an Azure Cosmos DB account](https://aka.ms/trycosmosdb) for free without a credit card or an Azure subscription.
> [!IMPORTANT]
-> This quickstart is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
->
+> This quickstart is for Azure Cosmos DB Java SDK v4 only. For more information, see the [release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), [performance tips](performance-tips-java-sdk-v4.md), and [troubleshooting guide](troubleshoot-java-sdk-v4.md). If you currently use an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
> [!TIP]
-> If you're working with Azure Cosmos DB resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Cosmos DB, see [Access data with Azure Cosmos DB NoSQL API](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db).
+> If you work with Azure Cosmos DB resources in a Spring application, consider using [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Cosmos DB, see [Access data with Azure Cosmos DB NoSQL API](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db).
## Prerequisites -- An Azure account with an active subscription.
- - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
+- An Azure account with an active subscription. If you don't have an Azure subscription, you can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
+- [Java Development Kit (JDK) 8](https://www.oracle.com/java/technologies/javase/8u-relnotes.html). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `sudo apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. ## Introductory notes
-*The structure of an Azure Cosmos DB account.* Irrespective of API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below:
+**The structure of an Azure Cosmos DB account:** For any API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the following diagram:
-You may read more about databases, containers and items [here.](../resource-model.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
+For more information, see [Databases, containers, and items in Azure Cosmos DB](../resource-model.md).
-The provisioned throughput is measured in Request Units (*RUs*) which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity, however container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md)
+A few important properties are defined at the level of the container, including *provisioned throughput* and *partition key*. The provisioned throughput is measured in request units (RUs), which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity; however, container-level throughput specification is typically preferred. To learn more about throughput provisioning, see [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md).
-As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key which maps each document to a partition. The way partitions are managed is that each partition is assigned a roughly equal slice out of the range of partition key values; therefore you are advised to choose a partition key which is relatively random or evenly distributed. Otherwise, some partitions will see substantially more requests (*hot partition*) while other partitions see substantially fewer requests (*cold partition*), and this is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md).
+As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key that maps each document to a partition. Partitions are managed such that each partition is assigned a roughly equal slice out of the range of partition key values. Therefore, you're advised to choose a partition key that's relatively random or evenly distributed. Otherwise, some partitions see substantially more requests (*hot partition*) while other partitions see substantially fewer requests (*cold partition*). To learn more, see [Partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
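To make these container-level settings concrete, here's a minimal Java SDK v4 sketch (not part of the sample repo) that sets both the partition key and provisioned throughput at container creation. The wrapping class and method are illustrative; the `/lastName` path and the names `FamilyDB` and `FamilyContainer` come from this quickstart's sample, and 400 RU/s is an assumed starting value, not a recommendation.

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ThroughputProperties;

public final class ContainerSetupSketch {
    // Assumes an already-initialized CosmosClient (see the client initialization snippets below).
    static void createDatabaseAndContainer(CosmosClient client) {
        client.createDatabaseIfNotExists("FamilyDB");
        CosmosDatabase database = client.getDatabase("FamilyDB");

        // The partition key path is fixed at container creation and can't be changed later;
        // '/lastName' matches the sample data and should distribute documents reasonably evenly.
        CosmosContainerProperties containerProperties =
                new CosmosContainerProperties("FamilyContainer", "/lastName");

        // Throughput is provisioned here at per-container granularity; 400 RU/s is the
        // minimum for manual throughput.
        database.createContainerIfNotExists(
                containerProperties, ThroughputProperties.createManualThroughput(400));
    }
}
```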
## Create a database account
Before you can create a document database, you need to create an API for NoSQL a
[!INCLUDE [cosmos-db-create-collection](../includes/cosmos-db-create-collection.md)] <a id="add-sample-data"></a>+ ## Add sample data [!INCLUDE [cosmos-db-create-sql-api-add-sample-data](../includes/cosmos-db-create-sql-api-add-sample-data.md)]
Before you can create a document database, you need to create an API for NoSQL a
## Clone the sample application
-Now let's switch to working with code. Let's clone an API for NoSQL app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Now let's switch to working with code. Clone an API for NoSQL app from GitHub, set the connection string, and run it. You can see how easy it is to work with data programmatically.
Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
git clone https://github.com/Azure-Samples/azure-cosmos-java-getting-started.git
## Review the code
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app
-](#run-the-app).
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
## [Passwordless Sync API (Recommended)](#tab/passwordlesssync)
This step is optional. If you're interested in learning how the database resourc
[!INCLUDE [default-azure-credential-sign-in](../../../includes/passwordless/default-azure-credential-sign-in.md)]
-You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` will automatically discover and use the account you signed-in with in the previous step.
+You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` automatically discovers and uses the account you signed into in the previous step.
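As a rough sketch of what that initialization can look like, assuming the `azure-cosmos` and `azure-identity` dependencies are on the classpath and the account URI is passed as the `ACCOUNT_HOST` system property used elsewhere in this quickstart (the sample's actual code is in the snippet referenced below):

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;

public final class PasswordlessClientSketch {
    public static void main(String[] args) {
        // DefaultAzureCredential tries a chain of credential sources (environment variables,
        // managed identity, Azure CLI sign-in, and so on) and uses the first that succeeds.
        CosmosClient client = new CosmosClientBuilder()
                .endpoint(System.getProperty("ACCOUNT_HOST"))
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
        client.close();
    }
}
```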
-### Managing database resources using the synchronous (sync) API
+### Manage database resources using the synchronous (sync) API
+
+* `CosmosClient` initialization: The `CosmosClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute requests against the service.
-* `CosmosClient` initialization. The `CosmosClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute requests against the service.
-
[!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=CreatePasswordlessSyncClient)]
-* Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
+* Use the [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
```azurecli-interactive # Create a SQL API database
You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by ad
* Item creation by using the `createItem` method. [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=CreateItem)]
-
+ * Point reads are performed using the `readItem` method. [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncPasswordlessMain.java?name=ReadItem)]
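For orientation, the following sketch combines item creation and a point read. It isn't the sample's verbatim code: it assumes the sample's `Family` POJO (with `getId()` and `getLastName()` accessors) and an initialized `CosmosContainer`.

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosItemResponse;
import com.azure.cosmos.models.PartitionKey;

public final class ItemOperationsSketch {
    // 'Family' is the sample app's POJO; 'container' is an initialized CosmosContainer.
    static void createAndRead(CosmosContainer container, Family family) {
        // Create: the partition key value passed here must match the document's own lastName field.
        container.createItem(family, new PartitionKey(family.getLastName()),
                new CosmosItemRequestOptions());

        // Point read: fetching by id plus partition key is the cheapest way to retrieve one item.
        CosmosItemResponse<Family> response = container.readItem(
                family.getId(), new PartitionKey(family.getLastName()), Family.class);
        System.out.println("Point read cost: " + response.getRequestCharge() + " RUs");
    }
}
```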
Now go back to the Azure portal to get your connection string information and la
mvn package ```
-3. In the git terminal window, use the following command to start the Java application. Replace `SYNCASYNCMODE` with `sync-passwordless` or `async-passwordless`, depending upon which sample code you'd like to run. Replace `YOUR_COSMOS_DB_HOSTNAME` with the quoted URI value from the portal, and replace `YOUR_COSMOS_DB_MASTER_KEY` with the quoted primary key from portal.
+3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync-passwordless` or `async-passwordless`, depending on which sample code you'd like to run. Replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal.
```bash mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY ```
- The terminal window displays a notification that the FamilyDB database was created.
+ The terminal window displays a notification that the `FamilyDB` database was created.
-4. The app will reference the database and container you created via Azure CLI earlier.
-
-5. The app will perform point reads using object IDs and partition key value (which is lastName in our sample).
-6. The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson')
+4. The app references the database and container you created via Azure CLI earlier.
+
+5. The app performs point reads using object IDs and partition key value (which is `lastName` in our sample).
-7. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources). from your account so that you don't incur charges.
+6. The app queries items to retrieve all families with the last name *Andersen*, *Wakefield*, or *Johnson*, as sketched after this list.
+
+7. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges.
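The query in step 6 might look roughly like this sketch; the SQL text mirrors the sample's last-name filter, while the wrapping class and method are illustrative assumptions.

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.util.CosmosPagedIterable;

public final class QuerySketch {
    static void queryFamilies(CosmosContainer container) {
        // A cross-partition query: results stream back lazily, page by page.
        CosmosPagedIterable<Family> results = container.queryItems(
                "SELECT * FROM Family WHERE Family.lastName IN ('Andersen', 'Wakefield', 'Johnson')",
                new CosmosQueryRequestOptions(), Family.class);
        results.forEach(family -> System.out.println("Found family: " + family.getId()));
    }
}
```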
## [Passwordless Async API](#tab/passwordlessasync)
Now go back to the Azure portal to get your connection string information and la
[!INCLUDE [default-azure-credential-sign-in](../../../includes/passwordless/default-azure-credential-sign-in.md)]
-You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` will automatically discover and use the account you signed-in with in the previous step.
+You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the [azure-identity dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` automatically discovers and uses the account you signed in with in the previous step.
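A minimal sketch of the async variant, under the same assumptions as the sync tab (dependencies on the classpath, account URI in the `ACCOUNT_HOST` system property); the sample's real initialization is in the snippet below:

```java
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;

public final class PasswordlessAsyncClientSketch {
    public static void main(String[] args) {
        // Same builder as the sync client; buildAsyncClient() returns the reactive client instead.
        CosmosAsyncClient client = new CosmosClientBuilder()
                .endpoint(System.getProperty("ACCOUNT_HOST"))
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildAsyncClient();
        client.close();
    }
}
```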
### Managing database resources using the asynchronous (async) API
-* Async API calls return immediately, without waiting for a response from the server. In light of this, the following code snippets show proper design patterns for accomplishing all of the preceding management tasks using async API.
+* Async API calls return immediately, without waiting for a response from the server. The following code snippets show proper design patterns for accomplishing all of the preceding management tasks using async API.
* `CosmosAsyncClient` initialization. The `CosmosAsyncClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute asynchronous requests against the service.
-
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=CreatePasswordlessAsyncClient)]
-* Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
+* Use the [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
```azurecli-interactive # Create a SQL API database
You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by ad
--partition-key-path '/lastName' ```
-* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream which issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program does not terminate during item creation. **The proper asynchronous programming practice is not to block on async calls - in realistic use-cases requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.**
+* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream that issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program doesn't terminate during item creation. **The proper asynchronous programming practice is not to block on async calls. In realistic use cases, requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.**
[!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncPasswordlessMain.java?name=CreateItem)]
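The latch pattern described above might be sketched as follows. The `Family` type and the method wrapper are assumptions, and the latch exists only because a short-lived sample would otherwise exit before the requests complete.

```java
import com.azure.cosmos.CosmosAsyncContainer;
import reactor.core.publisher.Flux;

import java.util.List;
import java.util.concurrent.CountDownLatch;

public final class AsyncCreateSketch {
    static void createFamilies(CosmosAsyncContainer container, List<Family> families)
            throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);

        // flatMap issues the createItem requests concurrently; nothing runs until subscribe().
        Flux.fromIterable(families)
                .flatMap(container::createItem)
                .subscribe(
                        response -> System.out.println("Created " + response.getItem().getId()),
                        error -> latch.countDown(),  // release the latch on failure too
                        latch::countDown);           // ...and on successful completion

        // Only needed because this sample terminates; a long-running service wouldn't block here.
        latch.await();
    }
}
```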
Now go back to the Azure portal to get your connection string information and la
mvn package ```
-3. In the git terminal window, use the following command to start the Java application (replace SYNCASYNCMODE with `sync-passwordless` or `async-passwordless` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from portal)
+3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync-passwordless` or `async-passwordless` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal.
```bash mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY
Now go back to the Azure portal to get your connection string information and la
The terminal window displays a notification that the `AzureSampleFamilyDB` database was created.
-4. The app will reference the database and container you created via Azure CLI earlier.
-
-5. The app will perform point reads using object IDs and partition key value (which is lastName in our sample).
-6. The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson')
+4. The app references the database and container you created via Azure CLI earlier.
+
+5. The app performs point reads using object IDs and partition key value (which is `lastName` in our sample).
+
+6. The app queries items to retrieve all families with the last name *Andersen*, *Wakefield*, or *Johnson*.
-7. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources). from your account so that you don't incur charges.
+7. The app doesn't delete the created resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account so that you don't incur charges.
-# [Sync API](#tab/sync)
+## [Sync API](#tab/sync)
### Managing database resources using the synchronous (sync) API
Now go back to the Azure portal to get your connection string information and la
mvn package ```
-3. In the git terminal window, use the following command to start the Java application (replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from portal)
+3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal.
```bash mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY ``` The terminal window displays a notification that the FamilyDB database was created.
-
-4. The app creates database with name `AzureSampleFamilyDB`
-5. The app creates container with name `FamilyContainer`
-6. The app will perform point reads using object IDs and partition key value (which is lastName in our sample).
-7. The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson')
+
+4. The app creates a database with the name `AzureSampleFamilyDB`.
+
+5. The app creates a container with the name `FamilyContainer`.
+
+6. The app performs point reads using object IDs and partition key value (which is `lastName` in our sample).
+
+7. The app queries items to retrieve all families with the last name *Andersen*, *Wakefield*, or *Johnson*.
+ 8. The app doesn't delete the created resources. Return to the Azure portal to [clean up the resources](#clean-up-resources) from your account so you don't incur charges.
-# [Async API](#tab/async)
+## [Async API](#tab/async)
### Managing database resources using the asynchronous (async) API
-* Async API calls return immediately, without waiting for a response from the server. In light of this, the following code snippets show proper design patterns for accomplishing all of the preceding management tasks using async API.
+* Async API calls return immediately, without waiting for a response from the server. The following code snippets show proper design patterns for accomplishing all of the preceding management tasks using async API.
* `CosmosAsyncClient` initialization. The `CosmosAsyncClient` provides client-side logical representation for the Azure Cosmos DB database service. This client is used to configure and execute asynchronous requests against the service.
-
+ [!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateAsyncClient)] * `CosmosAsyncDatabase` creation.
Now go back to the Azure portal to get your connection string information and la
[!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/sync/SyncMain.java?name=CreateContainerIfNotExists)]
-* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream which issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program does not terminate during item creation. **The proper asynchronous programming practice is not to block on async calls - in realistic use-cases requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.**
+* As with the sync API, item creation is accomplished using the `createItem` method. This example shows how to efficiently issue numerous async `createItem` requests by subscribing to a Reactive Stream that issues the requests and prints notifications. Since this simple example runs to completion and terminates, `CountDownLatch` instances are used to ensure the program doesn't terminate during item creation. **The proper asynchronous programming practice is not to block on async calls. In realistic use cases, requests are generated from a main() loop that executes indefinitely, eliminating the need to latch on async calls.**
[!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=CreateItem)]
-* As with the sync API, point reads are performed using `readItem` method.
+* As with the sync API, point reads are performed by using the `readItem` method.
[!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=ReadItem)]
-* As with the sync API, SQL queries over JSON are performed using the `queryItems` method.
+* As with the sync API, SQL queries over JSON are performed by using the `queryItems` method.
[!code-java[](~/azure-cosmosdb-java-v4-getting-started/src/main/java/com/azure/cosmos/sample/async/AsyncMain.java?name=QueryItems)]
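As a rough illustration of the async query pattern (not the sample's verbatim code; the `Family` type and the page size of 10 are assumptions):

```java
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.util.CosmosPagedFlux;

public final class AsyncQuerySketch {
    static void queryFamilies(CosmosAsyncContainer container) {
        CosmosPagedFlux<Family> results = container.queryItems(
                "SELECT * FROM Family WHERE Family.lastName IN ('Andersen', 'Wakefield', 'Johnson')",
                new CosmosQueryRequestOptions(), Family.class);

        // byPage() exposes results a page at a time; blockLast() is used only because this
        // sketch runs to completion, per the latch note above.
        results.byPage(10)
                .doOnNext(page -> page.getResults()
                        .forEach(family -> System.out.println("Found family: " + family.getId())))
                .blockLast();
    }
}
```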
Now go back to the Azure portal to get your connection string information and la
mvn package ```
-3. In the git terminal window, use the following command to start the Java application (replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from portal)
+3. In the git terminal window, use the following command to start the Java application. Replace SYNCASYNCMODE with `sync` or `async` depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal.
```bash mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY ```
- The terminal window displays a notification that the FamilyDB database was created.
-
-4. The app creates database with name `AzureSampleFamilyDB`
-5. The app creates container with name `FamilyContainer`
-6. The app will perform point reads using object IDs and partition key value (which is lastName in our sample).
-7. The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson')
+ The terminal window displays a notification that the `FamilyDB` database was created.
+
+4. The app creates a database with the name `AzureSampleFamilyDB`.
+
+5. The app creates a container with the name `FamilyContainer`.
+
+6. The app performs point reads using object IDs and partition key value (which is `lastName` in our sample).
+
+7. The app queries items to retrieve all families with the last name *Andersen*, *Wakefield*, or *Johnson*.
8. The app doesn't delete the created resources. Return to the Azure portal to [clean up the resources](#clean-up-resources) from your account so you don't incur charges.
Now go back to the Azure portal to get your connection string information and la
- ## Review SLAs in the Azure portal [!INCLUDE [cosmosdb-tutorial-review-slas](../includes/cosmos-db-tutorial-review-slas.md)]
Now go back to the Azure portal to get your connection string information and la
## Next steps
-In this quickstart, you've learned how to create an Azure Cosmos DB for NoSQL account, create a document database and container using the Data Explorer, and run a Java app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account, create a document database and container using Data Explorer, and run a Java app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+Are you capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating RUs using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+* If you know typical request rates for your current database workload, learn how to [estimate RUs using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
cosmos-db Sdk Dotnet Core V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-core-v2.md
ms.devlang: csharp Previously updated : 04/18/2022 Last updated : 03/20/2023
||| |**Release notes**| [Release notes](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md)| |**SDK download**| [NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/)|
-|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
+|**API documentation**|[.NET API reference documentation](/dotnet/api/microsoft.azure.documents)|
|**Samples**|[.NET code samples](samples-dotnet.md)| |**Get started**|[Get started with the Azure Cosmos DB .NET](sdk-dotnet-v2.md)| |**Web app tutorial**|[Web application development with Azure Cosmos DB](tutorial-dotnet-web-app.md)|
cosmos-db Sdk Dotnet V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-v2.md
ms.devlang: csharp Previously updated : 04/18/2022 Last updated : 03/20/2023
||| |**Release notes**|[Release notes](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md)| |**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/)|
-|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
+|**API documentation**|[.NET API reference documentation](/dotnet/api/microsoft.azure.documents)|
|**Samples**|[.NET code samples](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples)| |**Get started**|[Get started with the Azure Cosmos DB .NET SDK](quickstart-dotnet.md)| |**Web app tutorial**|[Web application development with Azure Cosmos DB](tutorial-dotnet-web-app.md)|
cosmos-db Sdk Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-dotnet-v3.md
ms.devlang: csharp Previously updated : 03/22/2022 Last updated : 03/20/2023
||| |**Release notes**|[Release notes](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md)| |**SDK download**|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/)|
-|**API documentation**|[.NET API reference documentation](/dotnet/api/overview/azure/cosmosdb)|
+|**API documentation**|[.NET API reference documentation](/dotnet/api/microsoft.azure.cosmos)|
|**Samples**|[.NET code samples](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage)| |**Get started**|[Get started with the Azure Cosmos DB .NET SDK](quickstart-dotnet.md)| |**Best Practices**|[Best Practices for Azure Cosmos DB .NET SDK](best-practice-dotnet.md)|
cost-management-billing Reserved Instance Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reserved-instance-purchase-recommendations.md
Previously updated : 01/05/2023 Last updated : 03/20/2023 # Reservation recommendations
The following steps define how recommendations are calculated:
1. The recommendation engine evaluates the hourly usage for your resources in the given scope over the past 7, 30, and 60 days. 2. Based on the usage data, the engine simulates your costs with and without reservations. 3. The costs are simulated for different quantities, and the quantity that maximizes the savings is recommended.
-4. If your resources are shut down regularly, the simulation won't find any savings, and no purchase recommendation is provided.
-5. The recommendation calculations include any special discounts that you might have for your on-demand usage rates, such as Microsoft Azure Consumption Commitment (MACC) and Azure Commitment Discount (ACD) based solely on historic usage.
+4. If your resources are shut down regularly, the simulation can't find any savings, and no purchase recommendation is provided.
+5. The recommendation calculations include any special discounts that you might have for your on-demand usage rates.
- The recommendations account for existing reservations and savings plans. So, previously purchased reservations and savings plans are excluded when providing recommendations. ## Recommendations in the Azure portal
-Reservation purchase recommendations are also shown in the Azure portal in the purchase experience. Recommendations are shown with the **Recommended Quantity**. When purchased, the quantity that Azure recommends will give the maximum savings possible. Although you can buy any quantity that you like, if you buy a different quantity your savings won't be optimal.
+Reservation purchase recommendations are also shown in the Azure portal in the purchase experience. Recommendations are shown with the **Recommended Quantity**. The quantity that Azure recommends gives the maximum possible savings. Although you can buy any quantity that you like, buying a different quantity means your savings aren't optimal.
Let's look at some examples that show why.
More information about the recommendation appears when you select **See details*
:::image type="content" source="./media/reserved-instance-purchase-recommendations/recommended-quantity-details.png" alt-text="Example showing details for a reservation purchase recommendation " :::
-The chart and estimated values change when you increase the recommended quantity. By increasing the reservation quantity, your savings will be reduced because you'll end up with reduced reservation use. In other words, you'll pay for reservations that aren't fully used.
+The chart and estimated values change when you increase the recommended quantity. When you increase the reservation quantity, your savings are reduced because you end up with reduced reservation use. In other words, you pay for reservations that aren't fully used.
-If you lower the reservation quantity, your savings will also be reduced. Although you'll have increased utilization, there will likely be periods when your reservations won't fully cover your use. Usage beyond your reservation quantity will be used by more expensive pay-as-you-go resources. The following example image illustrates the point. We've manually reduced the reservation quantity to 4. The reservation utilization is increased, but the overall savings are reduced because pay-as-you go costs are present.
+If you lower the reservation quantity, your savings are also reduced. Although utilization is increased, there might be periods when your reservations don't fully cover your use. Usage beyond your reservation quantity is billed at more expensive pay-as-you-go rates. The following example image illustrates the point. We've manually reduced the reservation quantity to 4. The reservation utilization is increased, but the overall savings are reduced because pay-as-you-go costs are present.
:::image type="content" source="./media/reserved-instance-purchase-recommendations/recommended-quantity-details-changed.png" alt-text="Example showing changed reservation purchase recommendation details" :::
cost-management-billing Choose Commitment Amount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/choose-commitment-amount.md
The recommendations engine calculates savings plan purchases for the selected te
## Recommendations for management groups
-Currently, the Azure portal doesn't provide savings plan recommendations for management groups. However, you can manually calculate your own per-hour commitment for management groups using the following steps.
-
-1. Download the Usage Detail report from the EA portal or Azure portal to get your usage and cost.
- - EA portal - Sign in to ea.azure.com, navigate to the Reports section, and then download the Usage Details report for the current and previous two months.
- - Azure portal - Sign in to the Azure portal and navigate to Cost Management + Billing. Under Billing, select **Usage + charges** and then download for the current and previous two months.
-1. Open the downloaded file in Excel. Or, if the file size is too large to open in Excel, you can use Power BI Desktop.
-1. Create the `cost` column by multiplying `PayG Price` * `Quantity` to create `CalculatedCost`.
-1. Filter `Charge Type` = `Usage`.
-1. Filter `Meter Category` = `Virtual Machines`, `App Service`, `Functions`, `Container Instance` because the savings plan applies to only those services.
-1. Filter `ProductOrderName` = `Blank`.
-1. Filter `Quantity` >= `23` to consider only items used for 24 hours because a savings plan is per hour commitment, and we have the granularity of per day, not per hour. This step avoids sparse compute records.
-1. Filter `Months` for the current and previous two months.
-1. If you're using Power BI, export the data to a CSV file and open it in Excel.
-1. Copy the subscription names that belong to the management group where you want to apply a savings plan to an Excel sheet.
-1. In Excel, use the `Vlookup` function for the subscriptions against the filtered data.
-1. Divide `CalculatedCost` by `24` hours to get `PerHour` cost.
-1. Create a PivotTable to group the data by subscription and by month and day, and then copy the PivotTable data to a new sheet.
-1. Multiply the `PerHour` cost by `0.4`.
- This step determines the discount for the usage. For example, you committed $100.00 USD and you are charged based on a one or three-year savings plan discount. The discount applies per SKU, so your cost per hour is less than 100 hours. You need more compute cost to get the $100.00 US value. So, 40% is a safe limit.
-1. View the range of cost per hour, per day, and per month to determine a safe commitment to make.
+Currently, the Azure portal doesn't provide savings plan recommendations for management groups. However, you can get the per-hour commitment details from the subscription-based recommendations in the Azure portal, combine the amounts for the subscriptions that are grouped under the management group, and apply the savings plan based on that total.
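For example (hypothetical numbers): if the portal recommends a $3.00/hour commitment for one subscription in the management group and $2.00/hour for another, a combined commitment of $5.00/hour applied at the management group scope would cover both.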
+ ## Need help? Contact us
databox-online Azure Stack Edge Pro 2 System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-system-requirements.md
Previously updated : 11/15/2022 Last updated : 03/17/2023
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Previously updated : 01/17/2023 Last updated : 03/20/2023
Azure DDoS Network Protection, combined with application design best practices,
> [!NOTE] > DDoS IP Protection is currently only available in Azure Preview PowerShell.
-> [!NOTE]
-> Protecting a public IP resource attached to a Public Load Balancer is not supported for DDoS IP Protection SKU.
- ## SKUs Azure DDoS Protection supports two SKU Types, DDoS IP Protection and DDoS Network Protection. The SKU is configured in the Azure portal during the workflow when you configure Azure DDoS Protection.
defender-for-cloud Plan Defender For Servers Select Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-select-plan.md
You can choose from two Defender for Servers paid plans:
| Feature | Details | Plan 1 | Plan 2 | |:|:|::|::|
-| **Defender for Endpoint integration** | Defender for Servers integrates with Defender for Endpoint and protects servers with all the features, including:<br/><br/>- [Attack surface reduction](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction) to lower the risk of attack.<br/><br/> - [Next-generation protection](/microsoft-365/security/defender-endpoint/next-generation-protection), including real-time scanning and protection and [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection).<br/><br/> - EDR, including [threat analytics](/microsoft-365/security/defender-endpoint/threat-analytics), [automated investigation and response](/microsoft-365/security/defender-endpoint/automated-investigations), [advanced hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview), and [Microsoft Defender Experts](/microsoft-365/security/defender-endpoint/endpoint-attack-notifications).<br/><br/> - Vulnerability assessment and mitigation provided by [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities) as part of the Defender for Endpoint integration. | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Defender for Endpoint integration** | Defender for Servers integrates with Defender for Endpoint and protects servers with all the features, including:<br/><br/>- [Attack surface reduction](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction) to lower the risk of attack.<br/><br/> - [Next-generation protection](/microsoft-365/security/defender-endpoint/next-generation-protection), including real-time scanning and protection and [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection).<br/><br/> - EDR, including [threat analytics](/microsoft-365/security/defender-endpoint/threat-analytics), [automated investigation and response](/microsoft-365/security/defender-endpoint/automated-investigations), [advanced hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview), and [Microsoft Defender Experts](/microsoft-365/security/defender-endpoint/endpoint-attack-notifications).<br/><br/> - Vulnerability assessment and mitigation provided by [Microsoft Defender Vulnerability Management (MDVM)](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities) as part of the Defender for Endpoint integration. With Plan 2, you can get premium MDVM features, provided by the [MDVM add-on](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities#vulnerability-managment-capabilities-for-servers).| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
| **Licensing** | Defender for Servers covers licensing for Defender for Endpoint. Licensing is charged per hour instead of per seat, lowering costs by protecting virtual machines only when they're in use.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Defender for Endpoint provisioning** | Defender for Servers automatically provisions the Defender for Endpoint sensor on every supported machine that's connected to Defender for Cloud.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Unified view** | Defender for Endpoint alerts appear in the Defender for Cloud portal. You can get detailed information in the Defender for Endpoint portal.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Threat detection for OS-level (agent-based)** | Defender for Servers and Defender for Endpoint detect threats at the OS level, including virtual machine behavioral detections and *fileless attack detection*, which generates detailed security alerts that accelerate alert triage, correlation, and downstream response time.<br>[Learn more](alerts-reference.md#alerts-windows) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1.":::<br>Provided by [Defender for Endpoint EDR](/microsoft-365/security/defender-endpoint/overview-endpoint-detection-response) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Threat detection for network-level (agentless)** | Defender for Servers detects threats that are directed at the control plane on the network, including network-based detections for Azure virtual machines. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **Microsoft Defender Vulnerability Management Add-on** | Get comprehensive visibility, assessments, and protection with consolidated asset inventories, security baselines assessments, application block feature, and more. [Learn more](deploy-vulnerability-assessment-defender-vulnerability-management.md). | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Microsoft Defender Vulnerability Management (MDVM) Add-on** | Enhance your vulnerability management program with consolidated asset inventories, security baseline assessments, the application block feature, and more. [Learn more](deploy-vulnerability-assessment-defender-vulnerability-management.md). | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
| **Security Policy and Regulatory Compliance** | Customize a security policy for your subscription and also compare the configuration of your resources with requirements in industry standards, regulations, and benchmarks. Learn more about [regulatory compliance](regulatory-compliance-dashboard.md) and [security policies](security-policy-concept.md) | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2.":::|
-| **[Qualys vulnerability assessment](deploy-vulnerability-assessment-vm.md)** | As an alternative to Defender Vulnerability Management, Defender for Cloud integrates with the Qualys scanner to identify vulnerabilities. You don't need a Qualys license or account. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2.":::|
+| **[Qualys vulnerability assessment](deploy-vulnerability-assessment-vm.md)** | As an alternative to Defender Vulnerability Management, Defender for Cloud can deploy a Qualys scanner and display the findings. You don't need a Qualys license or account. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2.":::|
**[Adaptive application controls](adaptive-application-controls.md)** | Adaptive application controls define allowlists of known safe applications for machines. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 |:::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **Free data ingestion (500 MB) in workspaces** | Free data ingestion is available for [specific data types](faq-defender-for-servers.yml#what-data-types-are-included-in-the-daily-allowance-). Data ingestion is calculated per node, per reported workspace, and per day. It's available for every workspace that has a *Security* or *AntiMalware* solution installed. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: | | **[Just-in-time virtual machine access](just-in-time-access-overview.md)** | Just-in-time virtual machine access locks down machine ports to reduce the attack surface. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
defender-for-iot Concept Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-zero-trust.md
# Zero Trust and your OT networks
-[Zero Trust](/security/zero-trust/zero-trust-overview) is a security strategy for designing and implementing the following sets of security principles:
-
-|Verify explicitly |Use least privilege access |Assume breach |
-||||
-|Always authenticate and authorize based on all available data points. | Limit user access with Just-In-Time and Just-Enough-Access (JIT/JEA), risk-based adaptive policies, and data protection. | Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.
-
-<!--add include file here after publishing-->
Implement Zero Trust principles across your operational technology (OT) networks to help you with challenges, such as:
defender-for-iot Faqs General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-general.md
Microsoft Defender for IoT delivers comprehensive security across all your IoT/O
## Do I have to be an Azure customer?
-No, for the agentless version of Microsoft Defender for IoT, you do not need to be an Azure customer. However, if you want to send alerts to Microsoft Sentinel; provision network sensors and monitor their health from the cloud; and benefit from automatic software and threat intelligence updates, you will need to connect the sensor to Azure and Defender for IoT. For more information, see [Sensor connection methods](architecture-connections.md).
+You must be an Azure customer to use Microsoft Defender for IoT. However, OT sensors installed for air-gapped networks can be managed locally, and don't need to connect to the cloud.
-For the agent-based version of Microsoft Defender for IoT, you must be an Azure customer.
+For more information, see [Defender for IoT subscription billing](billing.md).
## What happens when the internet connection stops working?
To learn more about how to get started with Defender for IoT, see the following
- Read the Defender for IoT [overview](overview.md) - [Get started with Defender for IoT](getting-started.md) - [OT Networks frequently asked questions](faqs-ot.md)-- [Enterprise IoT networks frequently asked questions](faqs-eiot.md)
+- [Enterprise IoT networks frequently asked questions](faqs-eiot.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until | | - | | -- | - | | **22.3** | | | |
+| 22.3.7 | 03/2023 | Patch | 02/2024 |
+| 22.3.6 | 03/2023 | Patch | 02/2024 |
| 22.3.5 | 01/2023 | Patch | 12/2023 | | 22.3.4 | 01/2023 | Major | 12/2023 | | **22.2** | | | |
To understand whether a feature is supported in your sensor version, check the r
## Versions 22.3.x
+### 22.3.6 / 22.3.7
+
+<a name=22.3.7></a>
+
+**Release date**: 03/2023
+
+**Supported until**: 02/2024
+
+Version 22.3.7 includes the same features as 22.3.6. If you have version 22.3.6 installed, we strongly recommend that you update to version 22.3.7, which also includes important bug fixes.
+
+- [Support for transient devices](device-inventory.md#supported-devices)
+- [Auto-resolved notifications](how-to-work-with-the-sensor-device-map.md#device-notification-responses)
+- [Device data retention updated to 90 days](references-data-retention.md#device-data-retention-periods)
+- [Merging](how-to-investigate-sensor-detections-in-a-device-inventory.md#merge-devices) and [deleting](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) devices on OT sensors now include confirmation messages when the action has completed
+- Support for [deleting multiple devices](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) on OT sensors
+- An enhanced [editing device details](how-to-investigate-sensor-detections-in-a-device-inventory.md#edit-device-details) process on the OT sensor, using an **Edit** button in the toolbar at the top of the page
+- [Enhanced UI on the OT sensor for uploading an SSL/TLS certificate](how-to-deploy-certificates.md#deploy-ssltls-certificates-on-ot-appliances)
+- [Activation files for locally-managed sensors no longer expire](how-to-manage-individual-sensors.md#upload-a-new-activation-file)
+- Severity for all [**Suspicion of Malicious Activity**](alert-engine-messages.md#malware-engine-alerts) alerts is now **Critical**
+- [Allow internet connections on an OT network in bulk](how-to-accelerate-alert-incident-response.md#allow-internet-connections-on-an-ot-network)
++ ### 22.3.5 **Release date**: 01/2023
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Title: What's new in Microsoft Defender for IoT description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 02/22/2023 Last updated : 03/14/2023
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-| **OT networks** | **Cloud features**: - [New Microsoft Sentinel incident experience for Defender for IoT](#new-microsoft-sentinel-incident-experience-for-defender-for-iot) |
+| **OT networks** | **Sensor version 22.3.6**: <br>- [Support for transient devices](#support-for-transient-devices)<br>- [Learn DNS traffic by configuring allowlists](#learn-dns-traffic-by-configuring-allowlists)<br>- [Device data retention updates](#device-data-retention-updates)<br>- [UI enhancements when uploading SSL/TLS certificates](#ui-enhancements-when-uploading-ssltls-certificates)<br>- [Activation files expiration updates](#activation-files-expiration-updates)<br>- [UI enhancements for managing the device inventory](#ui-enhancements-for-managing-the-device-inventory)<br>- [Updated severity for all Suspicion of Malicious Activity alerts](#updated-severity-for-all-suspicion-of-malicious-activity-alerts)<br>- [Automatically resolved device notifications](#automatically-resolved-device-notifications) <br><br> **Cloud features**: <br>- [New Microsoft Sentinel incident experience for Defender for IoT](#new-microsoft-sentinel-incident-experience-for-defender-for-iot) |
+
+### Support for transient devices
+
+Defender for IoT now identifies *transient* devices as a unique device type that represents devices that were detected for only a short time. We recommend investigating these devices carefully to understand their impact on your network.
+
+For more information, see [Defender for IoT device inventory](device-inventory.md) and [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
+
+### Learn DNS traffic by configuring allowlists
+
+The *support* user can now decrease the number of unauthorized internet alerts by creating an allowlist of domain names on your OT sensor.
+
+When a DNS allowlist is configured, the sensor checks each unauthorized internet connectivity attempt against the list before triggering an alert. If the domain's FQDN is included in the allowlist, the sensor doesn't trigger the alert and allows the traffic automatically.
+
+All OT sensor users can view the list of allowed DNS domains and their resolved IP addresses in data mining reports.  
+
+For more information, see [Allow internet connections on an OT network](how-to-accelerate-alert-incident-response.md#allow-internet-connections-on-an-ot-network) and [Create data mining queries](how-to-create-data-mining-queries.md).
+
+
+### Device data retention updates
+
+The device data retention period on the OT sensor and on-premises management console has been updated to 90 days from the date of the **Last activity** value.
+
+For more information, see [Device data retention periods](references-data-retention.md#device-data-retention-periods).
+
+### UI enhancements when uploading SSL/TLS certificates
+
+The OT sensor version 22.3.6 has an enhanced **SSL/TLS Certificates** configuration page for defining your SSL/TLS certificate settings and deploying a CA-signed certificate.
+
+For more information, see [Deploy SSL/TLS certificates on OT appliances](how-to-deploy-certificates.md).
+
+### Activation files expiration updates
+
+Activation files on locally-managed OT sensors now remain activated for as long as your Defender for IoT plan is active on your Azure subscription, just like activation files on cloud-connected OT sensors.
+
+You'll only need to update your activation file if you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software) or switching the sensor management mode, such as moving from locally-managed to cloud-connected.
+
+For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md).
+
+### UI enhancements for managing the device inventory
+
+The following enhancements were added to the OT sensor's device inventory in version 22.3.6:
+
+- A smoother process for [editing device details](how-to-investigate-sensor-detections-in-a-device-inventory.md#edit-device-details) on the OT sensor. Edit device details directly from the device inventory page on the OT sensor console using the new **Edit** button in the toolbar at the top of the page.
+- The OT sensor now supports [deleting multiple devices](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) simultaneously.
+- The procedures for [merging](how-to-investigate-sensor-detections-in-a-device-inventory.md#merge-devices) and [deleting](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) devices now include confirmation messages that appear when the action has completed.
+
+For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
+
+### Updated severity for all Suspicion of Malicious Activity alerts
+
+All alerts with the **Suspicion of Malicious Activity** category now have a severity of **Critical**.
+
+For more information, see [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts).
+
+
+### Automatically resolved device notifications
+
+Starting in version 22.3.6, selected notifications on the OT sensor's **Device map** page are now automatically resolved if they aren't dismissed or otherwise handled within 14 days.
+
+After updating your sensor version, the **Inactive devices** and **New OT devices** notifications no longer appear. While any **Inactive devices** notifications that are left over from before the update are automatically dismissed, you may still have legacy **New OT devices** notifications to handle. Handle these notifications as needed to remove them from your sensor.
+
+For more information, see [Manage device notifications](how-to-work-with-the-sensor-device-map.md#manage-device-notifications).
### New Microsoft Sentinel incident experience for Defender for IoT
For more information, see [Tutorial: Investigate and detect threats for IoT devi
| **Enterprise IoT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) |

+### Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2

[Version 2.0.2](release-notes-sentinel.md#version-202) of the Microsoft Defender for IoT solution is now available in the [Microsoft Sentinel content hub](../../sentinel/sentinel-solutions-catalog.md), with improvements in analytics rules for incident creation, an enhanced incident details page, and performance improvements for analytics rule queries.
deployment-environments How To Configure Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-use-cli.md
- Title: Configure Deployment Environments by using the Azure CLI extension-
-description: Learn how to set up and use the Azure Deployment Environments Preview Azure CLI extension to configure the Deployment Environments service.
---- Previously updated : 10/26/2022---
-# Configure Azure Deployment Environments by using the Azure CLI extension
-
-This article shows you how to use the Azure Deployment Environments Preview Azure CLI extension to configure a Deployment Environments service. In Deployment Environments, you'll use Deployment Environments Azure CLI extension to create and work with [environments](./concept-environments-key-concepts.md#environments).
-
-> [!IMPORTANT]
-> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Setup
-
-1. Install the Deployment Environments Azure CLI extension:
-
- 1. [Download and install the Azure CLI](/cli/azure/install-azure-cli).
- 1. Install the Deployment Environments AZ CLI extension:
-
- - **Automated installation**
-
- To install, execute the script *https://aka.ms/DevCenter/Install-DevCenterCli.ps1* directly in PowerShell:
-
- ```powershell
- iex "& { $(irm https://aka.ms/DevCenter/Install-DevCenterCli.ps1 ) }"
- ```
-
- Any existing dev center extension is uninstalled and the latest version is installed.
-
- - **Manual installation**
-
- In the Azure CLI, run the following command:
-
- ```azurecli
- az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-0.1.0-py3-none-any.whl
- ```
-
-1. Sign in to the Azure CLI:
-
- ```azurecli
- az login
- ```
-
-1. Set the default subscription to the subscription you'll use to create your specific Deployment Environment resources:
-
- ```azurecli
- az account set --subscription {subscriptionId}
- ```
-
-## Commands
-
-### Create a new resource group
-
-```azurecli
-az group create -l <region-name> -n <resource-group-name>
-```
-
-Optionally, set defaults so that you don't need to pass the argument into each command:
-
-```azurecli
-az configure --defaults group=<resource-group-name>
-```
-
-### Get help for a command
-
-```azurecli
-az devcenter admin <command> --help
-```
-
-```azurecli
-az devcenter dev <command> --help
-```
-
-**Dev centers**
-
-### Create a dev center with an associated user-assigned identity
-
-```azurecli
-az devcenter admin devcenter create --identity-type "UserAssigned" --user-assigned-identity \
- "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/identityGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testidentity1" \
- --location <location-name> -g <resource-group-name> -n <name>
-```
-
-### Create a dev center with an associated system-assigned identity
-
-```azurecli
-az devcenter admin devcenter create --location <location-name> -g <resource-group-name> -n <name> \
- --identity-type "SystemAssigned"
-```
-
-### List dev centers (in the specified resource group)
-
-```azurecli
-az devcenter admin devcenter list -g <resource-group-name>
-```
-
-### List dev centers (in the selected subscription if resource group isn't specified or configured in defaults)
-
-```azurecli
-az devcenter admin devcenter list --output table
-```
-
-### Get a specific dev center
-
-```azurecli
-az devcenter admin devcenter show -g <resource-group-name> --name <name>
-```
-
-### Delete a dev center
-
-```azurecli
-az devcenter admin devcenter delete -g <resource-group-name> --name <name>
-```
-
-### Force-delete a dev center
-
-```azurecli
-az devcenter admin devcenter delete -g <resource-group-name> --name <name> --yes
-```
-
-**Environment types**
-
-### Create an environment type
-
-```azurecli
-az devcenter admin environment-type create --dev-center-name <devcenter-name> -g <resource-group-name> --name <name>
-```
-
-### List environment types by dev center
-
-```azurecli
-az devcenter admin environment-type list --dev-center-name <devcenter-name> --resource-group <resource-group-name>
-```
-
-### List environment types by project
-
-```azurecli
-az devcenter admin environment-type list --project-name <project-name> --resource-group <resource-group-name>
-```
-
-### Delete an environment type
-
-```azurecli
-az devcenter admin environment-type delete --dev-center-name <devcenter-name> --name "{environmentTypeName}" \
- --resource-group <resource-group-name>
-```
-
-### List environment types by dev center and project for developers
-
-```azurecli
-az devcenter dev environment-type list --dev-center <devcenter-name> --project-name <project-name>
-```
-
-**Project environment types**
-
-### Create project environment types
-
-```azurecli
-az devcenter admin project-environment-type create --description "Developer/Testing environment" --dev-center-name \
- <devcenter-name> --name "{environmentTypeName}" --resource-group <resource-group-name> \
- --deployment-target-id "/subscriptions/00000000-0000-0000-0000-000000000000" \
- --status Enabled --type SystemAssigned
-```
-
-### List project environment types by dev center
-
-```azurecli
-az devcenter admin project-environment-type list --dev-center-name <devcenter-name> \
- --resource-group <resource-group-name>
-```
-
-### List project environment types by project
-
-```azurecli
-az devcenter admin project-environment-type list --project-name <project-name> --resource-group <resource-group-name>
-```
-
-### Delete project environment types
-
-```azurecli
-az devcenter admin project-environment-type delete --project-name <project-name> \
- --environment-type-name "{environmentTypeName}" --resource-group <resource-group-name>
-```
-
-#### List allowed project environment types
-
-```azurecli
-az devcenter admin project-allowed-environment-type list --project-name <project-name> \
- --resource-group <resource-group-name>
-```
-
-**Catalogs**
-
-### Create a catalog that uses a GitHub repository
-
-```azurecli
-az devcenter admin catalog create --git-hub secret-identifier="https://<key-vault-name>.azure-int.net/secrets/<secret-name>" uri=<git-clone-uri> branch=<git-branch> -g <resource-group-name> --name <name> --dev-center-name <devcenter-name>
-```
-
-### Create a catalog that uses an Azure DevOps repository
-
-```azurecli
-az devcenter admin catalog create --ado-git secret-identifier="https://<key-vault-name>.azure-int.net/secrets/<secret-name>" uri=<git-clone-uri> branch=<git-branch> -g <resource-group-name> --name <name> --dev-center-name <devcenter-name>
-```
-
-### Sync a catalog
-
-```azurecli
-az devcenter admin catalog sync --name <name> --dev-center-name <devcenter-name> -g <resource-group-name>
-```
-
-### List catalogs in a dev center
-
-```azurecli
-az devcenter admin catalog list -g <resource-group-name> --dev-center-name <devcenter-name>
-```
-
-### Delete a catalog
-
-```azurecli
-az devcenter admin catalog delete -g <resource-group-name> --dev-center-name <devcenter-name> -n <name>
-```
-
-**Catalog items**
-
-### List catalog items that are available in a project
-
-```azurecli
-az devcenter dev catalog-item list --dev-center-name <devcenter-name> --project-name <name>
-```
-
-**Projects**
-
-### Create a project
-
-```azurecli
-az devcenter admin project create -g <resource-group-name> -n <project-name> --dev-center-id <devcenter-resource-id>
-```
-
-### List projects (in the specified resource group)
-
-```azurecli
-az devcenter admin project list -g <resource-group-name>
-```
-
-### List projects (in the selected subscription if resource group isn't specified or configured in defaults)
-
-```azurecli
-az graph query -q "Resources | where type =~ 'microsoft.devcenter/projects' | project id, name"
-```
-
-### Delete a project
-
-```azurecli
-az devcenter admin project delete -g <resource-group-name> --name <project-name>
-```
-
-**Environments**
-
-### Create an environment
-
-```azurecli
-az devcenter dev environment create --dev-center-name <devcenter-name> \
- --project-name <project-name> -n <name> --environment-type <environment-type-name> \
- --catalog-item-name <catalog-item-name> --catalog-name <catalog-name> \
- --parameters <deployment-parameters-json-string>
-```
-
-### Deploy an environment
-
-```azurecli
-az devcenter dev environment deploy-action --action-id "deploy" --dev-center <devcenter-name> \
- -g <resource-group-name> --project-name <project-name> -n <name> --parameters <parameters-json-string>
-```
-
-### List the environments in a project
-
-```azurecli
-az devcenter dev environment list --dev-center <devcenter-name> --project-name <project-name>
-```
-
-#### Delete an environment
-
-```azurecli
-az devcenter dev environment delete --dev-center <devcenter-name> --project-name <project-name> -n <name> --user-id "me"
-```
deployment-environments How To Install Devcenter Cli Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-install-devcenter-cli-extension.md
+
+ Title: Install the devcenter Azure CLI extension
+
+description: Learn how to install the Azure CLI and the Azure Deployment Environments Preview CLI extension so you can create Deployment Environments resources from the command line.
+++++ Last updated : 03/19/2023
+Customer intent: As a dev infra admin, I want to install the Deployment Environments CLI extension so that I can create Deployment Environments resources from the command line.
++
+# Azure Deployment Environments Preview Azure CLI extension
+
+In addition to the Azure admin portal and the developer portal, you can use the Deployment Environments Azure CLI extension to create resources. Azure Deployment Environments and Microsoft Dev Box use the same Azure CLI extension, which is called `devcenter`.
+
+## Install the Deployment Environments CLI extension
+
+To install the Deployment Environments Azure CLI extension, you first need to install the Azure CLI. The following steps show you how to install the Azure CLI, then the Deployment Environments CLI extension.
+
+1. Download and install the [Azure CLI](/cli/azure/install-azure-cli).
+
+1. Install the Deployment Environments CLI extension:
+
+ ```azurecli
+ az extension add --name devcenter
+ ```
+
+1. Check that the `devcenter` extension is installed:
+
+ ```azurecli
+ az extension list
+ ```
+
+### Update the Deployment Environments CLI extension
+
+You can update the Deployment Environments CLI extension if you already have it installed.
+
+To update the installed version of the extension:
+
+```azurecli
+az extension update --name devcenter
+```
+### Remove the Deployment Environments CLI extension
+
+To remove the extension, use the following command:
+
+```azurecli
+az extension remove --name devcenter
+```
+
+## Get started with the Deployment Environments CLI extension
+
+You might find the following commands useful as you work with the Deployment Environments CLI extension.
+
+1. Sign in to Azure CLI with your work account.
+
+ ```azurecli
+ az login
+ ```
+
+1. Set your default subscription to the subscription where you're creating your specific Deployment Environments resources.
+
+ ```azurecli
+ az account set --subscription {subscriptionId}
+ ```
+
+1. Set default resource group. Setting a default resource group means you don't need to specify the resource group for each command.
+
+ ```azurecli
+ az configure --defaults group={resourceGroupName}
+ ```
+
+1. Get help for a command:
+
+ ```azurecli
+ az devcenter admin --help
+ ```
+
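+1. Verify that the extension works end to end. For example, list the dev centers you can access. This is a minimal check; it assumes your account has at least read access to a dev center in the subscription:
+
+ ```azurecli
+ az devcenter admin devcenter list --output table
+ ```
+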
+## Next steps
+
+For complete command listings, refer to the [Microsoft Deployment Environments and Azure Deployment Environments Azure CLI documentation](https://aka.ms/CLI-reference).
dev-box Cli Reference Subset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/cli-reference-subset.md
- Title: Microsoft Dev Box Preview Azure CLI Reference-
-description: This article contains descriptions and definitions for a subset of the Dev Box Azure CLI extension.
----- Previously updated : 10/12/2022-
-# Microsoft Dev Box Preview Azure CLI reference
-This article contains descriptions and definitions for a subset of the Microsoft Dev Box Preview CLI extension.
-
-> [!NOTE]
-> Microsoft Dev Box is currently in public preview. Features and commands may change. If you need additional assistance, contact the Dev Box team by using [Report a problem](https://aka.ms/devbox/report).
-
-## Prerequisites
-Install the Azure CLI and the Dev Box CLI extension as described here: [Microsoft Dev Box CLI](how-to-install-dev-box-cli.md)
-## Commands
-
-* [Azure Compute Gallery](#azure-compute-gallery)
-* [DevCenter](#devcenter)
-* [Project](#project)
-* [Network Connection](#network-connection)
-* [Dev Box Definition](#dev-box-definition)
-* [Dev Box Pool](#dev-box-pool)
-* [Dev Boxes](#dev-boxes)
-
-### Azure Compute Gallery
-
-#### Create an image definition that meets all requirements
-
-```azurecli
-az sig image-definition create --resource-group {resourceGroupName} \
- --gallery-name {galleryName} --gallery-image-definition {definitionName} \
- --publisher {publisherName} --offer {offerName} --sku {skuName} \
- --os-type windows --os-state Generalized \
- --hyper-v-generation v2 --features SecurityType=TrustedLaunch
-```
-
-#### Attach a Gallery to the DevCenter
-
-```azurecli
-az devcenter admin gallery create -g demo-rg \
- --dev-center-name contoso-devcenter -n SharedGallery \
- --gallery-resource-id "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/galleries/{computeGalleryName}"
-```
-
-### DevCenter
-
-#### Create a DevCenter
-
-```azurecli
-az devcenter admin devcenter create -g demo-rg \
- -n contoso-devcenter --identity-type UserAssigned \
- --user-assigned-identity "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{managedIdentityName}" \
- --location {regionName}
-```
-
-### Project
-
-#### Create a Project
-
-```azurecli
-az devcenter admin project create -g demo-rg \
- -n ContosoProject --description "project description" \
- --devcenter-id /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevCenter/devcenters/{devCenterName}
-```
-
-#### Delete a Project
-
-```azurecli
-az devcenter admin project delete \
- -g {resourceGroupName} --project {projectName}
-```
-
-### Network Connection
-
-#### Create a native AADJ Network Connection
-
-```azurecli
-az devcenter admin network-connection create --location "centralus" \
- --domain-join-type "AzureADJoin" \
- --subnet-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ExampleRG/providers/Microsoft.Network/virtualNetworks/ExampleVNet/subnets/default" \
- --name "{networkConnectionName}" --resource-group "rg1"
-```
-
-#### Create a hybrid AADJ Network Connection
-
-```azurecli
-az devcenter admin network-connection create --location "centralus" \
- --domain-join-type "HybridAzureADJoin" --domain-name "mydomaincontroller.local" \
- --domain-password "Password value for user" --domain-username "testuser@mydomaincontroller.local" \
- --subnet-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ExampleRG/providers/Microsoft.Network/virtualNetworks/ExampleVNet/subnets/default" \
- --name "{networkConnectionName}" --resource-group "rg1"
-```
-
-#### Attach a Network Connection to the DevCenter
-
-```azurecli
-az devcenter admin attached-network create --attached-network-connection-name westus3network \
- --dev-center-name contoso-devcenter -g demo-rg \
- --network-connection-id /subscriptions/f141e9f2-4778-45a4-9aa0-8b31e6469454/resourceGroups/demo-rg/providers/Microsoft.DevCenter/networkConnections/netset99
-```
-
-### Dev Box Definition
-
-#### List Dev Box Definitions in a DevCenter
-
-```azurecli
-az devcenter admin devbox-definition list \
- --dev-center-name "Contoso" --resource-group "rg1"
-```
-
-#### List skus available in your subscription
-
-```azurecli
-az devcenter admin sku list
-```
-#### Create a Dev Box Definition with a marketplace image
-
-```azurecli
-az devcenter admin devbox-definition create -g demo-rg \
- --dev-center-name contoso-devcenter -n BaseImageDefinition \
- --image-reference id="/subscriptions/{subscriptionId}/resourceGroups/demo-rg/providers/Microsoft.DevCenter/devcenters/contoso-devcenter/galleries/Default/images/MicrosoftWindowsDesktop_windows-ent-cpc_win11-21h2-ent-cpc-m365" \
- --sku name="general_a_8c32gb_v1"
-```
-
-#### Create a Dev Box Definition with a custom image
-
-```azurecli
-az devcenter admin devbox-definition create -g demo-rg \
- --dev-center-name contoso-devcenter -n CustomDefinition \
- --image-reference id="/subscriptions/{subscriptionId}/resourceGroups/demo-rg/providers/Microsoft.DevCenter/devcenters/contoso-devcenter/galleries/SharedGallery/images/CustomImageName" \
- --os-storage-type "ssd_1024gb" --sku name=general_a_8c32gb_v1
-```
-
-### Dev Box Pool
-
-#### Create a Pool
-
-```azurecli
-az devcenter admin pool create -g demo-rg \
- --project-name ContosoProject -n MarketplacePool \
- --devbox-definition-name Definition --network-connection-name westus3network \
- --license-type Windows_Client --local-administrator Enabled
-```
-
-#### Get Pool
-
-```azurecli
-az devcenter admin pool show --resource-group "{resourceGroupName}" \
- --project-name {projectName} --name "{poolName}"
-```
-
-#### List Pools
-
-```azurecli
-az devcenter admin pool list --resource-group "{resourceGroupName}" \
- --project-name {projectName}
-```
-
-#### Update Pool
-
-Update Network Connection
-
-```azurecli
-az devcenter admin pool update \
- --resource-group "{resourceGroupName}" --project-name {projectName} \
- --name "{poolName}" --network-connection-name {networkConnectionName}
-```
-
-Update Dev Box Definition
-
-```azurecli
-az devcenter admin pool update \
- --resource-group "{resourceGroupName}" --project-name {projectName} \
- --name "{poolName}" --devbox-definition-name {devBoxDefinitionName}
-```
-
-#### Delete Pool
-
-```azurecli
-az devcenter admin pool delete \
- --resource-group "{resourceGroupName}" --project-name "{projectName}" \
- --name "{poolName}"
-```
-
-### Dev Boxes
-
-#### List available Projects
-
-```azurecli
-az devcenter dev project list \
- --devcenter {devCenterName}
-```
-
-#### List Pools in a Project
-
-```azurecli
-az devcenter dev pool list \
- --devcenter {devCenterName} --project-name {ProjectName}
-```
-
-#### Create a dev box
-
-```azurecli
-az devcenter dev dev-box create \
- --devcenter {devCenterName} --project-name {projectName} \
- --pool-name {poolName} -n {devBoxName}
-```
-
-#### Get web connection URL for a dev box
-
-```azurecli
-az devcenter dev dev-box show-remote-connection \
- --devcenter {devCenterName} --project-name {projectName} \
- --user-id "me" -n {devBoxName}
-```
-
-#### List your Dev Boxes
-
-```azurecli
-az devcenter dev dev-box list --devcenter {devCenterName}
-```
-
-#### View details of a Dev Box
-
-```azurecli
-az devcenter dev dev-box show \
- --devcenter {devCenterName} --project-name {projectName} -n {devBoxName}
-```
-
-#### Stop a Dev Box
-
-```azurecli
-az devcenter dev dev-box stop \
- --devcenter {devCenterName} --project-name {projectName} \
- --user-id "me" -n {devBoxName}
-```
-
-#### Start a Dev Box
-
-```azurecli
-az devcenter dev dev-box start \
- --devcenter {devCenterName} --project-name {projectName} \
- --user-id "me" -n {devBoxName}
-```
-
-## Next steps
-
-Learn how to install the Azure CLI and the Dev Box CLI extension at:
-
-- [Microsoft Dev Box CLI](./how-to-install-dev-box-cli.md)
dev-box How To Install Dev Box Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-install-dev-box-cli.md
Previously updated : 10/12/2022 Last updated : 03/19/2023 Customer intent: As a dev infra admin, I want to install the Dev Box CLI extension so that I can create Dev Box resources from the command line.
-# Microsoft Dev Box Preview CLI
+# Microsoft Dev Box Preview Azure CLI extension
-In addition to the Azure admin portal and the Dev Box user portal, you can use Dev Box's Azure CLI Extension to create resources.
+In addition to the Azure admin portal and the developer portal, you can use the Dev Box Azure CLI extension to create resources. Microsoft Dev Box and Azure Deployment Environments use the same Azure CLI extension, which is called `devcenter`.
## Install the Dev Box CLI extension
-1. Download and install the [Azure CLI](/cli/azure/install-azure-cli).
-
-1. Install the Dev Box Azure CLI extension:
- #### [Install by using a PowerShell script](#tab/Option1/)
-
- Using <https://aka.ms/DevCenter/Install-DevCenterCli.ps1> uninstalls any existing Dev Box CLI extension and installs the latest version.
-
- ```azurepowershell
- write-host "Setting Up DevCenter CLI"
-
- # Get latest version
- $indexResponse = Invoke-WebRequest -Method Get -Uri "https://fidalgosetup.blob.core.windows.net/cli-extensions/index.json" -UseBasicParsing
- $index = $indexResponse.Content | ConvertFrom-Json
- $versions = $index.extensions.devcenter
- $latestVersion = $versions[0]
- if ($latestVersion -eq $null) {
- throw "Could not find a valid version of the CLI."
- }
-
- # remove existing
- write-host "Attempting to remove existing CLI version (if any)"
- az extension remove -n devcenter
-
- # Install new version
- $downloadUrl = $latestVersion.downloadUrl
- write-host "Installing from url " $downloadUrl
- az extension add --source=$downloadUrl -y
- ```
-
- To execute the script directly in PowerShell:
+To install the Dev Box Azure CLI extension, you first need to install the Azure CLI. The following steps show you how to install the Azure CLI, then the Dev Box CLI extension.
- ```azurecli
- iex "& { $(irm https://aka.ms/DevCenter/Install-DevCenterCli.ps1 ) }"
- ```
-
- The final line of the script enables you to specify the location of the source file to download. If you want to access the file from a different location, update 'source' in the script to point to the downloaded file in the new location.
-
- #### [Install manually](#tab/Option2/)
-
- Remove existing extension if one exists:
-
- ```azurecli
- az extension remove --name devcenter
- ```
+1. Download and install the [Azure CLI](/cli/azure/install-azure-cli).
- Manually run this command in the CLI:
+1. Install the Dev Box CLI extension:
+
+ ```azurecli
+ az extension add --name devcenter
+ ```
+
+1. Check that the `devcenter` extension is installed:
+
+ ```azurecli
+ az extension list
+ ```
+
+### Update the Dev Box CLI extension
+You can update the Dev Box CLI extension if you already have it installed.
- ```azurecli
- az extension add --source https://fidalgosetup.blob.core.windows.net/cli-extensions/devcenter-0.1.0-py3-none-any.whl
- ```
-
-1. Verify that the Dev Box CLI extension installed successfully by using the following command:
+To update the installed version of the extension:
+
+```azurecli
+az extension update --name devcenter
+```
+
+### Remove the Dev Box CLI extension
- ```azurecli
- az extension list
- ```
+To remove the extension, use the following command:
+
+```azurecli
+az extension remove --name devcenter
+```
- You will see the devcenter extension listed:
- :::image type="content" source="media/how-to-install-dev-box-cli/dev-box-cli-installed.png" alt-text="Screenshot showing the dev box extension listed.":::
+## Get started with the Dev Box CLI extension
-## Configure your Dev Box CLI
+You might find the following commands useful as you work with the Dev Box CLI extension.
-1. Log in to Azure CLI with your work account.
+1. Sign in to Azure CLI with your work account.
 ```azurecli
 az login
 ```
-1. Set your default subscription to the sub where you'll be creating your specific Dev Box resources
+1. Set your default subscription to the subscription where you're creating your specific Dev Box resources.
 ```azurecli
 az account set --subscription {subscriptionId}
 ```
-1. Set default resource group (which means no need to pass into each command)
+1. Set default resource group. Setting a default resource group means you don't need to specify the resource group for each command.
 ```azurecli
 az configure --defaults group={resourceGroupName}
 ```
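
1. Verify that the extension works end to end. For example, list the dev centers you can access. This is a minimal check; it assumes your account has at least read access to a dev center in the subscription:

 ```azurecli
 az devcenter admin devcenter list --output table
 ```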
In addition to the Azure admin portal and the Dev Box user portal, you can use D
## Next steps
-Discover the Dev Box commands you can use at:
-
-- [Microsoft Dev Box Preview Azure CLI reference](./cli-reference-subset.md)
+For complete command listings, refer to the [Microsoft Dev Box and Azure Deployment Environments Azure CLI documentation](https://aka.ms/CLI-reference).
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
implementations:
- **\[Preview\]: Linux machines should meet requirements for the Azure compute security baseline** Azure Policy guest configuration definition
-- **Vulnerabilities in security configuration on your machines should be remediated** in Azure Security Center
+- **Vulnerabilities in security configuration on your machines should be remediated** in Microsoft Defender for Cloud
For more information, see [Azure Policy guest configuration](../concepts/guest-configuration.md) and [Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
hdinsight Apache Troubleshoot Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-troubleshoot-spark.md
Title: Troubleshoot Apache Spark in Azure HDInsight
description: Get answers to common questions about working with Apache Spark and Azure HDInsight. Previously updated : 08/22/2019 Last updated : 03/20/2023
Learn about the top issues and their resolutions when working with Apache Spark
Spark configuration values can be tuned to help avoid an Apache Spark application `OutOfMemoryError` exception. The following steps show default Spark configuration values in Azure HDInsight:
-1. Log in to Ambari at `https://CLUSTERNAME.azurehdidnsight.net` with your cluster credentials. The initial screen displays an overview dashboard. There are slight cosmetic differences between HDInsight 3.6 and 4.0.
+1. Log in to Ambari at `https://CLUSTERNAME.azurehdinsight.net` with your cluster credentials. The initial screen displays an overview dashboard. There might be slight cosmetic differences in the HDInsight 4.0 dashboard.
1. Navigate to **Spark2** > **Configs**.
Spark configuration values can be tuned to help avoid an Apache Spark application `
:::image type="content" source="./media/apache-troubleshoot-spark/apache-spark-ambari-config6c.png" alt-text="Enter a note about the changes you made" border="true":::
- You are notified if any configurations need attention. Note the items, and then select **Proceed Anyway**.
+ You're notified if any configurations need attention. Note the items, and then select **Proceed Anyway**.
:::image type="content" source="./media/apache-troubleshoot-spark/apache-spark-ambari-config6b.png" alt-text="Select Proceed Anyway" border="true":::
-1. Whenever a configuration is saved, you are prompted to restart the service. Select **Restart**.
+1. Whenever a configuration is saved, you're prompted to restart the service. Select **Restart**.
:::image type="content" source="./media/apache-troubleshoot-spark/apache-spark-ambari-config7a.png" alt-text="Select restart" border="true":::
Launch spark-shell by using a command similar to the following. Change the actua
spark-submit --master yarn-cluster --class com.microsoft.spark.application --num-executors 4 --executor-memory 4g --executor-cores 2 --driver-memory 8g --driver-cores 4 /home/user/spark/sparkapplication.jar ```
-### Additional reading
+### Extra reading
[Apache Spark job submission on HDInsight clusters](/archive/blogs/azuredatalake/spark-job-submission-on-hdinsight-101)
iot-develop Quickstart Send Telemetry Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-send-telemetry-iot-hub.md
As a next step, explore the following articles to learn more about building devi
> [!div class="nextstepaction"] > [Control a device connected to an IoT hub](../iot-hub/quickstart-control-device.md) > [!div class="nextstepaction"]
-> [Send telemetry to IoT Central](quickstart-send-telemetry-central.md)
-> [!div class="nextstepaction"]
-> [Connect an MXCHIP AZ3166 devkit to IoT Central](quickstart-devkit-mxchip-az3166.md)
+> [Build a device solution with IoT Hub](set-up-environment.md)
iot-hub-device-update Configure Access Control Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/configure-access-control-device-update.md
Title: Configure Access Control in Device Update for IoT Hub | Microsoft Docs
+ Title: Configure Access Control in Device Update for IoT Hub
description: Configure Access Control in Device Update for IoT Hub.
iot-hub-device-update Connected Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-configure.md
Title: Configure Microsoft Connected Cache for Device Update for Azure IoT Hub | Microsoft Docs
+ Title: Configure Microsoft Connected Cache for Device Update for Azure IoT Hub
description: Overview of Microsoft Connected Cache for Device Update for Azure IoT Hub Last updated 08/19/2022-+
iot-hub-device-update Connected Cache Disconnected Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-disconnected-device-update.md
Title: Disconnected device update using Microsoft Connected Cache | Microsoft Docs
+ Title: Disconnected device update using Microsoft Connected Cache
description: Understand support for disconnected device update using Microsoft Connected Cache Last updated 08/19/2022-+
iot-hub-device-update Connected Cache Industrial Iot Nested https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-industrial-iot-nested.md
Title: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration | Microsoft Docs
+ Title: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration
description: Microsoft Connected Cache within an Azure IoT Edge for Industrial IoT configuration tutorial Last updated 2/16/2021-+
iot-hub-device-update Connected Cache Nested Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-nested-level.md
Title: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy | Microsoft Docs
+ Title: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy
description: Microsoft Connected Cache two level nested Azure IoT Edge Gateway with outbound unauthenticated proxy tutorial Last updated 2/16/2021-+
iot-hub-device-update Connected Cache Single Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/connected-cache-single-level.md
Title: Microsoft Connected Cache preview deployment scenario samples | Microsoft Docs
+ Title: Microsoft Connected Cache preview deployment scenario samples
description: Microsoft Connected Cache preview deployment scenario samples tutorials Last updated 2/16/2021-+
iot-hub-device-update Device Update Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-overview.md
Title: Understand Device Update for Azure IoT Hub Agent| Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub Agent
description: Understand Device Update for Azure IoT Hub Agent. Last updated 9/12/2022-+
iot-hub-device-update Device Update Apt Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-apt-manifest.md
Title: Understand Device Update for Azure IoT Hub apt manifest | Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub apt manifest
description: Understand how Device Update for IoT Hub uses apt manifest for a package-based update. Last updated 2/17/2021-+
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
description: Get started with Device Update for Azure RTOS.
Last updated 3/18/2021-+
iot-hub-device-update Device Update Changelog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-changelog.md
description: Release notes and version history for Device Update for IoT Hub.
Last updated 02/22/2023-+
iot-hub-device-update Device Update Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-compliance.md
Title: Understand Device Update for Azure IoT Hub compliance | Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub compliance
description: Understand how Device Update for Azure IoT Hub measures device update compliance. Last updated 2/11/2021-+
iot-hub-device-update Device Update Configuration File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configuration-file.md
Title: Understand Device Update for Azure IoT Hub Configuration File| Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub Configuration File
description: Understand Device Update for Azure IoT Hub Configuration File. Last updated 08/27/2022-+
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md
Title: Understand Device Update for IoT Hub authentication and authorization | Microsoft Docs
+ Title: Understand Device Update for IoT Hub authentication and authorization
description: Understand how Device Update for IoT Hub uses Azure RBAC to provide authentication and authorization for users and service APIs. Last updated 10/21/2022-+
iot-hub-device-update Device Update Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-deployments.md
Title: Understand Device Update for Azure IoT Hub deployments | Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub deployments
description: Understand how updates are deployed. Last updated 12/07/2021-+
iot-hub-device-update Device Update Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-diagnostics.md
Title: Understand Device Update for Azure IoT Hub diagnostic features | Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub diagnostic features
description: Understand what diagnostic features Device Update for IoT Hub has, including deployment error codes in UX and remote log collection. Last updated 9/2/2022-+
iot-hub-device-update Device Update Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-error-codes.md
Title: Error codes for Device Update for Azure IoT Hub | Microsoft Docs
+ Title: Error codes for Device Update for Azure IoT Hub
description: This document provides a table of error codes for various Device Update components. Last updated 06/28/2022-+
iot-hub-device-update Device Update Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-groups.md
Title: Understand Device Update for Azure IoT Hub device groups | Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub device groups
description: Understand how device groups are used. Last updated 2/09/2021-+
iot-hub-device-update Device Update Multi Step Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-multi-step-updates.md
Title: Using multiple steps for Updates with Device Update for Azure IoT Hub| Microsoft Docs
+ Title: Using multiple steps for Updates with Device Update for Azure IoT Hub
description: Using multiple steps for Updates with Device Update for Azure IoT Hub Last updated 11/12/2021-+
iot-hub-device-update Device Update Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-networking.md
Title: Device Update for IoT Hub network requirements | Microsoft Docs
+ Title: Device Update for IoT Hub network requirements
description: Device Update for IoT Hub uses a variety of network ports for different purposes. Last updated 1/11/2021-+
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
Title: Understand how Device Update for IoT Hub uses IoT Plug and Play | Microsoft Docs
+ Title: Understand how Device Update for IoT Hub uses IoT Plug and Play
description: Device Update for IoT Hub uses IoT Plug and Play to discover and manage devices that are over-the-air update capable. Last updated 2/2/2023-+
iot-hub-device-update Device Update Proxy Update Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-proxy-update-troubleshooting.md
Title: Troubleshooting for importing proxy updates to Device Update for Azure IoT Hub | Microsoft Docs
+ Title: Troubleshooting for importing proxy updates to Device Update for Azure IoT Hub
description: This document provides troubleshooting steps for error messages that may occur when importing proxy update to Device Update for IoT Hub. Last updated 1/5/2022-+
iot-hub-device-update Device Update Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-proxy-updates.md
Title: Using Proxy Updates with Device Update for Azure IoT Hub| Microsoft Docs
+ Title: Using Proxy Updates with Device Update for Azure IoT Hub
description: Using Proxy Updates with Device Update for Azure IoT Hub Last updated 11/12/2021-+
iot-hub-device-update Device Update Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-resources.md
Title: Understand Device Update for Azure IoT Hub resources | Microsoft Docs
+ Title: Understand Device Update for Azure IoT Hub resources
description: Understand Device Update for Azure IoT Hub resources Last updated 11/02/2022-+ - # Device update resources To use Device Update for IoT Hub, you need to create a Device Update account and instance.
iot-hub-device-update Device Update Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-security.md
Title: Security for Device Update for Azure IoT Hub | Microsoft Docs
+ Title: Security for Device Update for Azure IoT Hub
description: Understand how Device Update for IoT Hub ensures devices are updated securely. Last updated 08/19/2022-+
iot-hub-device-update Import Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-concepts.md
Title: Understand Device Update for IoT Hub importing | Microsoft Docs
+ Title: Understand Device Update for IoT Hub importing
description: Key concepts for importing a new update into Device Update for IoT Hub. Last updated 06/27/2022-+
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-schema.md
Title: Importing updates into Device Update for IoT Hub - import manifest schema | Microsoft Docs
+ Title: Importing updates into Device Update for IoT Hub - import manifest schema
description: Schema used to create the import manifest required to import updates into Device Update for IoT Hub. Last updated 09/9/2022-+
iot-hub-device-update Monitor Device Update Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/monitor-device-update-iot-hub.md
Title: Monitoring Device Update for IoT Hub
description: Start here to learn how to monitor Device Update for IoT Hub -+ Last updated 9/08/2022
iot-hub-device-update Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/network-security.md
Title: Understand Device Update for IoT Hub network security | Microsoft Docs
+ Title: Understand Device Update for IoT Hub network security
description: This article describes how to use service tags and private endpoints with Device Update for IoT Hub. Last updated 06/26/2022-+
iot-hub-device-update Update Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/update-manifest.md
Title: Device Update for IoT Hub update manifest | Microsoft Docs
+ Title: Device Update for IoT Hub update manifest
description: Learn how properties are sent from the Device Update service to the device during an update Last updated 08/19/2022-+
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-tls-support.md
After a successful TLS handshake, IoT Hub can authenticate a device using a symm
## Mutual TLS support
-Mutual TLS authentication ensures that the client _authenticates_ the server (IoT Hub) certificate and the server (IoT Hub) _authenticates_ the [X.509 client certificate or X.509 Thumbprint](tutorial-x509-introduction.md). _Authorization_ is performed by IoT Hub after _authentication_ is complete.
+Mutual TLS authentication ensures that the client _authenticates_ the server (IoT Hub) certificate and the server (IoT Hub) _authenticates_ the [X.509 client certificate or X.509 thumbprint](tutorial-x509-prove-possession.md). _Authorization_ is performed by IoT Hub after _authentication_ is complete.
For AMQP and MQTT protocols, IoT Hub requests a client certificate in the initial TLS handshake. If one is provided, IoT Hub _authenticates_ the client certificate and the client _authenticates_ the IoT Hub certificate. This process is called mutual TLS authentication. When IoT Hub receives an MQTT connect packet or an AMQP link opens, IoT Hub performs _authorization_ for the requesting client and determines if the client requires X.509 authentication. If mutual TLS authentication was completed and the client is authorized to connect as the device, it is allowed. However, if the client requires X.509 authentication and client authentication was not completed during the TLS handshake, then IoT Hub rejects the connection.
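
For example, before a device can present a client certificate in this handshake, its identity must be registered for X.509 authentication. The following Azure CLI sketch registers a device for X.509 thumbprint authentication. It's a minimal example, not the only option: it assumes the `azure-iot` CLI extension is installed, and the hub name, device ID, and thumbprint values are placeholders:

```azurecli
# Register a device identity that authenticates with X.509 certificate thumbprints.
az iot hub device-identity create --hub-name {iotHubName} --device-id {deviceId} \
  --auth-method x509_thumbprint \
  --primary-thumbprint {primaryThumbprint} --secondary-thumbprint {secondaryThumbprint}
```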
iot-hub Tutorial X509 Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-introduction.md
- Title: Tutorial - Use X.509 certificates with Azure IoT Hub | Microsoft Docs
-description: Tutorial - Use X.509 certificates with Azure IoT Hub
---- Previously updated : 01/09/2023--
-#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This introductory article helps me decide which subsequent articles to read for my scenario.
--
-# Tutorial: Use X.509 certificates to authenticate devices with Azure IoT Hub
-
-You can use X.509 certificates to authenticate devices to an Azure IoT Hub.
-
-This multi-part tutorial includes several articles that:
-- Show you how to create X.509 certificates and certificate chains using [OpenSSL](https://www.openssl.org/). OpenSSL is an open-source tool that is used broadly across the industry for cryptography and to create X.509 certificates.
-
-- Show you how to use utilities packaged with the Azure IoT SDKs that can help you quickly create test certificates to use with Azure IoT Hub. Many of these utilities wrap OpenSSL calls.
-
-- Provide instructions for how to authenticate a device with IoT Hub using a certificate chain.
-
-Depending on your familiarity with X.509 certificates and the stage of development of your IoT solution, one or more of the tutorials in this section may be helpful. This introductory article will help you choose the best path through the other articles in this tutorial for your scenario.
-
-## X.509 certificate concepts
-
-Before starting any of the articles in this tutorial, you should be familiar with X.509 certificates and X.509 certificate chains. The following articles can help bring you up to speed.
-- To understand X.509 certificate chains and how they're used with IoT Hub, see [X.509 CA certificates for IoT Hub](iot-hub-x509ca-concept.md). Make sure you have a clear understanding of this article before you proceed.
-
-- For an introduction to concepts that underlie the use of X.509 certificates, see [Understand public key cryptography and X.509 public key infrastructure](iot-hub-x509-certificate-concepts.md).
-
-- For a quick review of the fields that can be present in an X.509 certificate, see the [Certificate fields](reference-x509-certificates.md#certificate-fields) section of [Understand X.509 public key certificates](reference-x509-certificates.md).
-
-## X.509 certificate scenario paths
-
-Using a CA-signed certificate chain backed by a PKI to authenticate a device provides the best level of security for your devices:
-- In production, we recommend you get your X.509 CA certificates from a public root certificate authority. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. If you already have an X.509 CA certificate, and you know how to create and sign device certificates into a certificate chain, follow the instructions in [Tutorial: Upload and verify a CA certificate to IoT Hub](tutorial-x509-prove-possession.md) to upload your CA certificate to your IoT hub. Then, follow the instructions in [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to authenticate a device with your IoT hub.
-
-- For testing purposes, we recommend using OpenSSL to create an X.509 certificate chain. OpenSSL is used widely across the industry to work with X.509 certificates. You can follow the steps in [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md) to create a root CA and intermediate CA certificate with which to create and sign device certificates. The tutorial also shows how to upload and verify a CA certificate. Then, follow the instructions in [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to authenticate a device with your IoT hub.
-
-## Next steps
-
-To learn more about the fields that make up an X.509 certificate, see [X.509 certificates](reference-x509-certificates.md).
-
-If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles:
-
-* [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md)
-* If you want to use self-signed certificates for testing, see the [Create a self-signed certificate](reference-x509-certificates.md#create-a-self-signed-certificate) section of [X.509 certificates](reference-x509-certificates.md).
-
- >[!IMPORTANT]
- >We recommend that you use certificates signed by an issuing Certificate Authority (CA), even for testing purposes. Never use self-signed certificates in production.
-
-If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Upload and verify a CA certificate to IoT Hub](tutorial-x509-prove-possession.md).
lab-services How To Attach Detach Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery.md
Title: Attach or detach a compute gallery to a lab plan
+ Title: "Attach/detach a compute gallery to a lab plan"
-description: This article describes how to attach an Azure Compute Gallery to a lab in Azure Lab Services.
+description: This article describes how to attach or detach an Azure compute gallery to a lab plan in Azure Lab Services.
Last updated 03/01/2023
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-> [!NOTE]
-> If using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Attach or detach a shared image gallery to a lab account in Azure Lab Services](how-to-attach-detach-shared-image-gallery-1.md).
-
-This article shows you how to attach or detach an Azure Compute Gallery to a lab plan.
+This article shows how to attach or detach an Azure compute gallery to a lab plan. If you use a lab account, see how to [attach or detach a compute gallery to a lab account](how-to-attach-detach-shared-image-gallery-1.md).
> [!IMPORTANT]
-> Lab plan administrators must manually [replicate images](../virtual-machines/shared-image-galleries.md) to other regions in the compute gallery. Replicate an Azure Compute Gallery image to the same region as the lab plan to be shown in the list of virtual machine images during lab creation.
+> To show a virtual machine image in the list of images during lab creation, the compute gallery image must be replicated to the same region as the lab plan. You need to manually [replicate images](../virtual-machines/shared-image-galleries.md) to other regions in the compute gallery.
-Saving images to a compute gallery and replicating those images incurs additional cost. This cost is separate from the Azure Lab Services usage cost. For more information about Azure Compute Gallery pricing, see [Azure Compute Gallery ΓÇô Billing](../virtual-machines/azure-compute-gallery.md#billing).
+Saving images to a compute gallery and replicating those images incurs additional cost. This cost is separate from the Azure Lab Services usage cost. Learn more about [Azure Compute Gallery pricing](../virtual-machines/azure-compute-gallery.md#billing).
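+
+For example, to make an existing image version available in your lab plan's region, you can add that region to the image version's target regions. The following Azure CLI sketch uses placeholder names and assumes the image version already exists; because `--target-regions` replaces the full list, include the existing regions as well. Exact flag behavior can vary by CLI version:
+
+```azurecli
+# Replicate image version 1.0.0 to the lab plan's region (for example, eastus).
+az sig image-version update \
+  --resource-group {galleryResourceGroup} \
+  --gallery-name {galleryName} \
+  --gallery-image-definition {imageDefinitionName} \
+  --gallery-image-version 1.0.0 \
+  --target-regions "westus2=1" "eastus=1"
+```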
## Prerequisites

- To change settings for the lab plan, your Azure account needs the [Owner](/azure/role-based-access-control/built-in-roles#owner), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Lab Services Contributor](/azure/role-based-access-control/built-in-roles#lab-services-contributor) role on the lab plan. Learn more about the [Azure Lab Services built-in roles](./administrator-guide.md#rbac-roles).
-- To attach an Azure compute gallery to a lab plan, your Azure account needs the following permissions:
+- To attach an Azure compute gallery to a lab plan, your Azure account needs to have the following permissions:
+
+ | Azure role | Scope | Note |
+ | - | -- | - |
+ | [Owner](/azure/role-based-access-control/built-in-roles#owner) | Azure compute gallery | If you attach an existing compute gallery. |
+ | [Owner](/azure/role-based-access-control/built-in-roles#owner) | Resource group | If you create a new compute gallery. |
- - [Owner](/azure/role-based-access-control/built-in-roles#owner) role on the Azure compute gallery resource, if you're using an existing compute gallery
- - [Owner](/azure/role-based-access-control/built-in-roles#owner) role on the resource group, if you're creating a new compute gallery
+ Learn how to [assign an Azure role in Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/role-assignments-steps#step-5-assign-role).
## Scenarios
When you [save a template image of a lab](how-to-use-shared-image-gallery.md#sav
A lab creator can create a template VM based on both generalized and specialized images in Azure Lab Services. > [!IMPORTANT]
-> While using an Azure Compute Gallery, Azure Lab Services supports only images that use less than 128 GB of disk space on their OS drive. Images with more than 128 GB of disk space or multiple disks won't be shown in the list of virtual machine images during lab creation.
+> While using an Azure compute gallery, Azure Lab Services supports only images that use less than 128 GB of disk space on their OS drive. Images with more than 128 GB of disk space or multiple disks won't be shown in the list of virtual machine images during lab creation.
-## Create and attach a compute gallery
+## Attach a new compute gallery to a lab plan
1. Open your lab plan in the [Azure portal](https://portal.azure.com).
In the bottom pane, you see images in the compute gallery. There are no images i
:::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/attached-gallery-empty-list.png" alt-text="Screenshot of the attached image gallery list of images." lightbox="./media/how-to-attach-detach-shared-image-gallery/attached-gallery-empty-list.png":::
-## Attach an existing compute gallery
+## Attach an existing compute gallery to a lab plan
+
+If you already have an Azure compute gallery, you can also attach it to your lab plan. To attach an existing compute gallery, you first need to grant the Azure Lab Services service principal permissions to the compute gallery. Next, you can attach the existing compute gallery to your lab plan.
+
+### Configure compute gallery permissions
+
+The Azure Lab Services service principal needs to have the Owner Azure RBAC role on the Azure compute gallery. There are two Azure Lab Services service principals:
+
+| Name | Application ID | Description |
+| - | -- | - |
+| Azure Lab Services | c7bb12bf-0b39-4f7f-9171-f418ff39b76a | Service principal for Azure Lab Services lab plans (V2). |
+| Azure Lab Services | 1a14be2a-e903-4cec-99cf-b2e209259a0f | Service principal for Azure Lab Services lab accounts (V1). |
+
+To attach a compute gallery to a lab plan, assign the Owner role to the service principal with application ID `c7bb12bf-0b39-4f7f-9171-f418ff39b76a`.
+
+> [!NOTE]
+> When you add a role assignment in the Azure portal, the user interface shows the *object ID* of the service principal, which is different from the *application ID*. The object ID for a service principal can be different in each Azure subscription. You can find the service principal object ID in Azure Active Directory, based on its application ID. Learn more about [Service principal objects](/azure/active-directory/develop/app-objects-and-service-principals#service-principal-object).
+
+Follow these steps to grant permissions to the Azure Lab Services service principal by using the Azure CLI:
+
+1. Open [Azure Cloud Shell](https://shell.azure.com). Alternately, select the **Cloud Shell** button on the menu bar at the upper right in the [Azure portal](https://portal.azure.com).
+
+ Azure Cloud Shell is an interactive, authenticated, browser-accessible terminal for managing Azure resources. Learn how to get started with [Azure Cloud Shell](/azure/cloud-shell/quickstart).
+
+1. Enter the following commands in Cloud Shell:
+
+ 1. Retrieve the service principal object ID, based on the application ID:
+
+ ```azurecli-interactive
+ az ad sp show --id c7bb12bf-0b39-4f7f-9171-f418ff39b76a --query "id" -o tsv
+ ```
+
+ 1. Retrieve the ID of the compute gallery, based on the gallery name:
+
+ ```azurecli-interactive
+ az sig show --gallery-name <gallery-name> --resource-group <gallery-resource-group> --query id -o tsv
+ ```
+
+ Replace the text placeholders *`<gallery-name>`* and *`<gallery-resource-group>`* with the compute gallery name and the name of the resource group that contains the compute gallery. Make sure to remove the angle brackets when replacing the text.
+
+ 1. Assign the Owner role to the service principal on the compute gallery:
+
+ ```azurecli-interactive
+ az role assignment create --assignee-object-id <service-principal-object-id> --role Owner --scope <gallery-id>
+ ```
+
+ Replace the text placeholders *`<service-principal-object-id>`* and *`<gallery-id>`* with the outcomes of the previous commands.
+
+Learn more about how to [assign an Azure role in Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/role-assignments-steps#step-5-assign-role).
+
+### Attach the compute gallery
The following procedure shows you how to attach an existing compute gallery to a lab plan.
To detach a compute gallery from your lab, select **Detach** on the toolbar. Con
:::image type="content" source="./media/how-to-attach-detach-shared-image-gallery/attached-gallery-detach.png" alt-text="Screenshot of how to detach the compute gallery from the lab plan.":::
-Only one Azure compute gallery can be attached to a lab plan. To attach another compute gallery, follow the steps to [attach an existing compute gallery](#attach-an-existing-compute-gallery).
+Only one Azure compute gallery can be attached to a lab plan. To attach another compute gallery, follow the steps to [attach an existing compute gallery](#attach-an-existing-compute-gallery-to-a-lab-plan).
## Next steps
lab-services Lab Services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md
In this release, there are a few known issues:
- When using virtual network injection, use caution in making changes to the virtual network and subnet. Changes may cause the lab VMs to stop working. For example, deleting your virtual network will cause all the lab VMs to stop working. We plan to improve this experience in the future, but for now make sure to delete labs before deleting networks. - Moving lab plan and lab resources from one Azure region to another isn't supported.-- Azure Compute [resource provider must be registered](../azure-resource-manager/management/resource-providers-and-types.md) before Azure Lab Services can [create and attach an Azure Compute Gallery resource](how-to-attach-detach-shared-image-gallery.md#create-and-attach-a-compute-gallery).
+- Azure Compute [resource provider must be registered](../azure-resource-manager/management/resource-providers-and-types.md) before Azure Lab Services can [create and attach an Azure Compute Gallery resource](how-to-attach-detach-shared-image-gallery.md#attach-an-existing-compute-gallery-to-a-lab-plan).
### Lab plans replace lab accounts
machine-learning How To Network Isolation Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-isolation-planning.md
Azure Machine Learning requires private IPs; one IP per compute instance, comput
In this diagram, your main VNet requires the IPs for private endpoints. You can have hub-spoke VNets for multiple Azure Machine Learning workspaces with large address spaces. A downside of this architecture is that it doubles the number of private endpoints. ### Network policy enforcement
-You can use [built-in policies](/how-to-integrate-azure-policy.md) if you want to control network isolation parameters with self-service workspace and computing resources creation.
+You can use [built-in policies](how-to-integrate-azure-policy.md) if you want to control network isolation parameters with self-service workspace and computing resources creation.
### Other considerations
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
# Query & compare experiments and runs with MLflow
-Experiments and runs tracking information in Azure Machine Learning can be queried using MLflow. You don't need to install any specific SDK to manage what happens inside of a training job, creating a more seamless transition between local runs and the cloud by removing cloud-specific dependencies.
-
-> [!NOTE]
-> The Azure Machine Learning Python SDK v2 does not provide native logging or tracking capabilities. This applies not just for logging but also for querying the metrics logged. Instead, we recommend to use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure Machine Learning.
+Tracking information for experiments and runs in Azure Machine Learning can be queried using MLflow. You don't need to install any specific SDK to manage what happens inside of a training job, which creates a more seamless transition between local runs and the cloud by removing cloud-specific dependencies. In this article, you'll learn how to query and compare experiments and runs in your workspace using the MLflow SDK in Python with Azure Machine Learning.
MLflow allows you to:
-* Create, delete and search for experiments in a workspace.
-* Start, stop, cancel and query runs for experiments.
+* Create, query, delete and search for experiments in a workspace.
+* Query, delete, and search for runs in a workspace.
* Track and retrieve metrics, parameters, artifacts and models from runs.
-In this article, you'll learn how to manage experiments and runs in your workspace using Azure Machine Learning and MLflow SDK in Python.
-
-> [!IMPORTANT]
-> Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Using MLflow SDK in Azure Machine Learning
+See [Support matrix for querying runs and experiments in Azure Machine Learning](#support-matrix-for-querying-runs-and-experiments) for a detailed comparison between MLflow Open-Source and MLflow when connected to Azure Machine Learning.
-Use MLflow to query and manage all the experiments in Azure Machine Learning. The MLflow SDK has capabilities to query everything that happens inside of a training job in Azure Machine Learning. See [Support matrix for querying runs and experiments in Azure Machine Learning](#support-matrix-for-querying-runs-and-experiments) for a detailed comparison between MLflow Open-Source and MLflow when connected to Azure Machine Learning.
+> [!NOTE]
+> The Azure Machine Learning Python SDK v2 does not provide native logging or tracking capabilities. This applies not just for logging but also for querying the metrics logged. Instead, use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure Machine Learning.
### Prerequisites
for exp in experiments:
print(exp.name) ```
+## Search experiments
+
+The `search_experiments()` method, available since MLflow 2.0, allows searching for experiments that match criteria specified in `filter_string`. The following query retrieves three experiments with different IDs.
+
+```python
+mlflow.search_experiments(
+    filter_string="experiment_id IN ("
+    "'CDEFG-1234-5678-90AB', '1234-5678-90AB-CDEFG', '5678-1234-90AB-CDEFG')"
+)
+```
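+
+Filters can also match other experiment fields, such as the name. A minimal sketch, assuming a hypothetical experiment name:
+
+```python
+mlflow.search_experiments(filter_string="name LIKE 'my-experiment%'")
+```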
+ ## Getting a specific experiment Details about a specific experiment can be retrieved using the `get_experiment_by_name` method:
exp = mlflow.get_experiment_by_name(experiment_name)
print(exp) ```
-## Getting runs inside an experiment
+## Query runs inside an experiment
MLflow allows searching runs inside of any experiment, including multiple experiments at the same time. By default, MLflow returns the data in Pandas `Dataframe` format, which makes it handy when doing further processing or analysis of the runs. Returned data includes columns with:
mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
filter_string="params.num_boost_round='100'") ```
+Specific run fields can also be indicated. These fields don't need a qualifier like `params`, `metrics`, or `attributes`. The following example searches for runs with specific IDs.
+
+```python
+mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string="run_id IN ('CDEFG-1234-5678-90AB', '1234-5678-90AB-CDEFG', '5678-1234-90AB-CDEFG')")
+```
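+
+Multiple conditions can be combined with `AND`. A sketch, assuming hypothetical metric and parameter names:
+
+```python
+mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+                   filter_string="metrics.rmse < 1.0 and params.num_boost_round = '100'")
+```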
+ ### Filter runs by status You can also filter experiments by status. This is useful to find runs that are running, completed, canceled, or failed. In MLflow, `status` is an `attribute`, so we can access this value using the expression `attributes.status`. The following table shows the possible values:
model_local_path = mlflow.artifacts.download_artifacts(
) ```
-You can then load the model back from the downloaded artifacts using the typical function `load_model`:
+You can then load the model back from the downloaded artifacts using the typical function `load_model` in the flavor-specific namespace. The following example uses `xgboost`:
```python model = mlflow.xgboost.load_model(model_local_path) ```
-> [!NOTE]
-> The previous example assumes the model was created using `xgboost`. Change it to the flavor applies to your case.
- MLflow also allows you to perform both operations at once, downloading and loading the model in a single instruction. MLflow will download the model to a temporary folder and load it from there. The method `load_model` uses a URI format to indicate where the model has to be retrieved from. In the case of loading a model from a run, the URI structure is as follows: ```python
model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}
``` > [!TIP]
-> You can also load models from the registry using MLflow. View [loading MLflow models with MLflow](how-to-manage-models-mlflow.md#loading-models-from-registry) for details.
+> For querying and loading models registered in the Model Registry, see [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
## Getting child (nested) runs
child_runs = mlflow.search_runs(
To compare and evaluate the quality of your jobs and models in Azure Machine Learning Studio, use the [preview panel](./how-to-enable-preview-features.md) to enable the feature. Once enabled, you can compare the parameters, metrics, and tags between the jobs and/or models you selected.
+> [!IMPORTANT]
+> Items marked (preview) in this article are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow) demonstrate and expand upon concepts presented in this article. * [Training and tracking a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models and combine multiple flavors into pipelines. * [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters and artifacts from Azure Machine Learning using MLflow. - ## Support matrix for querying runs and experiments The MLflow SDK exposes several methods to retrieve runs, including options to control what is returned and how. Use the following table to learn about which of those methods are currently supported in MLflow when connected to Azure Machine Learning:
The MLflow SDK exposes several methods to retrieve runs, including options to co
| Renaming experiments | **&check;** | | > [!NOTE]
-> - <sup>1</sup> Check the section [Getting runs inside an experiment](#getting-runs-inside-an-experiment) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
+> - <sup>1</sup> Check the section [Query runs inside an experiment](#query-runs-inside-an-experiment) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
> - <sup>2</sup> `!=` for tags not supported. ## Next steps
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
description: Learn how to manually set up permissions that allow your Azure Mana
+ Previously updated : 6/10/2022 Last updated : 3/08/2023 # How to modify access permissions to Azure Monitor By default, when a Grafana instance is created, Azure Managed Grafana grants it the Monitoring Reader role for all Azure Monitor data and Log Analytics resources within a subscription.
-This means that the new Grafana instance can access and search all monitoring data in the subscription, including viewing the Azure Monitor metrics and logs from all resources, and any logs stored in Log Analytics workspaces in the subscription.
+This means that the new Grafana instance can access and search all monitoring data in the subscription. It can view the Azure Monitor metrics and logs from all resources, and any logs stored in Log Analytics workspaces in the subscription.
-In this article, you'll learn how to manually grant permission for Azure Managed Grafana to access an Azure resource using a managed identity.
+In this article, learn how to manually grant permission for Azure Managed Grafana to access an Azure resource using a managed identity.
## Prerequisites
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Edit Azure Monitor permissions
-To change permissions for a specific resource, follow these steps:
+To edit permissions for a specific resource, follow these steps.
+
+### [Portal](#tab/azure-portal)
1. Open a resource that contains the monitoring data you want to retrieve. In this example, we're configuring an Application Insights resource. 1. Select **Access Control (IAM)**.
To change permissions for a specific resource, follow these steps:
:::image type="content" source="./media/permissions/permissions-iam.png" alt-text="Screenshot of the Azure platform to add role assignment in App Insights.":::
-1. The portal lists various roles you can give to your Managed Grafana resource. Select a role. For instance, **Monitoring Reader**. Select this role.
-1. Click **Next**.
+1. The portal lists all the roles you can give to your Azure Managed Grafana resource. Select a role, for instance **Monitoring Reader**, and then select **Next**.
:::image type="content" source="./media/permissions/permissions-role.png" alt-text="Screenshot of the Azure platform and choose Monitor Reader.":::
-1. For **Assign access to**, select **Managed Identity**.
-1. Click **Select members**.
+1. For **Assign access to**, select **Managed identity**.
+1. Click on **Select members**.
:::image type="content" source="media/permissions/permissions-members.png" alt-text="Screenshot of the Azure platform selecting members.":::
-1. Select the **Subscription** containing your Managed Grafana instance
-1. Select a **Managed identity** from the options in the dropdown list
-1. Select the Managed Grafana instance from the list.
+1. Select the **Subscription** containing your Managed Grafana instance.
+1. For **Managed identity**, select **Azure Managed Grafana**.
+1. Select one or several Managed Grafana instances.
1. Click **Select** to confirm :::image type="content" source="media/permissions/permissions-managed-identities.png" alt-text="Screenshot of the Azure platform selecting the instance.":::
-1. Click **Next**, then **Review + assign** to confirm the application of the new permission
+1. Select **Next**, then **Review + assign** to confirm the assignment of the new permission.
For more information about how to use Managed Grafana with Azure Monitor, go to [Monitor your Azure services in Grafana](../azure-monitor/visualize/grafana-plugin.md).
+### [Azure CLI](#tab/azure-cli)
+
+Create a role assignment using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command.
+
+In the code below, replace the following placeholders:
+
+- `<assignee>`: enter the assignee's object ID. For a managed identity, enter the managed identity's principal (object) ID.
+- `<roleNameOrId>`: enter the role's name or ID. For Monitoring Reader, enter `Monitoring Reader` or `43d0d8ad-25c7-4714-9337-8ba259a9fe05`.
+- `<scope>`: enter the full ID of the resource Azure Managed Grafana needs access to.
+
+```azurecli
+az role assignment create --assignee "<assignee>" \
+--role "<roleNameOrId>" \
+--scope "<scope>"
+```
+
+Example: assigning permission for an Azure Managed Grafana instance to access an Application Insights resource using its managed identity. The assignee must be the managed identity's principal ID, not the Grafana resource ID, so the sketch below first looks it up.
+
+```azurecli
+# Look up the principal ID of the Grafana instance's managed identity (assumes the amg CLI extension)
+principalId=$(az grafana show --name mygrafanaworkspace --resource-group my-rg --query "identity.principalId" -o tsv)
+
+az role assignment create --assignee "$principalId" \
+--role "Monitoring Reader" \
+--scope "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourcegroups/my-rg/providers/microsoft.insights/components/myappinsights"
+```
+
+For more information about assigning Azure roles using the Azure CLI, refer to the [Role based access control documentation](../role-based-access-control/role-assignments-cli.md).
+++ ## Next steps > [!div class="nextstepaction"]
managed-grafana How To Share Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md
Title: How to share an Azure Managed Grafana instance
-description: 'Azure Managed Grafana: learn how you can share access permissions and dashboards with your team and customers.'
+description: 'Learn how you can share access permissions to Azure Managed Grafana.'
+ Previously updated : 3/31/2022 Last updated : 3/08/2023
-# How to share an Azure Managed Grafana instance
+# How to share access to Azure Managed Grafana
-A DevOps team may build dashboards to monitor and diagnose an application or infrastructure that it manages. Likewise, a support team may use a Grafana monitoring solution for troubleshooting customer issues. In these scenarios, multiple users will be accessing one Grafana instance. Azure Managed Grafana enables such sharing by allowing you to set the custom permissions on an instance that you own. This article explains what permissions are supported and how to grant permissions to share dashboards with your internal teams or external customers.
+A DevOps team may build dashboards to monitor and diagnose an application or infrastructure that it manages. Likewise, a support team may use a Grafana monitoring solution for troubleshooting customer issues. In these scenarios, multiple users are accessing one Grafana instance.
+
+Azure Managed Grafana enables such collaboration by allowing you to set custom permissions on an instance that you own. This article explains what permissions are supported and how to grant permissions to share an Azure Managed Grafana instance with your stakeholders.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). - An Azure Managed Grafana instance. If you don't have one yet, [create a Managed Grafana instance](./how-to-permissions.md).
+- You must have Grafana Admin permissions on the instance.
## Supported Grafana roles
-Azure Managed Grafana supports the Admin, Viewer and Editor roles:
+Azure Managed Grafana supports the Grafana Admin, Grafana Editor, and Grafana Viewer roles:
-- The Admin role provides full control of the instance including viewing, editing, and configuring data sources.-- The Editor role provides read-write access to the dashboards in the instance.-- The Viewer role provides read-only access to dashboards in the instance.
+- The Grafana Admin role provides full control of the instance including managing role assignments, viewing, editing, and configuring data sources.
+- The Grafana Editor role provides read-write access to the dashboards in the instance.
+- The Grafana Viewer role provides read-only access to dashboards in the instance.
-The Admin role is automatically assigned to the creator of a Grafana instance. More details on Admin, Editor, and Viewer roles can be found at [Grafana organization roles](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#compare-roles).
+More details on Grafana roles can be found in the [Grafana documentation](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#compare-roles).
-Grafana user roles and assignments are fully integrated with the Azure Active Directory (Azure AD). You can add any Azure AD user or security group to a Grafana role and grant them access permissions associated with that role. You can manage these permissions from the Azure portal or the command line. This section explains how to assign users to the Viewer or Editor role in the Azure portal.
+Grafana user roles and assignments are fully [integrated within Azure Active Directory (Azure AD)](../role-based-access-control/built-in-roles.md#grafana-admin). You can assign a Grafana role to any Azure AD user, group, service principal or managed identity, and grant them access permissions associated with that role. You can manage these permissions from the Azure portal or the command line. This section explains how to assign Grafana roles to users in the Azure portal.
> [!NOTE]
-> Azure Managed Grafana doesn't support personal [Microsoft accounts](https://account.microsoft.com) (a.k.a., MSA) currently.
-
-## Sign in to Azure
+> Azure Managed Grafana doesn't support personal Microsoft accounts (MSA) currently.
-Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+## Add a Grafana role assignment
-## Assign an Admin, Viewer or Editor role to a user
+### [Portal](#tab/azure-portal)
-1. Open your Managed Grafana instance.
-1. Select **Access control (IAM)** in the navigation menu.
-1. Click **Add**, then **Add role assignment**
+1. Open your Azure Managed Grafana instance.
+1. Select **Access control (IAM)** in the left menu.
+1. Select **Add role assignment**.
:::image type="content" source="media/share/iam-page.png" alt-text="Screenshot of Add role assignment in the Azure platform.":::
-1. Select one of the Grafana roles to assign to a user or security group. The available roles are:
-
- - Grafana Admin
- - Grafana Editor
- - Grafana Viewer
+1. Select a Grafana role to assign among **Grafana Admin**, **Grafana Editor** or **Grafana Viewer**, then select **Next**.
:::image type="content" source="media/share/role-assignment.png" alt-text="Screenshot of the Grafana roles in the Azure platform.":::
+1. Choose if you want to assign access to a **User, group, or service principal**, or to a **Managed identity**.
+1. Click on **Select members**, pick the members you want to assign to the Grafana role and then confirm with **Select**.
+1. Select **Next**, then **Review + assign** to complete the role assignment.
+ > [!NOTE]
-> Dashboard and data source level sharing will be done from within the Grafana application. For more details, refer to [Grafana permissions](https://grafana.com/docs/grafana/latest/permissions/).
+> Dashboard and data source level sharing are done from within the Grafana application. For more information, refer to [Share a Grafana dashboard or panel](./how-to-share-dashboard.md) and [Data source permissions](https://grafana.com/docs/grafana/latest/administration/data-source-management/#data-source-permissions).
+
+### [Azure CLI](#tab/azure-cli)
+
+Assign a role using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command.
+
+In the code below, replace the following placeholders:
+
+- `<assignee>`:
+ - For an Azure AD user, enter their email address or the user object ID.
+ - For a group, enter the group object ID.
+ - For a service principal, enter the service principal object ID.
+ - For a managed identity, enter the object ID.
+- `<roleNameOrId>`:
+ - For Grafana Admin, enter `Grafana Admin` or `22926164-76b3-42b3-bc55-97df8dab3e41`.
+ - For Grafana Editor, enter `Grafana Editor` or `a79a5197-3a5c-4973-a920-486035ffd60f`.
+ - For Grafana Viewer, enter `Grafana Viewer` or `60921a7e-fef1-4a43-9b16-a26c52ad4769`.
+- `<scope>`: enter the full ID of the Azure Managed Grafana instance.
+
+```azurecli
+az role assignment create --assignee "<assignee>" \
+--role "<roleNameOrId>" \
+--scope "<scope>"
+```
+
+Example:
+
+```azurecli
+az role assignment create --assignee "name@contoso.com" \
+--role "Grafana Admin" \
+--scope "/subscriptions/abcdef01-2345-6789-0abc-def012345678/resourcegroups/my-rg/providers/Microsoft.Dashboard/grafana/my-grafana"
+```
+For more information about assigning Azure roles using the Azure CLI, refer to the [Role based access control documentation](../role-based-access-control/role-assignments-cli.md).
++ ## Next steps
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
> [Modify access permissions to Azure Monitor](./how-to-permissions.md) > [!div class="nextstepaction"]
-> [Call Grafana APIs in your automation](./how-to-api-calls.md)
+> [Share a Grafana dashboard or panel](./how-to-share-dashboard.md)
managed-instance-apache-cassandra Management Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/management-operations.md
Azure Managed Instance for Apache Cassandra provides an [SLA](https://azure.micr
## Backup and restore
-Snapshot backups are enabled by default and taken every 4 hours with [Medusa](https://github.com/thelastpickle/cassandra-medusa). Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There's no cost for backups. To restore from a backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+Snapshot backups are enabled by default and taken every 24 hours. Backups are stored in an internal Azure Blob Storage account and are retained for up to 2 days (48 hours). There's no cost for the initial 2 backups. Additional backups are charged; see [pricing](https://azure.microsoft.com/pricing/details/managed-instance-apache-cassandra/). To change the backup interval or retention period, or to restore from an existing backup, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
> [!WARNING] > Backups can be restored to the same VNet/subnet as your existing cluster, but they cannot be restored to the *same cluster*. Backups can only be restored to **new clusters**. Backups are intended for accidental deletion scenarios, and are not geo-redundant. They are therefore not recommended for use as a disaster recovery (DR) strategy in case of a total regional outage. To safeguard against region-wide outages, we recommend a multi-region deployment. Take a look at our [quickstart for multi-region deployments](create-multi-region-cluster.md).
migrate How To Migrate Vmware Vms With Cmk Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-migrate-vmware-vms-with-cmk-disks.md
Title: Migrate VMware virtual machines to Azure with server-side encryption(SSE) and customer-managed keys(CMK) using the Migration and modernization tool description: Learn how to migrate VMware VMs to Azure with server-side encryption(SSE) and customer-managed keys(CMK) using the Migration and modernization tool --++ ms. Last updated 12/12/2022
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-Data-in replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL Flexible service. The external server can be on-premises, in virtual machines, Azure Database for MySQL Single Server, or a database service hosted by other cloud providers. Data-in replication is based on the binary log (binlog) file position-based. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+Data-in replication allows you to synchronize data from an external MySQL server into an Azure Database for MySQL flexible server. The external server can be on-premises, in virtual machines, Azure Database for MySQL single server, or a database service hosted by other cloud providers. Data-in replication is based on binary log (binlog) file position-based replication. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
> [!NOTE] > GTID-based replication is currently not supported for Azure Database for MySQL - Flexible Servers.<br>
It isn't supported to configure Data-in replication for servers that have high a
### Filter
-Parameter `replicate_wild_ignore_table` is used to create replication filter for tables on the replica server. To modify this parameter from Azure portal, navigate to Azure Database for MySQL - Flexible Server used as replica and select "Server Parameters" to view/edit the `replicate_wild_ignore_table` parameter.
+Parameter `replicate_wild_ignore_table` is used to create a replication filter for tables on the replica server. To modify this parameter from the Azure portal, navigate to the Azure Database for MySQL flexible server used as the replica and select **Server parameters** to view/edit the `replicate_wild_ignore_table` parameter.
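+
+As an alternative to the portal, the parameter can also be set with the Azure CLI. A minimal sketch, assuming hypothetical server and resource group names:
+
+```azurecli
+az mysql flexible-server parameter set --resource-group my-rg \
+  --server-name my-replica-server --name replicate_wild_ignore_table \
+  --value "mydb%.mytable"
+```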
### Requirements
Parameter `replicate_wild_ignore_table` is used to create replication filter for
- We recommend having a primary key in each table. A table without a primary key might slow down replication. - The source server should use the MySQL InnoDB engine. - The user must have the right permissions to configure binary logging and create new users on the source server.-- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL refer how to configure binlog_expire_logs_seconds for [Flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds)
+- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL, refer to how to configure `binlog_expire_logs_seconds` for [flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [single server](../concepts-server-parameters.md#binlog_expire_logs_seconds).
- If the source server has SSL enabled, ensure the SSL CA certificate provided for the domain has been included in the `mysql.az_replication_change_master` stored procedure. Refer to the following [examples](./how-to-data-in-replication.md#link-source-and-replica-servers-to-start-data-in-replication) and the `master_ssl_ca` parameter. - Ensure that the machine hosting the source server allows both inbound and outbound traffic on port 3306. - Ensure that the source server has a **public IP address**, that DNS is publicly accessible, or that the source server has a fully qualified domain name (FQDN).
mysql Concepts Data Out Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-out-replication.md
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-Data-out replication allows you to synchronize data out of Azure Database for MySQL - Flexible Server to another MySQL server using MySQL native replication. The MySQL server (replica) can be on-premises, in virtual machines, or a database service hosted by other cloud providers. While [Data-in replication](concepts-data-in-replication.md) helps to move data into Azure Database for MySQL - Flexible Server (replica), Data-out replication would allow you to transfer data out of Azure Database for MySQL - Flexible Server (Primary). With Data-out replication, the binary log (binlog) is made community consumable allowing the Azure Database for MySQL- Flexible server to act as a Primary server for the external replicas. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+Data-out replication allows you to synchronize data out of an Azure Database for MySQL flexible server to another MySQL server using MySQL native replication. The MySQL server (replica) can be on-premises, in virtual machines, or a database service hosted by other cloud providers. While [Data-in replication](concepts-data-in-replication.md) helps to move data into an Azure Database for MySQL flexible server (replica), Data-out replication allows you to transfer data out of an Azure Database for MySQL flexible server (primary). With Data-out replication, the binary log (binlog) is made community consumable, allowing an Azure Database for MySQL flexible server to act as a primary server for the external replicas. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
> [!NOTE] > Data-out replication is not supported on Azure Database for MySQL - Flexible Server, which has Azure authentication configured.
Data-out replication isn't supported on Azure Database for MySQL - Flexible Serv
You must use the replication filter to filter out Azure custom tables on the replica server. This can be achieved by setting Replicate_Wild_Ignore_Table = "mysql.\_\_%" to filter the Azure MySQL internal tables on the replica. To modify this parameter from the Azure portal, navigate to Azure Database for MySQL - Flexible Server and select "Server parameters" to view/edit the Replicate_Wild_Ignore_Table parameter.
-Refer to the following general guidance on the replication filter in MySQL Manual:
+Refer to the following general guidance on replication filters in the MySQL manual:
- MySQL 5.7 Reference Manual - [13.4.2.2 CHANGE REPLICATION FILTER Statement](https://dev.mysql.com/doc/refman/5.7/en/change-replication-filter.html) - MySQL 5.7 Reference Manual - [16.1.6.3 Replica Server Options and Variables](https://dev.mysql.com/doc/refman/5.7/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table) - MySQL 8.0 Reference Manual - [17.2.5.4 Replication Channel Based Filters](https://dev.mysql.com/doc/refman/8.0/en/replication-rules-channel-based-filters.html) ---- ## Next steps - How to configure [Data-out replication](how-to-data-out-replication.md)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Last updated 08/26/2021
Azure Database for MySQL - Flexible Server allows configuring high availability with automatic failover. The high availability solution is designed to ensure that committed data is never lost because of failures and that the database won't be a single point of failure in your software architecture. When high availability is configured, flexible server automatically provisions and manages a standby replica. You're billed for the provisioned compute and storage for both the primary and secondary replica. There are two high availability architectural models:
-* **Zone-redundant HA**. This option is preferred for complete isolation and redundancy of infrastructure across multiple availability zones. It provides the highest level of availability, but it requires you to configure application redundancy across zones. Zone-redundant HA is preferred when you want to achieve the highest level of availability against any infrastructure failure in the availability zone and when latency across the availability zone is acceptable. It can be enabled only when the server is created. Zone-redundant HA is available in a [subset of Azure regions](./overview.md#azure-regions) where the region supports multiple [availability zones](../../availability-zones/az-overview.md) and [zone-redundant Premium file shares](../..//storage/common/storage-redundancy.md#zone-redundant-storage) are available.
+- **Zone-redundant HA**. This option is preferred for complete isolation and redundancy of infrastructure across multiple availability zones. It provides the highest level of availability, but it requires you to configure application redundancy across zones. Zone-redundant HA is preferred when you want to achieve the highest level of availability against any infrastructure failure in the availability zone and when latency across the availability zone is acceptable. It can be enabled only when the server is created. Zone-redundant HA is available in a [subset of Azure regions](./overview.md#azure-regions) where the region supports multiple [availability zones](../../availability-zones/az-overview.md) and [zone-redundant Premium file shares](../..//storage/common/storage-redundancy.md#zone-redundant-storage) are available.
-* **Same-zone HA**. This option is preferred for infrastructure redundancy with lower network latency because the primary and standby servers will be in the same availability zone. It provides high availability without the need to configure application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone with the lowest network latency. Same-zone HA is available in all [Azure regions](./overview.md#azure-regions) where you can use Azure Database for MySQL - Flexible Server.
+- **Same-zone HA**. This option is preferred for infrastructure redundancy with lower network latency because the primary and standby servers will be in the same availability zone. It provides high availability without the need to configure application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone with the lowest network latency. Same-zone HA is available in all [Azure regions](./overview.md#azure-regions) where you can use Azure Database for MySQL - Flexible Server.
## Zone-redundant HA architecture When you deploy a server with zone-redundant HA, two servers will be created: -- A primary server in one availability zone-- A standby replica server that has the same configuration as the primary server (compute tier, compute size, storage size, and network configuration) in another availability zone of the same Azure region
+- A primary server in one availability zone.
+- A standby replica server that has the same configuration as the primary server (compute tier, compute size, storage size, and network configuration) in another availability zone of the same Azure region.
You can choose the availability zone for the primary and the standby replica. Placing the standby database servers and standby applications in the same zone reduces latency. It also allows you to better prepare for disaster recovery situations and "zone down" scenarios.
The database server name is used to connect applications to the primary server.
Automatic backups, both snapshots and log backups, are performed on locally redundant storage from the primary database server.
->[!Note]
->For both zone-redundant and same-zone HA:
->* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.
->* The standby server isn't available for read or write operations. It's a passive standby to enable fast failover.
->* Always use a fully qualified domain name (FQDN) to connect to your primary server. Avoid using an IP address to connect. If there's a failover, after the primary and standby server roles are switched, a DNS A record might change. That change would prevent the application from connecting to the new primary server if an IP address is used in the connection string.
+> [!NOTE]
+> For both zone-redundant and same-zone HA:
+> - If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.
+> - The standby server isn't available for read or write operations. It's a passive standby to enable fast failover.
+> - Always use a fully qualified domain name (FQDN) to connect to your primary server. Avoid using an IP address to connect. If there's a failover, after the primary and standby server roles are switched, a DNS A record might change. That change would prevent the application from connecting to the new primary server if an IP address is used in the connection string.
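+
+For example, a client should connect with the server FQDN rather than an IP address. A sketch with a hypothetical server name:
+
+```bash
+# Connect using the FQDN; the DNS record follows the primary after a failover
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p
+```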
## Failover process
Forced failover triggers a failover that activates the standby replica to become
The overall failover time depends on the current workload and the last checkpoint. In general, it's expected to take between 60 and 120 seconds.
->[!Note]
+> [!NOTE]
>An Azure Resource Health event is generated in the event of a planned failover, representing the failover time during which the server was unavailable. The triggered events can be seen by selecting "Resource Health" in the left pane. A user-initiated/manual failover is represented by the status **"Unavailable"** and tagged as **"Planned"**. Example: "A failover operation was triggered by an authorized user (Planned)". If your resource remains in this state for an extended period of time, please open a [support ticket](https://azure.microsoft.com/support/create-ticket/) and we will assist you.
Unplanned service downtime can be caused by software bugs or infrastructure faul
The overall failover time is expected to be between 60 and 120 seconds. But, depending on the activity in the primary database server at the time of the failover (like large transactions and recovery time), the failover might take longer.
->[!Note]
->Azure Resource Health event is generated in the event of unplanned failover, representing the failover time during which server was unavailable. The triggered events can be seen when clicked on "Resource Health" in the left pane. Automatic failover is represented by status as **"Unavailable"** and tagged as **"Unplanned"**. Example - "Unavailable : A failover operation was triggered automatically (Unplanned)". If your resource remains in this state for an extended period of time, please open a [support ticket](https://azure.microsoft.com/support/create-ticket/) and we will assist you.
+> [!NOTE]
+> An Azure Resource Health event is generated in the event of an unplanned failover, representing the failover time during which the server was unavailable. The triggered events can be seen by selecting "Resource Health" in the left pane. Automatic failover is represented by the status **"Unavailable"** and tagged as **"Unplanned"**. Example: "Unavailable: A failover operation was triggered automatically (Unplanned)". If your resource remains in this state for an extended period of time, please open a [support ticket](https://azure.microsoft.com/support/create-ticket/) and we will assist you.
#### How automatic failover detection works in HA enabled servers
The health monitor component continuously does the following checks:
* The monitor pings the node's management network endpoint. If this check fails two times in a row, it triggers an automatic failover operation. Scenarios like the node being unavailable or not responding because of an OS issue, or a networking issue between management components and nodes, are addressed by this health check. * The monitor also runs a simple query on the instance. If the queries fail to run, automatic failover is triggered. Scenarios like the MySQL daemon crashing, stopping, or hanging, or a backend storage issue, are addressed by this health check.
->[!Note]
->If there are any networking issue between the application and the customer networking endpoint (Private/Public access), either in networking path , on the endpoint or DNS issues in client side, the health check does not monitor this scenario. If you are using private access, make sure that the NSG rules for the VNet does not block the communication to the instance customer networking endpoint on port 3306. For public access make sure that the firewall rules are set and network traffic is allowed on port 3306 (if network path has any other firewalls). The DNS resolution from the client application side also needs to be taken care of.
+> [!NOTE]
> If there are any networking issues between the application and the customer networking endpoint (private/public access), either in the networking path, on the endpoint, or DNS issues on the client side, the health check doesn't monitor this scenario. If you're using private access, make sure that the NSG rules for the VNet don't block the communication to the instance's customer networking endpoint on port 3306. For public access, make sure that the firewall rules are set and network traffic is allowed on port 3306 (if the network path has any other firewalls). The DNS resolution from the client application side also needs to be taken care of.
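
For private access, an inbound NSG rule like the following sketch (with hypothetical resource names) allows traffic to the instance endpoint on port 3306:

```azurecli
az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
  --name AllowMySQL --priority 1000 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 3306
```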
## Monitoring for high availability The health of your HA is continuously monitored and reported on the overview page. Here are the replication statuses:
The health of your HA is continuously monitored and reported on the overview pag
## Limitations Here are some considerations to keep in mind when you use high availability:
-* Zone-redundant high availability can be set only when the flexible server is created.
-* High availability isn't supported in the burstable compute tier.
-* Restarting the primary database server to pick up static parameter changes also restarts the standby replica.
-* Data-in Replication isn't supported for HA servers.
-* GTID mode will be turned on as the HA solution uses GTID. Check whether your workload has [restrictions or limitations on replication with GTIDs](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids-restrictions.html).
+- Zone-redundant high availability can be set only when the flexible server is created.
+- High availability isn't supported in the burstable compute tier.
+- Restarting the primary database server to pick up static parameter changes also restarts the standby replica.
+- Data-in Replication isn't supported for HA servers.
+- GTID mode will be turned on as the HA solution uses GTID. Check whether your workload has [restrictions or limitations on replication with GTIDs](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids-restrictions.html).
>[!Note] >If you're enabling same-zone HA after the server is created, you need to make sure the server parameters `enforce_gtid_consistency` and [`gtid_mode`](./concepts-read-replicas.md#global-transaction-identifier-gtid) are set to ON before enabling HA.
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
Title: 'Quickstart: Create an Azure DB for MySQL - ARM template'
+ Title: 'Quickstart: Create an Azure Database for MySQL - ARM template'
description: In this Quickstart, learn how to create an Azure Database for MySQL server with virtual network integration, by using an Azure Resource Manager template.
postgresql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-portal.md
Azure Database for PostgreSQL - Flexible Server supports two types of mutually e
* Public access (allowed IP addresses) * Private access (VNet Integration)
-In this article, we will focus on creation of PostgreSQL server with **Private access (VNet integration)** using Azure portal. With Private access (VNet Integration), you can deploy your flexible server into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the PostgreSQL server are restricted to your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking.md#private-access-vnet-integration).
+In this article, we focus on the creation of a PostgreSQL server with **Private access (VNet integration)** using the Azure portal. With Private access (VNet Integration), you can deploy your flexible server into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the PostgreSQL server are restricted to your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking.md#private-access-vnet-integration).
You can deploy your flexible server into a virtual network and subnet during server creation. After the flexible server is deployed, you cannot move it into another virtual network, subnet or to *Public access (allowed IP addresses)*.
To create a flexible server in a virtual network, you need:
- A [Virtual Network](../../virtual-network/quick-create-portal.md#create-a-virtual-network) > [!Note] > - The virtual network and subnet should be in the same region and subscription as your flexible server.
- > - The virtual network should not have any resource lock set at the VNET or subnet level. Make sure to remove any lock (**Delete** or **Read only**) from your VNET and all subnets before creating the server in a virtual network, and you can set it back after server creation.
+ > - The virtual network should not have any resource lock set at the VNET or subnet level, as locks may interfere with operations on the network and DNS. Make sure to remove any lock (**Delete** or **Read only**) from your VNET and all subnets before creating the server in a virtual network, and you can set it back after server creation.
- To [delegate a subnet](../../virtual-network/manage-subnet-delegation.md#delegate-a-subnet-to-an-azure-service) to **Microsoft.DBforPostgreSQL/flexibleServers**. This delegation means that only Azure Database for PostgreSQL Flexible Servers can use that subnet. No other Azure resource types can be in the delegated subnet. - Add `Microsoft.Storage` to the service endpoint for the subnet delegated to Flexible servers. This is done by performing the following steps (a CLI alternative is sketched after this list): 1. Go to your virtual network page.
- 2. Select the VNET in which you are planning to deploy your flexible server.
+ 2. Select the VNET in which you're planning to deploy your flexible server.
3. Choose the subnet that is delegated for flexible server. 4. On the pull-out screen, under **Service endpoint**, choose `Microsoft.storage` from the drop-down. 5. Save the changes. -- If you want to setup your own private DNS zone to use with the flexible server, please see [private DNS overview](../../dns/private-dns-overview.md) documentation for more details.
+- If you want to set up your own private DNS zone to use with the flexible server, see [private DNS overview](../../dns/private-dns-overview.md) documentation for more details.
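+
+The subnet delegation and the `Microsoft.Storage` service endpoint can also be configured with the Azure CLI, as sketched below with hypothetical network names:
+
+```azurecli
+az network vnet subnet update --resource-group my-rg \
+  --vnet-name my-vnet --name my-subnet \
+  --delegations Microsoft.DBforPostgreSQL/flexibleServers \
+  --service-endpoints Microsoft.Storage
+```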
## Create Azure Database for PostgreSQL - Flexible Server in an already existing virtual network
To create a flexible server in a virtual network, you need:
3. Select **Flexible server** as the deployment option. 4. Fill out the **Basics** form. 5. Go to the **Networking** tab to configure how you want to connect to your server.
-6. In the **Connectivity method**, select **Private access (VNet Integration)**. Go to **Virtual Network** and select the already existing *virtual network* and *Subnet* created as part of prerequisites above.
+6. In the **Connectivity method**, select **Private access (VNet Integration)**. Go to **Virtual Network** and select the already existing *virtual network* and *Subnet* created as part of prerequisites.
7. Under **Private DNS Integration**, by default, a new private DNS zone will be created using the server name. Optionally, you can choose the *subscription* and the *Private DNS zone* from the drop-down list. 8. Select **Review + create** to review your flexible server configuration. 9. Select **Create** to provision the server. Provisioning can take a few minutes. :::image type="content" source="./media/how-to-manage-virtual-network-portal/how-to-inject-flexible-server-vnet.png" alt-text="Injecting flexible server into a VNET"::: >[!Note]
-> After the flexible server is deployed to a virtual network and subnet, you cannot move it to Public access (allowed IP addresses).
+> After the flexible server is deployed to a virtual network and subnet, you can't move it to Public access (allowed IP addresses).
>[!Note] > If you want to connect to the flexible server from a client that is provisioned in another VNET, you have to link the private DNS zone with the VNET. See this [linking the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) documentation on how to do it.
postgresql Tutorial Django App Service Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-app-service-postgres.md
These changes are specific to configuring Django to run in any production enviro
Create a private flexible server and a database inside a virtual network (VNET) using the following command: ```azurecli
-# Create Flexible server in a VNET
+# Create Flexible server in a private virtual network (VNET)
-az postgres flexible-server create --resource-group myresourcegroup --location westus2
+az postgres flexible-server create --resource-group myresourcegroup --vnet myvnet --location westus2
``` This command performs the following actions, which may take a few minutes: - Create the resource group if it doesn't already exist. - Generates a server name if it isn't provided.-- Create a new virtual network for your new postgreSQL server. **Make a note of virtual network name and subnet name** created for your server since you need to add the web app to the same virtual network.
+- Create a new virtual network for your new PostgreSQL server, if you choose to do so when prompted. **Make a note of the virtual network name and subnet name** created for your server since you need to add the web app to the same virtual network.
- Creates an admin username and password for your server if not provided. **Make a note of the username and password** to use in the next step. - Creates a database ```postgres``` that can be used for development. You can run [**psql** to connect to the database](quickstart-create-server-portal.md#connect-to-the-postgresql-database-using-psql) to create a different database.
Django database migrations ensure that the schema in the PostgreSQL on Azure dat
### Create a poll question in the app
-4. In a browser, open the URL *http:\//\<app-name>.azurewebsites.net*. The app should display the message "No polls are available" because there are no specific polls yet in the database.
+1. In a browser, open the URL *http:\//\<app-name>.azurewebsites.net*. The app should display the message "No polls are available" because there are no specific polls yet in the database.
-5. Browse to *http:\//\<app-name>.azurewebsites.net/admin*. Sign in using superuser credentials from the previous section (`root` and `postgres1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
+2. Browse to *http:\//\<app-name>.azurewebsites.net/admin*. Sign in using superuser credentials from the previous section (`root` and `postgres1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
-6. Browse again to *http:\//\<app-name>.azurewebsites.net/* to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
+3. Browse again to *http:\//\<app-name>.azurewebsites.net/* to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
**Congratulations!** You're running a Python Django web app in Azure App Service for Linux, with an active Postgres database.
python manage.py migrate
### Review app in production
-Browse to *http:\//\<app-name>.azurewebsites.net* and test the app again in production. (Because you only changed the length of a database field, the change is only noticeable if you try to enter a longer response when creation a question.)
+Browse to *http:\//\<app-name>.azurewebsites.net* and test the app again in production. (Because you only changed the length of a database field, the change is only noticeable if you try to enter a longer response when creating a question.)
> [!TIP] > You can use [django-storages](https://django-storages.readthedocs.io/en/latest/backends/azure.html) to store static & media assets in Azure storage. You can use Azure CDN for gzipping for static files.
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
Title: Certificate rotation for Azure Database for PostgreSQL Single server
-description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for PostgreSQL Single server
+description: Learn about the upcoming root certificate changes that affect Azure Database for PostgreSQL Single server
As per the industry's compliance requirements, CA vendors began revoking CA cert
The new certificate is rolled out and in effect starting December 2022 (12/2022).
-## What change will be performed starting December 2022 (12/2022)?
+## What change was scheduled to be performed starting December 2022 (12/2022)?
-Starting December 2022, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) will be replaced with a **compliant version** known as [DigiCertGlobalRootG2 root certificate ](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). If your applications take advantage of **verify-ca** or **verify-full** as value of [**sslmode** parameter](https://www.postgresql.org/docs/current/libpq-ssl.html) in the database client connectivity will need to follow directions below to add new certificates to certificate store to maintain connectivity.
+Starting December 2022, the [BaltimoreCyberTrustRoot root certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) is replaced with a **compliant version** known as the [DigiCertGlobalRootG2 root certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). If your applications use **verify-ca** or **verify-full** as the value of the [**sslmode** parameter](https://www.postgresql.org/docs/current/libpq-ssl.html) in the database client connectivity, follow the directions below to add the new certificate to your certificate store and maintain connectivity.
## Do I need to make any changes on my client to maintain connectivity?
-There are no code or application changes required on client side. if you follow our certificate update recommendation below, you will still be able to continue to connect as long as **BaltimoreCyberTrustRoot certificate isn't removed** from the combined CA certificate. **We recommend to not remove the BaltimoreCyberTrustRoot from your combined CA certificate until further notice to maintain connectivity.**
+There are no code or application changes required on the client side. If you follow our certificate update recommendation below, you'll still be able to continue to connect as long as the **BaltimoreCyberTrustRoot certificate isn't removed** from the combined CA certificate. **We recommend that you don't remove the BaltimoreCyberTrustRoot from your combined CA certificate until further notice, to maintain connectivity.**
## Do I need to make any changes to client certificates?
-By default, PostgreSQL will not perform any verification of the server certificate. This means that it is still theoretically possible to spoof the server identity (for example by modifying a DNS record or by taking over the server IP address) without the client knowing. In order to prevent any possibility spoofing, SSL certificate verification on the client must be used. Such verification can be set via application client connection string [**ssl mode**](https://www.postgresql.org/docs/13/libpq-ssl.html) value - **verify-ca** or **verify-full**. If these ssl-mode values are chosen you should follow directions in next section.
+By default, PostgreSQL doesn't perform any verification of the server certificate. This means that it's still theoretically possible to spoof the server identity (for example, by modifying a DNS record or by taking over the server IP address) without the client knowing. To prevent any possibility of spoofing, SSL certificate verification on the client must be used. Such verification can be set via the application client connection string [**ssl mode**](https://www.postgresql.org/docs/13/libpq-ssl.html) value: **verify-ca** or **verify-full**. If these sslmode values are chosen, you should follow the directions in the next section.
### Client Certificate Update Recommendation
* Download the BaltimoreCyberTrustRoot and DigiCertGlobalRootG2 root CA certificates from the following links:
  * https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
  * https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
-* Optionally, to prevent future disruption, it is also recommended to add the following roots to the trusted store:
+* Optionally, to prevent future disruption, it's also recommended to add the following roots to the trusted store:
  * [DigiCert Global Root G3](https://www.digicert.com/kb/digicert-root-certificates.htm) (thumbprint: 7e04de896a3e666d00e687d33ffad93be83d349e)
  * [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) (thumbprint: 73a5e64a3bff8316ff0edccc618a906e4eae4d74)
  * [Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) (thumbprint: 999a64c37ff47d9fab95f14769891460eec4c3c5)
By default, PostgreSQL will not perform any verification of the server certifica
* In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem.

> [!NOTE]
-> Please don't drop or alter **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it is safe for them to drop the Baltimore certificate.
+> Please don't drop or alter the **Baltimore certificate** until the cert change is made. We'll send a communication once the change is done, after which it's safe to drop the Baltimore certificate.
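As a minimal sketch of building the combined CA file referenced above, assuming both PEM files were downloaded from the preceding links into the working directory:

```bash
# Append both root certificates into one combined CA file; the output
# file name is a placeholder you can choose freely.
cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-root-ca.pem
```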
## What if we removed the BaltimoreCyberTrustRoot certificate?
-You will start to connectivity errors while connecting to your Azure Database for PostgreSQL server. You will need to configure SSL with [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
+You may start receiving connectivity errors while connecting to your Azure Database for PostgreSQL server. You need to configure SSL with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity.
## Frequently asked questions
You can identify whether your connections verify the root certificate by reviewi
- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you don't need to update certificates.
- If your connection string doesn't specify sslmode, you don't need to update certificates.
-If you are using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. To understand PostgreSQL sslmode review the [SSL mode descriptions](https://www.postgresql.org/docs/11/libpq-ssl.html#ssl-mode-descriptions) in PostgreSQL documentation.
+If you are using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. To understand PostgreSQL sslmode, review the [SSL mode descriptions](https://www.postgresql.org/docs/11/libpq-ssl.html#ssl-mode-descriptions) in PostgreSQL documentation.
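For contrast, a minimal sketch of a connection string that does verify the root certificate and therefore needs the updated CA file; the server name, user, and file path are placeholders:

```bash
# sslmode=verify-full checks the server certificate against the local root CA
# file, so that file must contain the new DigiCertGlobalRootG2 root.
psql "host=myserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@myserver sslmode=verify-full sslrootcert=combined-root-ca.pem"
```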
### 4. What is the impact if using App Service with Azure Database for PostgreSQL?
For Azure app services, connecting to Azure Database for PostgreSQL, we can have
### 5. What is the impact if using Azure Kubernetes Services (AKS) with Azure Database for PostgreSQL?
-If you are trying to connect to the Azure Database for PostgreSQL using Azure Kubernetes Services (AKS), it is similar to access from a dedicated customers host environment. Refer to the steps [here](../../aks/ingress-own-tls.md).
+If you're trying to connect to the Azure Database for PostgreSQL using Azure Kubernetes Services (AKS), it's similar to accessing from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-own-tls.md).
### 6. What is the impact if using Azure Data Factory to connect to Azure Database for PostgreSQL?
For connectors using the Azure Integration Runtime, the connector uses certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, and therefore no action is needed.
-For connector using Self-hosted Integration Runtime where you explicitly include the path to SSL cert file in your connection string, you will need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
+For connectors using the Self-hosted Integration Runtime, where you explicitly include the path to the SSL certificate file in your connection string, you need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it.
### 7. Do I need to plan a database server maintenance downtime for this change?
No. Since the change here is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change.
### 8. If I create a new server after November 30, 2022, will I be impacted?
-For servers created after November 30, 2022, you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) together with new [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) root certificates in your database client SSL certificate store for your applications to connect using SSL.
+For servers created after November 30, 2022, you'll continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) together with new [DigiCertGlobalRootG2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) root certificates in your database client SSL certificate store for your applications to connect using SSL.
### 9. How often does Microsoft update their certificates or what is the expiry policy?
-These certificates used by Azure Database for PostgreSQL are provided by trusted Certificate Authorities (CA). So the support of these certificates is tied to the support of these certificates by CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025 so Microsoft will need to perform a certificate change before the expiry. Also in case if there are unforeseen bugs in these predefined certificates, Microsoft will need to make the certificate rotation at the earliest similar to the change performed on February 15, 2021 to ensure the service is secure and compliant at all times.
+The certificates used by Azure Database for PostgreSQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft needs to perform a certificate change before the expiry. Also, if there are unforeseen bugs in these predefined certificates, Microsoft needs to rotate the certificates as soon as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times.
-### 10. If I am using read replicas, do I need to perform this update only on the primary server or the read replicas?
+### 10. If I'm using read replicas, do I need to perform this update only on the primary server, or also on the read replicas?
-Since this update is a client-side change, if the client used to read data from the replica server, you will need to apply the changes for those clients as well.
+Since this update is a client-side change, if a client reads data from the replica server, you need to apply the changes for those clients as well.
### 11. Do we have a server-side query to verify if SSL is being used?
No. There's no action needed if your certificate file already has the **DigiCert
### 13. How can I check the certificate that is sent by the server?
-There are many tools that you can use. For example, DigiCert has a handy [tool](https://www.digicert.com/help/) that will show you the certificate chain of any server name. (This tool will only work with publicly accessible server; it cannot connect to server that is contained in a virtual network (VNET)).
-Another tool you can use is OpenSSL in the command line, you can use the syntax below:
+There are many tools that you can use. For example, DigiCert has a handy [tool](https://www.digicert.com/help/) that shows you the certificate chain of any server name. (This tool works only with publicly accessible servers; it can't connect to a server that's contained in a virtual network (VNET).)
+Another tool you can use is OpenSSL on the command line, with the following syntax to check certificates:
```bash
openssl s_client -showcerts -connect <your-postgresql-server-name>:443
```
### 14. What if I have further questions?
-If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help please create a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md):
+If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, please create a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md):
* For *Issue type*, select *Technical*.
* For *Subscription*, select your *subscription*.
* For *Service*, select *My Services*, then select *Azure Database for PostgreSQL - Single Server*.
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-arm-template.md
Title: 'Quickstart: Create an Azure DB for PostgreSQL - ARM template'
+ Title: 'Quickstart: Create an Azure Database for PostgreSQL - ARM template'
description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server by using an Azure Resource Manager template.
private-5g-core Azure Stack Edge Virtual Machine Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-virtual-machine-sizing.md
The following table contains information about the VMs that Azure Private 5G Cor
|||||||
| Management Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 80 GB | Management Control Plane to create Kubernetes clusters |
| AP5GC Cluster Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 128 GB | Control Plane of the Kubernetes cluster used for AP5GC |
-| AP5GC Cluster Node VM | Standard_F16s_HPN | 16 | 32 | Ephemeral - 128 GB <br>Persistent - 102 GB | AP5GC workload node |
+| AP5GC Cluster Node VM | Standard_F16s_HPN | 16 | 32 | Ephemeral - 128 GB <br> Persistent - 102 GB | AP5GC workload node |
+
+## Remaining usable resources on ASE Pro GPU
+
+The following resources are available within ASE after deploying AP5GC. You can use these resources, for example, to deploy additional virtual machines or storage accounts.
+
+| Resource | Value |
+|--|--|
+| vCPUs | 16 |
+| Memory | 56 GB |
+| Storage | ~3.75 GB |
private-5g-core Modify Site Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-site-plan.md
# Modify a site plan
-The *site plan* determines the throughput and the number of radio access network (RAN) connections for each site, as well as the number of devices that each network supports. The plan you selected when creating the site can be updated to support your deployment requirements as they change. In this how-to guide, you'll learn how to modify a site plan using the Azure portal.
+The *site plan* determines an allowance for the throughput and number of radio access network (RAN) connections for each site, as well as the number of devices that each network supports. The plan you selected when creating the site can be updated to support your deployment requirements as they change. In this how-to guide, you'll learn how to modify a site plan using the Azure portal.
## Prerequisites
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
-- Verify pricing and charges associated with the site plan to which you want to move.
+- Verify pricing and charges associated with the site plan to which you want to move. See the [Azure Private 5G Core Pricing page](https://azure.microsoft.com/pricing/details/private-5g-core/) for pricing information.
## Choose the new site plan
Use the following table to choose the new site plan that will best fit your requirements.
-| Site Plan | Throughput | Activated SIMs | RANs |
+| Site Plan | Licensed Throughput | Licensed Activated SIMs | Licensed RANs |
|||||
| G0 | 100 Mbps | 20 | 2 |
| G1 | 1 Gbps | 100 | 5 |
private-link Inspect Traffic With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md
Azure Firewall filters traffic using either:
* [FQDN in application rules](../firewall/features.md#application-fqdn-filtering-rules) for HTTP, HTTPS, and MSSQL.

> [!IMPORTANT]
-> The use of application rules over network rules is recommended when inspecting traffic destined to private endpoints in order to maintain flow symmetry. If network rules are used, or an NVA is used instead of Azure Firewall, SNAT must be configured for traffic destined to private endpoints.
+> The use of application rules over network rules is recommended when inspecting traffic destined to private endpoints in order to maintain flow symmetry. If network rules are used, or an NVA is used instead of Azure Firewall, SNAT must be configured for traffic destined to private endpoints in order to maintain flow symmetry.
> [!NOTE]
> SQL FQDN filtering is supported in [proxy-mode](/azure/azure-sql/database/connectivity-architecture#connection-policy) only (port 1433). **Proxy** mode can result in more latency compared to *redirect*. If you want to continue using redirect mode, which is the default for clients connecting within Azure, you can filter access using FQDN in firewall network rules.
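As a minimal sketch of such an application rule, using classic Azure Firewall rules via the Azure CLI; every name, address, and FQDN below is a placeholder:

```azurecli
# Assumes the azure-firewall CLI extension: az extension add --name azure-firewall
# Allows HTTPS from the client subnet to a private endpoint's FQDN.
az network firewall application-rule create \
  --resource-group myRG \
  --firewall-name myFirewall \
  --collection-name PrivateEndpointRules \
  --name AllowSqlPrivateEndpoint \
  --priority 100 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses 10.0.1.0/24 \
  --target-fqdns myserver.database.windows.net
```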
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
To explore how Azure App Service can bolster the resiliency of your application
## Availability zone support
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if the one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability in the case of a local zone failure. When one zone experiences a failure, the remaining two zones support all regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
Azure App Service Environment can be deployed across [availability zones (AZ)](../reliability/availability-zones-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
To prepare for availability zone failure, you should over-provision capacity of
### Zone down experience
-Traffic is routed to all of your available App Service instances. In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances and spread traffic as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three. It's also important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed since back filling lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section.
+Traffic is routed to all of your available App Service instances. In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances and spread traffic as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three.
+
+>[!NOTE]
+>There's no guarantee that requests for additional instances in a zone-down scenario will succeed. The back filling of lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section.
Applications that are deployed in an App Service plan that has availability zones enabled will continue to run and serve traffic even if other zones in the same region suffer an outage. However, it's possible that non-runtime behaviors, including App Service plan scaling, application creation, application configuration, and application publishing, may still be impacted by an outage in other availability zones. Zone redundancy for App Service plans only ensures continued uptime for deployed applications.
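A minimal sketch of creating a zone-redundant App Service plan with the Azure CLI, assuming a region with availability zone support; the resource names are placeholders:

```azurecli
# --zone-redundant requires a supported region and a Premium v2/v3 SKU;
# three workers spread the plan across the three zones.
az appservice plan create \
  --resource-group myResourceGroup \
  --name myZonePlan \
  --sku P1v3 \
  --number-of-workers 3 \
  --zone-redundant
```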
reliability Reliability Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-batch.md
+
+ Title: Reliability in Azure Batch
+description: Learn about reliability in Azure Batch
+++++ Last updated : 03/09/2023++
+<!--#Customer intent: I want to understand reliability support in Azure Batch so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
++
+# Reliability in Azure Batch
+
+This article describes reliability support in Azure Batch and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and links to information on [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover).
++
+## Availability zone support
+
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability in the case of a local zone failure. When one zone experiences a failure, the remaining two zones support all regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](availability-zones-service-support.md).
+
+There are three types of Azure services that support availability zones: zonal, zone-redundant, and always-available services. You can learn more about these types of services and how they promote resiliency in [Azure services with availability zone support](availability-zones-service-support.md#azure-services-with-availability-zone-support).
+
+Batch maintains parity with Azure on supporting availability zones.
+
+### Prerequisites
+
+- For [user subscription mode Batch accounts](../batch/accounts.md#batch-accounts), make sure that the subscription in which you're creating your pool doesn't have a zone offer restriction on the requested VM SKU. To check whether your subscription has any restrictions, call the [Resource Skus List API](/rest/api/compute/resource-skus/list?tabs=HTTP) and check the `ResourceSkuRestrictions`. If a zone restriction exists, you can submit a support ticket to remove the zone restriction.
+
+- Because InfiniBand doesn't support inter-zone communication, you can't create a pool with a zonal policy if it has inter-node communication enabled and uses a [VM SKU that supports InfiniBand](../virtual-machines/workloads/hpc/enable-infiniband.md).
+
+- To use the zonal option, your pool must be created in an [Azure region with availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+
+- To allocate your Batch pool across availability zones, the Azure region in which the pool was created must support the requested VM SKU in more than one zone. To validate that the region supports the requested VM SKU in more than one zone, call the [Resource Skus List API](/rest/api/compute/resource-skus/list?tabs=HTTP) and check the `locationInfo` field of `resourceSku`. Ensure that more than one zone is supported for the requested VM SKU. You can also use the [Azure CLI](/rest/api/compute/resource-skus/list?tabs=CLI) to list all available Resource SKUs with the following command:
+
+ ```azurecli
+ az vm list-skus
+ ```
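+
+ To narrow the output, you can filter by location and SKU and list only zone-enabled offerings; as a sketch, the region and SKU below are placeholders:
+
+ ```azurecli
+ az vm list-skus --location westus2 --size Standard_D2s_v3 --zone --output table
+ ```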
++
+### Create an Azure Batch pool across availability zones
+
+For examples on how to create a Batch pool across availability zones, see [Create an Azure Batch pool across Availability Zones](/azure/batch/create-pool-availability-zones).
+
+Learn more about creating Batch accounts with the [Azure portal](../batch/batch-account-create-portal.md), the [Azure CLI](../batch/scripts/batch-cli-sample-create-account.md), [PowerShell](../batch/batch-powershell-cmdlets-get-started.md), or the [Batch management API](../batch/batch-management-dotnet.md).
+
+### Zone down experience
+
+During a zone down outage, the nodes within that zone become unavailable. Any nodes within that same node pool from other zone(s) aren't impacted and continue to be available.
+
+Azure Batch doesn't reallocate or create new nodes to compensate for nodes that have gone down due to the outage. You need to add more nodes to the node pool; they're then allocated from the remaining healthy zones.
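+
+For example, a minimal sketch of growing a pool with the Azure CLI after a zone outage; the account, pool ID, and node count are placeholders:
+
+```azurecli
+# Authenticate the CLI against the Batch account first
+az batch account login --resource-group myrg --name mybatch
+
+# Resize the pool; replacement nodes are allocated from the remaining healthy zones
+az batch pool resize --pool-id mypool --target-dedicated-nodes 12
+```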
+
+### Fault tolerance
+
+To prepare for a possible availability zone failure, you should over-provision capacity of service to ensure that the solution can tolerate 1/3 loss of capacity and continue to function without degraded performance during zone-wide outages. Since the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances.
++
+## Disaster recovery: cross region failover
+
+Azure Batch is available in all Azure regions. However, when a Batch account is created, it must be associated with one specific region. All subsequent operations for that Batch account only apply to that region. For example, pools and associated virtual machines (VMs) are created in the same region as the Batch account.
+
+When designing an application that uses Batch, you must consider the possibility that Batch may not be available in a region. It's possible to encounter a rare situation where there's a problem with the region as a whole, the entire Batch service in the region, or your specific Batch account.
+
+If the application or solution using Batch must always be available, then it should be designed to either failover to another region or always have the workload split between two or more regions. Both approaches require at least two Batch accounts, with each account located in a different region.
+
+You're responsible for setting up cross-region disaster recovery with Azure Batch. If you run multiple Batch accounts across specific regions and take advantage of availability zones, your application can meet your disaster recovery objectives when one of your Batch accounts becomes unavailable.
+
+When providing the ability to failover to an alternate region, all components in a solution must be considered; it's not sufficient to simply have a second Batch account. For example, in most Batch applications, an Azure storage account is required. The storage account and Batch account must be in the same region for acceptable performance.
+
+Consider the following points when designing a solution that can failover:
+
+- Precreate all required services in each region, such as the Batch account and the storage account; a minimal CLI sketch follows this list. There's often no charge for having accounts created, and charges accrue only when the account is used or when data is stored.
+
+- Make sure ahead of time that the [appropriate quotas](/azure/batch/batch-quota-limit) are set for all **user subscription** Batch accounts, to allocate the required number of cores using the Batch account.
+
+- Use templates and/or scripts to automate the deployment of the application in a region.
+
+- Keep application binaries and reference data up to date in all regions. Staying up to date will ensure that the region can be brought online quickly without having to wait for the upload and deployment of files. For example, consider the case where a custom application to install on pool nodes is stored and referenced using Batch application packages. When an update of the application is released, it should be uploaded to each Batch account and referenced by the pool configuration (or make the latest version the default version).
+
+- In the application calling Batch, storage, and any other services, make it easy to switch over clients or the load to different regions.
+
+- Consider frequently switching over to an alternate region as part of normal operation. For example, with two deployments in separate regions, switch over to the alternate region every month.
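+
+A minimal sketch of precreating paired Batch accounts in two regions with the Azure CLI; the resource groups, account names, and regions are placeholders:
+
+```azurecli
+# Primary region account
+az batch account create --name mybatcheast --resource-group myrg-east --location eastus
+
+# Secondary (failover) region account
+az batch account create --name mybatchwest --resource-group myrg-west --location westus
+```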
+
+The duration of time to recover from a disaster depends on the setup you choose. Batch itself is agnostic regarding whether you're using multiple accounts or a single account. In active-active configurations, where two Batch instances are receiving traffic simultaneously, disaster recovery is faster than for an active-passive configuration. Which configuration you choose should be based on business needs (different regions, latency requirements) and technical considerations.
++
+### Single-region geography disaster recovery
+How you implement disaster recovery in Batch is the same, whether you're working in a single-region or multi-region geography. The only differences are which SKU you use for storage, and whether you intend to use the same or different storage account across all regions.
+
+### Disaster recovery testing
+
+You should perform your own disaster recovery testing of your Batch enabled solution. It's considered a best practice to enable easy switching between client and service load across different regions.
+
+Testing your disaster recovery plan for Batch can be as simple as alternating Batch accounts. For example, you could rely on a single Batch account in a specific region for one operational day. Then, on the next day, you could switch to a second Batch account in a different region. Disaster recovery is primarily managed on the client side. This multiple-account approach to disaster recovery takes care of RTO and RPO expectations in either single-region or multiple-region geographies.
+
+### Capacity and proactive disaster recovery resiliency
+
+Microsoft and its customers operate under the Shared Responsibility model. Microsoft is responsible for platform and infrastructural resiliency. You are responsible for addressing disaster recovery for any specific service you deploy and control. To ensure that recovery is proactive:
+
+- You should always predeploy secondaries. The predeployment of secondaries is necessary because there's no guarantee of capacity at time of impact for those who haven't preallocated such resources.
+
+- Precreate all required services in each region, such as your Batch accounts and associated storage accounts. There's no charge for creating new accounts; charges accrue only when the account is used or when data is stored.
+
+- Make sure [appropriate quotas](../batch/batch-quota-limit.md) are set on all subscriptions ahead of time, so you can allocate the required number of cores using the Batch account. As with other Azure services, there are limits on certain resources associated with the Batch service. Many of these limits are default quotas applied by Azure at the subscription or account level. Keep these quotas in mind as you design and scale up your Batch workloads.
++
+>[!NOTE]
+>If you plan to run production workloads in Batch, you may need to increase one or more of the quotas above the default. To raise a quota, you can request a quota increase at no charge. For more information, see [Request a quota increase](../batch/batch-quota-limit.md#increase-a-quota).
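+
+A minimal sketch of inspecting the quotas on an existing Batch account with the Azure CLI; the account and resource group names are placeholders, and the property names assume the current Batch account schema:
+
+```azurecli
+az batch account show --name mybatch --resource-group myrg \
+    --query "{dedicatedCores:dedicatedCoreQuota, lowPriorityCores:lowPriorityCoreQuota, pools:poolQuota}"
+```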
+
+#### Storage
+
+You must configure Batch storage to ensure data is backed up across regions; by default, this is the customer's responsibility. Most Batch solutions use Azure Storage for storing [resource files](../batch/resource-files.md) and output files. For example, your Batch tasks (including standard tasks, start tasks, job preparation tasks, and job release tasks) typically specify resource files that reside in a storage account. Storage accounts also store data that is processed and any output data that is generated. Understanding possible data loss across the regions of your service operations is an important consideration. You must also confirm whether data is rewritable or read-only.
+
+Batch supports the following types of Azure Storage accounts:
+- General-purpose v2 (GPv2) accounts
+- General-purpose v1 (GPv1) accounts
+- Blob storage accounts (currently supported for pools in the Virtual Machine configuration)
+
+For more information about storage accounts, see [Azure storage account overview](../storage/common/storage-account-overview.md).
+
+You can associate a storage account with your Batch account when you create the account or do this step later.
+
+If you're setting up a separate storage account for each region your service is available in, you must use zone-redundant storage (ZRS) accounts. Use geo-zone-redundant storage (GZRS) accounts if you're using the same storage account across multiple paired regions. For geographies that contain a single region, you must create a zone-redundant storage (ZRS) account because GZRS isn't available.
+
+Capacity planning is another important consideration with storage and should be addressed proactively. Consider your cost and performance requirements when choosing a storage account. For example, the GPv2 and blob storage account options support greater [capacity and scalability limits](https://azure.microsoft.com/blog/announcing-larger-higher-scale-storage-accounts/) compared with GPv1. (Contact Azure Support to request an increase in a storage limit.) These account options can improve the performance of Batch solutions that contain a large number of parallel tasks that read from or write to the storage account.
+
+When a storage account is linked to a Batch account, think of it as the autostorage account. An autostorage account is required if you plan to use the [application packages](../batch/batch-application-packages.md) capability, as it's used to store the application package .zip files. An autostorage account can also be used for [task resource files](../batch/resource-files.md#storage-container-name-autostorage); since the autostorage account is already linked to the Batch account, this avoids the need for shared access signature (SAS) URLs to access the resource files.
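+
+A minimal sketch of creating a zone-redundant (ZRS) storage account and linking it to a Batch account as its autostorage account; all names are placeholders:
+
+```azurecli
+# Create a ZRS general-purpose v2 storage account
+az storage account create --name mybatchstorage --resource-group myrg --location eastus --sku Standard_ZRS --kind StorageV2
+
+# Link it to the Batch account as the autostorage account
+az batch account set --name mybatch --resource-group myrg --storage-account mybatchstorage
+```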
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](/azure/availability-zones/overview)
reliability Reliability Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-bot.md
For more information on deploying bots with local data residency and regional co
## Availability zone support
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if the one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability in the case of a local zone failure. When one zone experiences a failure, the remaining two zones support all regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
For regional bots, Azure Bot Service supports zone redundancy by default. You don't need to set it up or reconfigure for availability zone support.
### Zone down experience
-During a zone-wide outage, the customer should expect a brief degradation of performance, until the service's self-healing re-balances underlying capacity to adjust to healthy zones. This is not dependent on zone restoration; it is expected that the Microsoft-managed service self-healing state will compensate for a lost zone, leveraging capacity from other zones.
-
+During a zone-wide outage, the customer should expect a brief degradation of performance, until the service's self-healing rebalances underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; it's expected that the Microsoft-managed service self-healing state compensates for a lost zone, using capacity from other zones.
### Cross-region disaster recovery in multi-region geography
-Azure Bot Service runs in active-active mode for both global and regional services. When an outage occurs, you don't need to detect errors or manage the service. Azure Bot Service automatically performs auto-failover and auto recovery in a multi-region geographical architecture. For the EU bot regional service, Azure Bot Service provides two full regions inside Europe with active/active replication to ensure redundancy. For the global bot service, all available regions/geographies can be served as the global footprint.
-
+Azure Bot Service runs in active-active mode for both global and regional services. When an outage occurs, you don't need to detect errors or manage the service. Azure Bot Service automatically performs autofailover and auto recovery in a multi-region geographical architecture. For the EU bot regional service, Azure Bot Service provides two full regions inside Europe with active/active replication to ensure redundancy. For the global bot service, all available regions/geographies can be served as the global footprint.
## Next steps
reliability Reliability Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-containers.md
This article describes reliability support in Azure Container Instances (ACI) an
## Availability zone support
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](availability-zones-service-support.md).
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability in the case of a local zone failure. When one zone experiences a failure, the remaining two zones support all regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](availability-zones-service-support.md).
There are three types of Azure services that support availability zones: zonal, zone-redundant, and always-available services. You can learn more about these types of services and how they promote resiliency in [Azure services with availability zone support](availability-zones-service-support.md#azure-services-with-availability-zone-support).
reliability Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md
This article describes reliability support in Azure Data Manager for Energy, and
## Availability zone support
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If there's a local zone failure, availability zones are designed so that if the one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](availability-zones-overview.md).
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability in the case of a local zone failure. When one zone experiences a failure, the remaining two zones support all regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](availability-zones-overview.md).
Azure Data Manager for Energy Preview supports zone-redundant instances by default, and there's no setup required.
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
Availability zone support for Azure Functions is available on both Premium (Elas
## Availability zone support
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the case of a local zone failure, availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](availability-zones-service-support.md).
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability in the case of a local zone failure. When one zone experiences a failure, the remaining two zones support all regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Availability zone service and regional support](availability-zones-service-support.md).
There are three types of Azure services that support availability zones: zonal, zone-redundant, and always-available services. You can learn more about these types of services and how they promote resiliency in [Azure services with availability zone support](availability-zones-service-support.md#azure-services-with-availability-zone-support).
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance is a collection of service-specific reliability guide
[Azure App Configuration](../azure-app-configuration/faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-does-app-configuration-ensure-high-data-availability)|
[Azure App Service](./reliability-app-service.md)|
[Azure Application Gateway (V2)](../application-gateway/application-gateway-autoscaling-zone-redundant.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Batch](../batch/create-pool-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Batch](reliability-batch.md)|
[Azure Bot Service](reliability-bot.md)|
[Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Cognitive Search](../search/search-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
remote-rendering Powershell Example Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/samples/powershell-example-scripts.md
Next to the `.ps1` files there's an `arrconfig.json` that you need to fill out:
"accountSettings": { "arrAccountId": "<fill in the account ID from the Azure Portal>", "arrAccountKey": "<fill in the account key from the Azure Portal>",
- "region": "<select from available regions>"
+ "arrAccountDomain": "<select from available regions or specify the full url>"
}, "renderingSessionSettings": {
+ "remoteRenderingDomain": "<select from available regions or specify the full url>",
"vmSize": "<select standard or premium>", "maxLeaseTime": "<hh:mm:ss>" },
Next to the `.ps1` files there's an `arrconfig.json` that you need to fill out:
### accountSettings
For `arrAccountId` and `arrAccountKey`, see [Create an Azure Remote Rendering account](../how-tos/create-an-account.md).
-For `region` see the [list of available regions](../reference/regions.md).
+The `arrAccountDomain` should be a region from the [list of available regions](../reference/regions.md). If you're running on a nonpublic Azure region, you have to specify the full URL to the account authentication service in your region.
### renderingSessionSettings
This structure must be filled out if you want to run **RenderingSession.ps1**:
- **vmSize:** Selects the size of the virtual machine. Select [*standard*](../reference/vm-sizes.md) or [*premium*](../reference/vm-sizes.md). Shut down rendering sessions when you don't need them anymore.
-- **maxLeaseTime:** The duration for which you want to lease the VM. It will be shut down when the lease expires. The lease time can be extended later (see below).
+- **maxLeaseTime:** The duration for which you want to lease the VM. The VM shuts down when the lease expires. The lease time can be extended later (see [here](#change-session-properties)).
+- **remoteRenderingDomain:** The region where the remote rendering VM resides.
+  - Can differ from the arrAccountDomain, but should still be a region from the [list of available regions](../reference/regions.md).
+  - If you're running on a nonpublic Azure region, you have to specify the full URL to the remote rendering service in your region.
### assetConversionSettings
Normal usage with a fully filled out arrconfig.json:
.\RenderingSession.ps1
```
-The script will call the [session management REST API](../how-tos/session-rest-api.md) to spin up a rendering VM with the specified settings. On success, it will retrieve the *sessionId*. Then it will poll the session properties until the session is ready or an error occurred.
+The script calls the [session management REST API](../how-tos/session-rest-api.md) to spin up a rendering VM with the specified settings. On success, it retrieves the *sessionId*. Afterwards, it polls the session properties until the session is ready or an error occurs.
To use an **alternative config** file:
You can **override individual settings** from the config file:
```PowerShell
-.\RenderingSession.ps1 -Region <region> -VmSize <vmsize> -MaxLeaseTime <hh:mm:ss>
+.\RenderingSession.ps1 -ArrAccountDomain <arrAccountDomain> -RemoteRenderingDomain <remoteRenderingDomain> -VmSize <vmsize> -MaxLeaseTime <hh:mm:ss>
```
To only **start a session without polling**, you can use:
At the moment, we only support changing the maxLeaseTime of a session.
This script is used to convert input models into the Azure Remote Rendering specific runtime format.

> [!IMPORTANT]
-> Make sure you have filled out the *accountSettings* and *assetConversionSettings* sections in arrconfig.json.
+> Make sure you have filled out the *accountSettings* and *assetConversionSettings* sections, and the *remoteRenderingDomain* option in the *renderingSessionSettings* in arrconfig.json.
The script demonstrates the two options to use storage accounts with the service:
Using a linked storage account is the preferred way to use the conversion servic
.\Conversion.ps1
```
-1. Upload all files contained in the `assetConversionSettings.modelLocation` to the input blob container under the given `inputFolderPath`..
+1. Upload all files contained in the `assetConversionSettings.modelLocation` to the input blob container under the given `inputFolderPath`.
1. Call the [model conversion REST API](../how-tos/conversion/conversion-rest-api.md) to kick off the [model conversion](../how-tos/conversion/model-conversion.md).
1. Poll the conversion status until the conversion succeeded or failed.
1. Output details of the converted file location (storage account, output container, file path in the container).
You can **override individual settings** from the config file using the followin
* **Id:** ConversionId used with GetConversionStatus
* **ArrAccountId:** arrAccountId of accountSettings
* **ArrAccountKey:** override for arrAccountKey of accountSettings
-* **Region:** override for region of accountSettings
+* **ArrAccountDomain:** override for arrAccountDomain of accountSettings
+* **RemoteRenderingDomain:** override for remoteRenderingDomain of renderingSessionSettings
* **ResourceGroup:** override for resourceGroup of assetConversionSettings
* **StorageAccountName:** override for storageAccountName of assetConversionSettings
* **BlobInputContainerName:** override for blobInputContainer of assetConversionSettings
You can **override individual settings** from the config file using the followin
* **OutputFolderPath:** override for the outputFolderPath of assetConversionSettings
* **OutputAssetFileName:** override for outputAssetFileName of assetConversionSettings
-For example you can combine a number of the given options like this:
+For example, you can combine the given options like this:
```PowerShell
.\Conversion.ps1 -LocalAssetDirectoryPath "C:\\models\\box" -InputAssetPath box.fbx -OutputFolderPath another/converted/box -OutputAssetFileName newConversionBox.arrAsset
```
Only upload data from the given LocalAssetDirectoryPath.
Only start the conversion process of a model already uploaded to blob storage (don't run Upload, don't poll the conversion status)
-The script will return a *conversionId*.
+The script returns a *conversionId*.
```PowerShell
.\Conversion.ps1 -ConvertAsset
```
sap Manage With Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-with-azure-rbac.md
There are [Azure built-in roles](../../role-based-access-control/built-in-roles.
- The **Azure Center for SAP solutions reader** role has permissions to view all VIS resources.

> [!NOTE]
-> If you're creating a new user-assigned managed identity when you deploy a new SAP system or register an existing system, the user must also have the **Managed Identity Contributor** role. This role is required to make role assignments to a user-assigned managed identity.
+> To use an existing user-assigned managed identity for deploying a new SAP system or registering an existing system, the user must also have the **Managed Identity Operator** role. This role is required to assign a user-assigned managed identity to the Virtual Instance for SAP solutions resource.
+
+> [!NOTE]
+> If you're creating a new user-assigned managed identity when you deploy a new SAP system or register an existing system, the user must also have the **Managed Identity Contributor** and **Managed Identity Operator** roles. These roles are required to create a user-assigned identity, make necessary role assignments to it and assign it to the VIS resource.
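A minimal sketch of granting one of these roles with the Azure CLI; the assignee, subscription ID, and resource group are placeholders:

```azurecli
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Managed Identity Operator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```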
## Deploy infrastructure for new SAP system
To deploy infrastructure for a new SAP system, a *user* and *user-assigned manag
| Built-in roles for *users* |
| - |
| **Azure Center for SAP solutions administrator** |
+| **Managed Identity Operator** |
| Minimum permissions for *users* | | - |
To register an existing SAP system and manage that system with Azure Center for
| Built-in roles for *users* | | - | | **Azure Center for SAP solutions administrator** |
+| **Managed Identity Operator** |
| Minimum permissions for *users* | | - |
sentinel Top Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/top-workbooks.md
Access workbooks in Microsoft Sentinel under **Threat Management** > **Workbooks
|**Security Alerts** | Provides a Security Alerts dashboard for alerts in your Microsoft Sentinel environment. <br><br>For more information, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md). | |**Security Operations Efficiency** | Intended for security operations center (SOC) managers to view overall efficiency metrics and measures regarding the performance of their team. <br><br>For more information, see [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md). | |**Threat Intelligence** | Provides insights into threat indicators, including type and severity of threats, threat activity over time, and correlation with other data sources, including Office 365 and firewalls. <br><br>For more information, see [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md) and our [TechCommunity blog](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-azure-sentinel-threat-intelligence-workbook/ba-p/2858265). |
-|**Zero Trust (TIC3.0)** | Provides an automated visualization of Zero Trust principles, cross-walked to the [Trusted Internet Connections framework](https://www.cisa.gov/trusted-internet-connections). <br><br>For more information, see the [Zero Trust (TIC 3.0) workbook announcement blog](https://techcommunity.microsoft.com/t5/public-sector-blog/announcing-the-azure-sentinel-zero-trust-tic3-0-workbook/ba-p/2313761). |
+|**Zero Trust (TIC3.0)** | Provides an automated visualization of Zero Trust principles, cross-walked to the [Trusted Internet Connections framework](https://www.cisa.gov/resources-tools/programs/trusted-internet-connections-tic). <br><br>For more information, see the [Zero Trust (TIC 3.0) workbook announcement blog](https://techcommunity.microsoft.com/t5/public-sector-blog/announcing-the-azure-sentinel-zero-trust-tic3-0-workbook/ba-p/2313761). |
sentinel Use Matching Analytics To Detect Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-matching-analytics-to-detect-threats.md
# Use matching analytics to detect threats
-Take advantage of threat intelligence produced by Microsoft to generate high fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. This rule will match Common Event Format (CEF) logs, Syslog data or Windows DNS events with domain, IP and URL threat indicators.
+Take advantage of threat intelligence produced by Microsoft to generate high-fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. Check the prerequisites to see which logs this rule matches indicators against.
> [!IMPORTANT] > Matching analytics is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
One or more of the following data sources must be connected:
- Common Event Format (CEF) - DNS (Preview) - Syslog
+- Office activity logs
+- Azure activity logs
:::image type="content" source="media/use-matching-analytics-to-detect-threats/data-sources.png" alt-text="A screenshot showing the Microsoft Threat Intelligence Analytics rule data source connections.":::
Microsoft Threat Intelligence Analytics matches your logs with domain, IP and U
- **Syslog** events where `Facility == "cron"` ingested into the **Syslog** table will match domain and IPv4 indicators directly from the `SyslogMessage` field.
+- **Office activity logs** ingested into the **OfficeActivity** table will match IPv4 indicators directly from the `ClientIP` field.
+
+- **Azure activity logs** ingested into the **AzureActivity** table will match IPv4 indicators directly from the `CallerIpAddress` field.
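If you want to spot-check these matches yourself, a query along the following lines approximates the rule's behavior (a hedged sketch, not the rule's actual implementation; it assumes the **ThreatIntelligenceIndicator** and **AzureActivity** tables are populated in your workspace):

```kusto
// Sketch: Azure activity records whose caller IP appears among IP threat indicators.
ThreatIntelligenceIndicator
| where isnotempty(NetworkIP)
| join kind=innerunique (
    AzureActivity
    | where isnotempty(CallerIpAddress)
) on $left.NetworkIP == $right.CallerIpAddress
| project TimeGenerated, NetworkIP, CallerIpAddress, OperationNameValue
```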
+ ## Triage an incident generated by matching analytics
service-bus-messaging How To Use Java Message Service 20 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/how-to-use-java-message-service-20.md
To learn more about how to prepare your developer environment for Java on Azure,
To utilize all the features available in the premium tier, add the following library to the build path of the project.
-[Azure-servicebus-jms](https://search.maven.org/artifact/com.microsoft.azure/azure-servicebus-jms)
+[Azure-servicebus-jms](https://central.sonatype.com/artifact/com.microsoft.azure/azure-servicebus-jms/1.0.0)
> [!NOTE]
-> To add the [Azure-servicebus-jms](https://search.maven.org/artifact/com.microsoft.azure/azure-servicebus-jms) to the build path, use the preferred dependency management tool for your project like [Maven](https://maven.apache.org/) or [Gradle](https://gradle.org/).
+> To add the [Azure-servicebus-jms](https://central.sonatype.com/artifact/com.microsoft.azure/azure-servicebus-jms/1.0.0) to the build path, use the preferred dependency management tool for your project like [Maven](https://maven.apache.org/) or [Gradle](https://gradle.org/).
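For example, with Maven the dependency entry would look like the following (the version shown matches the package linked above; adjust it as needed):

```xml
<!-- azure-servicebus-jms dependency for the premium tier JMS features -->
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-servicebus-jms</artifactId>
    <version>1.0.0</version>
</dependency>
```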
## Coding Java applications
For more information on Azure Service Bus and details about Java Message Service
* [Service Bus AMQP 1.0 Developer's Guide](service-bus-amqp-dotnet.md) * [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md) * [Java Message Service API(external Oracle doc)](https://docs.oracle.com/javaee/7/api/javax/jms/package-summary.html)
-* [Learn how to migrate from ActiveMQ to Service Bus](migrate-jms-activemq-to-servicebus.md)
+* [Learn how to migrate from ActiveMQ to Service Bus](migrate-jms-activemq-to-servicebus.md)
service-bus-messaging Jms Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/jms-developer-guide.md
Each connection factory is an instance of `ConnectionFactory`, `QueueConnectionF
To simplify connecting with Azure Service Bus, these interfaces are implemented through `ServiceBusJmsConnectionFactory`, `ServiceBusJmsQueueConnectionFactory` and `ServiceBusJmsTopicConnectionFactory` respectively. > [!IMPORTANT]
-> Java applications leveraging JMS 2.0 API can connect to Azure Service Bus using the connection string, or using a `TokenCredential` for leveraging Azure Active Directory (AAD) backed authentication.
+> Java applications leveraging JMS 2.0 API can connect to Azure Service Bus using the connection string, or using a `TokenCredential` for leveraging Azure Active Directory (AAD) backed authentication. When using AAD backed authentication, ensure to [assign roles and permissions](service-bus-managed-service-identity.md#assigning-azure-roles-for-access-rights) to the identity as needed.
# [System Assigned Managed Identity](#tab/system-assigned-managed-identity-backed-authentication)
Create a [system assigned managed identity](../active-directory/managed-identiti
TokenCredential tokenCredential = new DefaultAzureCredentialBuilder().build(); ```
-The Connection factory can then be instantiated with the below parameters.-
+The Connection factory can then be instantiated with the below parameters.
* Token credential - Represents a credential capable of providing an OAuth token.
- * Connection string - the connection string for the Azure Service Bus Premium tier namespace.
+ * Host - the hostname of the Azure Service Bus Premium tier namespace.
* ServiceBusJmsConnectionFactorySettings property bag, which contains * connectionIdleTimeoutMS - idle connection timeout in milliseconds. * traceFrames - boolean flag to collect AMQP trace frames for debugging. * *other configuration parameters*
-The factory can be created as shown here. The connection string is a required parameter, but the other properties are optional.
+The factory can be created as shown here. The token credential and host are required parameters, but the other properties are optional.
```java String host = "<YourNamespaceName>.servicebus.windows.net";
TokenCredential tokenCredential = new DefaultAzureCredentialBuilder()
.build(); ```
-The Connection factory can then be instantiated with the below parameters.-
+The Connection factory can then be instantiated with the below parameters.
* Token credential - Represents a credential capable of providing an OAuth token.
- * Connection string - the connection string for the Azure Service Bus Premium tier namespace.
+ * Host - the hostname of the Azure Service Bus Premium tier namespace.
* ServiceBusJmsConnectionFactorySettings property bag, which contains * connectionIdleTimeoutMS - idle connection timeout in milliseconds. * traceFrames - boolean flag to collect AMQP trace frames for debugging. * *other configuration parameters*
-The factory can be created as shown here. The connection string is a required parameter, but the other properties are optional.
+The factory can be created as shown here. The token credential and host are required parameters, but the other properties are optional.
+
+```java
+String host = "<YourNamespaceName>.servicebus.windows.net";
+ConnectionFactory factory = new ServiceBusJmsConnectionFactory(tokenCredential, host, null);
+```
+
+# [Service Principal](#tab/service-principal-backed-authentication)
+
+Create a [service principal](authenticate-application.md#register-your-application-with-an-azure-ad-tenant) on Azure, and use this identity to create a `TokenCredential`.
+
+```java
+TokenCredential tokenCredential = new ClientSecretCredentialBuilder()
+                .tenantId("<tenant-id>")
+                .clientId("<client-id>")
+                .clientSecret("<client-secret>")
+                .build();
+```
+
+The Connection factory can then be instantiated with the below parameters.
+ * Token credential - Represents a credential capable of providing an OAuth token.
+ * Host - the hostname of the Azure Service Bus Premium tier namespace.
+ * ServiceBusJmsConnectionFactorySettings property bag, which contains
+ * connectionIdleTimeoutMS - idle connection timeout in milliseconds.
+ * traceFrames - boolean flag to collect AMQP trace frames for debugging.
+ * *other configuration parameters*
+
+The factory can be created as shown here. The token credential and host are required parameters, but the other properties are optional.
```java String host = "<YourNamespaceName>.servicebus.windows.net";
ConnectionFactory factory = new ServiceBusJmsConnectionFactory(tokenCredential,
# [Connection string authentication](#tab/connection-string-authentication)
-The Connection factory can be instantiated with the below parameters -
+The Connection factory can be instantiated with the below parameters.
* Connection string - the connection string for the Azure Service Bus Premium tier namespace. * ServiceBusJmsConnectionFactorySettings property bag, which contains * connectionIdleTimeoutMS - idle connection timeout in milliseconds.
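As a hedged sketch of how the property bag and connection string fit together (the two-argument settings constructor shown here is an assumption based on the parameters listed above; check the library's javadoc):

```java
// Sketch: build the settings bag, then create the factory from a connection string.
ServiceBusJmsConnectionFactorySettings settings =
    new ServiceBusJmsConnectionFactorySettings(20000 /* connectionIdleTimeoutMS */, true /* traceFrames */);
String connectionString = "<ServiceBusPremiumNamespaceConnectionString>";
ConnectionFactory factory = new ServiceBusJmsConnectionFactory(connectionString, settings);
```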
service-connector How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-manage-authentication.md
+
+ Title: Manage authentication in Service Connector
+description: Learn how to select and manage authentication parameters in Service Connector.
+++ Last updated : 03/07/2023+++
+# Manage authentication within Service Connector
+
+In this guide, learn about the different authentication options available in Service Connector, and how to customize environment variables.
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free).
+- An Azure App Service, Azure Container Apps, or Azure Spring Apps instance.
+- This guide assumes that you already know the basics of connecting services using Service Connector. To review our quickstarts, go to [App Service](quickstart-portal-app-service-connection.md), [Container Apps](quickstart-portal-container-apps.md), or [Azure Spring Apps](quickstart-portal-spring-cloud-connection.md).
+
+## Start creating a new connection
+
+1. Within your App Service, Container Apps, or Azure Spring Apps instance, open Service Connector and fill out the form in the **Basics** tab with the required information about your compute and target services.
+1. Select **Next : Authentication**.
+
+## Select an authentication option
+
+Select one of the four different authentication options offered by Service Connector to connect your Azure services together:
+
+- **System assigned managed identity**: provides an automatically managed identity tied to the resource in Azure Active Directory (Azure AD)
+- **User assigned managed identity**: provides an identity that can be used on multiple resources
+- **Connection string**: provides one or multiple key-value pairs with secrets or tokens
+- **Service principal**: creates a service principal that defines the access policy and permissions for the user/application in the Azure AD tenant
+
+Service Connector offers the following authentication options:
+
+| Target resource | System assigned managed identity | User assigned managed identity | Connection string | Service principal |
+|-|--|--|--|--|
+| App Configuration | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Azure SQL | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Azure Cache for Redis | | | ![yes icon](./media/green-check.png) | |
+| Azure Cache for Redis Enterprise | | | ![yes icon](./media/green-check.png) | |
+| Azure Cosmos DB - Cassandra | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Azure Cosmos DB - Gremlin | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Azure Cosmos DB for MongoDB | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Azure Cosmos DB - Table | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Azure Cosmos DB - SQL | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Blob Storage | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Confluent Cloud | | | ![yes icon](./media/green-check.png) | |
+| Event Hubs | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Key Vault | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| MySQL single server | ![yes icon](./media/green-check.png) | | | |
+| MySQL flexible server | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Postgres single server | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Postgres flexible server | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | |
+| Storage Queue | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Storage File | | | ![yes icon](./media/green-check.png) | |
+| Storage Table | | | ![yes icon](./media/green-check.png) | |
+| Service Bus | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| SignalR | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Web PubSub | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
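If you prefer the Azure CLI over the portal, the same authentication choices surface as parameters on the connection creation commands. For example, the following is a hedged sketch of connecting an App Service app to Blob Storage with a system-assigned managed identity (resource IDs are placeholders; verify the flags with `az webapp connection create storage-blob --help`):

```azurecli
az webapp connection create storage-blob \
    --source-id /subscriptions/<subscription>/resourceGroups/<group>/providers/Microsoft.Web/sites/<app> \
    --target-id /subscriptions/<subscription>/resourceGroups/<group>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default \
    --system-identity
```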
+## Review or update authentication configuration
+
+## [System assigned managed identity](#tab/managed-identity)
+
+When using a system-assigned managed identity, optionally review or update its authentication configuration by following these steps:
+
+1. Select **Advanced** to display more options.
+1. Under **Role**, review the default role selected for your source service or choose another one from the list.
+1. Under **Configuration information**, Service Connector lists a series of configuration settings that will be generated when you create the connection. This list consists of environment variables or application properties. It varies depending on the target resource and authentication method selected. Optionally select the edit button in front of each configuration setting to edit its key.
+1. Select **Done** to confirm.
+
+ :::image type="content" source="./media/manage-authentication/managed-identity-advanced.png" alt-text="Screenshot of the Azure portal, showing advanced authentication configuration for a system-assigned managed identity.":::
+
+## [User assigned managed identity](#tab/user-assigned-identity)
+
+When using a user-assigned managed identity, review or edit its authentication settings by following these steps:
+
+1. Under **Subscription**, select the Azure subscription that contains your user-assigned managed identity.
+1. Under **User assigned managed identity**, select the managed identity you want to use.
+
+ :::image type="content" source="./media/manage-authentication/user-assigned-identity-basic.png" alt-text="Screenshot of the Azure portal, showing basic authentication configuration for a user-assigned managed identity.":::
+
+1. Optionally select **Advanced** to display more options.
+ 1. Under **Role**, review the default role selected for your source service or choose another one from the list.
+ 1. Under **Configuration information**, Service Connector lists a series of configuration settings that will be generated when you create the connection. This list consists of environment variables or application properties and varies depending on the target resource and authentication method selected. Optionally select the edit button in front of each configuration setting to edit its key.
+ 1. Select **Done** to confirm.
+
+ :::image type="content" source="./media/manage-authentication/user-assigned-identity-advanced.png" alt-text="Screenshot of the Azure portal, showing advanced authentication configuration for a user-assigned managed identity.":::
+
+## [Connection string](#tab/connection-string)
+
+When using a connection string, review or edit its authentication settings by following these steps:
+
+1. Optionally select **Store Secret in Key Vault** to save your connection credentials in Azure Key Vault. This option lets you select an existing Key Vault connection from a drop-down list or create a new connection to a new or an existing Key Vault.
+
+ :::image type="content" source="./media/manage-authentication/connection-string-basic-with-key-vault.png" alt-text="Screenshot of the Azure portal, showing basic authentication configuration to authenticate with a connection-string.":::
+
+1. Optionally select **Advanced** to display more options.
+ 1. Under **Configuration information**, Service Connector lists a series of configuration settings that will be generated when you create the connection. This list consists of environment variables or application properties and varies depending on the target resource and authentication method selected. Optionally select the edit button in front of each configuration setting to edit its key.
+ 1. Select **Done** to confirm.
+
+ :::image type="content" source="./media/manage-authentication/connection-string-advanced.png" alt-text="Screenshot of the Azure portal, showing advanced authentication configuration to authenticate with a connection-string.":::
+
+## [Service principal](#tab/service-principal)
+
+When connecting Azure services using a service principal, review or edit authentication settings by following these steps:
+
+1. Choose a service principal by entering an object ID or name and selecting your service principal.
+1. Under **Secret**, enter the secret of the service principal.
+1. Optionally select **Store Secret in Key Vault** to save your connection credentials in Azure Key Vault. This option lets you select an existing Key Vault connection from a drop-down list or create a new connection to a new or an existing Key Vault.
+
+ :::image type="content" source="./media/manage-authentication/service-principal-basic-with-key-vault.png" alt-text="Screenshot of the Azure portal, showing basic authentication configuration to authenticate with a service principal.":::
+
+1. Optionally select **Advanced** to display more options.
+ 1. Under **Configuration information**, Service Connector lists a series of configuration settings that will be generated when you create the connection. This list consists of environment variables or application properties and varies depending on the target resource and authentication method selected. Optionally select the edit button in front of each configuration setting to edit its key.
+ 1. Select **Done** to confirm.
+
+ :::image type="content" source="./media/manage-authentication/service-principal-advanced.png" alt-text="Screenshot of the Azure portal, showing advanced authentication configuration to authenticate with a service principal.":::
+
+1. Select **Review + Create** and then **Create** to finalize the creation of the connection.
+++
+## Check authentication configuration
+
+You can review authentication configuration on the following pages in the Azure portal:
+
+- When creating the connection, select the **Review + Create** tab and check the information listed under **Authentication**.
+
+ :::image type="content" source="./media/manage-authentication/review-authentication.png" alt-text="Screenshot of the Azure portal, showing a summary of connection authentication configuration.":::
+
+- After you've created the connection, in the **Service connector** page, configuration keys are listed.
+ :::image type="content" source="./media/manage-authentication/review-keys-after-creation.png" alt-text="Screenshot of the Azure portal, showing a summary of authentication configuration keys.":::
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Service Connector internals](./concept-service-connector-internals.md)
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md
The following image shows the three types of repository authentication supported
| `Host key algorithm` | No | The algorithm for `hostKey`: one of `ssh-dss`, `ssh-rsa`, `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, and `ecdsa-sha2-nistp521`. (Required if supplying `Host key`). | | `Strict host key checking` | No | Optional value that indicates whether the backend should be ignored if it encounters an error when using the provided `Host key`. Valid values are `true` and `false`. The default value is `true`. |
-> [!NOTE]
-> Application Configuration Service for Tanzu uses RSA keys with SHA-1 signatures for now. If you're using GitHub, for RSA public keys added to GitHub before November 2, 2021, the corresponding private key is supported. For RSA public keys added to GitHub after November 2, 2021, the corresponding private key is not supported, and we suggest using basic authentication instead.
- To validate access to the target URI, select **Validate**. After validation completes successfully, select **Apply** to update the configuration settings. ## Refresh strategies
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
async function createContainer(blobServiceClient, containerName){
A root container, with the specific name `$root`, enables you to reference a blob at the top level of the storage account hierarchy. For example, you can reference a blob _without using a container name in the URI_:
-`https://myaccount.blob.core.windowsJavaScript/default.html`
+`https://myaccount.blob.core.windows.net/default.html`
The root container must be explicitly created or deleted. It isn't created by default as part of service creation. The same code displayed in the previous section can create the root. The container name is `$root`.
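As a minimal sketch (assuming a `blobServiceClient` created as in the get-started article), creating the root container looks like any other container creation:

```javascript
// Sketch: explicitly create the root container.
const rootContainerClient = blobServiceClient.getContainerClient('$root');
await rootContainerClient.createIfNotExists();
```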
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
This article shows how to copy a blob in a storage account using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). It also shows how to abort an asynchronous copy operation. > [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md).
+> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create-javascript.md).
## About copying blobs
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
This article shows how to delete blobs with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). If you've enabled [soft delete for blobs](soft-delete-blob-overview.md), you can restore deleted blobs during the retention period. > [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md).
+> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create-javascript.md).
## Delete a blob
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
Previously updated : 01/30/2023 Last updated : 03/20/2023 ms.devlang: csharp
This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for .NET. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
-[Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [API reference](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Give Feedback](https://github.com/Azure/azure-sdk-for-net/issues)
+[API reference](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-net/issues)
## Prerequisites
This article shows you how to connect to Azure Blob Storage by using the Azure B
## Set up your project
-Open a command prompt and change directory (`cd`) into your project folder. Then, install the Azure Blob Storage client library for .NET package by using the `dotnet add package` command.
+This section walks you through preparing a project to work with the Azure Blob Storage client library for .NET.
+
+From your project directory, install packages for the Azure Blob Storage and Azure Identity client libraries using the `dotnet add package` command. The Azure.Identity package is needed for passwordless connections to Azure services.
```console
-cd myProject
dotnet add package Azure.Storage.Blobs
+dotnet add package Azure.Identity
```

Add these `using` statements to the top of your code file.

```csharp
+using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;
```
+Blob client library information:
- [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs): Contains the primary classes (_client objects_) that you can use to operate on the service, containers, and blobs.
- [Azure.Storage.Blobs.Specialized](/dotnet/api/azure.storage.blobs.specialized): Contains classes that you can use to perform operations specific to a blob type, such as block blobs.
To authorize with Azure AD, you'll need to use a security principal. The type of
An easy and secure way to authorize access and connect to Blob Storage is to obtain an OAuth token by creating a [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) instance. You can then use that credential to create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object.
-The following example creates a `BlobServiceClient` object using `DefaultAzureCredential`:
+The following example creates a `BlobServiceClient` object authorized using `DefaultAzureCredential`:
```csharp
-public static void GetBlobServiceClient(ref BlobServiceClient blobServiceClient, string accountName)
+public BlobServiceClient GetBlobServiceClient(string accountName)
{
- TokenCredential credential = new DefaultAzureCredential();
-
- string blobUri = "https://" + accountName + ".blob.core.windows.net";
+ BlobServiceClient client = new(
+ new Uri($"https://{accountName}.blob.core.windows.net"),
+ new DefaultAzureCredential());
- blobServiceClient = new BlobServiceClient(new Uri(blobUri), credential);
+ return client;
} ```
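As a usage sketch (assuming the method above is available and you're inside an async method; the account name is a placeholder), the returned client can then enumerate containers:

```csharp
// Sketch: call the method above, then list containers in the account.
BlobServiceClient blobServiceClient = GetBlobServiceClient("<storage-account-name>");

await foreach (BlobContainerItem container in blobServiceClient.GetBlobContainersAsync())
{
    Console.WriteLine(container.Name);
}
```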
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
The following example gets a container URL and a blob URL by accessing the clien
## See also - [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md)-- [DownloadStreaming]() - [Get Blob](/rest/api/storageservices/get-blob) (REST API)
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for Java. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
-[API reference documentation](/jav?toc=/azure/storage/blobs/toc.json#blob-samples)
+[API reference](/jav?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-java/issues)
## Prerequisites
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
- # Get started with Azure Blob Storage and JavaScript This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library v12 for JavaScript. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service. The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files.
-[Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Give Feedback](https://github.com/Azure/azure-sdk-for-js/issues)
+[API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-js/issues)
## Prerequisites
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
Last updated 11/30/2022
+ms.devlang: javascript
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for Python. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
-[API reference documentation](/python/api/azure-storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob) | [Package (PyPi)](https://pypi.org/project/azure-storage-blob/) | [Samples](../common/storage-samples-python.md?toc=/azure/storage/blobs/toc.json#blob-samples)
+[API reference](/python/api/azure-storage-blob) | [Package (PyPi)](https://pypi.org/project/azure-storage-blob/) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob) | [Samples](../common/storage-samples-python.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-python/issues)
## Prerequisites
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
Use the BlobClient.[beginCopyFromURL](/javascript/api/@azure/storage-blob/blobcl
The batch represents an aggregated set of operations on blobs, such as [delete](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-deleteblobs-1) or [set access tier](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-setblobsaccesstier-1). You need to pass in the correct credential to successfully perform each operation. In this example, the same credential is used for a set of blobs in the same container.
-Create a [BlobBatchClient](/javascript/api/@azure/storage-blob/blobbatchclient). Use the client to create a batch with the [createBatch()](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-createbatch) method. When the batch is ready, [submit]/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-submitbatch) the batch for processing. Use the returned structure to validate each blob's operation was successful.
+Create a [BlobBatchClient](/javascript/api/@azure/storage-blob/blobbatchclient). Use the client to create a batch with the [createBatch()](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-createbatch) method. When the batch is ready, [submit](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-submitbatch) the batch for processing. Use the returned structure to validate each blob's operation was successful.
:::code language="javascript" source="~/azure-storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/batch-set-access-tier.js" id="Snippet_BatchChangeAccessTier" highlight="16,20":::
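In outline, the flow in that snippet looks like the following hedged sketch (`blobServiceClient`, `credential`, and the block blob clients are assumed to be set up as in the get-started article):

```javascript
// Sketch: batch two set-access-tier operations and submit them together.
const batchClient = blobServiceClient.getBlobBatchClient();
const batch = batchClient.createBatch();

await batch.setBlobAccessTier(blockBlobClient1.url, credential, 'Cool');
await batch.setBlobAccessTier(blockBlobClient2.url, credential, 'Cool');

const response = await batchClient.submitBatch(batch);
console.log(`Sub-responses: ${response.subResponses.length}`);
```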
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
const listOptions = {
includeCopy: false, // include metadata from previous copies includeDeleted: false, // include deleted blobs includeDeletedWithVersions: false, // include deleted blobs with versions
- includeLegalHost: false, // include legal host id
+ includeLegalHold: false, // include legal hold
includeMetadata: true, // include custom metadata includeSnapshots: true, // include snapshots includeTags: true, // include indexable tags
- includeUncommittedBlobs: false, // include uncommitted blobs
+ includeUncommitedBlobs: false, // include uncommitted blobs
  includeVersions: false, // include all blob versions
  prefix: '' // filter by blob name prefix
};
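The options can then be passed to a listing call, for example (assuming a `containerClient` created as in the get-started article):

```javascript
// Sketch: iterate blobs in the container using the options above.
for await (const blob of containerClient.listBlobsFlat(listOptions)) {
  console.log(blob.name);
}
```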
The Azure SDK for JavaScript contains libraries that build on top of the Azure R
### See also - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)-- [Blob versioning](versioning-overview.md)
+- [Blob versioning](versioning-overview.md)
virtual-desktop Agent Updates Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/agent-updates-diagnostics.md
Title: Set up diagnostics for monitoring agent updates
description: How to set up diagnostic reports to monitor agent updates. Previously updated : 01/31/2023 Last updated : 03/20/2023
To see when agent updates are happening or to make sure that the Scheduled Agent
| project TimeGenerated, AgentVersion, SessionHostName, LastUpgradeTimeStamp, UpgradeState, UpgradeErrorMsg | summarize arg_min(TimeGenerated, *) by AgentVersion | sort by TimeGenerated asc
- ```
-
-## Use diagnostics to check for unsuccessful agent updates
-
-To check if an agent component update was unsuccessful:
-
-1. Access the logs in your Log Analytics workspace.
-
-2. Select the **+** button to create a new query.
-
-3. Copy and paste the following Kusto query to see when the agent has updated for the specified session host. Make sure to change the **sessionHostName** parameter to the name of your session host.
-
- ```kusto
- WVDAgentHealthStatus
- | where TimeGenerated >= ago(30d)
- | where SessionHostName == "sessionHostName"
- | where MaintenanceWindowMissed == true
- | project TimeGenerated, AgentVersion, SessionHostName, LastUpgradeTimeStamp, UpgradeState, UpgradeErrorMsg, MaintenanceWindowMissed
- | sort by TimeGenerated asc
- ```
+ ```
## Next steps
virtual-desktop App Attach Msixmgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-msixmgr.md
Title: Using MSIXMGR tool - Azure
description: How to use the MSIXMGR tool for Azure Virtual Desktop. Previously updated : 02/23/2021 Last updated : 03/20/2023
The MSIXMGR tool is for expanding MSIX-packaged applications into MSIX images. The tool takes an MSIX-packaged application (.MSIX) and expands it into a VHD, VHDx, or CIM file. The resulting MSIX image is stored in the Azure Storage account that your Azure Virtual Desktop deployment uses. This article will show you how to use the MSIXMGR tool.

>[!NOTE]
->To guarantee compatibility, make sure the CIMs storing your MSIX images are generated on the OS version you're running in your Azure Virtual Desktop host pools. MSIXMGR can create CIM files, but you can only use those files with a host pool running Windows 10 20H2.
+>To guarantee compatibility, make sure the CIM files storing your MSIX images are generated on a version of Windows that is the same as or earlier than the version of Windows where you plan to run the MSIX packages. For example, CIM files generated on Windows 11 may not work on Windows 10.
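For context, a typical MSIXMGR invocation that expands a package into a CIM file looks like the following (a hedged sketch; paths and size are placeholders, and the exact flags may vary by tool version):

```console
msixmgr.exe -Unpack -packagePath "C:\msix\myapp.msix" -destination "C:\cim\myapp.cim" -applyacls -create -vhdSize 200 -filetype cim -rootDirectory apps
```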
## Requirements
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Previously updated : 02/21/2023 Last updated : 03/20/2023
Once you're connected to your remote app or desktop, you may be prompted for aut
Azure Virtual Desktop supports in-session passwordless authentication (preview) using [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys when using the [Windows Desktop client](users/connect-windows.md). Passwordless authentication is enabled automatically when the session host and local PC are using the following operating systems:
- - Windows 11 Enterprise single or multi-session with the [2022-09 Cumulative Updates for Windows 11 Preview (KB5017383)](https://support.microsoft.com/kb/KB5017383) or later installed.
- - Windows 10 Enterprise single or multi-session, versions 20H2 or later with the [2022-09 Cumulative Updates for Windows 10 Preview (KB5017380)](https://support.microsoft.com/kb/KB5017380) or later installed.
- - Windows Server 2022 with the [2022-09 Cumulative Update for Microsoft server operating system preview (KB5017381)](https://support.microsoft.com/kb/KB5017381) or later installed.
+ - Windows 11 Enterprise single or multi-session with the [2022-10 Cumulative Updates for Windows 11 (KB5018418)](https://support.microsoft.com/kb/KB5018418) or later installed.
+ - Windows 10 Enterprise single or multi-session, versions 20H2 or later with the [2022-10 Cumulative Updates for Windows 10 (KB5018410)](https://support.microsoft.com/kb/KB5018410) or later installed.
+ - Windows Server 2022 with the [2022-10 Cumulative Update for Microsoft server operating system (KB5018421)](https://support.microsoft.com/kb/KB5018421) or later installed.
To disable passwordless authentication on your host pool, you must [customize an RDP property](customize-rdp-properties.md). You can find the **WebAuthn redirection** property under the **Device redirection** tab in the Azure portal or set the **redirectwebauthn** property to **0** using PowerShell.
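For example, with the Az.DesktopVirtualization PowerShell module, disabling WebAuthn redirection could look like this (a hedged sketch; names are placeholders, and setting `-CustomRdpProperty` replaces the host pool's existing custom RDP properties, so merge in any values you already use):

```azurepowershell
# Sketch: disable WebAuthn redirection on a host pool.
Update-AzWvdHostPool -ResourceGroupName "<resource-group>" -Name "<host-pool>" `
    -CustomRdpProperty "redirectwebauthn:i:0;"
```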
virtual-desktop Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/cli-powershell.md
Some PowerShell cmdlets require you to provide the object ID of Azure Virtual De
- To retrieve the object IDs of all RemoteApp applications in an application group, run the following command: ```azurepowershell
- Get-AzWvdApplication -ApplicationGroupName <ApplicationGroupName> -ResourceGroupName <ResourceGroupName> | FT Name, FilePath, ObjectId
+ Get-AzWvdApplication -ApplicationGroupName <ApplicationGroupName> -ResourceGroupName <ResourceGroupName> | Select-Object Name, FilePath, ObjectId
```
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
Previously updated : 01/05/2023 Last updated : 03/20/2023 # Configure single sign-on for Azure Virtual Desktop using Azure AD Authentication
For information on using passwordless authentication within the session, see [In
Single sign-on is available on session hosts using the following operating systems: -- Windows 11 Enterprise single or multi-session with the [2022-09 Cumulative Updates for Windows 11 Preview (KB5017383)](https://support.microsoft.com/kb/KB5017383) or later installed.-- Windows 10 Enterprise single or multi-session, versions 20H2 or later with the [2022-09 Cumulative Updates for Windows 10 Preview (KB5017380)](https://support.microsoft.com/kb/KB5017380) or later installed.-- Windows Server 2022 with the [2022-09 Cumulative Update for Microsoft server operating system preview (KB5017381)](https://support.microsoft.com/kb/KB5017381) or later installed.
+- Windows 11 Enterprise single or multi-session with the [2022-10 Cumulative Updates for Windows 11 (KB5018418)](https://support.microsoft.com/kb/KB5018418) or later installed.
+- Windows 10 Enterprise single or multi-session, versions 20H2 or later with the [2022-10 Cumulative Updates for Windows 10 (KB5018410)](https://support.microsoft.com/kb/KB5018410) or later installed.
+- Windows Server 2022 with the [2022-10 Cumulative Update for Microsoft server operating system (KB5018421)](https://support.microsoft.com/kb/KB5018421) or later installed.
Session hosts must be Azure AD-joined or [Hybrid Azure AD-Joined](../active-directory/devices/hybrid-azuread-join-plan.md).
virtual-desktop Set Up Customize Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-customize-master-image.md
If Windows Defender is configured in the VM, make sure it's configured to not sc
This configuration only removes scanning of VHD and VHDX files during attachment, but won't affect real-time scanning.
-For more detailed instructions for how to configure Windows Defender on Windows Server, see [Configure Windows Defender Antivirus exclusions on Windows Server](/windows/security/threat-protection/windows-defender-antivirus/configure-server-exclusions-windows-defender-antivirus/).
+For more detailed instructions on how to configure Windows Defender, see [Configure Windows Defender Antivirus exclusions on Windows Server](/windows/security/threat-protection/windows-defender-antivirus/configure-server-exclusions-windows-defender-antivirus/).
To learn more about how to configure Windows Defender to exclude certain files from scanning, see [Configure and validate exclusions based on file extension and folder location](/windows/security/threat-protection/windows-defender-antivirus/configure-extension-file-exclusions-windows-defender-antivirus/).
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-overview.md
vm-windows Previously updated : 07/13/2020 Last updated : 03/13/2023 ms.devlang: azurecli
This information can be seen in the Azure portal or you can use PowerShell.
(Get-AzAutomationRegistrationInfo -ResourceGroupName <resourcegroupname> -AutomationAccountName <accountname>).PrimaryKey ```
-For the Node Configuration name, make sure the node configuration exists in Azure State Configuration. If it does not, the extension deployment will return a failure. Also make sure you are using the name of the *Node Configuration* and not the Configuration.
+> [!WARNING]
+> For the Node Configuration name, make sure the node configuration exists in Azure State Configuration. If it does not, the extension deployment will return a failure.
+
+Make sure you are using the name of the *Node Configuration* and not the Configuration.
A Configuration is defined in a script that is used [to compile the Node Configuration (MOF file)](../../automation/automation-dsc-compile.md). The name will always be the name of the Configuration followed by a period (`.`) and either `localhost` or a specific computer name.
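For instance, in this minimal illustration the compiled node configuration is named `MyVmConfig.localhost` (the configuration body is just a placeholder):

```powershell
Configuration MyVmConfig {
    Node localhost {
        WindowsFeature WebServer {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}
```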
virtual-machines Extensions Rmpolicy Howto Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-ps.md
+ Previously updated : 03/23/2018 Last updated : 03/20/2023
This tutorial uses Azure PowerShell within the Cloud Shell, which is constantly
In order to restrict what extensions can be installed, you need to have a [rule](../../governance/policy/concepts/definition-structure.md#policy-rule) to provide the logic to identify the extension.
-This example shows you how to deny extensions published by 'Microsoft.Compute' by creating a rules file in Azure Cloud Shell, but if you are working in PowerShell locally, you can also create a local file and replace the path ($home/clouddrive) with the path to the local file on your machine.
+This example shows you how to deny extensions published by 'Microsoft.Compute' by creating a rules file in Azure Cloud Shell, but if you're working in PowerShell locally, you can also create a local file and replace the path ($home/clouddrive) with the path to the local file on your machine.
In a [Cloud Shell](https://shell.azure.com/powershell), type:
Copy and paste the following .json into the file.
} ```
-When you are done, hit the **Ctrl + O** and then **Enter** to save the file. Hit **Ctrl + X** to close the file and exit.
+When you're done, hit the **Ctrl + O** and then **Enter** to save the file. Hit **Ctrl + X** to close the file and exit.
## Create a parameters file You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the extensions to block.
-This example shows you how to create a parameters file for VMs in Cloud Shell, but if you are working in PowerShell locally, you can also create a local file and replace the path ($home/clouddrive) with the path to the local file on your machine.
+This example shows you how to create a parameters file for VMs in Cloud Shell, but if you're working in PowerShell locally, you can also create a local file and replace the path ($home/clouddrive) with the path to the local file on your machine.
In [Cloud Shell](https://shell.azure.com/powershell), type:
Copy and paste the following .json into the file.
} ```
-When you are done, hit the **Ctrl + O** and then **Enter** to save the file. Hit **Ctrl + X** to close the file and exit.
+When you're done, hit the **Ctrl + O** and then **Enter** to save the file. Hit **Ctrl + X** to close the file and exit.
## Create the policy A policy definition is an object used to store the configuration that you would like to use. The policy definition uses the rules and parameters files to define the policy. Create a policy definition using the [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition) cmdlet.
- The policy rules and parameters are the files you created and stored as .json files in your cloud shell.
+ The policy rules and parameter values below are the files you created and stored as .json files in your Cloud Shell. Replace the file paths as needed.
```azurepowershell-interactive
$definition = New-AzPolicyDefinition `
## Assign the policy
-This example assigns the policy to a resource group using [New-AzPolicyAssignment](/powershell/module/az.resources/new-azpolicyassignment). Any VM created in the **myResourceGroup** resource group will not be able to install the VM Access Agent or Custom Script extensions.
+This example assigns the policy to a resource group using [New-AzPolicyAssignment](/powershell/module/az.resources/new-azpolicyassignment). Any VM created in the **myResourceGroup** resource group won't be able to install the VM Access Agent or Custom Script extensions.
Use the [Get-AzSubscription | Format-Table](/powershell/module/az.accounts/get-azsubscription) cmdlet to get your subscription ID to use in place of the one in the example.
$assignment
## Test the policy
-To test the policy, try to use the VM Access extension. The following should fail with the message "Set-AzVMAccessExtension : Resource 'myVMAccess' was disallowed by policy."
+To test the policy, try to use the VM Access extension. The following should fail with the message "Set-AzVMAccessExtension: Resource 'myVMAccess' was disallowed by policy."
```azurepowershell-interactive Set-AzVMAccessExtension `
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
Title: Azure VM Image Builder overview
description: In this article, you learn about VM Image Builder for virtual machines in Azure. Previously updated : 10/15/2021 Last updated : 03/15/2023 -+ # Azure VM Image Builder overview
The VM Image Builder service is available in the following regions:
- Qatar Central - USGov Arizona (public preview) - USGov Virginia (public preview)
+- China North 3 (public preview)
-To access the Azure VM Image Builder public preview in the Fairfax regions (USGov Arizona and USGov Virginia), you must register the *Microsoft.VirtualMachineImages/FairfaxPublicPreview* feature. To do so, run the following command:
+To access the Azure VM Image Builder public preview in the Fairfax regions (USGov Arizona and USGov Virginia), you must register the *Microsoft.VirtualMachineImages/FairfaxPublicPreview* feature. To do so, run the following command in either PowerShell or Azure CLI:
### [Azure PowerShell](#tab/azure-powershell)
az feature register --namespace Microsoft.VirtualMachineImages --name FairfaxPub
+To access the Azure VM Image Builder public preview in the China North 3 region, you must register the *Microsoft.VirtualMachineImages/MooncakePublicPreview* feature. To do so, run the following command in either PowerShell or Azure CLI:
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+Register-AzProviderPreviewFeature -ProviderNamespace Microsoft.VirtualMachineImages -Name MooncakePublicPreview
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az feature register --namespace Microsoft.VirtualMachineImages --name MooncakePublicPreview
+```
+ ## OS support VM Image Builder supports the following Azure Marketplace base operating system images:
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
Previously updated : 12/07/2022 Last updated : 03/16/2023
Azure Disk Encryption does not work for the following Linux scenarios, features,
- Migrating a VM that is encrypted with ADE, or has **ever** been encrypted with ADE, to [Encryption at Host](../disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) or [server-side encryption with customer-managed keys](../disk-encryption.md). - Encrypting VMs in failover clusters. - Encryption of [Azure ultra disks](../disks-enable-ultra-ssd.md).
+- Encryption of VMs in subscriptions that have the [Secrets should have the specified maximum validity period](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F342e8053-e12e-4c44-be01-c3c2f318400f) policy enabled with the [DENY effect](../../governance/policy/concepts/effects.md).
## Next steps
virtual-machines Disks Enable Host Based Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-host-based-encryption-cli.md
description: Use encryption at host to enable end-to-end encryption on your Azur
Previously updated : 03/15/2022 Last updated : 03/20/2023
az vmss update -n $vmssName \
## Finding supported VM sizes
-Legacy VM Sizes are not supported. You can find the list of supported VM sizes by either:
+Legacy VM sizes aren't supported. You can find the list of supported VM sizes by using either the Resource SKUs API or the Azure PowerShell module; you can't retrieve the list of supported sizes by using the Azure CLI.
-Calling the [Resource Skus API](/rest/api/compute/resourceskus/list) and checking that the `EncryptionAtHostSupported` capability is set to **True**.
+When calling the [Resource Skus API](/rest/api/compute/resourceskus/list), check that the `EncryptionAtHostSupported` capability is set to **True**.
```json {
Calling the [Resource Skus API](/rest/api/compute/resourceskus/list) and checkin
} ```
-Or, calling the [Get-AzComputeResourceSku](/powershell/module/az.compute/get-azcomputeresourcesku) PowerShell cmdlet.
+For the Azure PowerShell module, use the [Get-AzComputeResourceSku](/powershell/module/az.compute/get-azcomputeresourcesku) cmdlet.
```powershell
# Gather every VM SKU that's available in the target region.
$vmSizes=Get-AzComputeResourceSku | where{$_.ResourceType -eq 'virtualMachines' -and $_.Locations.Contains('CentralUSEUAP')}
```
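To narrow that list to sizes that support encryption at host, a minimal sketch (assuming the `Capabilities` name/value pairs that `Get-AzComputeResourceSku` returns) is:

```powershell
# A sketch: keep only SKUs whose capabilities report
# EncryptionAtHostSupported = True, then print the size names.
$vmSizes | Where-Object {
    $_.Capabilities | Where-Object { $_.Name -eq 'EncryptionAtHostSupported' -and $_.Value -eq 'True' }
} | Select-Object -ExpandProperty Name
```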
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
Title: Create an Azure Image Builder Bicep file or ARM JSON template
description: Learn how to create a Bicep file or ARM JSON template to use with Azure Image Builder. Previously updated : 09/06/2022 Last updated : 03/15/2023
The location is the region where the custom image will be created. The following
- Qatar Central
- USGov Arizona (Public Preview)
- USGov Virginia (Public Preview)
+- China North 3 (Public Preview)
> [!IMPORTANT]
> Register the feature `Microsoft.VirtualMachineImages/FairfaxPublicPreview` to access the Azure Image Builder public preview in Azure Government regions (USGov Arizona and USGov Virginia).
+> [!IMPORTANT]
+> Register the feature `Microsoft.VirtualMachineImages/MooncakePublicPreview` to access the Azure Image Builder public preview in the China North 3 region.
+ Use the following command to register the feature for Azure Image Builder in Azure Government regions (USGov Arizona and USGov Virginia).

### [Azure PowerShell](#tab/azure-powershell)
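Assuming this tab mirrors the MooncakePublicPreview registration shown earlier, the Azure PowerShell command would be:

```powershell
# Assumed to mirror the MooncakePublicPreview registration shown earlier;
# the feature name comes from the preceding IMPORTANT note.
Register-AzProviderPreviewFeature -ProviderNamespace Microsoft.VirtualMachineImages -Name FairfaxPublicPreview
```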
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
vm-linux Previously updated : 11/10/2021 Last updated : 03/15/2023

# Prepare a Red Hat-based virtual machine for Azure
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets

In this article, you'll learn how to prepare a Red Hat Enterprise Linux (RHEL) virtual machine for use in Azure. The versions of RHEL that are covered in this article are 6.7+ and 7.1+. The hypervisors for preparation that are covered in this article are Hyper-V, kernel-based virtual machine (KVM), and VMware. For more information about eligibility requirements for participating in Red Hat's Cloud Access program, see [Red Hat's Cloud Access website](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) and [Running RHEL on Azure](https://access.redhat.com/ecosystem/ccsp/microsoft-azure). For ways to automate building RHEL images, see [Azure Image Builder](../image-builder-overview.md).
+> [!NOTE]
+> Be aware of versions that are End Of Life (EOL) and no longer supported by Red Hat. Uploaded images that are at or beyond EOL will be supported on a reasonable business effort basis. For more information, see Red Hat's [Product Lifecycle](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
+ ## Hyper-V Manager
virtual-machines Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows.md
Previously updated : 01/24/2023 Last updated : 03/16/2023
Azure Disk Encryption does not work for the following scenarios, features, and t
- Migrating a VM that is encrypted with ADE, or has **ever** been encrypted with ADE, to [Encryption at Host](../disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) or [server-side encryption with customer-managed keys](../disk-encryption.md).
- Encrypting VMs in failover clusters.
- Encryption of [Azure ultra disks](../disks-enable-ultra-ssd.md).
+- Encryption of VMs in subscriptions that have the [Secrets should have the specified maximum validity period](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F342e8053-e12e-4c44-be01-c3c2f318400f) policy enabled with the [DENY effect](../../governance/policy/concepts/effects.md).
## Next steps
virtual-machines Run Scripts In Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-scripts-in-vm.md
Previously updated : 05/02/2018 Last updated : 03/20/2023
Learn more about the different features that are available to run scripts and co
* [Custom Script Extension](../extensions/custom-script-windows.md)
* [Run Command](run-command.md)
* [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md)
-* [Serial console](/troubleshoot/azure/virtual-machines/serial-console-windows)
+* [Serial console](/troubleshoot/azure/virtual-machines/serial-console-windows)