Updates from: 09/05/2022 01:07:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Alert Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md
Previously updated : 07/09/2020 Last updated : 09/04/2022
To check which service principal is missing and must be recreated, complete the
1. In the Azure portal, select **Azure Active Directory** from the left-hand navigation menu. 1. Select **Enterprise applications**. Choose *All applications* from the **Application Type** drop-down menu, then select **Apply**.
-1. Search for each of the following application IDs. If no existing application is found, follow the *Resolution* steps to create the service principal or re-register the namespace.
+1. Search for each of the following application IDs. For Azure Global, search for AppId value *2565bd9d-da50-47d4-8b85-4c97f669dc36*. For other Azure clouds, search for AppId value *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*. If no existing application is found, follow the *Resolution* steps to create the service principal or re-register the namespace.
| Application ID | Resolution |
| :--- | :--- |
To check which service principal is missing and must be recreated, complete the
### Recreate a missing Service Principal
-If application ID *2565bd9d-da50-47d4-8b85-4c97f669dc36* is missing from your Azure AD directory, use Azure AD PowerShell to complete the following steps. For more information, see [Azure AD PowerShell](/powershell/azure/active-directory/install-adv2).
+If application ID *2565bd9d-da50-47d4-8b85-4c97f669dc36* is missing from your Azure AD directory in Azure Global, use Azure AD PowerShell to complete the following steps. For other Azure clouds, use AppId value *6ba9a5d4-8456-4118-b521-9c5ca10cdf84*. For more information, see [Azure AD PowerShell](/powershell/azure/active-directory/install-adv2).
1. If needed, install the Azure AD PowerShell module and import it as follows:
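A minimal sketch of those steps, assuming the AzureAD module and the Azure Global AppId shown above (swap in the other AppId for sovereign clouds):

```powershell
# Install and import the Azure AD PowerShell module, then sign in
Install-Module -Name AzureAD -Scope CurrentUser
Import-Module -Name AzureAD
Connect-AzureAD

# Look for the Domain Services service principal by its well-known AppId
$appId = "2565bd9d-da50-47d4-8b85-4c97f669dc36"
$servicePrincipal = Get-AzureADServicePrincipal -Filter "AppId eq '$appId'"

# Recreate the service principal if it's missing
if ($null -eq $servicePrincipal) {
    New-AzureADServicePrincipal -AppId $appId
}
```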
active-directory Application Proxy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-security.md
Previously updated : 04/21/2021 Last updated : 09/02/2022
The following diagram shows how Azure AD enables secure remote access to your on
## Security benefits
-Azure AD Application Proxy offers the following security benefits:
+Azure AD Application Proxy offers many security benefits, including authenticated access, conditional access, traffic termination, outbound-only access, cloud-scale analytics and machine learning, and remote access as a service. Even with all of the added security that Application Proxy provides, the systems being accessed must still be kept up to date with the latest patches.
### Authenticated access
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
IDs users create, own, and control independently of any organization or governme
**2. Trust System**. In order to be able to resolve DID documents, DIDs are typically recorded on an underlying network of some kind that represents a trust system. Microsoft currently supports two trust systems, which are: -- ION (Identity Overlay Network) ION is a Layer 2 open, permissionless network based on the purely deterministic Sidetree protocol, which requires no special tokens, trusted validators, or other consensus mechanisms; the linear progression of Bitcoin's time chain is all that's required for its operation. We have open sourced a [npm package](https://www.npmjs.com/package/@decentralized-identity/ion-tools) to make working with the ION network easy to integrate into your apps and services. Libraries include creating a new DID, generating keys and anchoring your DID on the Bitcoin blockchain.
+- ION (Identity Overlay Network) ION is a Layer 2 open, permissionless network based on the purely deterministic Sidetree protocol, which requires no special tokens, trusted validators, or other consensus mechanisms; the linear progression of Bitcoin's time chain is all that's required for its operation. We have open sourced an [npm package](https://www.npmjs.com/package/@decentralized-identity/ion-tools) to make working with the ION network easy to integrate into your apps and services. Libraries include creating a new DID, generating keys and anchoring your DID on the Bitcoin blockchain.
- DID:Web is a permission-based model that allows trust using a web domain's existing reputation.
In order to be able to resolve DID documents, DIDs are typically recorded on an
Enables real people to use decentralized identities and Verifiable Credentials. Authenticator creates DIDs, facilitates issuance and presentation requests for verifiable credentials and manages the backup of your DID's seed through an encrypted wallet file. **4. Microsoft Resolver**.
-An API that look up and resolve DIDs using the ```did:web``` or the ```did:ion``` methods and return the DID Document Object (DDO). The DDO includes DPKI metadata associated with the DID such as public keys and service endpoints.
+An API that looks up and resolves DIDs using the ```did:web``` or the ```did:ion``` methods and returns the DID Document Object (DDO). The DDO includes DPKI metadata associated with the DID such as public keys and service endpoints.
**5. Entra Verified ID Service**. An issuance and verification service in Azure and a REST API for [W3C Verifiable Credentials](https://www.w3.org/TR/vc-data-model/) that are signed with the ```did:web``` or the ```did:ion``` method. They enable identity owners to generate, present, and verify claims. This forms the basis of trust between users of the systems.
The scenario we use to explain how VCs work involves:
Today, Alice provides a username and password to log onto Woodgrove's networked environment. Woodgrove is deploying a verifiable credential solution to provide a more manageable way for Alice to prove that she is an employee of Woodgrove. Proseware accepts verifiable credentials issued by Woodgrove as proof of employment to offer corporate discounts as part of their corporate discount program.
-Alice requests Woodgrove Inc for a proof of employment verifiable credential. Woodgrove Inc attests Alice's identity and issues a signed verfiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as a proof of employement on the Proseware site. After a succesfull presentation of the credential, Prosware offers discount to Alice and the transaction is logged in Alice's wallet application so that she can track where and to whom she has presented her proof of employment verifiable credential.
+Alice requests Woodgrove Inc for a proof of employment verifiable credential. Woodgrove Inc attests Alice's identity and issues a signed verifiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as a proof of employment on the Proseware site. After a successful presentation of the credential, Proseware offers discount to Alice and the transaction is logged in Alice's wallet application so that she can track where and to whom she has presented her proof of employment verifiable credential.
![microsoft-did-overview](media/decentralized-identifier-overview/did-overview.png)
Alice requests Woodgrove Inc for a proof of employment verifiable credential. Wo
There are three primary actors in the verifiable credential solution. In the following diagram: -- **Step 1**, the **user** requests a verifiable credential from an issuer.-- **Step 2**, the **issuer** of the credential attests that the proof the user provided is accurate and creates a verifiable credential signed with their DID and the user's DID is the subject.-- **In Step 3**, the user signs a verifiable presentation (VP) with their DID and sends to the **verifier.** The verifier then validates the credential by matching with the public key placed in the DPKI.
+- In **Step 1**, the **user** requests a verifiable credential from an issuer.
+- In **Step 2**, the **issuer** of the credential attests that the proof the user provided is accurate and creates a verifiable credential signed with their DID for which the user's DID is the subject.
+- In **Step 3**, the user signs a verifiable presentation (VP) with their DID and sends it to the **verifier.** The verifier then validates the credential by matching it against the public key placed in the DPKI.
The roles in this scenario are: ![roles in a verifiable credential environment](media/decentralized-identifier-overview/issuer-user-verifier.png)
-**issuer** – The issuer is an organization that creates an issuance solution requesting information from a user. The information is used to verify the user's identity. For example, Woodgrove, Inc. has an issuance solution that enables them to create and distribute verifiable credentials (VCs) to all their employees. The employee uses the Authenticator app to sign in with their username and password, which passes an ID token to the issuing service. Once Woodgrove, Inc. validates the ID token submitted, the issuance solution creates a VC that includes claims about the employee and is signed with Woodgrove, Inc. DID. The employee now has a verifiable credential that is signed by their employer, which includes the employees DID as the subject DID.
+### Issuer
-**user** – The user is the person or entity that is requesting a VC. For example, Alice is a new employee of Woodgrove, Inc. and was previously issued her proof of employment verifiable credential. When Alice needs to provide proof of employment in order to get a discount at Proseware, she can grant access to the credential in her Authenticator app by signing a verifiable presentation that proves Alice is the owner of the DID. Proseware is able to validate the credential was issued by Woodgrove, Inc.and Alice is the owner of the credential.
+The issuer is an organization that creates an issuance solution requesting information from a user. The information is used to verify the user's identity. For example, Woodgrove, Inc. has an issuance solution that enables them to create and distribute verifiable credentials (VCs) to all their employees. The employee uses the Authenticator app to sign in with their username and password, which passes an ID token to the issuing service. Once Woodgrove, Inc. validates the ID token submitted, the issuance solution creates a VC that includes claims about the employee and is signed with the Woodgrove, Inc. DID. The employee now has a verifiable credential that is signed by their employer, which includes the employee's DID as the subject DID.
-**verifier** – The verifier is a company or entity who needs to verify claims from one or more issuers they trust. For example, Proseware trusts Woodgrove, Inc. does an adequate job of verifying their employees' identity and issuing authentic and valid VCs. When Alice tries to order the equipment she needs for her job, Proseware will use open standards such as SIOP and Presentation Exchange to request credentials from the User proving they are an employee of Woodgrove, Inc. For example, Proseware might provide Alice a link to a website with a QR code she scans with her phone camera. This initiates the request for a specific VC, which Authenticator will analyze and give Alice the ability to approve the request to prove her employment to Proseware. Proseware can use the verifiable credentials service API or SDK, to verify the authenticity of the verifiable presentation. Based on the information provided by Alice they give Alice the discount. If other companies and organizations know that Woodgrove, Inc. issues VCs to their employees, they can also create a verifier solution and use the Woodgrove, Inc. verifiable credential to provide special offers reserved for Woodgrove, Inc. employees.
+### User
+
+The user is the person or entity that is requesting a VC. For example, Alice is a new employee of Woodgrove, Inc. and was previously issued her proof of employment verifiable credential. When Alice needs to provide proof of employment in order to get a discount at Proseware, she can grant access to the credential in her Authenticator app by signing a verifiable presentation that proves Alice is the owner of the DID. Proseware is able to validate the credential was issued by Woodgrove, Inc. and Alice is the owner of the credential.
+
+### Verifier
+
+The verifier is a company or entity who needs to verify claims from one or more issuers they trust. For example, Proseware trusts that Woodgrove, Inc. does an adequate job of verifying their employees' identity and issuing authentic and valid VCs. When Alice tries to order the equipment she needs for her job, Proseware will use open standards such as SIOP and Presentation Exchange to request credentials from the user proving they are an employee of Woodgrove, Inc. For example, Proseware might provide Alice a link to a website with a QR code she scans with her phone camera. This initiates the request for a specific VC, which Authenticator will analyze and give Alice the ability to approve the request to prove her employment to Proseware. Proseware can use the verifiable credentials service API or SDK to verify the authenticity of the verifiable presentation. Based on the information provided by Alice, they give Alice the discount. If other companies and organizations know that Woodgrove, Inc. issues VCs to their employees, they can also create a verifier solution and use the Woodgrove, Inc. verifiable credential to provide special offers reserved for Woodgrove, Inc. employees.
+
+> [!NOTE]
+> The verifier can use open standards to perform the presentation and verification, or simply [configure their own Azure AD tenant](verifiable-credentials-configure-tenant.md) to let the Azure AD Verifiable Credentials service perform most of the work.
## Next steps
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
Authorization: Bearer <token>
"clientName": "Verifiable Credential Expert Sample" }, "type": "VerifiedCredentialExpert",
- "manifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert1",
+ "manifestUrl": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredentials/contracts/VerifiedCredentialExpert1",
"pin": { "value": "3539", "length": 4
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
If your trust system for the tenant is Web, you need to register your website ID to
1. At the Website ID registration, select Review. ![Screenshot of website registration page.](media/how-to-register-didwebsite/how-to-register-didwebsite-domain.png)
-1. Copy or download the DID document being displayed in the box
+1. Copy or download the DID document being displayed in the box.
![Screenshot of did.json.](media/how-to-register-didwebsite/how-to-register-didwebsite-diddoc.png)
-1. Upload the file to your webserver. The DID document JSON file needs to be uploaded to location /.well-known/did.json on your webserver.
+1. Upload the file to your webserver. The DID document JSON file needs to be uploaded to location `/.well-known/did.json` on your webserver.
1. Once the file is available on your webserver, you need to select the **Refresh registration status** button to verify that the system can request the file. ## When is the DID document in the did.json file used?
The DID document contains the public keys for your issuer and is used during bot
## When does the did.json file need to be republished to the webserver?
-The DID document in the did.json file needs to be republished if you changed the Linked Domain or if you rotate your signing keys.
+The DID document in the `did.json` file needs to be republished if you changed the Linked Domain or if you rotate your signing keys.
## How can I verify that the registration is working?
-The portal verifies that the `did.json` is reachable and correct when you click the [**Refresh registration status** button](#how-do-i-register-my-website-id). You should also consider verifying that you can request that URL in a browser to avoid errors like not using https, bad SSL certificate or URL not being public. If the did.json file can be requested anonymously in a browser, without warnings or errors, the portal will not be able to complete the **Refresh registration status** step either.
+The portal verifies that the `did.json` is reachable and correct when you click the [**Refresh registration status** button](#how-do-i-register-my-website-id). You should also consider verifying that you can request that URL in a browser to avoid errors like not using https, a bad SSL certificate, or the URL not being public. If the `did.json` file cannot be requested anonymously in a browser without warnings or errors, the portal will not be able to complete the **Refresh registration status** step either.
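A quick way to run that check from a shell is a sketch like the following, with `contoso.com` standing in for your domain:

```powershell
# Request the DID document anonymously; the portal expects a clean 200 response over https
Invoke-WebRequest -Uri "https://contoso.com/.well-known/did.json" |
    Select-Object StatusCode, Content
```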
## Next steps
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
However, there are scenarios where using a decentralized architecture with verif
* accessing resources outside the trust boundary, such as accessing partners' resources, with a portable credential issued by the organization.
-
+ ### Decentralized identity systems
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
The issuance request payload contains information about your verifiable credenti
"clientName": "Verifiable Credential Expert Sample" }, "type": "VerifiedCredentialExpert",
- "manifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert",
+ "manifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredentials/contracts/VerifiedCredentialExpert",
"claims": { "given_name": "Megan", "family_name": "Bowen"
The callback endpoint is called when a user scans the QR code, uses the deep lin
| `requestId`| string | Mapped to the original request when the payload was posted to the Verifiable Credentials service.|
| `requestStatus` |string |The status returned for the request. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the issuance flow.</li><li>`issuance_successful`: The issuance of the verifiable credentials was successful.</li><li>`issuance_error`: There was an error during issuance. For details, see the `error` property.</li></ul> |
| `state` |string| Returns the state value that you passed in the original payload. |
-| `error`| error | When the `code` property value is `Issuance_error`, this property contains information about the error.|
+| `error`| error | When the `code` property value is `issuance_error`, this property contains information about the error.|
| `error.code` | string| The returned error code. |
| `error.message`| string| The error message. |
The callback endpoint might be called with an error message. The following table
|Message |Definition |
|--|--|
-| `fetch_contract_error*`| Unable to fetch the verifiable credential contract. This error usually happens when the API can't fetch the manifest you specify in the request payload [RequestIssuance object](#issuance-request-payload).|
-| `issuance_service_error*` | The Verifiable Credentials service isn't able to validate requirements, or something went wrong in Verifiable Credentials.|
+| `fetch_contract_error`| Unable to fetch the verifiable credential contract. This error usually happens when the API can't fetch the manifest you specify in the request payload [RequestIssuance object](#issuance-request-payload).|
+| `issuance_service_error` | The Verifiable Credentials service isn't able to validate requirements, or something went wrong in Verifiable Credentials.|
| `unspecified_error`| This error is uncommon, but worth investigating. |

The following example demonstrates a callback payload when an error occurred:
active-directory Issuer Openid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuer-openid.md
The configured redirect URI is used by Authenticator so it knows when the sign-i
The authorization request sent to your identity provider uses the following format.

```HTTP
-GET /authorize?client_id=<client-id>&redirect_uri=portableidentity%3A%2F%2Fverify&response_mode=query&response_type=code&scope=openid&state=12345&nonce=12345 HTTP/1.1
+GET /authorize?client_id=<client-id>&redirect_uri=vcclient%3A%2F%2Fopenid%2F&response_mode=query&response_type=code&scope=openid&state=12345&nonce=12345 HTTP/1.1
Host: www.contoso.com
Connection: Keep-Alive
```
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
Verifiable credentials can be used to enable faster onboarding by replacing some
* **External identities** - invitation: When an existing user in your organization invites an external user to be onboarded in the target system, the RP can generate a link with a unique identifier that represents the invitation transaction and sends it to the external user's email address. This unique identifier should be sufficient to correlate the VC verification request to the invitation record or underlying data and continue the provisioning workflow. The attributes in the VC can be used to validate or complete the external user attributes.
- * **External identities** – self-service: When external identities sign up to the target system through self-service (for example, a B2C application) the attributes in the VC can be used to populate the initial attributes of the user account. The VC attributes can also be used to find out if a profile already exists.
+ * **External identities** - self-service: When external identities sign up to the target system through self-service (for example, a B2C application) the attributes in the VC can be used to populate the initial attributes of the user account. The VC attributes can also be used to find out if a profile already exists.
* **Interaction with target identity systems**: The service-to-service communication between the web front end and your target identity systems needs to be secured as a highly privileged system, because it can create accounts. Grant the web front end the least privileged roles possible. Some examples include:
- * To create a new user in Azure AD, the RP website can use a service principal that is granted the MS Graph scope of User.ReadWrite.All to create users, and the scope UserAuthenticationMethod.ReadWrite.All to reset authentication method
+ * To create a new user in Azure AD, the RP website can use a service principal that is granted the MS Graph scope of `User.ReadWrite.All` to create users, and the scope `UserAuthenticationMethod.ReadWrite.All` to reset their authentication method.
- * To invite users to Azure AD using B2B collaboration, the RP website can use a service principal that is granted the MS Graph scope of User.Invite.All to create invitations.
+ * To invite users to Azure AD using B2B collaboration, the RP website can use a service principal that is granted the MS Graph scope of `User.Invite.All` to create invitations.
* If your RP is running in Azure, use Managed Identities to call Microsoft Graph. Using managed identities removes the risks of managing service principal credentials in code or configuration files. To learn more about Managed identities, go to [Managed identities for Azure resources.](../managed-identities-azure-resources/overview.md)
Similarly, you can use a VC to generate a temporary access pass that will allow
**Interaction with Azure AD**: The service-to-service communication between the web front end and Azure AD must be secured as a highly privileged system because it can reset employees' credentials. Grant the web front end the least privileged roles possible. Some examples include:
-* Grant the RP website the ability to use a service principal granted the MS Graph scope UserAuthenticationMethod.ReadWrite.All to reset authentication methods. Don't grant the User.ReadWrite.All, which enables the ability to create and delete users.
+* Grant the RP website the ability to use a service principal granted the MS Graph scope `UserAuthenticationMethod.ReadWrite.All` to reset authentication methods. Don't grant `User.ReadWrite.All`, which enables the ability to create and delete users.
* If your RP is running in Azure, use Managed Identities to call Microsoft Graph. This removes the risks around managing service principal credentials in code or configuration files. For more information, see [Managed identities for Azure resources.](../managed-identities-azure-resources/overview.md)
Below are some IAM considerations when incorporating VCs to relying parties. Rel
* A successful presentation of the VC can be considered a coarse-grained authorization gate by itself. The VC attributes can also be consumed for fine-grained authorization decisions.
-* Determine if an expired VC has meaning in your application; if so check the value of the "exp" claim (the expiration time) of the VC as part of the authorization checks. One example where expiration is not relevant is requiring a government-issued document such as a driver's license to validate if the subject is older than 18. The date of birth claim is valid, even if the VC is expired.
+* Determine if an expired VC has meaning in your application; if so, check the value of the `exp` claim (the expiration time) of the VC as part of the authorization checks (see the sketch after this list). One example where expiration is not relevant is requiring a government-issued document such as a driver's license to validate if the subject is older than 18. The date of birth claim is valid, even if the VC is expired.
* Determine if a revoked VC has meaning to your authorization decision.
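A minimal sketch of the expiration check mentioned above, assuming the presented VC is available as a JWT string in `$vcJwt` (a placeholder):

```powershell
# Decode the JWT payload (base64url) and compare the exp claim to the current time
$payloadB64 = $vcJwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payloadB64.Length % 4) { 2 { $payloadB64 += '==' } 3 { $payloadB64 += '=' } }
$claims = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payloadB64)) |
    ConvertFrom-Json
$isExpired = [DateTimeOffset]::FromUnixTimeSeconds([long]$claims.exp) -lt [DateTimeOffset]::UtcNow
```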
You can use information in presented VCs to build a user profile. If you want to
* If the application requires a persistent user profile store:
- * Consider using the "sub" claim as an immutable identifier of the user. This is an opaque unique attribute that will be constant for a given subject/RP pair.
+ * Consider using the `sub` claim as an immutable identifier of the user. This is an opaque unique attribute that will be constant for a given subject/RP pair.
* Define a mechanism to deprovision the user profile from the application. Due to the decentralized nature of the Microsoft Entra Verified ID system, there is no application user provisioning lifecycle.
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
The `RequestCredential` provides information about the requested credentials the
### Configuration.Validation type
-The `Configuration.Validation` provides information about the presented credentials should be validated. It contains the following properties:
+The `Configuration.Validation` provides information about how the presented credentials should be validated. It contains the following properties:
|Property |Type |Description |
|---|---|---|
| `allowRevoked` | Boolean | Determines if a revoked credential should be accepted. Default is `false` (it shouldn't be accepted). |
-| `validateLinkedDomain` | Boolean | Determines if the linked domain should be validated. Default is `false`. Setting this flag to `false` means you as a Relying Party application accept credentials from unverified linked domain. Setting this flag to `true` means the linked domain will be validated and only verified domains will be accepted. |
+| `validateLinkedDomain` | Boolean | Determines if the linked domain should be validated. Default is `false`. Setting this flag to `false` means you as a Relying Party application accept credentials from an unverified linked domain. Setting this flag to `true` means the linked domain will be validated and only verified domains will be accepted. |
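For illustration, a sketch of the `configuration.validation` section built as a PowerShell hashtable, using the property names from the table above (the rest of the presentation request is omitted):

```powershell
# Validation options for a presentation request
$request = @{
    configuration = @{
        validation = @{
            allowRevoked         = $false   # reject revoked credentials (the default)
            validateLinkedDomain = $true    # accept only credentials from verified linked domains
        }
    }
}
$request | ConvertTo-Json -Depth 4
```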
## Successful response
active-directory Rules And Display Definitions Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/rules-and-display-definitions-model.md
Rules and Display definitions are used to define a credential. You can read more
| Property | Type | Description |
| -- | -- | -- |
-|`attestations`| [idTokenAttestation](#idtokenattestation-type) and/or [idTokenHintAttestation](#idtokenhintattestation-type) and/or [verifiablePresentationAttestation](#verifiablepresentationattestation-type) and/or [selfIssuedAttestation](#selfissuedattestation-type) |
-|`validityInterval` | number | represents the lifespan of the credential |
-|`vc`| vcType array | types for this contract |
+| `attestations`| [idTokenAttestation](#idtokenattestation-type) and/or [idTokenHintAttestation](#idtokenhintattestation-type) and/or [verifiablePresentationAttestation](#verifiablepresentationattestation-type) and/or [selfIssuedAttestation](#selfissuedattestation-type) | |
+| `validityInterval` | number | represents the lifespan of the credential in seconds |
+| `vc`| [vcType](#vctype-type) | verifiable credential types for this contract |
### idTokenAttestation type
-When you sign in the user from within Authenticator, you can use the returned ID token from the OIDC compatible provider as input.
+When you sign in the user from within Authenticator, you can use the returned ID token from the OpenID Connect compatible provider as input.
| Property | Type | Description |
| -- | -- | -- |
| `mapping` | [claimMapping](#claimmapping-type) (optional) | rules to map input claims into output claims in the verifiable credential |
| `configuration` | string (url) | location of the identity provider's configuration document |
| `clientId` | string | client ID to use when obtaining the ID token |
-| `redirectUri` | string | redirect uri to use when obtaining the ID token MUST BE vcclient://openid/ |
+| `redirectUri` | string | redirect uri to use when obtaining the ID token; MUST BE `vcclient://openid/` |
| `scope` | string | space delimited list of scopes to use when obtaining the ID token |
| `required` | boolean (default false) | indicating whether this attestation is required or not |
-| `trustedIssuers` | optional string (array) | a list of DIDs allowed to issue the verifiable credential for this contract. This property is only used for specific scenarios where the `idtoken` hint can come from another issuer |
+| `trustedIssuers` | optional string (array) | a list of DIDs allowed to issue the verifiable credential for this contract. This property is only used for specific scenarios where the `id_token_hint` can come from another issuer |
### idTokenHintAttestation type
-This flow uses the IDTokenHint, which is provided as payload through the Request REST API. The mapping is the same as for the ID Token attestation.
+This flow uses the ID Token Hint, which is provided as payload through the Request REST API. The mapping is the same as for the ID Token attestation.
| Property | Type | Description |
| -- | -- | -- |
| `mapping` | [claimMapping](#claimmapping-type) (optional) | rules to map input claims into output claims in the verifiable credential |
| `required` | boolean (default false) | indicating whether this attestation is required or not |
-| `trustedIssuers` | optional string (array) | a list of DIDs allowed to issue the verifiable credential for this contract. This property is only used for specific scenarios where the idtoken hint can come from another issuer |
+| `trustedIssuers` | optional string (array) | a list of DIDs allowed to issue the verifiable credential for this contract. This property is only used for specific scenarios where the `id_token_hint` can come from another issuer |
### verifiablePresentationAttestation type
-When you want the user to present another VC as input for a new issued VC. The wallet will allow the user to select the VC during issuance.
+When you want the user to present another verifiable credential as input for a new issued verifiable credential. The wallet will allow the user to select the verifiable credential during issuance.
| Property | Type | Description |
| -- | -- | -- |
When you want the user to enter information themselves. This type is also called
| `required` | boolean (default false) | indicating whether this mapping is required or not |
| `type` | string (optional) | type of claim |
+### vcType type
+
+| Property | Type | Description |
+| -- | -- | -- |
+| `type` | string (array) | a list of verifiable credential types this contract can issue |
+ ## Example rules definition: ``` {
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
The following diagram illustrates the Microsoft Entra Verified ID architecture a
- To clone the repository that hosts the sample app, install [GIT](https://git-scm.com/downloads). - [Visual Studio Code](https://code.visualstudio.com/Download), or similar code editor. - [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).-- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization,read this [FAQ](verifiable-credentials-faq.md#i-can-not-use-ngrok-what-do-i-do).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, read this [FAQ](verifiable-credentials-faq.md#i-cannot-use-ngrok-what-do-i-do).
- A mobile device with Microsoft Authenticator:
  - Android version 6.2206.3973 or later installed.
  - iOS version 6.6.2 or later installed.
The following JSON demonstrates a complete *appsettings.json* file:
"CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]", "IssuerAuthority": "did:web:example.com...", "VerifierAuthority": "did:web:example.com...",
- "CredentialManifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert"
+ "CredentialManifest": "https://verifiedid.did.msidentity.com/v1.0/12345678-0000-0000-0000-000000000000/verifiableCredentials/contracts/VerifiedCredentialExpert"
} } ```
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Last updated 08/11/2022
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-Microsoft Entra Verified ID is a decentralized identity solution that helps you safeguard your organization. The service allows you to issue and verify credentials. Issuers can use the Verified ID service to issue their own customized verifiable credentials. Verifiers can use the service's free REST API to easily request and accept verifiable credentials in their apps and services.
+Microsoft Entra Verified ID is a decentralized identity solution that helps you safeguard your organization. The service allows you to issue and verify credentials. Issuers can use the Verified ID service to issue their own customized verifiable credentials. Verifiers can use the service's free REST API to easily request and accept verifiable credentials in their apps and services. In both cases, you have to configure your Azure AD tenant so that you can use it either to issue your own verifiable credentials or to verify the presentation of a user's verifiable credentials that were issued by another organization. If you are both an issuer and a verifier, you can use a single Azure AD tenant to both issue your own verifiable credentials and verify those of others.
In this tutorial, you learn how to configure your Azure AD tenant to use the verifiable credentials service.
After you create your key vault, Verifiable Credentials generates a set of keys
The Verifiable Credentials Service Request is the Request Service API, and it needs access to Key Vault in order to sign issuance and presentation requests.
-1. Select **+ Add Access Policy** and select the service principal **Verifiable Credentials Service Request** with AppId **3db474b9-6a0c-4840-96ac-1fceb342124**.
+1. Select **+ Add Access Policy** and select the service principal **Verifiable Credentials Service Request** with AppId **3db474b9-6a0c-4840-96ac-1fceb342124f**.
1. For **Key permissions**, select permissions **Get** and **Sign**.
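An equivalent sketch with Azure PowerShell (Az.KeyVault module); the vault name is a placeholder, and the AppId is the Verifiable Credentials Service Request principal noted above:

```powershell
# Grant the Request Service principal Get and Sign permissions on keys
Set-AzKeyVaultAccessPolicy -VaultName "contoso-vault" `
    -ServicePrincipalName "3db474b9-6a0c-4840-96ac-1fceb342124f" `
    -PermissionsToKeys get,sign
```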
To set up Verified ID, follow these steps:
## Register an application in Azure AD
- Verified ID needs to get access tokens to issue and verify. To get access tokens, register a web application and grant API permission for the API Verified ID Request Service that you set up in the previous step.
+Your application needs to get access tokens when it wants to call into Microsoft Entra Verified ID so it can issue or verify credentials. To get access tokens, you have to register an application and grant API permission for the Verified ID Request Service. For example, use the following steps for a web application:
1. Sign in to the [Azure portal](https://portal.azure.com/) with your administrative account.
To add the required permissions, follow these steps:
1. Select **APIs my organization uses**.
-1. Search for the service principal that you created earlier, **Verifiable Credentials Service Request**, and select it.
+1. Search for the **Verifiable Credentials Service Request** service principal, and select it.
![Screenshot that shows how to select the service principal.](media/verifiable-credentials-configure-tenant/add-app-api-permissions-select-service-principal.png)
To add the required permissions, follow these steps:
1. Domain verification. 1. Select each section and download the JSON file under each. 1. Create a website that you can use to distribute the files. If you specified **https://contoso.com** as your domain, the URLs for each of the files would look as shown below:
- - https://contoso.com/.well-known/did.json
- - https://contoso.com/.well-known/did-configuration.json.
+ - `https://contoso.com/.well-known/did.json`
+ - `https://contoso.com/.well-known/did-configuration.json`
Once you have successfully completed the verification steps, you are ready to continue to the next tutorial.
active-directory Verifiable Credentials Configure Verifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-verifier.md
Last updated 08/16/2022
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-In [Issue Microsoft Entra Verified ID credentials from an application](verifiable-credentials-configure-issuer.md), you learn how to issue and verify credentials by using the same Azure Active Directory (Azure AD) tenant. In this tutorial, you go over the steps needed to present and verify your first verifiable credential: a verified credential expert card.
+In [Issue Microsoft Entra Verified ID credentials from an application](verifiable-credentials-configure-issuer.md), you learn how to issue and verify credentials by using the same Azure Active Directory (Azure AD) tenant. In a real-world scenario, where the issuer and verifier are separate organizations, the verifier uses *their own* Azure AD tenant to perform the verification of the credential that was issued by the other organization. In this tutorial, you go over the steps needed to present and verify your first verifiable credential: a verified credential expert card.
As a verifier, you unlock privileges to subjects that possess verified credential expert cards. In this tutorial, you run a sample application from your local computer that asks you to present a verified credential expert card, and then verifies it.
In this article, you learn how to:
- If you want to clone the repository that hosts the sample app, install [Git](https://git-scm.com/downloads). - [Visual Studio Code](https://code.visualstudio.com/Download) or similar code editor. - [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).-- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, please read this [FAQ](verifiable-credentials-faq.md#i-can-not-use-ngrok-what-do-i-do).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, please read this [FAQ](verifiable-credentials-faq.md#i-cannot-use-ngrok-what-do-i-do).
- A mobile device with Microsoft Authenticator:
  - Android version 6.2206.3973 or later installed.
  - iOS version 6.6.2 or later installed.
The following JSON demonstrates a complete *appsettings.json* file:
"ClientId": "555555555-0000-0000-0000-000000000000", "ClientSecret": "123456789012345678901234567890", "VerifierAuthority": "did:ion:EiDJzvzaBMb_EWTWUFEasKzL2nL-BJPhQTzYWjA_rRz3hQ:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljS2V5cyI6W3siaWQiOiJzaWdfMmNhMzY2YmUiLCJwdWJsaWNLZXlKd2siOnsiY3J2Ijoic2VjcDI1NmsxIiwia3R5IjoiRUMiLCJ4IjoiZDhqYmduRkRGRElzR1ZBTWx5aDR1b2RwOGV4Q2dpV3dWUGhqM0N...",
- "CredentialManifest": " https://verifiedid.did.msidentity.com/v1.0/987654321-0000-0000-0000-000000000000/verifiableCredential/contracts/VerifiedCredentialExpert"
+ "CredentialManifest": " https://verifiedid.did.msidentity.com/v1.0/987654321-0000-0000-0000-000000000000/verifiableCredentials/contracts/VerifiedCredentialExpert"
} } ```
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
Resetting requires that you opt out and opt back into the Entra Verified ID serv
1. Navigate to the Verified ID in the Azure portal. 1. Navigate to the Organization Settings. 1. Copy your organization's Decentralized Identifier (DID).
-1. Go to the ION Explorer and paste the DID in the search box
+1. Go to the [ION Explorer](https://identity.foundation/ion/explorer) and paste the DID in the search box.
1. Inspect your DID document and search for the ` "#hub" ` node. ```json
Yes, after reconfiguring your service, your tenant has a new DID use to issue an
No, at this point it isn't possible to keep your tenant's DID after you have opt-out of the service.
-### I can not use ngrok, what do I do?
+### I cannot use ngrok, what do I do?
-The tutorials for deploying and running the [samples](verifiable-credentials-configure-issuer.md#prerequisites) describes the use of the `ngrok` tool as an application proxy. This tool is sometimes blocked by IT admins from being used in corporate networks. An alternative is to deploy the sample to [Azure AppServices](../../app-service/overview.md) and run it in the cloud. The following links helps you deploy the respective sample to Azure AppServices. The Free pricing tier will be sufficient for hosting the sample. For each tutorial, you need to start by first creating the Azure AppService instance, then skip creating the app since you already have an app and then continue the tutorial with deploying it.
+The tutorials for deploying and running the [samples](verifiable-credentials-configure-issuer.md#prerequisites) describe the use of the `ngrok` tool as an application proxy. This tool is sometimes blocked by IT admins from being used in corporate networks. An alternative is to deploy the sample to [Azure App Service](../../app-service/overview.md) and run it in the cloud. The following links help you deploy the respective sample to Azure App Service. The Free pricing tier is sufficient for hosting the sample. For each tutorial, you need to start by first creating the Azure App Service instance, then skip creating the app (since you already have one), and then continue the tutorial with deploying it.
-- Dotnet - [Publish to AppServices](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vs#publish-your-web-app)-- Node - [Deploy to AppServices](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-vscode#deploy-to-azure)-- Java - [Deploy to AppServices](../../app-service/quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven#4deploy-the-app). You need to add the maven plugin for Azure AppServices to the sample.
+- Dotnet - [Publish to App Service](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vs#publish-your-web-app)
+- Node - [Deploy to App Service](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-vscode#deploy-to-azure)
+- Java - [Deploy to App Service](../../app-service/quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven#4deploy-the-app). You need to add the maven plugin for Azure App Service to the sample.
- Python - [Deploy using VSCode](../../app-service/quickstart-python.md?tabs=flask%2Cwindows%2Cazure-cli%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli#3deploy-your-application-code-to-azure)
-Regardless of which language of the sample you are using, they will pickup the Azure AppService hostname (https://something.azurewebsites.net/) and use it as the public endpoint. You don't need to configure something extra to make it work. If you make changes to the code or configuration, you need to redeploy the sample to Azure AppServices. Troubleshooting/debugging will not be as easy as running the sample on your local machine, where traces to the console window shows you errors, but you can achieve almost the same by using the [Log Stream](../../app-service/troubleshoot-diagnostic-logs.md#stream-logs).
+Regardless of which language sample you are using, it will pick up the Azure App Service hostname (`https://something.azurewebsites.net/`) and use it as the public endpoint. You don't need to configure anything extra to make it work. If you make changes to the code or configuration, you need to redeploy the sample to Azure App Service. Troubleshooting/debugging will not be as easy as running the sample on your local machine, where traces to the console window show you errors, but you can achieve almost the same by using the [Log Stream](../../app-service/troubleshoot-diagnostic-logs.md#stream-logs).
## Next steps
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Microsoft Entra Verified ID is now generally available (GA) as the new member of
## July 2022 - The Request Service APIs have a **new hostname** `verifiedid.did.msidentity.com`. The `beta.did.msidentity` and the `beta.eu.did.msidentity` will continue to work, but you should change your application and configuration. Also, you no longer need to specify `.eu.` for an EU tenant.-- Request Service API have **new endpoints** and **updated JSON payloads**. For issuance, see [Issuance API specification](issuance-request-api.md#issuance-request-payload) and for presentation, see [Presentation API specification](presentation-request-api.md#presentation-request-payload). The old endpoints and JSON payloads will continue to work, but you should change your applications to use the new endpoints and payloads.
+- The Request Service APIs have **new endpoints** and **updated JSON payloads**. For issuance, see [Issuance API specification](issuance-request-api.md#issuance-request-payload) and for presentation, see [Presentation API specification](presentation-request-api.md#presentation-request-payload). The old endpoints and JSON payloads will continue to work, but you should change your applications to use the new endpoints and payloads.
- Request Service API **[Error codes](error-codes.md)** have been **updated** - The **[Admin API](admin-api.md)** is made **public** and is documented. The Azure portal is using the Admin API and with this REST API you can automate the onboarding of your tenant and creation of credential contracts. - Find issuers and credentials to verify via the [Microsoft Entra Verified ID Network](how-use-vcnetwork.md).
Microsoft Entra Verified ID is now generally available (GA) as the new member of
- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). VC Administrators can still choose to use ION when setting up a tenant. If you want to use did:web instead of ION or vice versa, you'll need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service). - We are rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform:
- - Introducing Managed Credentials, Managed Credentials are verifiable credentials that no longer use of Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions.
+ - Introducing Managed Credentials, which are verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions.
- Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md). - Administrators can create a Verified Employee Managed Credential using the [new quick start](how-to-use-quickstart-verifiedemployee.md). The Verified Employee is a verifiable credential of type verifiedEmployee that is based on a pre-defined set of claims from your tenant's Azure Active Directory.
Sample contract file:
"lastName": { "claim": "$.family_name" } }, "configuration": "https://self-issued.me",
- "client_id": "",
- "redirect_uri": ""
+ "clientId": "",
+ "redirectUri": ""
} ] },
api-management How To Configure Local Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md
The self-hosted gateway also supports a number of protocols including `localsysl
| Field | Default | Description |
| - | - | - |
| telemetry.logs.std | `text` | Enables logging to standard streams. Value can be `none`, `text`, `json` |
-| telemetry.logs.local | `none` | Enables local logging. Value can be `none`, `auto`, `localsyslog`, `rfc5424`, `journal`, `json` |
+| telemetry.logs.local | `auto` | Enables local logging. Value can be `none`, `auto`, `localsyslog`, `rfc5424`, `journal`, `json` |
| telemetry.logs.local.localsyslog.endpoint | n/a | Specifies localsyslog endpoint. |
| telemetry.logs.local.localsyslog.facility | n/a | Specifies localsyslog [facility code](https://en.wikipedia.org/wiki/Syslog#Facility). e.g., `7` |
| telemetry.logs.local.rfc5424.endpoint | n/a | Specifies rfc5424 endpoint. |
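As one hedged example of wiring these fields up (not taken from this article), the telemetry settings can be passed as environment variables when running the self-hosted gateway container directly with Docker; the required gateway endpoint and authentication settings are omitted here:

```powershell
# Sketch only: enable local syslog logging for the self-hosted gateway
docker run -d `
  -e "telemetry.logs.std=json" `
  -e "telemetry.logs.local=localsyslog" `
  -e "telemetry.logs.local.localsyslog.endpoint=/dev/log" `
  -e "telemetry.logs.local.localsyslog.facility=7" `
  mcr.microsoft.com/azure-api-management/gateway:latest
```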
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
The following metadata information is requested by the agent from Azure:
* Guest configuration policy assignments * Extension requests - install, update, and delete.
+> [!NOTE]
+> Azure Arc-enabled servers doesn't store or process customer data outside the region where the customer deploys the service instance.
+ ## Deployment options and requirements To deploy the agent and connect a machine, certain [prerequisites](prerequisites.md) must be met. There are also [networking requirements](network-requirements.md) to be aware of.
azure-arc Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/data-residency.md
- Title: Data residency
-description: Data residency and information about Azure Arc-enabled servers.
- Previously updated : 08/05/2021---
-# Azure Arc-enabled servers: Data residency
-
-This article explains the concept of data residency and how it applies to Azure Arc-enabled servers.
-
-Azure Arc-enabled servers is **[available](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc)** in the **United States, Europe, United Kingdom, Australia, and Asia Pacific**.
-
-## Data residency
-
-Azure Arc-enabled servers store [Azure VM extension](manage-vm-extensions.md) configuration settings (that is, property values) the extension requires specifying before attempting to enable on the connected machine. For example, when you enable the Log Analytics VM extension, it asks for the Log Analytics **workspace ID** and **primary key**.
-
-Metadata information about the connected machine is also collected by the Azure Connected Machine agent and sent to Azure. A full list of metadata collected is available in the [instance metadata documentation](agent-overview.md#instance-metadata).
-
-Azure Arc-enabled servers allow you to specify the region where your data is stored. Microsoft may replicate to other regions for data resiliency, but Microsoft does not replicate or move data outside the geography. This data is stored in the region where the Azure Arc machine resource is configured. For example, if the machine is registered with Arc in the East US region, this data is stored in the US region.
-
-> [!NOTE]
-> For South East Asia, your data is not replicated outside of this region.
-
-For more information about our regional resiliency and compliance support, see [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/).
-
-## Next steps
-
-Learn more about designing for [Azure resiliency](/azure/architecture/reliability/architect).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
Azure Arc-enabled servers has a limit for the number of instances that can be cr
To learn more about resource type limits, see the [Resource instance limit](../../azure-resource-manager/management/resources-without-resource-group-limit.md#microsofthybridcompute) article.
+## Data residency
+
+Azure Arc-enabled servers doesn't store or process customer data outside the region where the customer deploys the service instance.
+ ## Next steps * Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review the [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-functions Azfd0001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0001.md
+
+ Title: "AZFD0001: AzureWebJobsStorage app setting is not present."
+
+description: "AZFD0001: AzureWebJobsStorage app setting is not present"
+++ Last updated : 09/03/2022++
+# AZFD0001: AzureWebJobsStorage app setting is not present.
+
+This event occurs when the function app doesn't have the `AzureWebJobsStorage` app setting configured for the function app.
+
+| | Value |
+|-|-|
+| **Event ID** |AZFD0001|
+| **Category** |[Usage]|
+| **Severity** |Error|
+
+## Event description
+The `AzureWebJobsStorage` app setting is used to store the connection string of the Azure Storage account associated with the function app. The Azure Functions runtime uses this connection for core behaviors such as coordinating singleton execution of timer triggers, default app key storage, and storing diagnostic events.
+
+For more information, see [AzureWebJobsStorage](../../functions-app-settings.md#azurewebjobsstorage).
+
+## How to resolve the event
+
+Create a new app setting on your function app with name `AzureWebJobsStorage` with a valid storage account connection string as the value. For more information, see [Work with application settings](../../functions-how-to-use-azure-function-app-settings.md#settings).
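One hedged way to do this from Azure PowerShell (Az.Functions module); the app name, resource group, and account values are placeholders:

```powershell
# Set AzureWebJobsStorage to a valid storage account connection string
Update-AzFunctionAppSetting -Name "contoso-func" -ResourceGroupName "contoso-rg" `
    -AppSetting @{
        AzureWebJobsStorage = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
    }
```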
+
+## When to suppress the event
+
+This event shouldn't be suppressed.
azure-functions Azfd0002 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0002.md
+
+ Title: "AZFD0002: Value of AzureWebJobsStorage app setting is invalid."
+
+description: "AZFD0002: Value of AzureWebJobsStorage app setting is invalid."
+++ Last updated : 09/03/2022++
+# AZFD0002: Value of AzureWebJobsStorage app setting is invalid.
+
+This event occurs when the value of the `AzureWebJobsStorage` app setting is set to either an invalid Azure Storage account connection string or to a Key Vault reference.
+
+| | Value |
+|-|-|
+| **Event ID** |AZFD0002|
+| **Category** |[Usage]|
+| **Severity** |Error|
+
+## Event description
+The `AzureWebJobsStorage` app setting is used to store the connection string of the storage account associated with the function app. The Azure Functions runtime uses this connection for core behaviors such as coordinating singleton execution of timer triggers, default app key storage, and storing diagnostic events. This app setting needs to be set to a valid connection string.
+
+For more information, see [AzureWebJobsStorage](../../functions-app-settings.md#azurewebjobsstorage).
+
+## How to resolve the event
+Update the value of the `AzureWebJobsStorage` app setting on your function app with a valid storage account connection string.
+
+## When to suppress the event
+You should suppress this event when your function app uses an Azure Key Vault reference in the `AzureWebJobsStorage` app setting instead of a connection string. For more information, see [Source application settings from Key Vault](../../../app-service/app-service-key-vault-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json#source-application-settings-from-key-vault).
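+
+For example, a Key Vault reference in the app setting uses the `@Microsoft.KeyVault(...)` syntax rather than a raw connection string. The following Azure CLI command is a sketch, assuming placeholder function app, vault, and secret names:
+
+```azurecli
+# Point AzureWebJobsStorage at a secret stored in Azure Key Vault
+az functionapp config appsettings set \
+  --name <APP_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --settings "AzureWebJobsStorage=@Microsoft.KeyVault(SecretUri=https://<VAULT_NAME>.vault.azure.net/secrets/<SECRET_NAME>/)"
+```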
azure-functions Azfd0003 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0003.md
+
+ Title: "AZFD0003: Encountered a StorageException while trying to fetch the diagnostic events."
+
+description: "AZFD0003: Encountered a StorageException while trying to fetch the diagnostic events."
+++ Last updated : 09/03/2022++
+# AZFD0003: Encountered a StorageException while trying to fetch the diagnostic events.
+
+This event occurs when the Azure Storage account connection string value in the `AzureWebJobsStorage` app setting either doesn't have permissions to access Azure Table Storage or generates exceptions when trying to connect to storage.
+
+| | Value |
+|-|-|
+| **Event ID** |AZFD0003|
+| **Category** |[Usage]|
+| **Severity** |Error|
+
+## Event description
+The `AzureWebJobsStorage` app setting is used to store the connection string of the storage account associated with the function app. The Azure Functions runtime uses this connection for core behaviors such as coordinating singleton execution of timer triggers, default app key storage, and storing diagnostic events.
+
+The connection string set in `AzureWebJobsStorage` must be for an account that has permissions to store and read diagnostic events from Table Storage. The complete set of read, write, delete, add, and create operations must be supported.
+
+For more information, see [AzureWebJobsStorage](../../functions-app-settings.md#azurewebjobsstorage).
+
+## How to resolve the event
+Make sure that the storage account for the connection string stored in `AzureWebJobsStorage` has permissions to read, write, delete, add, and create in Table Storage. Clients should be able to access Storage using this connection string without generating exceptions.
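+
+As a quick connectivity check, you can confirm that the connection string can reach Table Storage from the Azure CLI. This is a sketch, assuming the connection string is available in a `CONNECTION_STRING` environment variable:
+
+```azurecli
+# List tables to verify that the connection string can access Table Storage
+az storage table list \
+  --connection-string "$CONNECTION_STRING" \
+  --output table
+```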
+
+## When to suppress the event
+This event should not be suppressed.
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Upgrade to the latest release of the Log Analytics agent for Windows and Linux m
| Environment | Installation method | Upgrade method | |--|-|-|
-| Azure VM | Log Analytics agent VM extension for Windows/Linux | Agent is automatically upgraded [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property _autoUpgradeMinorVersion_ to **false**. Once deployed, however, the extension will not upgrade minor versions unless redeployed, even with this property set to true. Major version upgrade is always manual. See [VirtualMachineExtensionInner.AutoUpgradeMinorVersion Property](https://docs.azure.cn/dotnet/api/microsoft.azure.management.compute.fluent.models.virtualmachineextensioninner.autoupgrademinorversion?view=azure-dotnet). |
+| Azure VM | Log Analytics agent VM extension for Windows/Linux | Agent is automatically upgraded [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property _autoUpgradeMinorVersion_ to **false**. Once deployed, however, the extension will not upgrade minor versions unless redeployed, even with this property set to true. Only the Linux agent supports automatic upgrade after deployment, through the _enableAutomaticUpgrade_ property (see [Enable Auto-Update for the Linux Agent](#enable-auto-update-for-the-linux-agent)). Major version upgrades are always manual (see [VirtualMachineExtensionInner.AutoUpgradeMinorVersion Property](https://docs.azure.cn/dotnet/api/microsoft.azure.management.compute.fluent.models.virtualmachineextensioninner.autoupgrademinorversion?view=azure-dotnet)). |
| Custom Azure VM images | Manual install of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle.| | Non-Azure VMs | Manual install of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. |
Set-AzVMExtension \
-ExtensionName OmsAgentForLinux \ -ExtensionType OmsAgentForLinux \ -Publisher Microsoft.EnterpriseCloud.Monitoring \
- -TypeHandlerVersion latestVersion
+ -TypeHandlerVersion latestVersion \
-ProtectedSettingString '{"workspaceKey":"myWorkspaceKey"}' \ -SettingString '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}' \
- -EnableAutomaticUpgrade $true
+ -EnableAutomaticUpgrade $true
``` # [Azure CLI](#tab/CLILinux) ```powershell
az vm extension set \
--protected-settings '{"workspaceKey":"myWorkspaceKey"}' \ --settings '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}' \ --version latestVersion \enable-auto-upgrade true
+ --enable-auto-upgrade true
```
Perform the following steps if your Linux computers need to communicate through
``` sudo /opt/microsoft/omsagent/bin/service_control restart [<workspace id>] ```
+ If the log shows the error "cURL failed to perform on this base url", try removing the newline characters ('\n') from the end of proxy.conf to resolve the failure:
+ ```
+ # Inspect the file contents; a trailing '\n' shows up as a newline character
+ od -c /etc/opt/microsoft/omsagent/proxy.conf
+ # Rewrite the file without newline characters
+ cat /etc/opt/microsoft/omsagent/proxy.conf | tr -d '\n' > /etc/opt/microsoft/omsagent/proxy2.conf
+ # Replace the original file with the cleaned copy
+ rm /etc/opt/microsoft/omsagent/proxy.conf
+ mv /etc/opt/microsoft/omsagent/proxy2.conf /etc/opt/microsoft/omsagent/proxy.conf
+ # Restore ownership and restart the agent
+ sudo chown omsagent:omiusers /etc/opt/microsoft/omsagent/proxy.conf
+ sudo /opt/microsoft/omsagent/bin/service_control restart [<workspace id>]
+ ```
## Uninstall agent Use one of the following procedures to uninstall the Windows or Linux agent using the command line or setup wizard.
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The default pricing for Log Analytics is a Pay-As-You-Go model that's based on i
Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace. >[!NOTE]
->The billable data volume calculation is substantially smaller than the size of the entire incoming JSON-packaged event. On average across all event types, the billed size is about 25% less than the incoming data size. This can be up to 50% for small events. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
+>The billable data volume calculation is generally substantially smaller than the size of the entire incoming JSON-packaged event. Including the effect of the standard columns excluded from billing, on average across all event types the billed size is around 25% less than the incoming data size. This can be up to 50% for small events. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
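+>For example, an incoming event of about 1 KB is, on average, billed as roughly 0.75 KB of data volume.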
### Excluded columns The following [standard columns](log-standard-columns.md) that are common to all tables, are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size.
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
This section lists the most common service limits you might encounter as you use
[!INCLUDE [sentinel-service-limits](../../../includes/sentinel-limits-notebooks.md)]
-### Threat intelligence limits
+### Repositories limits
-### Watchlist limits
+### Threat intelligence limits
### User and Entity Behavior Analytics (UEBA) limits [!INCLUDE [sentinel-service-limits](../../../includes/sentinel-limits-ueba.md)]
+### Watchlist limits
++ ## Service Bus limits [!INCLUDE [azure-servicebus-limits](../../../includes/service-bus-quotas-table.md)]
communication-services Call Recording Unmixed Audio Private Preview Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/call-recording-unmixed-audio-private-preview-quickstart.md
+
+ Title: Azure Communication Services Unmixed Audio Recording API quickstart
+
+description: Private Preview quickstart for Unmixed Audio Call Recording APIs.
+++ Last updated : 09/07/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Unmixed Audio Call Recording Quickstart
++
+This quickstart gets you started recording voice and video calls, and assumes you've already used the [Calling client SDK](get-started-with-video-calling.md) to build the end-user calling experience. Using the **Calling Server APIs and SDKs**, you can enable and manage recordings.
+++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+For more information, see the following articles:
+
+- Learn more about [Call Recording](../../concepts/voice-video-calling/call-recording.md)
+- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
ms.suite: integration Previously updated : 08/18/2021 Last updated : 09/02/2022 tags: connectors
When you need to send related messages in a specific order, you can use the [*se
When you create a logic app, you can select the **Correlated in-order delivery using service bus sessions** template, which implements the sequential convoy pattern. For more information, see [Send related messages in order](../logic-apps/send-related-messages-sequential-convoy.md). + ## Delays in updates to your logic app taking effect If a Service Bus trigger's polling interval is small, such as 10 seconds, updates to your logic app workflow might not take effect for up to 10 minutes. To work around this problem, you can disable the logic app, make the changes, and then enable the logic app workflow again.
+## Troubleshooting
+
+Occasionally, operations such as completing a message or renewing a session produce the following error:
+
+``` json
+{
+ "status": 400,
+ "message": "No session available to complete the message with the lock token 'ce440818-f26f-4a04-aca8-555555555555'. clientRequestId: facae905-9ba4-44f4-a42a-888888888888",
+ "error": {
+ "message": "No session available to complete the message with the lock token 'ce440818-f26f-4a04-aca8-555555555555'."
+ }
+}
+```
+
+The Service Bus connector uses an in-memory cache to support all operations associated with sessions. The Service Bus message receiver is cached in the memory of the role instance (virtual machine) that receives the messages. To process all requests, all calls for the connection get routed to this same role instance. This behavior is required because all the Service Bus operations in a session require the same receiver that receives the messages for a specific session.
+
+Requests might not always get routed to the same role instance, for reasons such as an infrastructure update or a connector deployment. When this happens, requests fail because the receiver that performs the operations in the session isn't available in the role instance that serves the request.
+
+As long as this error happens only occasionally, it's expected. When the error happens, the message is still preserved in Service Bus, and the next trigger or workflow run tries to process the message again.
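+
+If you want to confirm that an affected message is still preserved after such an error, one option is to check the entity's message counts from the Azure CLI. For a session-enabled queue, for example, the following is a sketch, assuming placeholder resource group, namespace, and queue names:
+
+```azurecli
+# Show how many active messages remain in the queue
+az servicebus queue show \
+  --resource-group <RESOURCE_GROUP> \
+  --namespace-name <NAMESPACE_NAME> \
+  --name <QUEUE_NAME> \
+  --query countDetails.activeMessageCount
+```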
+ <a name="connector-reference"></a> ## Connector reference
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
ms.suite: integration Previously updated : 06/01/2022 Last updated : 09/02/2022 # Schedule and run recurring workflows with the Recurrence trigger in Azure Logic Apps
The Recurrence trigger is part of the built-in Schedule connector and runs nativ
1. In the designer, follow the corresponding steps, based on whether your logic app workflow is [Consumption or Standard](../logic-apps/logic-apps-overview.md#resource-environment-differences).
- **Consumption**
+### [Consumption](#tab/consumption)
1. On the designer, under the search box, select **Built-in**. 1. In the search box, enter **recurrence**.
The Recurrence trigger is part of the built-in Schedule connector and runs nativ
![Screenshot for Consumption logic app workflow designer with "Recurrence" trigger selected.](./media/connectors-native-recurrence/add-recurrence-trigger-consumption.png)
- **Standard**
+### [Standard](#tab/standard)
1. On the designer, select **Choose operation**. 1. On the **Add a trigger** pane, under the search box, select **Built-in**.
The Recurrence trigger is part of the built-in Schedule connector and runs nativ
![Screenshot for Standard logic app workflow designer with "Recurrence" trigger selected.](./media/connectors-native-recurrence/add-recurrence-trigger-standard.png) ++ 1. Set the interval and frequency for the recurrence. In this example, set these properties to run your workflow every week, for example: **Consumption**
The following example shows how a Recurrence trigger definition might appear in
} ```
+> [!NOTE]
+>
+> In the Recurrence trigger definition, the `evaluatedRecurrence` property appears along with the `recurrence` property
+> when any expression or parameter reference appears in the recurrence criteria. This `evaluatedRecurrence` property
+> represents the evaluated values from the expression or parameter reference. If the recurrence criteria doesn't
+> specify any expressions or parameter references, the `evaluatedRecurrence` and `recurrence` properties are the same.
+ The following example shows how to update the trigger definition so that the trigger runs only once on the last day of each month: ```json
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
az containerapp env create \
--location "$LOCATION" ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az containerapp env create `
- --name $CONTAINERAPPS_ENVIRONMENT `
- --resource-group $RESOURCE_GROUP `
- --location $LOCATION
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
++
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
``` ## Set up a storage queue
-Choose a name for `STORAGE_ACCOUNT`. Storage account names must be *unique within Azure* and be from 3 to 24 characters in length containing numbers and lowercase letters only.
+Begin by defining a name for the storage account. Storage account names must be *unique within Azure* and be from 3 to 24 characters in length containing numbers and lowercase letters only.
# [Bash](#tab/bash) ```bash
-STORAGE_ACCOUNT="<storage account name>"
+STORAGE_ACCOUNT_NAME="<STORAGE_ACCOUNT_NAME>"
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$STORAGE_ACCOUNT="<storage account name>"
+```azurepowershell
+$StorageAcctName = "<StorageAccountName>"
```
Create an Azure Storage account.
```azurecli az storage account create \
- --name $STORAGE_ACCOUNT \
+ --name $STORAGE_ACCOUNT_NAME \
--resource-group $RESOURCE_GROUP \ --location "$LOCATION" \ --sku Standard_RAGRS \ --kind StorageV2 ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$STORAGE_ACCOUNT = New-AzStorageAccount `
- -Name $STORAGE_ACCOUNT_NAME `
- -ResourceGroupName $RESOURCE_GROUP `
- -Location $LOCATION `
- -SkuName Standard_RAGRS `
- -Kind StorageV2
+```azurepowershell
+$StorageAcctArgs = @{
+ Name = $StorageAcctName
+ ResourceGroupName = $ResourceGroupName
+ Location = $location
+ SkuName = 'Standard_RAGRS'
+ Kind = 'StorageV2'
+}
+$StorageAcct = New-AzStorageAccount @StorageAcctArgs
```
Next, get the connection string for the queue.
# [Bash](#tab/bash) ```azurecli
-QUEUE_CONNECTION_STRING=`az storage account show-connection-string -g $RESOURCE_GROUP --name $STORAGE_ACCOUNT --query connectionString --out json | tr -d '"'`
+QUEUE_CONNECTION_STRING=`az storage account show-connection-string -g $RESOURCE_GROUP --name $STORAGE_ACCOUNT_NAME --query connectionString --out json | tr -d '"'`
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
- $QUEUE_CONNECTION_STRING=(az storage account show-connection-string -g $RESOURCE_GROUP --name $STORAGE_ACCOUNT_NAME --query connectionString --out json) -replace '"',''
+The connection string is available from the storage account's context object:
+
+```azurepowershell
+ $QueueConnectionString = (Get-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAcctName).Context.ConnectionString
```
Now you can create the message queue.
```azurecli az storage queue create \
- --name "myqueue" \
- --account-name $STORAGE_ACCOUNT \
+ --name "myqueue" \
+ --account-name $STORAGE_ACCOUNT_NAME \
--connection-string $QUEUE_CONNECTION_STRING ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$queue = New-AzStorageQueue –Name "myqueue" `
- -Context $STORAGE_ACCOUNT.Context
+```azurepowershell
+$Queue = New-AzStorageQueue -Name 'myqueue' -Context $StorageAcct.Context
```
az storage message put \
--connection-string $QUEUE_CONNECTION_STRING ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-$queueMessage = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("Hello Queue Reader App")
-$queue.CloudQueue.AddMessageAsync($QueueMessage)
+```azurepowershell
+$QueueMessage = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("Hello Queue Reader App")
+$Queue.CloudQueue.AddMessageAsync($QueueMessage).GetAwaiter().GetResult()
```
+A result of `Microsoft.Azure.Storage.Core.NullType` is returned when the message is added to the queue.
+ ## Deploy the background application
az deployment group create --resource-group "$RESOURCE_GROUP" \
location="$LOCATION" ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$params = @{
- environment_name = $CONTAINERAPPS_ENVIRONMENT
- location = $LOCATION
- queueconnection=$QUEUE_CONNECTION_STRING
+```azurepowershell
+$Params = @{
+ environment_name = $ContainerAppsEnvironment
+ location = $Location
+ queueconnection = $QueueConnectionString
}
-New-AzResourceGroupDeployment `
- -ResourceGroupName $RESOURCE_GROUP `
- -TemplateParameterObject $params `
- -TemplateFile ./queue.json `
- -SkipTemplateParameterPrompt
+$DeploymentArgs = @{
+ ResourceGroupName = $ResourceGroupName
+ TemplateParameterObject = $Params
+ TemplateFile = './queue.json'
+ SkipTemplateParameterPrompt = $true
+}
+New-AzResourceGroupDeployment @DeploymentArgs
```
The application scales out to 10 replicas based on the queue length as defined i
## Verify the result
-The container app runs as a background process. As messages arrive from the Azure Storage Queue, the application creates log entries in Log analytics. You must wait a few minutes for the analytics to arrive for the first time before you are able to query the logged data.
+The container app runs as a background process. As messages arrive from the Azure Storage Queue, the application creates log entries in Log analytics. You must wait a few minutes for the analytics to arrive for the first time before you're able to query the logged data.
Run the following command to see logged messages. This command requires the Log analytics extension, so accept the prompt to install extension when requested.
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPP
az monitor log-analytics query \ --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'queuereader' and Log_s contains 'Message ID'" \
+ --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'queuereader' and Log_s contains 'Message ID' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s | take 5" \
--out table ```
-# [PowerShell](#tab/powershell)
-
-```powershell
-$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
+# [Azure PowerShell](#tab/azure-powershell)
-$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'queuereader' and Log_s contains 'Message ID'"
+```azurepowershell
+$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'queuereader' and Log_s contains 'Message ID' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s | take 5"
$queryResults.Results ```
$queryResults.Results
## Clean up resources
-Once you are done, run the following command to delete the resource group that contains your Container Apps resources.
+Once you're done, run the following command to delete the resource group that contains your Container Apps resources.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
# [Bash](#tab/bash)
az group delete \
--resource-group $RESOURCE_GROUP ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-Remove-AzResourceGroup -Name $RESOURCE_GROUP -Force
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
```
-This command deletes the entire resource group including the Container Apps instance, storage account, Log Analytics workspace, and any other resources in the resource group.
+
container-apps Dapr Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-github-actions.md
+
+ Title: Tutorial - Deploy a Dapr application with GitHub Actions for Azure Container Apps
+description: Learn about multiple revision management by deploying a Dapr application with GitHub Actions and Azure Container Apps.
+++++ Last updated : 09/02/2022+++
+# Tutorial: Deploy a Dapr application with GitHub Actions for Azure Container Apps
+
+[GitHub Actions](https://docs.github.com/en/actions) gives you the flexibility to build an automated software development lifecycle workflow. In this tutorial, you'll see how revision-scope changes to a container app that uses [Dapr](https://docs.dapr.io) can be deployed through a GitHub Actions workflow.
+
+Dapr is an open source project that helps developers with the inherent challenges presented by distributed applications, such as state management and service invocation. Azure Container Apps integrates with a [managed version of Dapr](./dapr-overview.md).
+
+In this tutorial, you'll:
+
+> [!div class="checklist"]
+> - Configure a GitHub Actions workflow for deploying the end-to-end solution to Azure Container Apps.
+> - Modify the source code with a [revision-scope change](revisions.md#revision-scope-changes) to trigger the Build and Deploy GitHub workflow.
+> - Learn how revisions are created for container apps in multi-revision mode.
+
+The [sample solution](https://github.com/Azure-Samples/container-apps-store-api-microservice) consists of three Dapr-enabled microservices and uses Dapr APIs for service-to-service communication and state management.
++
+> [!NOTE]
+> This tutorial focuses on the solution deployment outlined below. If you're interested in building and running the solution on your own, [follow the README instructions within the repo](https://github.com/azure-samples/container-apps-store-api-microservice/build-and-run.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+ - [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Contributor or Owner permissions on the Azure subscription.
+- A GitHub account.
+ - If you don't have one, sign up for [free](https://github.com/join).
+- Install [Git](https://github.com/git-guides/install-git).
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
+
+## Set up the environment
+
+In the console, set the following environment variables:
+
+# [Bash](#tab/bash)
+
+```bash
+RESOURCE_GROUP="my-containerapp-store"
+LOCATION="canadacentral"
+GITHUB_USERNAME="your-GitHub-username"
+SUBSCRIPTION_ID="your-subscription-id"
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$RESOURCE_GROUP="my-containerapp-store"
+$LOCATION="canadacentral"
+$GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
+$SUBSCRIPTION_ID="<YOUR_SUBSCRIPTION_ID>"
+```
+++
+Sign in to Azure from the CLI using the following command, and follow the prompts in your browser to complete the authentication process.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az login
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az login
+```
+++
+Ensure you're running the latest version of the CLI via the upgrade command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az upgrade
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az upgrade
+```
+++
+Now that you've validated your Azure CLI setup, bring the application code to your local machine.
+
+## Get application code
+
+1. Navigate to the [sample GitHub repo](https://github.com/Azure-Samples/container-apps-store-api-microservice.git) and click **Fork** in the top-right corner of the page.
+
+1. Use the following [git](https://git-scm.com/downloads) command with your GitHub username to clone **your fork** of the repo to your development environment:
+
+# [Bash](#tab/bash)
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/container-apps-store-api-microservice.git
+```
+
+# [PowerShell](#tab/powershell)
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/container-apps-store-api-microservice.git
+```
+++
+Navigate into the cloned directory.
+
+```console
+cd container-apps-store-api-microservice
+```
+
+The repository includes the following resources:
+
+- The source code for each application
+- Deployment manifests
+- A GitHub Actions workflow file
+
+## Deploy Dapr solution using GitHub Actions
+
+The GitHub Actions workflow YAML file in the `/.github/workflows/` folder executes the following steps in the background as you work through this tutorial:
+
+| Section | Tasks |
+| - | -- |
+| **Authentication** | Log in to a private container registry (GitHub Container Registry) |
+| **Build** | Build & push the container images for each microservice |
+| **Authentication** | Log in to Azure |
+| **Deploy using bicep** | 1. Create a resource group <br>2. Deploy Azure Resources for the solution using bicep |
+
+The following resources are deployed via the bicep template in the `/deploy` path of the repository:
+
+- Log Analytics workspace
+- Application Insights
+- Container apps environment
+- Order service container app
+- Inventory container app
+- Azure Cosmos DB
+
+### Create a service principal
+
+The workflow requires a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) to authenticate to Azure. In the console, run the following command and replace `<SERVICE_PRINCIPAL_NAME>` with your own unique value.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az ad sp create-for-rbac \
+ --name <SERVICE_PRINCIPAL_NAME> \
+ --role "contributor" \
+ --scopes /subscriptions/$SUBSCRIPTION_ID \
+ --sdk-auth
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az ad sp create-for-rbac `
+ --name <SERVICE_PRINCIPAL_NAME> `
+ --role "contributor" `
+ --scopes /subscriptions/$SUBSCRIPTION_ID `
+ --sdk-auth
+```
++
+The output is the role assignment credentials that provide access to your resource. The command should output a JSON object similar to:
+
+```json
+ {
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>"
+ (...)
+ }
+```
+
+Copy the JSON object output and save it to a file on your machine. You use this file as you authenticate from GitHub.
+
+### Configure GitHub Secrets
+
+1. While in GitHub, browse to your forked repository for this tutorial.
+1. Select the **Settings** tab.
+1. Select **Secrets** > **Actions**.
+1. On the **Actions secrets** page, select **New repository secret**.
+
+ :::image type="content" source="media/dapr-github-actions/secrets-actions.png" alt-text="Screenshot of selecting settings, then actions from under secrets in the menu, then the new repository secret button.":::
+
+1. Create the following secrets:
+
+ | Name | Value |
+ | - | -- |
+ | `AZURE_CREDENTIALS` | The JSON output you saved earlier from the service principal creation |
+ | `RESOURCE_GROUP` | Set as **my-containerapp-store** |
+
+   :::image type="content" source="media/dapr-github-actions/secrets.png" alt-text="Screenshot of both secrets once created.":::
+
+### Trigger the GitHub Action
+
+To build and deploy the initial solution to Azure Container Apps, run the **Build and Deploy** workflow.
+
+1. Open the **Actions** tab in your GitHub repository.
+1. In the left side menu, select the **Build and Deploy** workflow.
+
+ :::image type="content" source="media/dapr-github-actions/run-workflow.png" alt-text="Screenshot of the Actions tab in GitHub and running the workflow.":::
+
+1. Select **Run workflow**.
+1. In the prompt, leave the *Use workflow from* value as **Branch: main**.
+1. Select **Run workflow**.
+
+### Verify the deployment
+
+After the workflow successfully completes, verify the application is running in Azure Container Apps.
+
+1. Navigate to the [Azure portal](https://portal.azure.com).
+1. In the search field, enter **my-containerapp-store** and select the **my-containerapp-store** resource group.
+
+ :::image type="content" source="media/dapr-github-actions/search-resource-group.png" alt-text="Screenshot of searching for and finding my container app store resource group.":::
+
+1. Navigate to the container app called **node-app**.
+
+ :::image type="content" source="media/dapr-github-actions/node-app.png" alt-text="Screenshot of the node app container app in the resource group list of resources.":::
+
+1. Select the **Application Url**.
+
+ :::image type="content" source="media/dapr-github-actions/app-url.png" alt-text="Screenshot of the application url.":::
+
+1. Ensure the application was deployed successfully by creating a new order:
+ 1. Enter an **Id** and **Item**.
+ 1. Select **Create**.
+
+ :::image type="content" source="media/dapr-github-actions/create-order.png" alt-text="Screenshot of creating an order via the application url.":::
+
+ If the order is persisted, you're redirected to a page that says "Order created!"
+
+1. Navigate back to the previous page.
+
+1. View the item you created via the **View Order** form:
+ 1. Enter the item **Id**.
+ 1. Select **View**.
+
+ :::image type="content" source="media/dapr-github-actions/view-order.png" alt-text="Screenshot of viewing the order via the view order form.":::
+
+   You're redirected to a new page that displays the order object.
+
+1. In the Azure portal, navigate to **Application** > **Revision Management** in the **node-app** container.
+
+ Note that, at this point, only one revision is available for this app.
+
+ :::image type="content" source="media/dapr-github-actions/single-revision-view.png" alt-text="Screenshot of checking the number of revisions at this point of the tutorial.":::
++
+## Modify the source code to trigger a new revision
+
+Container Apps run in single-revision mode by default. In the Container Apps bicep module, we explicitly set the revision mode to multiple. This means that once the source code is changed and committed, the GitHub build/deploy workflow builds and pushes a new container image to GitHub Container Registry. Changing the container image is considered a [revision-scope](revisions.md#revision-scope-changes) change and results in a new container app revision.
+
+> [!NOTE]
+> [Application-scope](revisions.md#application-scope-changes) changes do not create a new revision.
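+
+After each deployment, you can list the app's revisions from the Azure CLI to see new revisions appear. This is a sketch, assuming the app and resource group names used in this tutorial:
+
+```azurecli
+# List all revisions of the node-app container app
+az containerapp revision list \
+  --name node-app \
+  --resource-group my-containerapp-store \
+  --output table
+```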
+
+To demonstrate the inner-loop experience for creating revisions via GitHub actions, you'll make a change to the frontend application and commit this change to your repo.
+
+1. Return to the console, and navigate into the *node-service/views* directory in the forked repository.
+
+
+ # [Bash](#tab/bash)
+
+   ```console
+ cd node-service/views
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+   ```console
+ cd node-service/views
+ ```
+
+
+1. Open the *index.jade* file in your editor of choice.
++
+ # [Bash](#tab/bash)
+
+   ```console
+ code index.jade .
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+   ```console
+ code index.jade .
+ ```
+
+
+1. At the bottom of the file, uncomment the following code to enable deleting an order from the Dapr state store.
+
+ ```jade
+ h2= 'Delete Order'
+ br
+ br
+ form(action='/order/delete', method='post')
+ div.input
+ span.label Id
+ input(type='text', name='id', placeholder='foo', required='required')
+ div.actions
+ input(type='submit', value='View')
+ ```
+
+1. Stage the changes and push to the `main` branch of your fork using git.
+
+ # [Bash](#tab/bash)
+
+   ```git
+ git add .
+ git commit -m '<commit message>'
+ git push origin main
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+   ```git
+ git add .
+ git commit -m '<commit message>'
+ git push origin main
+ ```
+
+
+### View the new revision
+
+1. In the GitHub UI of your fork, select the **Actions** tab to verify the GitHub **Build and Deploy** workflow is running.
+
+1. Once the workflow is complete, navigate to the **my-containerapp-store** resource group in the Azure portal.
+
+1. Select the **node-app** container app.
+
+1. In the left side menu, select **Application** > **Revision Management**.
+
+ :::image type="content" source="media/dapr-github-actions/revision-mgmt.png" alt-text="Screenshot that shows Revision Management in the left side menu.":::
+
+   Since our container app is in **multiple revision mode**, Container Apps creates a new revision and automatically sets it to `active` with 100% of the traffic.
+
+ :::image type="content" source="media/dapr-github-actions/two-revisions.png" alt-text="Screenshot that shows both the inactive and active revisions on the node app.":::
+
+1. Select each revision in the **Revision management** table to view revision details.
+
+ :::image type="content" source="media/dapr-github-actions/revision-details.png" alt-text="Screenshot of the revision details for the active node app revision.":::
+
+1. View the new revision in action by refreshing the node-app UI.
+
+1. Test the application further by deleting the order you created in the container app.
+
+ :::image type="content" source="media/dapr-github-actions/delete-order.png" alt-text="Screenshot of deleting the order created earlier in the tutorial.":::
+
+   You're redirected to a page indicating that the order was removed.
+
+## Clean up resources
+
+Once you've finished the tutorial, run the following command to delete your resource group, along with all the resources you created in this tutorial.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete \
+ --resource-group $RESOURCE_GROUP
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+Remove-AzResourceGroup -Name $RESOURCE_GROUP -Force
+```
++++
+## Next steps
+
+Learn more about how [Dapr integrates with Azure Container Apps](./dapr-overview.md).
+
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Based on your needs, you can "plug in" certain Dapr component types like state s
# [YAML](#tab/yaml)
-When defining a Dapr component via YAML, you'll pass your component manifest into the Azure CLI. When configuring multiple components, you'll need to create a separate YAML file and run the Azure CLI command for each component.
-
-For example, deploy a `pubsub.yaml` component using the following command:
+When defining a Dapr component via YAML, you'll pass your component manifest into the Azure CLI. For example, deploy a `pubsub.yaml` component using the following command:
```azurecli az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub --yaml "./pubsub.yaml"
scopes:
# [Bicep](#tab/bicep)
-This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
-
-The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
+This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
```bicep resource daprComponent 'daprComponents@2022-03-01' = {
resource daprComponent 'daprComponents@2022-03-01' = {
# [ARM](#tab/arm)
-A Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
-
-This resource defines a Dapr component called `dapr-pubsub` via ARM. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
+This resource defines a Dapr component called `dapr-pubsub` via ARM. The Dapr component is defined as a child resource of your Container Apps environment. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
```json {
Version upgrades are handled transparently by Azure Container Apps. You can find
## Next Steps
-Now that you've learned about Dapr and some of the challenges it solves, try [Deploying a Dapr application to Azure Container Apps using the Azure CLI][dapr-quickstart] or [Azure Resource Manager][dapr-arm-quickstart].
+Now that you've learned about Dapr and some of the challenges it solves:
+
+- Try [Deploying a Dapr application to Azure Container Apps using the Azure CLI][dapr-quickstart] or [Azure Resource Manager][dapr-arm-quickstart].
+- Walk through a tutorial [using GitHub Actions to automate changes for a multi-revision, Dapr-enabled container app][dapr-github-actions].
<!-- Links Internal --> [dapr-quickstart]: ./microservices-dapr.md [dapr-arm-quickstart]: ./microservices-dapr-azure-resource-manager.md [aca-secrets]: ./manage-secrets.md
+[dapr-github-actions]: ./dapr-github-actions.md
<!-- Links External --> [dapr-concepts]: https://docs.dapr.io/concepts/overview/
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
Previously updated : 03/21/2022 Last updated : 08/31/2022 zone_pivot_groups: container-apps-registry-types
az containerapp env create \
--location $LOCATION ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
-```azurecli
-az containerapp env create `
- --name $CONTAINERAPPS_ENVIRONMENT `
- --resource-group $RESOURCE_GROUP `
- --location $LOCATION
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
```
The example shown in this article demonstrates how to use a custom container ima
- Enable external or internal ingress - Provide minimum and maximum replica values or scale rules
-For details on how to provide values for any of these parameters to the `create` command, run `az containerapp create --help`.
- ::: zone pivot="container-apps-private-registry"
-If you are using Azure Container Registry (ACR), you can login to your registry and forego the need to use the `--registry-username` and `--registry-password` parameters in the `az containerapp create` command and eliminate the need to set the REGISTRY_USERNAME and REGISTRY_PASSWORD variables.
- # [Bash](#tab/bash)
-```azurecli
-az acr login --name <REGISTRY_NAME>
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-az acr login --name <REGISTRY_NAME>
-```
---
-# [Bash](#tab/bash)
+For details on how to provide values for any of these parameters to the `create` command, run `az containerapp create --help`.
```bash CONTAINER_IMAGE_NAME=<CONTAINER_IMAGE_NAME>
REGISTRY_PASSWORD=<REGISTRY_PASSWORD>
(Replace the \<placeholders\> with your values.)
-If you have logged in to ACR, you can omit the `--registry-username` and `--registry-password` parameters in the `az containerapp create` command.
- ```azurecli az containerapp create \ --name my-container-app \
az containerapp create \
--registry-password $REGISTRY_PASSWORD ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$CONTAINER_IMAGE_NAME=<CONTAINER_IMAGE_NAME>
-$REGISTRY_SERVER=<REGISTRY_SERVER>
-$REGISTRY_USERNAME=<REGISTRY_USERNAME>
-$REGISTRY_PASSWORD=<REGISTRY_PASSWORD>
+```azurepowershell
+$ContainerImageName = "<CONTAINER_IMAGE_NAME>"
+$RegistryServer = "<REGISTRY_SERVER>"
+$RegistryUsername = "<REGISTRY_USERNAME>"
+$RegistryPassword = "<REGISTRY_PASSWORD>"
``` (Replace the \<placeholders\> with your values.)
-If you have logged in to ACR, you can omit the `--registry-username` and `--registry-password` parameters in the `az containerapp create` command.
-
-```powershell
-az containerapp create `
- --name my-container-app `
- --resource-group $RESOURCE_GROUP `
- --image $CONTAINER_IMAGE_NAME `
- --environment $CONTAINERAPPS_ENVIRONMENT `
- --registry-server $REGISTRY_SERVER `
- --registry-username $REGISTRY_USERNAME `
- --registry-password $REGISTRY_PASSWORD
+```azurepowershell
+$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName $ResourceGroupName -EnvName $ContainerAppsEnvironment).Id
+
+$TemplateObj = New-AzContainerAppTemplateObject -Name my-container-app -Image $ContainerImageName
+
+$RegistrySecretObj = New-AzContainerAppSecretObject -Name registry-secret -Value $RegistryPassword
+
+$RegistryArgs = @{
+ PasswordSecretRef = 'registry-secret'
+ Server = $RegistryServer
+ Username = $RegistryUsername
+}
+
+$RegistryObj = New-AzContainerAppRegistryCredentialObject @RegistryArgs
+
+$ContainerAppArgs = @{
+ Name = 'my-container-app'
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ ManagedEnvironmentId = $EnvId
+ TemplateContainer = $TemplateObj
+ ConfigurationRegistry = $RegistryObj
+ ConfigurationSecret = $RegistrySecretObj
+}
+
+New-AzContainerApp @ContainerAppArgs
```
az containerapp create \
 --name my-container-app \ --resource-group $RESOURCE_GROUP \ --environment $CONTAINERAPPS_ENVIRONMENT
+```
+
+If you have enabled ingress on your container app, you can add `--query properties.configuration.ingress.fqdn` to the `create` command to return the public URL for the application.
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az containerapp create `
- --image <REGISTRY_CONTAINER_NAME> `
- --name my-container-app `
- --resource-group $RESOURCE_GROUP `
- --environment $CONTAINERAPPS_ENVIRONMENT
+```azurepowershell
+$TemplateObj = New-AzContainerAppTemplateObject -Name my-container-app -Image "<REGISTRY_CONTAINER_NAME>"
+```
+
+(Replace the \<REGISTRY_CONTAINER_NAME\> with your value.)
+
+```azurepowershell
+$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName $ResourceGroupName -EnvName $ContainerAppsEnvironment).Id
+
+$ContainerAppArgs = @{
+ Name = "my-container-app"
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ ManagedEnvironmentId = $EnvId
+ TemplateContainer = $TemplateObj
+}
+New-AzContainerApp @ContainerAppArgs
```
Before you run this command, replace `<REGISTRY_CONTAINER_NAME>` with the full n
::: zone-end
-If you have enabled ingress on your container app, you can add `--query properties.configuration.ingress.fqdn` to the `create` command to return the public URL for the application.
- ## Verify deployment
-To verify a successful deployment, you can query the Log Analytics workspace. You might have to wait 5–10 minutes after deployment for the analytics to arrive for the first time before you are able to query the logs.
+To verify a successful deployment, you can query the Log Analytics workspace. You might have to wait a few minutes after deployment for the analytics to arrive for the first time before you're able to query the logs. This depends on the console logging implemented in your container app.
-After about 5-10 minutes has passed, use the following steps to view logged messages.
+Use the following commands to view console log messages.
# [Bash](#tab/bash)
az monitor log-analytics query \
--out table ```
-# [PowerShell](#tab/powershell)
-
-```powershell
-$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
-
+# [Azure PowerShell](#tab/azure-powershell)
-az monitor log-analytics query `
- --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated" `
- --out table
+```azurepowershell
+$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated"
+$queryResults.Results
```
az monitor log-analytics query `
If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
+ # [Bash](#tab/bash) ```azurecli
-az group delete \
- --name $RESOURCE_GROUP
+az group delete --name $RESOURCE_GROUP
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-az group delete `
- --name $RESOURCE_GROUP
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
```
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
In this quickstart, you create a secure Container Apps environment and deploy yo
[!INCLUDE [container-apps-create-cli-steps.md](../../includes/container-apps-create-cli-steps.md)]
-To create the environment, run the following command:
- # [Bash](#tab/bash)
+To create the environment, run the following command:
+ ```azurecli az containerapp env create \ --name $CONTAINERAPPS_ENVIRONMENT \
az containerapp env create \
--location $LOCATION ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
-```azurecli
-az containerapp env create `
- --name $CONTAINERAPPS_ENVIRONMENT `
- --resource-group $RESOURCE_GROUP `
- --location $LOCATION
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
```
az containerapp create \
--query properties.configuration.ingress.fqdn ```
-# [PowerShell](#tab/powershell)
+> [!NOTE]
+> Make sure the value for the `--image` parameter is in lower case.
-```azurecli
-az containerapp create `
- --name my-container-app `
- --resource-group $RESOURCE_GROUP `
- --environment $CONTAINERAPPS_ENVIRONMENT `
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest `
- --target-port 80 `
- --ingress 'external' `
- --query properties.configuration.ingress.fqdn
-```
+By setting `--ingress` to `external`, you make the container app available to public requests.
-
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$ImageParams = @{
+ Name = 'my-container-app'
+ Image = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
+}
+$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
+$EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
+
+$AppArgs = @{
+ Name = 'my-container-app'
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ ManagedEnvironmentId = $EnvId
+ IdentityType = 'SystemAssigned'
+ TemplateContainer = $TemplateObj
+ IngressTargetPort = 80
+ IngressExternal = $true
+    IngressExternal = $true
+}
+```
> [!NOTE]
-> Make sure the value for the `--image` parameter is in lower case.
+> Make sure the value for the `Image` parameter is in lower case.
-By setting `--ingress` to `external`, you make the container app available to public requests.
+By setting `IngressExternal` to `$true`, you make the container app available to public requests.
++ ## Verify deployment
-The `create` command returned the fully qualified domain name for the container app. Copy this location to a web browser and see the following message:
+# [Bash](#tab/bash)
+
+The `create` command returns the fully qualified domain name for the container app. Copy this location to a web browser.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Get the fully qualified domain name for the container app.
+
+```azurepowershell
+(Get-AzContainerApp -Name $AppArgs.Name -ResourceGroupName $ResourceGroupName).IngressFqdn
+```
+
+Copy this location to a web browser.
+++
+ The following message is displayed when the container app is deployed:
:::image type="content" source="media/get-started/azure-container-apps-quickstart.png" alt-text="Your first Azure Container Apps deployment.":::
The `create` command returned the fully qualified domain name for the container
If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
+ # [Bash](#tab/bash) ```azurecli
-az group delete \
- --name $RESOURCE_GROUP
+az group delete --name $RESOURCE_GROUP
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-Remove-AzResourceGroup -Name $RESOURCE_GROUP -Force
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
```
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Previously updated : 06/23/2022 Last updated : 08/31/2022 ms.devlang: azurecli
The following architecture diagram illustrates the components that make up this
-Individual container apps are deployed to an Azure Container Apps environment. To create the environment, run the following command:
# [Bash](#tab/bash)
+Individual container apps are deployed to an Azure Container Apps environment. To create the environment, run the following command:
+
```azurecli
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
az containerapp env create \
  --location "$LOCATION"
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az containerapp env create `
- --name $CONTAINERAPPS_ENVIRONMENT `
- --resource-group $RESOURCE_GROUP `
- --location "$LOCATION"
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
++
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
```
az containerapp env create `
### Create an Azure Blob Storage account
-Choose a name for `STORAGE_ACCOUNT`. Storage account names must be *unique within Azure*, from 3 to 24 characters in length and must contain numbers and lowercase letters only.
+Choose a name for the storage account. Storage account names must be *unique within Azure*, 3 to 24 characters in length, and contain only numbers and lowercase letters.
# [Bash](#tab/bash)

```bash
-STORAGE_ACCOUNT="<storage account name>"
+STORAGE_ACCOUNT_NAME="<storage account name>"
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$STORAGE_ACCOUNT="<storage account name>"
+```azurepowershell
+$StorageAcctName = "<storage account name>"
```
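As a quick, illustrative sanity check (not part of the original quickstart), you can validate the chosen name against these rules before creating the account:

```azurepowershell
# Illustrative only: storage account names must be 3-24 characters,
# lowercase letters and numbers only.
if ($StorageAcctName -notmatch '^[a-z0-9]{3,24}$') {
    throw "Invalid storage account name: $StorageAcctName"
}
```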
-Set the `STORAGE_ACCOUNT_CONTAINER` name.
# [Bash](#tab/bash)
+Set the `STORAGE_ACCOUNT_CONTAINER` name.
+
```bash
STORAGE_ACCOUNT_CONTAINER="mycontainer"
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
+
+Set the storage account container name.
-```powershell
-$STORAGE_ACCOUNT_CONTAINER="mycontainer"
+```azurepowershell
+$StorageAcctContainerName = 'mycontainer'
```
Use the following command to create an Azure Storage account.
```azurecli
az storage account create \
- --name $STORAGE_ACCOUNT \
+ --name $STORAGE_ACCOUNT_NAME \
  --resource-group $RESOURCE_GROUP \
  --location "$LOCATION" \
  --sku Standard_RAGRS \
  --kind StorageV2
```
-# [PowerShell](#tab/powershell)
-
-```azurecli
-az storage account create `
- --name $STORAGE_ACCOUNT `
- --resource-group $RESOURCE_GROUP `
- --location "$LOCATION" `
- --sku Standard_RAGRS `
- --kind StorageV2
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$StorageAcctArgs = @{
+ Name = $StorageAcctName
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ SkuName = 'Standard_RAGRS'
+ Kind = "StorageV2"
+}
+$StorageAccount = New-AzStorageAccount @StorageAcctArgs
```
Get the storage account key with the following command:
# [Bash](#tab/bash)

```azurecli
-STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT --query '[0].value' --out tsv`
+STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' --out tsv`
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-$STORAGE_ACCOUNT_KEY=(az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT --query '[0].value' --out tsv)
+```azurepowershell
+$StorageAcctKey = (Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAcctName) | Where-Object {$_.KeyName -eq "key1"}
```

### Configure the state store component
+# [Bash](#tab/bash)
Create a config file named *statestore.yaml* with the properties that you sourced from the previous steps. This file helps enable your Dapr app to access your state store. The following example shows how your *statestore.yaml* file should look when configured for your Azure Blob Storage account:

```yaml
metadata:
- name: accountKey
  secretRef: account-key
- name: containerName
-  value: mycontainer
+  value: <STORAGE_ACCOUNT_CONTAINER>
secrets:
- name: account-key
  value: "<STORAGE_ACCOUNT_KEY>"
scopes:
To use this file, update the placeholders:

-- Replace `<STORAGE_ACCOUNT>` with the value of the `STORAGE_ACCOUNT` variable you defined. To obtain its value, run the following command:
+* Replace `<STORAGE_ACCOUNT>` with the value of the `STORAGE_ACCOUNT_NAME` variable you defined. To obtain its value, run the following command:
+ ```azurecli
- echo $STORAGE_ACCOUNT
+ echo $STORAGE_ACCOUNT_NAME
    ```
-- Replace `<STORAGE_ACCOUNT_KEY>` with the storage account key. To obtain its value, run the following command:
+* Replace `<STORAGE_ACCOUNT_KEY>` with the storage account key. To obtain its value, run the following command:
+
    ```azurecli
    echo $STORAGE_ACCOUNT_KEY
    ```
-If you've changed the `STORAGE_ACCOUNT_CONTAINER` variable from its original value, `mycontainer`, replace the value of `containerName` with your own value.
+* Replace `<STORAGE_ACCOUNT_CONTAINER>` with the storage account container name. To obtain its value, run the following command:
+
+ ```azurecli
+ echo $STORAGE_ACCOUNT_CONTAINER
+ ```
> [!NOTE]
> Container Apps does not currently support the native [Dapr components schema](https://docs.dapr.io/operations/components/component-schema/). The above example uses the supported schema.

Navigate to the directory in which you stored the *statestore.yaml* file and run the following command to configure the Dapr component in the Container Apps environment.
-If you need to add multiple components, create a separate YAML file for each component and run the `az containerapp env dapr-component set` command multiple times to add each component. For more information about configuring Dapr components, see [Configure Dapr components](dapr-overview.md#configure-dapr-components).
--
-# [Bash](#tab/bash)
+If you need to add multiple components, create a separate YAML file for each component, and run the `az containerapp env dapr-component set` command multiple times to add each component. For more information about configuring Dapr components, see [Configure Dapr components](dapr-overview.md#configure-dapr-components).
```azurecli
az containerapp env dapr-component set \
az containerapp env dapr-component set \
  --yaml statestore.yaml
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az containerapp env dapr-component set `
- --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP `
- --dapr-component-name statestore `
- --yaml statestore.yaml
+```azurepowershell
+
+$AcctName = New-AzContainerAppDaprMetadataObject -Name "accountName" -Value $StorageAcctName
+
+$AcctKey = New-AzContainerAppDaprMetadataObject -Name "accountKey" -SecretRef "account-key"
+
+$ContainerName = New-AzContainerAppDaprMetadataObject -Name "containerName" -Value $StorageAcctContainerName
+
+$Secret = New-AzContainerAppSecretObject -Name "account-key" -Value $StorageAcctKey.Value
+
+$DaprArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ DaprName = 'statestore'
+ Metadata = $AcctName, $AcctKey, $ContainerName
+ Secret = $Secret
+ Scope = 'nodeapp'
+ Version = "v1"
+ ComponentType = 'state.azure.blobstorage'
+}
+
+New-AzContainerAppManagedEnvDapr @DaprArgs
```
-Your state store is configured using the Dapr component described in *statestore.yaml*. The component is scoped to a container app named `nodeapp` and isn't available to other container apps.
+Your state store is configured using the Dapr component type of `state.azure.blobstorage`. The component is scoped to a container app named `nodeapp` and isn't available to other container apps.
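As a hedged verification step, assuming the Az.App module exposes a `Get-AzContainerAppManagedEnvDapr` counterpart to the `New-` cmdlet used above, you can read the component back to confirm its type and scope:

```azurepowershell
# Assumed cmdlet and parameter names; verify against the Az.App module reference.
Get-AzContainerAppManagedEnvDapr -EnvName $ContainerAppsEnvironment `
    -ResourceGroupName $ResourceGroupName -DaprName 'statestore'
```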
## Deploy the service application (HTTP web server)
az containerapp create \
  --env-vars 'APP_PORT=3000'
```
-# [PowerShell](#tab/powershell)
+This command deploys:
-```azurecli
-az containerapp create `
- --name nodeapp `
- --resource-group $RESOURCE_GROUP `
- --environment $CONTAINERAPPS_ENVIRONMENT `
- --image dapriosamples/hello-k8s-node:latest `
- --target-port 3000 `
- --ingress 'internal' `
- --min-replicas 1 `
- --max-replicas 1 `
- --enable-dapr `
- --dapr-app-id nodeapp `
- --dapr-app-port 3000 `
- --env-vars 'APP_PORT=3000'
+* The service (Node) app server on `--target-port 3000` (the app port)
+* Its accompanying Dapr sidecar configured with `--dapr-app-id nodeapp` and `--dapr-app-port 3000` for service discovery and invocation
++
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$EnvId = (Get-AzContainerAppManagedEnv -ResourceGroupName $ResourceGroupName -EnvName $ContainerAppsEnvironment).Id
+
+$EnvVars = New-AzContainerAppEnvironmentVarObject -Name APP_PORT -Value 3000
+
+$TemplateArgs = @{
+ Name = 'nodeapp'
+ Image = 'dapriosamples/hello-k8s-node:latest'
+ Env = $EnvVars
+}
+$ServiceTemplateObj = New-AzContainerAppTemplateObject @TemplateArgs
+
+$ServiceArgs = @{
+ Name = "nodeapp"
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ ManagedEnvironmentId = $EnvId
+ TemplateContainer = $ServiceTemplateObj
+ IngressTargetPort = 3000
+ ScaleMinReplica = 1
+ ScaleMaxReplica = 1
+ DaprEnabled = $true
+ DaprAppId = 'nodeapp'
+ DaprAppPort = 3000
+}
+New-AzContainerApp @ServiceArgs
```
+This command deploys:
+
+* The service (Node) app server on `DaprAppPort 3000` (the app port)
+* Its accompanying Dapr sidecar configured with `-DaprAppId nodeapp` and `-DaprAppPort 3000` for service discovery and invocation
+ By default, the image is pulled from [Docker Hub](https://hub.docker.com/r/dapriosamples/hello-k8s-node).
-This command deploys:
-
-* the service (Node) app server on `--target-port 3000` (the app port)
-* its accompanying Dapr sidecar configured with `--dapr-app-id nodeapp` and `--dapr-app-port 3000'` for service discovery and invocation
## Deploy the client application (headless client)
az containerapp create \
  --dapr-app-id pythonapp
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az containerapp create `
- --name pythonapp `
- --resource-group $RESOURCE_GROUP `
- --environment $CONTAINERAPPS_ENVIRONMENT `
- --image dapriosamples/hello-k8s-python:latest `
- --min-replicas 1 `
- --max-replicas 1 `
- --enable-dapr `
- --dapr-app-id pythonapp
+```azurepowershell
+
+$TemplateArgs = @{
+ Name = 'pythonapp'
+ Image = 'dapriosamples/hello-k8s-python:latest'
+}
+
+$ClientTemplateObj = New-AzContainerAppTemplateObject @TemplateArgs
++
+$ClientArgs = @{
+ Name = 'pythonapp'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ ManagedEnvironmentId = $EnvId
+ TemplateContainer = $ClientTemplateObj
+ ScaleMinReplica = 1
+ ScaleMaxReplica = 1
+ DaprEnabled = $true
+ DaprAppId = 'pythonapp'
+}
+New-AzContainerApp @ClientArgs
```

By default, the image is pulled from [Docker Hub](https://hub.docker.com/r/dapriosamples/hello-k8s-python).
-This command deploys `pythonapp` that also runs with a Dapr sidecar that is used to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless there's no `--target-port` to start a server, nor is there a need to enable ingress.
+This command deploys `pythonapp`, which also runs with a Dapr sidecar that it uses to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless, there's no need to specify a target port, nor is there a need to enable external ingress.
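For context, the sketch below illustrates the standard Dapr service-invocation URL pattern that `pythonapp` uses through its sidecar. The `neworder` method and payload shape come from the upstream Dapr hello-world sample, not from this article, so treat this as illustrative only:

```azurepowershell
# Illustrative only: from inside the client, a request to the local Dapr sidecar
# (default HTTP port 3500) is routed to the app registered as 'nodeapp'.
$order = @{ data = @{ orderId = 42 } } | ConvertTo-Json
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body $order `
    -Uri 'http://localhost:3500/v1.0/invoke/nodeapp/method/neworder'
```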
## Verify the result
You can confirm that the services are working correctly by viewing data in your
1. Verify that you can see the file named `order` in the container.
-1. Select on the file.
+1. Select the file.
1. Select the **Edit** tab.
az monitor log-analytics query \
  --out table
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`
-(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
+```azurepowershell
+$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5 "
+$queryResults.Results
-az monitor log-analytics query `
- --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | sort by TimeGenerated | take 5" `
- --out table
```

The following output demonstrates the type of response to expect from the CLI command.
-```console
+```bash
ContainerAppName_s    Log_s                            TableName      TimeGenerated
------------------    -----                            ---------      -------------
nodeapp               Got a new order! Order ID: 61    PrimaryResult  2021-10-22T21:31:46.184Z
nodeapp               Got a new order! Order ID: 63    PrimaryResult  2021-10-22
Once you're done, run the following command to delete your resource group along with all the resources you created in this tutorial.
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+
# [Bash](#tab/bash)

```azurecli
-az group delete \
- --resource-group $RESOURCE_GROUP
+az group delete --resource-group $RESOURCE_GROUP
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az group delete `
- --resource-group $RESOURCE_GROUP
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
```
-This command deletes the resource group that includes all of the resources created in this tutorial.
-
> [!NOTE]
> Since `pythonapp` continuously makes calls to `nodeapp` with messages that get persisted into your configured state store, it is important to complete these cleanup steps to avoid ongoing billable operations.
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
Previously updated : 06/09/2022 Last updated : 08/31/2022 zone_pivot_groups: azure-cli-or-portal
Next, declare a variable to hold the VNET name.
VNET_NAME="my-custom-vnet" ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$VNET_NAME="my-custom-vnet"
+```azurepowershell
+$VnetName = 'my-custom-vnet'
```
az network vnet create \
az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --vnet-name $VNET_NAME \
- --name infrastructure \
+ --name infrastructure-subnet \
  --address-prefixes 10.0.0.0/23
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-az network vnet create `
- --resource-group $RESOURCE_GROUP `
- --name $VNET_NAME `
- --location $LOCATION `
- --address-prefix 10.0.0.0/16
+```azurepowershell
+$SubnetArgs = @{
+ Name = 'infrastructure-subnet'
+ AddressPrefix = '10.0.0.0/23'
+}
+$subnet = New-AzVirtualNetworkSubnetConfig @SubnetArgs
```
-```powershell
-az network vnet subnet create `
- --resource-group $RESOURCE_GROUP `
- --vnet-name $VNET_NAME `
- --name infrastructure-subnet `
- --address-prefixes 10.0.0.0/23
+```azurepowershell
+$VnetArgs = @{
+ Name = $VnetName
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ AddressPrefix = '10.0.0.0/16'
+ Subnet = $subnet
+}
+$vnet = New-AzVirtualNetwork @VnetArgs
```
With the VNET established, you can now query for the infrastructure subnet ID.
INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'`
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv)
+```azurepowershell
+$InfrastructureSubnet = (Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet).Id
```
az containerapp env create \
  --internal-only
```
-# [PowerShell](#tab/powershell)
-
-```powershell
-az containerapp env create `
- --name $CONTAINERAPPS_ENVIRONMENT `
- --resource-group $RESOURCE_GROUP `
- --location "$LOCATION" `
- --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET `
- --internal-only
-```
--- The following table describes the parameters used in for `containerapp env create`. | Parameter | Description |
The following table describes the parameters used in for `containerapp env creat
With your environment created using your custom virtual network, you can deploy container apps into the environment using the `az containerapp create` command.
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+ VnetConfigurationInfrastructureSubnetId = $InfrastructureSubnet
+ VnetConfigurationInternal = $true
+}
+New-AzContainerAppManagedEnv @EnvArgs
+```
+
+The following table describes the parameters used for `New-AzContainerAppManagedEnv`.
+
+| Parameter | Description |
+|||
+| `EnvName` | Name of the Container Apps environment. |
+| `ResourceGroupName` | Name of the resource group. |
+| `LogAnalyticConfigurationCustomerId` | The ID of an existing Log Analytics workspace. |
+| `LogAnalyticConfigurationSharedKey` | The Log Analytics client secret.|
+| `Location` | The Azure location where the environment is to deploy. |
+| `VnetConfigurationInfrastructureSubnetId` | Resource ID of a subnet for infrastructure components and user application containers. |
+| `VnetConfigurationInternal` | (Optional) The environment doesn't use a public static IP, only internal IP addresses available in the custom VNET. (Requires an infrastructure subnet resource ID.) |
+
+With your environment created using your custom virtual network, you can deploy container apps into the environment.
### Optional configuration

You have the option of deploying a private DNS and defining custom networking IP ranges for your Container Apps environment.
ENVIRONMENT_STATIC_IP=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONME
VNET_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query id --out json | tr -d '"'`
```
-# [PowerShell](#tab/powershell)
-
-```powershell
-$ENVIRONMENT_DEFAULT_DOMAIN=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.defaultDomain -o tsv)
-```
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$ENVIRONMENT_STATIC_IP=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.staticIp -o tsv)
+```azurepowershell
+$EnvironmentDefaultDomain = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).DefaultDomain
```
-```powershell
-$VNET_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query id -o tsv)
+```azurepowershell
+$EnvironmentStaticIp = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).StaticIp
```
az network private-dns record-set a add-record \
  --zone-name $ENVIRONMENT_DEFAULT_DOMAIN
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-az network private-dns zone create `
- --resource-group $RESOURCE_GROUP `
- --name $ENVIRONMENT_DEFAULT_DOMAIN
+```azurepowershell
+New-AzPrivateDnsZone -ResourceGroupName $ResourceGroupName -Name $EnvironmentDefaultDomain
```
-```powershell
-az network private-dns link vnet create `
- --resource-group $RESOURCE_GROUP `
- --name $VNET_NAME `
- --virtual-network $VNET_ID `
- --zone-name $ENVIRONMENT_DEFAULT_DOMAIN -e true
+```azurepowershell
+New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $ResourceGroupName -Name $VnetName -VirtualNetwork $Vnet -ZoneName $EnvironmentDefaultDomain -EnableRegistration
```
-```powershell
-az network private-dns record-set a add-record `
- --resource-group $RESOURCE_GROUP `
- --record-set-name "*" `
- --ipv4-address $ENVIRONMENT_STATIC_IP `
- --zone-name $ENVIRONMENT_DEFAULT_DOMAIN
+```azurepowershell
+$DnsRecords = @()
+$DnsRecords += New-AzPrivateDnsRecordConfig -Ipv4Address $EnvironmentStaticIp
+
+$DnsRecordArgs = @{
+ ResourceGroupName = $ResourceGroupName
+ ZoneName = $EnvironmentDefaultDomain
+ Name = '*'
+ RecordType = 'A'
+ Ttl = 3600
+ PrivateDnsRecords = $DnsRecords
+}
+New-AzPrivateDnsRecordSet @DnsRecordArgs
```

#### Networking parameters
-There are three optional networking parameters you can choose to define when calling `containerapp env create`. Use these options when you have a peered VNET with separate address ranges. Explicitly configuring these ranges ensures the addresses used by the Container Apps environment doesn't conflict with other ranges in the network infrastructure.
+There are three optional networking parameters you can choose to define when calling `containerapp env create`. Use these options when you have a peered VNET with separate address ranges. Explicitly configuring these ranges ensures the addresses used by the Container Apps environment don't conflict with other ranges in the network infrastructure.
+
+You must either provide values for all three of these properties, or none of them. If they aren't provided, the values are generated for you.
-You must either provide values for all three of these properties, or none of them. If they aren't provided, the CLI generates the values for you.
+# [Bash](#tab/bash)
| Parameter | Description |
|||
You must either provide values for all three of these properties, or none of the
- If these properties aren't provided, the CLI autogenerates the range values based on the address range of the VNET to avoid range conflicts.
+# [Azure PowerShell](#tab/azure-powershell)
+
+| Parameter | Description |
+|||
+| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
+| `VnetConfigurationPlatformReservedDnsIP` | An IP address from the `VnetConfigurationPlatformReservedCidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `VnetConfigurationPlatformReservedCidr` is set to `10.2.0.0/16`, then `VnetConfigurationPlatformReservedDnsIP` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
+| `VnetConfigurationDockerBridgeCidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
+
+- The `VnetConfigurationPlatformReservedCidr` and `VnetConfigurationDockerBridgeCidr` address ranges can't conflict with each other, or with the ranges of either provided subnet. Further, make sure these ranges don't conflict with any other address range in the VNET.
+
+- If these properties aren't provided, the range values are autogenerated based on the address range of the VNET to avoid range conflicts.
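As a hedged sketch building on the `$EnvArgs` hashtable from the environment creation step, supplying all three values together could look like the following. The CIDR values are illustrative; adjust them so they don't overlap your VNET or subnets:

```azurepowershell
# Illustrative ranges only; all three parameters must be provided together.
$EnvArgs.VnetConfigurationPlatformReservedCidr = '10.2.0.0/16'
$EnvArgs.VnetConfigurationPlatformReservedDnsIP = '10.2.0.2'
$EnvArgs.VnetConfigurationDockerBridgeCidr = '10.1.0.1/16'
New-AzContainerAppManagedEnv @EnvArgs
```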
::: zone-end

## Clean up resources

If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group. Deleting this resource group will also delete the resource group automatically created by the Container Apps service containing the custom network components.
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this guide exist in the specified resource group, they will also be deleted.
::: zone pivot="azure-cli"

# [Bash](#tab/bash)

```azurecli
-az group delete \
- --name $RESOURCE_GROUP
+az group delete --name $RESOURCE_GROUP
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az group delete `
- --name $RESOURCE_GROUP
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
```
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
Previously updated : 06/09/2022 Last updated : 08/31/2022 zone_pivot_groups: azure-cli-or-portal
Next, declare a variable to hold the VNET name.
VNET_NAME="my-custom-vnet" ```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$VNET_NAME="my-custom-vnet"
+```azurepowershell
+$VnetName = 'my-custom-vnet'
```
az network vnet subnet create \
  --address-prefixes 10.0.0.0/23
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-az network vnet create `
- --resource-group $RESOURCE_GROUP `
- --name $VNET_NAME `
- --location $LOCATION `
- --address-prefix 10.0.0.0/16
+```azurepowershell
+$SubnetArgs = @{
+ Name = 'infrastructure-subnet'
+ AddressPrefix = '10.0.0.0/23'
+}
+$subnet = New-AzVirtualNetworkSubnetConfig @SubnetArgs
```
-```powershell
-az network vnet subnet create `
- --resource-group $RESOURCE_GROUP `
- --vnet-name $VNET_NAME `
- --name infrastructure-subnet `
- --address-prefixes 10.0.0.0/23
+```azurepowershell
+$VnetArgs = @{
+ Name = $VnetName
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ AddressPrefix = '10.0.0.0/16'
+ Subnet = $subnet
+}
+$vnet = New-AzVirtualNetwork @VnetArgs
```
With the virtual network created, you can retrieve the ID for the infrastructure
INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'`
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv)
+```azurepowershell
+$InfrastructureSubnet = (Get-AzVirtualNetworkSubnetConfig -Name $SubnetArgs.Name -VirtualNetwork $vnet).Id
```
az containerapp env create \
  --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET
```
-# [PowerShell](#tab/powershell)
-
-```powershell
-az containerapp env create `
- --name $CONTAINERAPPS_ENVIRONMENT `
- --resource-group $RESOURCE_GROUP `
- --location "$LOCATION" `
- --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET
-```
--- The following table describes the parameters used in `containerapp env create`. | Parameter | Description |
The following table describes the parameters used in `containerapp env create`.
| `location` | The Azure location where the environment is to deploy. |
| `infrastructure-subnet-resource-id` | Resource ID of a subnet for infrastructure components and user application containers. |
-With your environment created using a custom virtual network, you can now deploy container apps using the `az containerapp create` command.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to environment variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = "log-analytics"
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+ VnetConfigurationInfrastructureSubnetId = $InfrastructureSubnet
+ VnetConfigurationInternal = $true
+}
+New-AzContainerAppManagedEnv @EnvArgs
+```
+
+The following table describes the parameters used for `New-AzContainerAppManagedEnv`.
+
+| Parameter | Description |
+|||
+| `EnvName` | Name of the Container Apps environment. |
+| `ResourceGroupName` | Name of the resource group. |
+| `LogAnalyticConfigurationCustomerId` | The ID of an existing Log Analytics workspace. |
+| `LogAnalyticConfigurationSharedKey` | The Log Analytics client secret.|
+| `Location` | The Azure location where the environment is to deploy. |
+| `VnetConfigurationInfrastructureSubnetId` | Resource ID of a subnet for infrastructure components and user application containers. |
++++
+With your environment created using a custom virtual network, you can now deploy container apps into the environment.
### Optional configuration
ENVIRONMENT_STATIC_IP=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONME
VNET_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query id --out json | tr -d '"'`
```
-# [PowerShell](#tab/powershell)
-
-```powershell
-$ENVIRONMENT_DEFAULT_DOMAIN=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query defaultDomain -o tsv)
-```
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-$ENVIRONMENT_STATIC_IP=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query staticIp -o tsv)
+```azurepowershell
+$EnvironmentDefaultDomain = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).DefaultDomain
```
-```powershell
-$VNET_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query id -o tsv)
+```azurepowershell
+$EnvironmentStaticIp = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).StaticIp
```
az network private-dns record-set a add-record \
  --zone-name $ENVIRONMENT_DEFAULT_DOMAIN
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```powershell
-az network private-dns zone create `
- --resource-group $RESOURCE_GROUP `
- --name $ENVIRONMENT_DEFAULT_DOMAIN
+```azurepowershell
+New-AzPrivateDnsZone -ResourceGroupName $ResourceGroupName -Name $EnvironmentDefaultDomain
```
-```powershell
-az network private-dns link vnet create `
- --resource-group $RESOURCE_GROUP `
- --name $VNET_NAME `
- --virtual-network $VNET_ID `
- --zone-name $ENVIRONMENT_DEFAULT_DOMAIN -e true
+```azurepowershell
+New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $ResourceGroupName -Name $VnetName -VirtualNetwork $Vnet -ZoneName $EnvironmentDefaultDomain -EnableRegistration
```
-```powershell
-az network private-dns record-set a add-record `
- --resource-group $RESOURCE_GROUP `
- --record-set-name "*" `
- --ipv4-address $ENVIRONMENT_STATIC_IP `
- --zone-name $ENVIRONMENT_DEFAULT_DOMAIN
+```azurepowershell
+$DnsRecords = @()
+$DnsRecords += New-AzPrivateDnsRecordConfig -Ipv4Address $EnvironmentStaticIp
+
+$DnsRecordArgs = @{
+ ResourceGroupName = $ResourceGroupName
+ ZoneName = $EnvironmentDefaultDomain
+ Name = '*'
+ RecordType = 'A'
+ Ttl = 3600
+ PrivateDnsRecords = $DnsRecords
+}
+New-AzPrivateDnsRecordSet @DnsRecordArgs
``` #### Networking parameters
-There are three optional networking parameters you can choose to define when calling `containerapp env create`. Use these options when you have a peered VNET with separate address ranges. Explicitly configuring these ranges ensures the addresses used by the Container Apps environment doesn't conflict with other ranges in the network infrastructure.
+There are three optional networking parameters you can choose to define when calling `containerapp env create`. Use these options when you have a peered VNET with separate address ranges. Explicitly configuring these ranges ensures the addresses used by the Container Apps environment don't conflict with other ranges in the network infrastructure.
+
+You must either provide values for all three of these properties, or none of them. If they aren't provided, the values are generated for you.
-You must either provide values for all three of these properties, or none of them. If they aren't provided, the CLI generates the values for you.
+# [Bash](#tab/bash)
| Parameter | Description |
|||
You must either provide values for all three of these properties, or none of the
- If these properties aren't provided, the CLI autogenerates the range values based on the address range of the VNET to avoid range conflicts.
+# [Azure PowerShell](#tab/azure-powershell)
+
+| Parameter | Description |
+|||
+| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
+| `VnetConfigurationPlatformReservedDnsIP` | An IP address from the `VnetConfigurationPlatformReservedCidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `VnetConfigurationPlatformReservedCidr` is set to `10.2.0.0/16`, then `VnetConfigurationPlatformReservedDnsIP` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
+| `VnetConfigurationDockerBridgeCidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
+
+- The `VnetConfigurationPlatformReservedCidr` and `VnetConfigurationDockerBridgeCidr` address ranges can't conflict with each other, or with the ranges of either provided subnet. Further, make sure these ranges don't conflict with any other address range in the VNET.
+
+- If these properties aren't provided, the range values are autogenerated based on the address range of the VNET to avoid range conflicts.
::: zone-end

## Clean up resources

If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group. Deleting this resource group will also delete the resource group automatically created by the Container Apps service containing the custom network components.
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this guide exist in the specified resource group, they will also be deleted.
::: zone pivot="azure-cli"

# [Bash](#tab/bash)

```azurecli
-az group delete \
- --name $RESOURCE_GROUP
+az group delete --name $RESOURCE_GROUP
```
-# [PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurecli
-az group delete `
- --name $RESOURCE_GROUP
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
```
az group delete `
## Additional resources

- For more information about configuring your private endpoints, see [What is Azure Private Endpoint](../private-link/private-endpoint-overview.md).
- To set up DNS name resolution for internal services, you must [set up your own DNS server](../dns/index.yml).

## Next steps
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
When you move a machine from one group to another, the application control polic
To manage your adaptive application controls programmatically, use our REST API.
-The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](/rest/api/defenderforcloud/adaptiveapplicationcontrols).
+The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](/rest/api/defenderforcloud/adaptive-application-controls).
Some of the functions that are available from the REST API:
defender-for-cloud Configure Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md
You can send email notifications to individuals or to all users with specific Az
1. To apply the security contact information to your subscription, select **Save**.

## Customize the alerts email notifications through the API
-You can also manage your email notifications through the supplied REST API. For full details see the [SecurityContacts API documentation](/rest/api/defenderforcloud/securitycontacts).
+You can also manage your email notifications through the supplied REST API. For full details, see the [SecurityContacts API documentation](/rest/api/defenderforcloud/security-contacts).
This is an example request body for the PUT request when creating a security contact configuration:
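A hedged sketch of issuing such a PUT from Azure PowerShell follows. The property names mirror the preview API schema as an assumption and should be checked against the linked reference; the email address is hypothetical:

```azurepowershell
# Assumed schema and api-version; verify against the SecurityContacts reference.
$subscriptionId = (Get-AzContext).Subscription.Id
$body = @{
    properties = @{
        emails              = 'admin@contoso.com'  # hypothetical address
        alertNotifications  = @{ state = 'On'; minimalSeverity = 'Low' }
        notificationsByRole = @{ state = 'On'; roles = @('Owner') }
    }
} | ConvertTo-Json -Depth 5
$path = "/subscriptions/$subscriptionId/providers/Microsoft.Security/securityContacts/default?api-version=2020-01-01-preview"
Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```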
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Below is an example of a custom policy including the metadata/securityCenter pro
}
```
-For another example of using the securityCenter property, see [this section of the REST API documentation](/rest/api/defenderforcloud/assessmentsmetadata/createinsubscription#examples).
+For another example of using the securityCenter property, see [this section of the REST API documentation](/rest/api/defenderforcloud/assessments-metadata/create-in-subscription#examples).
## Next steps
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Defender for Cloud pulls the image from the registry and runs it in an isolated
Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts. ### Can I get the scan results via REST API?
-Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
+Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
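As a hedged illustration, assuming the Az.ResourceGraph module is installed, an ARG query for sub-assessment scan results could look like this (the query shape is a sketch, not the documented sample):

```azurepowershell
# Requires the Az.ResourceGraph module; illustrative query.
Search-AzGraph -Query @"
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| take 5
"@
```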
### What registry types are scanned? What types are billed?

For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](#availability).
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Defender for Containers protects your clusters whether they're running in:
Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md).

You can learn more by watching these videos from the Defender for Cloud in the Field video series:

- [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md)
- [Protect Containers in GCP with Defender for Containers](episode-ten.md)
You can check out the following blogs:
- [Protect your Google Cloud workloads with Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/protect-your-google-cloud-workloads-with-microsoft-defender-for/ba-p/3073360)
- [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)
- [A new name for multicloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
-
## Next steps

[Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-container-registries-usage.md).
defender-for-cloud Defender For Containers Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-usage.md
Defender for Cloud filters and classifies findings from the scanner. When an ima
### Can I get the scan results via REST API?
-Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
+Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
### What registry types are scanned? What types are billed?
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
The following PowerShell commands create this JIT configuration:
The just-in-time VM access feature can be used via the Microsoft Defender for Cloud API. Use this API to get information about configured VMs, add new ones, request access to a VM, and more.
-Learn more at [JIT network access policies](/rest/api/defenderforcloud/jitnetworkaccesspolicies).
+Learn more at [JIT network access policies](/rest/api/defenderforcloud/jit-network-access-policies).
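For example, a hedged sketch of listing configured JIT policies via the REST API from Azure PowerShell (the path and api-version are assumptions; confirm them in the linked reference):

```azurepowershell
# Assumed endpoint; confirm the path and api-version in the JIT API docs.
$subscriptionId = (Get-AzContext).Subscription.Id
$path = "/subscriptions/$subscriptionId/providers/Microsoft.Security/jitNetworkAccessPolicies?api-version=2020-01-01"
$resp = Invoke-AzRestMethod -Path $path -Method GET
($resp.Content | ConvertFrom-Json).value | Select-Object -ExpandProperty name
```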
Learn more in the [PowerShell cmdlet documentation](/powershell/scripting/develo
The just-in-time VM access feature can be used via the Microsoft Defender for Cloud API. Use this API to get information about configured VMs, add new ones, request access to a VM, and more.
-Learn more at [JIT network access policies](/rest/api/defenderforcloud/jitnetworkaccesspolicies).
+Learn more at [JIT network access policies](/rest/api/defenderforcloud/jit-network-access-policies).
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
This page provides you with information about:
- Bug fixes
- Deprecated functionality
+## March 2022
+
+Updates in March include:
+
+- [Global availability of Secure Score for AWS and GCP environments](#global-availability-of-secure-score-for-aws-and-gcp-environments)
+- [Deprecated the recommendations to install the network traffic data collection agent](#deprecated-the-recommendations-to-install-the-network-traffic-data-collection-agent)
+- [Defender for Containers can now scan for vulnerabilities in Windows images (preview)](#defender-for-containers-can-now-scan-for-vulnerabilities-in-windows-images-preview)
+- [New alert for Microsoft Defender for Storage (preview)](#new-alert-for-microsoft-defender-for-storage-preview)
+- [Configure email notifications settings from an alert](#configure-email-notifications-settings-from-an-alert)
+- [Deprecated preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses](#deprecated-preview-alert-armmcas_activityfromanonymousipaddresses)
+- [Moved the recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices](#moved-the-recommendation-vulnerabilities-in-container-security-configurations-should-be-remediated-from-the-secure-score-to-best-practices)
+- [Deprecated the recommendation to use service principals to protect your subscriptions](#deprecated-the-recommendation-to-use-service-principals-to-protect-your-subscriptions)
+- [Legacy implementation of ISO 27001 replaced with new ISO 27001:2013 initiative](#legacy-implementation-of-iso-27001-replaced-with-new-iso-270012013-initiative)
+- [Deprecated Microsoft Defender for IoT device recommendations](#deprecated-microsoft-defender-for-iot-device-recommendations)
+- [Deprecated Microsoft Defender for IoT device alerts](#deprecated-microsoft-defender-for-iot-device-alerts)
+- [Posture management and threat protection for AWS and GCP released for general availability (GA)](#posture-management-and-threat-protection-for-aws-and-gcp-released-for-general-availability-ga)
+- [Registry scan for Windows images in ACR added support for national clouds](#registry-scan-for-windows-images-in-acr-added-support-for-national-clouds)
+
+### Global availability of Secure Score for AWS and GCP environments
+
+The cloud security posture management capabilities provided by Microsoft Defender for Cloud now include support for your AWS and GCP environments within your Secure Score.
+
+Enterprises can now view their overall security posture across various environments, such as Azure, AWS, and GCP.
+
+The Secure Score page has been replaced with the Security posture dashboard. The Security posture dashboard allows you to view an overall combined score for all of your environments, or a breakdown of your security posture based on any combination of environments that you choose.
+
+The Recommendations page has also been redesigned to provide new capabilities such as: cloud environment selection, advanced filters based on content (resource group, AWS account, GCP project and more), improved user interface on low resolution, support for open query in resource graph, and more. You can learn more about your overall [security posture](secure-score-security-controls.md) and [security recommendations](review-security-recommendations.md).
+
+### Deprecated the recommendations to install the network traffic data collection agent
+
+Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. The following two recommendations and their related policies were deprecated.
+
+|Recommendation |Description |Severity |
+||||
+| Network traffic data collection agent should be installed on Linux virtual machines|Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |Medium |
+| Network traffic data collection agent should be installed on Windows virtual machines |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats. |Medium |
+
+### Defender for Containers can now scan for vulnerabilities in Windows images (preview)
+
+Defender for Container's image scan now supports Windows images that are hosted in Azure Container Registry. This feature is free while in preview, and will incur a cost when it becomes generally available.
+
+Learn more in [Use Microsoft Defender for Container to scan your images for vulnerabilities](defender-for-containers-usage.md).
+
+### New alert for Microsoft Defender for Storage (preview)
+
+To expand the threat protections provided by Microsoft Defender for Storage, we've added a new preview alert.
+
+Threat actors use applications and tools to discover and access storage accounts. Microsoft Defender for Storage detects these applications and tools so that you can block them and remediate your posture.
+
+This preview alert is called `Access from a suspicious application`. The alert is relevant to Azure Blob Storage and ADLS Gen2 only.
+
+| Alert (alert type) | Description | MITRE tactic | Severity |
+|--|--|--|--|
+| **PREVIEW - Access from a suspicious application**<br>(Storage.Blob_SuspiciousApp) | Indicates that a suspicious application has successfully accessed a container of a storage account with authentication.<br>This might indicate that an attacker has obtained the credentials necessary to access the account, and is exploiting it. This could also be an indication of a penetration test carried out in your organization.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | Initial Access | Medium |
+
+### Configure email notifications settings from an alert
+
+A new section has been added to the alert user interface (UI), which allows you to view and edit who will receive email notifications for alerts that are triggered on the current subscription.
++
+Learn how to [Configure email notifications for security alerts](configure-email-notifications.md).
+
+### Deprecated preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses
+
+The following preview alert has been deprecated:
+
+|Alert name| Description|
+|-||
|**PREVIEW - Activity from a risky IP address**<br>(ARM.MCAS_ActivityFromAnonymousIPAddresses)|User activity from an IP address that has been identified as an anonymous proxy IP address has been detected.<br>These proxies are used by people who want to hide their device's IP address, and can be used for malicious intent. This detection uses a machine learning algorithm that reduces false positives, such as mis-tagged IP addresses that are widely used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license.|
+
+A new alert has been created that provides this information and adds to it. In addition, the newer alerts (ARM_OperationFromSuspiciousIP, ARM_OperationFromSuspiciousProxyIP) don't require a license for Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App Security).
+
+See more alerts for [Resource Manager](alerts-reference.md#alerts-resourcemanager).
+
+### Moved the recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices
+
+The recommendation `Vulnerabilities in container security configurations should be remediated` has been moved from the secure score section to best practices section.
+
+The current user experience only provides the score when all compliance checks have passed. Most customers have difficulties with meeting all the required checks. We're working on an improved experience for this recommendation, and once released the recommendation will be moved back to the secure score.
+
+### Deprecated the recommendation to use service principals to protect your subscriptions
+
+As organizations move away from using management certificates to manage their subscriptions, and following [our recent announcement that we're retiring the Cloud Services (classic) deployment model](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/), we deprecated the following Defender for Cloud recommendation and its related policy:
+
+|Recommendation |Description |Severity |
+||||
+| Service principals should be used to protect your subscriptions instead of Management Certificates | Management certificates allow anyone who authenticates with them to manage the subscription(s) they're associated with. To manage subscriptions more securely, using service principals with Resource Manager is recommended to limit the blast radius in the case of a certificate compromise. It also automates resource management. <br />(Related policy: [Service principals should be used to protect your subscriptions instead of management certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6646a0bd-e110-40ca-bb97-84fcee63c414)) |Medium |
+
+Learn more:
+
+- [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
+- [Overview of Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md)
+- [Workflow of Microsoft Azure classic VM Architecture - including RDFE workflow basics](../cloud-services/cloud-services-workflow-process.md)
+
+### Legacy implementation of ISO 27001 replaced with new ISO 27001:2013 initiative
+
+The legacy implementation of ISO 27001 has been removed from Defender for Cloud's regulatory compliance dashboard. If you're tracking your ISO 27001 compliance with Defender for Cloud, onboard the new ISO 27001:2013 standard for all relevant management groups or subscriptions.
++
+### Deprecated Microsoft Defender for IoT device recommendations
+
+Microsoft Defender for IoT device recommendations are no longer visible in Microsoft Defender for Cloud. These recommendations are still available on Microsoft Defender for IoT's Recommendations page.
+
+The following recommendations are deprecated:
+
+| Assessment key | Recommendations |
+|--|--|
+| 1a36f14a-8bd8-45f5-abe5-eef88d76ab5b: IoT Devices | Open Ports On Device |
+| ba975338-f956-41e7-a9f2-7614832d382d: IoT Devices | Permissive firewall rule in the input chain was found |
+| beb62be3-5e78-49bd-ac5f-099250ef3c7c: IoT Devices | Permissive firewall policy in one of the chains was found |
+| d5a8d84a-9ad0-42e2-80e0-d38e3d46028a: IoT Devices | Permissive firewall rule in the output chain was found |
+| 5f65e47f-7a00-4bf3-acae-90ee441ee876: IoT Devices | Operating system baseline validation failure |
| a9a59ebb-5d6f-42f5-92a1-036fd0fd1879: IoT Devices | Agent sending underutilized messages |
| 2acc27c6-5fdb-405e-9080-cb66b850c8f5: IoT Devices | TLS cipher suite upgrade needed |
| d74d2738-2485-4103-9919-69c7e63776ec: IoT Devices | Auditd process stopped sending events |
+
+### Deprecated Microsoft Defender for IoT device alerts
+
+All of Microsoft Defender for IoT's device alerts are no longer visible in Microsoft Defender for Cloud. These alerts are still available on Microsoft Defender for IoT's Alert page, and in Microsoft Sentinel.
+
+### Posture management and threat protection for AWS and GCP released for general availability (GA)
+
+- **Defender for Cloud's CSPM features** extend to your AWS and GCP resources. This agentless plan assesses your multicloud resources according to cloud-specific security recommendations that are included in your secure score. The resources are assessed for compliance using the built-in standards. Defender for Cloud's asset inventory page is a multicloud enabled feature that allows you to manage your AWS resources alongside your Azure resources.
+
+- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your compute instances in AWS and GCP. The Defender for Servers plan includes an integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more. Learn about all of the [supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities allow you to easily connect any existing or new compute instances discovered in your environment.
+
+Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+
+### Registry scan for Windows images in ACR added support for national clouds
+
+Registry scan for Windows images is now supported in Azure Government and Azure China 21Vianet. This addition is currently in preview.
+
+Learn more about our [feature's availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ ## February 2022 Updates in February include:
Learn more about [secure score and security controls in Azure Security Center](s
### Secure score API is released for general availability (GA)
-You can now access your score via the [secure score API](/rest/api/defenderforcloud/securescores/). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example:
+You can now access your score via the [secure score API](/rest/api/defenderforcloud/secure-scores). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example:
- use the **Secure Scores** API to get the score for a specific subscription - use the **Secure Score Controls** API to list the security controls and the current score of your subscriptions
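For instance, the following minimal Python sketch reads the default secure score, which is exposed under the name `ascScore`; the subscription ID and bearer token are placeholders you supply:

```python
# Minimal sketch: read a subscription's secure score through the REST API.
# Assumes an Azure AD bearer token with Reader access and the requests library.
import requests

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
TOKEN = "<your-bearer-token>"                # placeholder

url = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/providers/Microsoft.Security/secureScores/ascScore"
)
response = requests.get(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"api-version": "2020-01-01"},
)
response.raise_for_status()

score = response.json()["properties"]["score"]
print(f"Current score: {score['current']} of {score['max']} points")
```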
Updates in June include:
### Secure score API (preview)
-You can now access your score via the [secure score API](/rest/api/defenderforcloud/securescores/) (currently in preview). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the **Secure Scores** API to get the score for a specific subscription. In addition, you can use the **Secure Score Controls** API to list the security controls and the current score of your subscriptions.
+You can now access your score via the [secure score API](/rest/api/defenderforcloud/secure-scores) (currently in preview). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the **Secure Scores** API to get the score for a specific subscription. In addition, you can use the **Secure Score Controls** API to list the security controls and the current score of your subscriptions.
For examples of external tools made possible with the secure score API, see [the secure score area of our GitHub community](https://github.com/Azure/Azure-Security-Center/tree/master/Secure%20Score).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud++ Previously updated : 08/21/2022 Last updated : 08/31/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## September 2022
+
+- [Suppress alerts based on Container and Kubernetes entities](#suppress-alerts-based-on-container-and-kubernetes-entities)
+
+### Suppress alerts based on Container and Kubernetes entities
+
+You can now suppress alerts based on the following container and Kubernetes entities, so you can use the container environment details to align your alerts with your organization's policy and stop receiving unwanted alerts:
+
+- Container Image
+- Container Registry
+- Kubernetes Namespace
+- Kubernetes Pod
+- Kubernetes Service
+- Kubernetes Secret
+- Kubernetes ServiceAccount
+- Kubernetes Deployment
+- Kubernetes ReplicaSet
+- Kubernetes StatefulSet
+- Kubernetes DaemonSet
+- Kubernetes Job
+- Kubernetes CronJob
+
+Learn more about [alert suppression rules](alerts-suppression-rules.md).
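Suppression rules can also be created programmatically. The following hedged Python sketch calls the alerts suppression rules REST API; the alert type and the entity field path are placeholders that you copy from a real alert, because the exact field names depend on the alert's entities:

```python
# Hedged sketch: create an alerts suppression rule scoped to a Kubernetes
# entity. The alert type and entity field below are placeholders, not real
# values; copy them from an actual alert before running this.
import requests

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
TOKEN = "<your-bearer-token>"                # placeholder
RULE_NAME = "suppress-dev-namespace-alerts"  # hypothetical rule name

url = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/providers/Microsoft.Security/"
    f"alertsSuppressionRules/{RULE_NAME}"
)
rule = {
    "properties": {
        "alertType": "<alert-type-to-suppress>",   # placeholder
        "state": "Enabled",
        "reason": "Other",
        "comment": "Expected activity in the dev namespace",
        "suppressionAlertsScope": {
            "allOf": [
                # Placeholder entity field; use the Kubernetes entity
                # (for example, the namespace) surfaced on the alert.
                {"field": "<kubernetes-entity-field>", "in": ["dev"]}
            ]
        },
    }
}
response = requests.put(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"api-version": "2019-01-01-preview"},
    json=rule,
)
response.raise_for_status()
```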
+ ## August 2022 Updates in August include:
As part of the actions you can take to [evaluate a security alert](managing-and-
Microsoft Defender for Cloud identifies platform logs that are within one day of the alert. The platform logs can help you evaluate the security threat and identify steps that you can take to mitigate the identified risk.-
-## March 2022
-
-Updates in March include:
--- [Global availability of Secure Score for AWS and GCP environments](#global-availability-of-secure-score-for-aws-and-gcp-environments)-- [Deprecated the recommendations to install the network traffic data collection agent](#deprecated-the-recommendations-to-install-the-network-traffic-data-collection-agent)-- [Defender for Containers can now scan for vulnerabilities in Windows images (preview)](#defender-for-containers-can-now-scan-for-vulnerabilities-in-windows-images-preview)-- [New alert for Microsoft Defender for Storage (preview)](#new-alert-for-microsoft-defender-for-storage-preview)-- [Configure email notifications settings from an alert](#configure-email-notifications-settings-from-an-alert)-- [Deprecated preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses](#deprecated-preview-alert-armmcas_activityfromanonymousipaddresses)-- [Moved the recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices](#moved-the-recommendation-vulnerabilities-in-container-security-configurations-should-be-remediated-from-the-secure-score-to-best-practices)-- [Deprecated the recommendation to use service principals to protect your subscriptions](#deprecated-the-recommendation-to-use-service-principals-to-protect-your-subscriptions)-- [Legacy implementation of ISO 27001 replaced with new ISO 27001:2013 initiative](#legacy-implementation-of-iso-27001-replaced-with-new-iso-270012013-initiative)-- [Deprecated Microsoft Defender for IoT device recommendations](#deprecated-microsoft-defender-for-iot-device-recommendations)-- [Deprecated Microsoft Defender for IoT device alerts](#deprecated-microsoft-defender-for-iot-device-alerts)-- [Posture management and threat protection for AWS and GCP released for general availability (GA)](#posture-management-and-threat-protection-for-aws-and-gcp-released-for-general-availability-ga)-- [Registry scan for Windows images in ACR added support for national clouds](#registry-scan-for-windows-images-in-acr-added-support-for-national-clouds)-
-### Global availability of Secure Score for AWS and GCP environments
-
-The cloud security posture management capabilities provided by Microsoft Defender for Cloud, has now added support for your AWS and GCP environments within your Secure Score.
-
-Enterprises can now view their overall security posture, across various environments, such as Azure, AWS and GCP.
-
-The Secure Score page has been replaced with the Security posture dashboard. The Security posture dashboard allows you to view an overall combined score for all of your environments, or a breakdown of your security posture based on any combination of environments that you choose.
-
-The Recommendations page has also been redesigned to provide new capabilities such as: cloud environment selection, advanced filters based on content (resource group, AWS account, GCP project and more), improved user interface on low resolution, support for open query in resource graph, and more. You can learn more about your overall [security posture](secure-score-security-controls.md) and [security recommendations](review-security-recommendations.md).
-
-### Deprecated the recommendations to install the network traffic data collection agent
-
-Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. The following two recommendations and their related policies were deprecated.
-
-|Recommendation |Description |Severity |
-||||
-| Network traffic data collection agent should be installed on Linux virtual machines|Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |Medium |
-| Network traffic data collection agent should be installed on Windows virtual machines |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats. |Medium |
-
-### Defender for Containers can now scan for vulnerabilities in Windows images (preview)
-
-Defender for Container's image scan now supports Windows images that are hosted in Azure Container Registry. This feature is free while in preview, and will incur a cost when it becomes generally available.
-
-Learn more in [Use Microsoft Defender for Container to scan your images for vulnerabilities](defender-for-containers-usage.md).
-
-### New alert for Microsoft Defender for Storage (preview)
-
-To expand the threat protections provided by Microsoft Defender for Storage, we've added a new preview alert.
-
-Threat actors use applications and tools to discover and access storage accounts. Microsoft Defender for Storage detects these applications and tools so that you can block them and remediate your posture.
-
-This preview alert is called `Access from a suspicious application`. The alert is relevant to Azure Blob Storage, and ADLS Gen2 only.
-
-| Alert (alert type) | Description | MITRE tactic | Severity |
-|--|--|--|--|
-| **PREVIEW - Access from a suspicious application**<br>(Storage.Blob_SuspiciousApp) | Indicates that a suspicious application has successfully accessed a container of a storage account with authentication.<br>This might indicate that an attacker has obtained the credentials necessary to access the account, and is exploiting it. This could also be an indication of a penetration test carried out in your organization.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | Initial Access | Medium |
-
-### Configure email notifications settings from an alert
-
-A new section has been added to the alert User Interface (UI) which allows you to view and edit who will receive email notifications for alerts that are triggered on the current subscription.
--
-Learn how to [Configure email notifications for security alerts](configure-email-notifications.md).
-
-### Deprecated preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses
-
-The following preview alert has been deprecated:
-
-|Alert name| Description|
-|-||
-|**PREVIEW - Activity from a risky IP address**<br>(ARM.MCAS_ActivityFromAnonymousIPAddresses)|Users activity from an IP address that has been identified as an anonymous proxy IP address has been detected.<br>These proxies are used by people who want to hide their device's IP address, and can be used for malicious intent. This detection uses a machine learning algorithm that reduces false positives, such as mis-tagged IP addresses that are widely used by users in the organization.<br>Requires an active Microsoft Defender for Cloud Apps license.|
-
-A new alert has been created that provides this information and adds to it. In addition, the newer alerts (ARM_OperationFromSuspiciousIP, ARM_OperationFromSuspiciousProxyIP) don't require a license for Microsoft Defender for Cloud Apps (formerly known as Microsoft Cloud App Security).
-
-See more alerts for [Resource Manager](alerts-reference.md#alerts-resourcemanager).
-
-### Moved the recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices
-
-The recommendation `Vulnerabilities in container security configurations should be remediated` has been moved from the secure score section to best practices section.
-
-The current user experience only provides the score when all compliance checks have passed. Most customers have difficulties with meeting all the required checks. We're working on an improved experience for this recommendation, and once released the recommendation will be moved back to the secure score.
-
-### Deprecated the recommendation to use service principals to protect your subscriptions
-
-As organizations move away from using management certificates to manage their subscriptions, and [our recent announcement that we're retiring the Cloud Services (classic) deployment model](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/), we deprecated the following Defender for Cloud recommendation and its related policy:
-
-|Recommendation |Description |Severity |
-||||
-| Service principals should be used to protect your subscriptions instead of Management Certificates | Management certificates allow anyone who authenticates with them to manage the subscription(s) they're associated with. To manage subscriptions more securely, using service principals with Resource Manager is recommended to limit the blast radius in the case of a certificate compromise. It also automates resource management. <br />(Related policy: [Service principals should be used to protect your subscriptions instead of management certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6646a0bd-e110-40ca-bb97-84fcee63c414)) |Medium |
-
-Learn more:
--- [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)-- [Overview of Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md)-- [Workflow of Microsoft Azure classic VM Architecture - including RDFE workflow basics](../cloud-services/cloud-services-workflow-process.md)-
-### Legacy implementation of ISO 27001 replaced with new ISO 27001:2013 initiative
-
-The legacy implementation of ISO 27001 has been removed from Defender for Cloud's regulatory compliance dashboard. If you're tracking your ISO 27001 compliance with Defender for Cloud, onboard the new ISO 27001:2013 standard for all relevant management groups or subscriptions.
--
-### Deprecated Microsoft Defender for IoT device recommendations
-
-Microsoft Defender for IoT device recommendations is no longer visible in Microsoft Defender for Cloud. These recommendations are still available on Microsoft Defender for IoT's Recommendations page.
-
-The following recommendations are deprecated:
-
-| Assessment key | Recommendations |
-|--|--|
-| 1a36f14a-8bd8-45f5-abe5-eef88d76ab5b: IoT Devices | Open Ports On Device |
-| ba975338-f956-41e7-a9f2-7614832d382d: IoT Devices | Permissive firewall rule in the input chain was found |
-| beb62be3-5e78-49bd-ac5f-099250ef3c7c: IoT Devices | Permissive firewall policy in one of the chains was found |
-| d5a8d84a-9ad0-42e2-80e0-d38e3d46028a: IoT Devices | Permissive firewall rule in the output chain was found |
-| 5f65e47f-7a00-4bf3-acae-90ee441ee876: IoT Devices | Operating system baseline validation failure |
-|a9a59ebb-5d6f-42f5-92a1-036fd0fd1879: IoT Devices | Agent sending underutilized messages |
-| 2acc27c6-5fdb-405e-9080-cb66b850c8f5: IoT Devices | TLS cipher suite upgrade needed |
-|d74d2738-2485-4103-9919-69c7e63776ec: IoT Devices | Auditd process stopped sending events |
-
-### Deprecated Microsoft Defender for IoT device alerts
-
-All of Microsoft's Defender for IoT device alerts are no longer visible in Microsoft Defender for Cloud. These alerts are still available on Microsoft Defender for IoT's Alert page, and in Microsoft Sentinel.
-
-### Posture management and threat protection for AWS and GCP released for general availability (GA)
--- **Defender for Cloud's CSPM features** extend to your AWS and GCP resources. This agentless plan assesses your multicloud resources according to cloud-specific security recommendations that are included in your secure score. The resources are assessed for compliance using the built-in standards. Defender for Cloud's asset inventory page is a multicloud enabled feature that allows you to manage your AWS resources alongside your Azure resources.--- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your compute instances in AWS and GCP. The Defender for Servers plan includes an integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more. Learn about all of the [supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities allow you to easily connect any existing or new compute instances discovered in your environment.-
-Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
-
-### Registry scan for Windows images in ACR added support for national clouds
-
-Registry scan for Windows images is now supported in Azure Government and Azure China 21Vianet. This addition is currently in preview.
-
-Learn more about our [feature's availability](supported-machines-endpoint-solutions-clouds-containers.md).
defender-for-cloud Secure Score Access And Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-access-and-track.md
To recap, your secure score is shown in the following locations in Defender for
## Get your secure score from the REST API
-You can access your score via the secure score API. The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the [Secure Scores API](/rest/api/defenderforcloud/securescores) to get the score for a specific subscription. In addition, you can use the [Secure Score Controls API](/rest/api/defenderforcloud/securescorecontrols) to list the security controls and the current score of your subscriptions.
+You can access your score via the secure score API. The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the [Secure Scores API](/rest/api/defenderforcloud/secure-scores) to get the score for a specific subscription. In addition, you can use the [Secure Score Controls API](/rest/api/defenderforcloud/secure-score-controls) to list the security controls and the current score of your subscriptions.
![Retrieving a single secure score via the API.](media/secure-score-security-controls/single-secure-score-via-api.png)
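As an illustration, the following minimal Python sketch lists each control under the default secure score (`ascScore`) with its current and maximum points; the subscription ID and bearer token are placeholders:

```python
# Minimal sketch: list security controls and their scores through the REST API.
import requests

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
TOKEN = "<your-bearer-token>"                # placeholder

url = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/providers/Microsoft.Security/"
    "secureScores/ascScore/secureScoreControls"
)
response = requests.get(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"api-version": "2020-01-01"},
)
response.raise_for_status()

for control in response.json()["value"]:
    props = control["properties"]
    score = props["score"]
    print(f"{props['displayName']}: {score['current']}/{score['max']}")
```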
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Title: The regulatory compliance dashboard in Microsoft Defender for Cloud description: Learn how to add and remove regulatory standards from the regulatory compliance dashboard in Defender for Cloud Previously updated : 11/09/2021 Last updated : 09/04/2022 # Customize the set of standards in your regulatory compliance dashboard
To add standards to your dashboard:
1. From Defender for Cloud's menu, select **Regulatory compliance** to open the regulatory compliance dashboard. Here you can see the compliance standards currently assigned to the selected subscriptions.
-1. From the top of the page, select **Manage compliance policies**. The Policy Management page appears.
+1. From the top of the page, select **Manage compliance policies**.
1. Select the subscription or management group for which you want to manage the regulatory compliance posture. > [!TIP] > We recommend selecting the highest scope for which the standard is applicable so that compliance data is aggregated and tracked for all nested resources.
-1. To add the standards relevant to your organization, expand the **Industry & regulatory standards** section and select **Add more standards**.
+1. Select **Security policy**.
+
+1. Expand the **Industry & regulatory standards** section and select **Add more standards**.
1. From the **Add regulatory compliance standards** page, you can search for any of the available standards:
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
The common steps to subscribe to events published by any partner, including Grap
### Enable Microsoft Graph API events to flow to your partner topic > [!IMPORTANT]
-> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and[.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask.graph.and.grid@service.microsoft.com?subject=Please allow my application ID">mailto:ask.graph.and.grid@service.microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to allow list to use this new capability.
+> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from the [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) webhook samples to enable the flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask-graph-and-grid@microsoft.com?subject=Please allow my application ID">ask-graph-and-grid@microsoft.com</a> so that the Microsoft Graph API team can add it to the allowlist for this new capability.
You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the HTTP request should look like the following sample:
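For instance, issued from Python, the request might look like the following sketch. The `EventGrid:` notification URL parameters are assumptions based on the preview, and every bracketed value is a placeholder:

```python
# Hedged sketch: create a Graph API subscription that routes change
# notifications to an Event Grid partner topic. The notificationUrl format
# is an assumption from the preview; all bracketed values are placeholders.
import datetime
import requests

GRAPH_TOKEN = "<your-graph-bearer-token>"    # placeholder
NOTIFICATION_URL = (
    "EventGrid:?azuresubscriptionid=<azure-subscription-id>"
    "&resourcegroup=<resource-group>"
    "&partnertopic=<partner-topic-name>"
    "&location=<azure-region>"
)

subscription = {
    "changeType": "updated,deleted",
    "notificationUrl": NOTIFICATION_URL,
    "resource": "users",                     # example Graph resource
    "expirationDateTime": (
        datetime.datetime.utcnow() + datetime.timedelta(hours=24)
    ).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "clientState": "<secret-client-state>",  # placeholder
}
response = requests.post(
    "https://graph.microsoft.com/beta/subscriptions",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=subscription,
)
response.raise_for_status()
print(response.json()["id"])
```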
frontdoor How To Add Security Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-security-headers.md
The following example shows you how to add a Content-Security-Policy header to a
## Prerequisites
-* Before you can configure configure security headers, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](create-front-door-portal.md).
+* Before you can configure security headers, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](create-front-door-portal.md).
* Review how to [Set up a Rule Set](how-to-configure-rule-set.md) if you haven't used the Rule Set feature before. ## Add a Content-Security-Policy header in Azure portal
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
You can use the [IoT Hub resource provider REST API](/rest/api/iothub/iothubreso
* Visual Studio.
-* [Azure Az PowerShell module](/powershell/azure/install-Az-ps).
+* [Azure PowerShell](/powershell/azure/install-Az-ps).
[!INCLUDE [iot-hub-prepare-resource-manager](../../includes/iot-hub-prepare-resource-manager.md)]
lab-services Azure Polices For Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/azure-polices-for-lab-services.md
This policy can be used to restrict [customization of lab templates](tutorial-se
|**Audit**|Labs will show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a template virtual machine is used for a lab.| |**Deny**|Lab creation fails if the "create a template virtual machine" option is used for a lab.|
-## Lab Services require non-admin user for labs
+## Lab Services requires non-admin user for labs
This policy enforces the use of non-admin accounts while creating a lab. With the August 2022 Update, you can choose to add a non-admin account to the VM image. This new feature allows you to keep separate credentials for VM admin and non-admin users. For more information about creating a lab with a non-admin user, see [Tutorial: Create and publish a lab](tutorial-setup-lab.md#create-a-lab), which shows how to give a student a non-administrator account rather than the default administrator account on the "Virtual machine credentials" page of the new lab wizard.
During the policy assignment, the lab administrator can choose the following eff
|**Effect**|**Behavior**| |--|--|
-|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when non-admin accounts is not used while creating the lab.|
+|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when non-admin accounts are not used while creating the lab.|
|**Deny**|Lab creation will fail if "Give lab users a non-admin account on their virtual machines" is not checked while creating a lab.| ## Lab Services should restrict allowed virtual machine SKU sizes
During the policy assignment, the Lab Administrator can choose the following eff
|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a non-allowed SKU is used while creating the lab.| |**Deny**|Lab creation will fail if the SKU chosen while creating a lab isn't allowed per the policy assignment.|
+## Custom policies
+
+In addition to the new built-in policies described above, you can create and apply custom policies. This technique is helpful in situations where none of the built-in policies apply or where you need more granularity.
+
+Learn how to create custom policies:
+- [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md).
+- [Tutorial: Create a custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md).
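As a starting point, the following hedged Python sketch creates a minimal custom definition at the subscription scope through the REST API. The policy rule simply audits Lab Services labs and is illustrative only; replace it with the conditions your scenario needs:

```python
# Hedged sketch: create a minimal custom policy definition that audits
# Lab Services labs. The definition name and rule are illustrative only.
import requests

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
TOKEN = "<your-bearer-token>"                # placeholder
DEFINITION_NAME = "audit-labservices-labs"   # hypothetical name

url = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/providers/Microsoft.Authorization/"
    f"policyDefinitions/{DEFINITION_NAME}"
)
definition = {
    "properties": {
        "displayName": "Audit Lab Services labs",
        "policyType": "Custom",
        "mode": "All",
        "policyRule": {
            "if": {"field": "type", "equals": "Microsoft.LabServices/labs"},
            "then": {"effect": "audit"},
        },
    }
}
response = requests.put(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"api-version": "2021-06-01"},
    json=definition,
)
response.raise_for_status()
```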
+ ## Next steps See the following articles:
logic-apps Connect Virtual Network Vnet Isolated Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment.md
To make sure that your ISE is accessible and that the logic apps in that ISE can
This table describes the ports that your ISE requires to be accessible and the purpose for those ports. To help reduce complexity when you set up security rules, the table uses [service tags](../virtual-network/service-tags-overview.md) that represent groups of IP address prefixes for a specific Azure service. Where noted, *internal ISE* and *external ISE* refer to the [access endpoint that's selected during ISE creation](connect-virtual-network-vnet-isolated-environment.md#create-environment). For more information, review [Endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access). > [!IMPORTANT]
+>
> For all rules, make sure that you set source ports to `*` because source ports are ephemeral. #### Inbound security rules
-| Purpose | Source service tag or IP addresses | Source ports | Destination service tag or IP addresses | Destination ports | Notes |
-|||--|--|-|-|
-| Intersubnet communication within virtual network | Address space for the virtual network with ISE subnets | * | Address space for the virtual network with ISE subnets | * | Required for traffic to flow *between* the subnets in your virtual network. <p><p>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
-| Both: <p>Communication to your logic app <p><p>Runs history for logic app| Internal ISE: <br>**VirtualNetwork** <p><p>External ISE: **Internet** or see **Notes** | * | **VirtualNetwork** | 443 | Rather than use the **Internet** service tag, you can specify the source IP address for these items: <p><p>- The computer or service that calls any request triggers or webhooks in your logic app <p>- The computer or service from where you want to access logic app runs history <p><p>**Important**: Closing or blocking this port prevents calls to logic apps that have request triggers or webhooks. You're also prevented from accessing inputs and outputs for each step in runs history. However, you're not prevented from accessing logic app runs history.|
-| Azure Logic Apps designer - dynamic properties | **LogicAppsManagement** | * | **VirtualNetwork** | 454 | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) for that region. <p><p>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. |
-| Network health check | **LogicApps** | * | **VirtualNetwork** | 454 | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#outbound) for that region. <p><p>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
-| Connector deployment | **AzureConnectors** | * | **VirtualNetwork** | 454 | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <p><p>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
-| App Service Management dependency | **AppServiceManagement** | * | **VirtualNetwork** | 454, 455 ||
-| Communication from Azure Traffic Manager | **AzureTrafficManager** | * | **VirtualNetwork** | Internal ISE: 454 <p><p>External ISE: 443 ||
-| Both: <p>Connector policy deployment <p>API Management - management endpoint | **APIManagement** | * | **VirtualNetwork** | 3443 | For connector policy deployment, port access is required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. |
-| Access Azure Cache for Redis Instances between Role Instances | **VirtualNetwork** | * | **VirtualNetwork** | 6379 - 6383, plus see **Notes**| For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). |
-|||||||
+| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes |
+|--|-||--||-|
+| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network. | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
+| * | 443 | Internal ISE: <br>**VirtualNetwork** <br><br>External ISE: **Internet** or see **Notes** | **VirtualNetwork** | - Communication to your logic app <br><br>- Runs history for your logic app | Rather than use the **Internet** service tag, you can specify the source IP address for these items: <br><br>- The computer or service that calls any request triggers or webhooks in your logic app <br><br>- The computer or service from where you want to access logic app runs history <br><br>**Important**: Closing or blocking this port prevents calls to logic apps that have request triggers or webhooks. You're also prevented from accessing inputs and outputs for each step in runs history. However, you're not prevented from accessing logic app runs history. |
+| * | 454 | **LogicAppsManagement** | **VirtualNetwork** | Azure Logic Apps designer - dynamic properties | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. |
+| * | 454 | **LogicApps** | **VirtualNetwork** | Network health check | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#outbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
+| * | 454 | **AzureConnectors** | **VirtualNetwork** | Connector deployment | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <br><br>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
+| * | 454, 455 | **AppServiceManagement** | **VirtualNetwork** | App Service Management dependency ||
+| * | Internal ISE: 454 <br><br>External ISE: 443 | **AzureTrafficManager** | **VirtualNetwork** | Communication from Azure Traffic Manager ||
+| * | 3443 | **APIManagement** | **VirtualNetwork** | Connector policy deployment <br><br>API Management - management endpoint | For connector policy deployment, port access is required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. |
+| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). |
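As an illustration, the following hedged Python sketch creates one of the rules above, the designer's dynamic-properties rule on port 454, as an NSG security rule with the Azure SDK for Python. All resource names are placeholders, and the outbound rules in the next table follow the same pattern with `direction="Outbound"`:

```python
# Hedged sketch: create the inbound designer rule (port 454) on an existing
# network security group. Resource names are placeholders; pick a priority
# that fits your NSG's existing rule order.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="LogicAppsManagement",  # service tag from the table
    source_port_range="*",                        # source ports are ephemeral
    destination_address_prefix="VirtualNetwork",
    destination_port_range="454",
    access="Allow",
    priority=200,
    direction="Inbound",
)
poller = client.security_rules.begin_create_or_update(
    "<resource-group>", "<nsg-name>", "AllowLogicAppsDesigner", rule
)
print(poller.result().provisioning_state)
```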
#### Outbound security rules
-| Purpose | Source service tag or IP addresses | Source ports | Destination service tag or IP addresses | Destination ports | Notes |
-|||--|--|-|-|
-| Intersubnet communication within virtual network | Address space for the virtual network with ISE subnets | * | Address space for the virtual network with ISE subnets | * | Required for traffic to flow *between* the subnets in your virtual network. <p><p>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
-| Communication from your logic app | **VirtualNetwork** | * | Internet | 443, 80 | This rule is required for Secure Socket Layer (SSL) certificate verification. This check is for various internal and external sites, which is the reason that the Internet is required as the destination. |
-| Communication from your logic app | **VirtualNetwork** | * | Varies based on destination | Varies based on destination | Destination ports vary based on the endpoints for the external services with which your logic app needs to communicate. <p><p>For example, the destination port is port 25 for an SMTP service, port 22 for an SFTP service, and so on. |
-| Azure Active Directory | **VirtualNetwork** | * | **AzureActiveDirectory** | 80, 443 ||
-| Azure Storage dependency | **VirtualNetwork** | * | **Storage** | 80, 443, 445 ||
-| Connection management | **VirtualNetwork** | * | **AppService** | 443 ||
-| Publish diagnostic logs & metrics | **VirtualNetwork** | * | **AzureMonitor** | 443 ||
-| Azure SQL dependency | **VirtualNetwork** | * | **SQL** | 1433 ||
-| Azure Resource Health | **VirtualNetwork** | * | **AzureMonitor** | 1886 | Required for publishing health status to Resource Health. |
-| Dependency from Log to Event Hub policy and monitoring agent | **VirtualNetwork** | * | **EventHub** | 5672 ||
-| Access Azure Cache for Redis Instances between Role Instances | **VirtualNetwork** | * | **VirtualNetwork** | 6379 - 6383, plus see **Notes**| For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). |
-| DNS name resolution | **VirtualNetwork** | * | IP addresses for any custom Domain Name System (DNS) servers on your virtual network | 53 | Required only when you use custom DNS servers on your virtual network |
-|||||||
+| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes |
+|--|-||--||-|
+| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
+| * | 443, 80 | **VirtualNetwork** | Internet | Communication from your logic app | This rule is required for Secure Socket Layer (SSL) certificate verification. This check is for various internal and external sites, which is the reason that the Internet is required as the destination. |
+| * | Varies based on destination | **VirtualNetwork** | Varies based on destination | Communication from your logic app | Destination ports vary based on the endpoints for the external services with which your logic app needs to communicate. <br><br>For example, the destination port is port 25 for an SMTP service, port 22 for an SFTP service, and so on. |
+| * | 80, 443 | **VirtualNetwork** | **AzureActiveDirectory** | Azure Active Directory ||
+| * | 80, 443, 445 | **VirtualNetwork** | **Storage** | Azure Storage dependency ||
+| * | 443 | **VirtualNetwork** | **AppService** | Connection management ||
+| * | 443 | **VirtualNetwork** | **AzureMonitor** | Publish diagnostic logs & metrics ||
+| * | 1433 | **VirtualNetwork** | **SQL** | Azure SQL dependency ||
+| * | 1886 | **VirtualNetwork** | **AzureMonitor** | Azure Resource Health | Required for publishing health status to Resource Health. |
+| * | 5672 | **VirtualNetwork** | **EventHub** | Dependency from Log to Event Hubs policy and monitoring agent ||
+| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). |
+| * | 53 | **VirtualNetwork** | IP addresses for any custom Domain Name System (DNS) servers on your virtual network | DNS name resolution | Required only when you use custom DNS servers on your virtual network |
In addition, you need to add outbound rules for [App Service Environment (ASE)](../app-service/environment/intro.md):
logic-apps Create Automation Tasks Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-automation-tasks-azure-resources.md
You can create an automation task from a specific automation task template. The
| Resource type | Automation task templates | |||
-| Azure resource groups | **When resource is deleted** |
| All Azure resources | **Send monthly cost for resource** |
-| Azure virtual machines | Additionally: <p>- **Power off Virtual Machine** <br>- **Start Virtual Machine** |
-| Azure Storage accounts | Additionally: <p>- **Delete old blobs** |
-| Azure Cosmos DB | Additionally, <p>- **Send query result via email** |
-|||
+| Azure virtual machines | Additionally: <br><br>- **Power off Virtual Machine** <br>- **Start Virtual Machine** <br>- **Deallocate Virtual Machine** |
+| Azure storage accounts | Additionally: <br><br>- **Delete old blobs** |
+| Azure Cosmos DB | Additionally: <br><br>- **Send query result via email** |
This article shows you how to complete the following tasks:
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
ms.suite: integration Previously updated : 08/20/2022 Last updated : 09/06/2022
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-This article shows how to create an example automated integration workflow that runs in the *single-tenant* Azure Logic Apps environment by using Visual Studio Code with the **Azure Logic Apps (Standard)** extension. The logic app that you create with this extension is based on the **Logic App (Standard)** resource type, which provides the following capabilities:
-
-* You can locally run and test logic app workflows in the Visual Studio Code development environment.
+This article shows how to create an example automated integration workflow that runs in the *single-tenant* Azure Logic Apps environment by using Visual Studio Code with the **Azure Logic Apps (Standard)** extension. When you use this extension, you create a Standard logic app resource and workflow that provide the following capabilities:
* Your logic app can include multiple [stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless). * Workflows in the same logic app and tenant run in the same process as the Azure Logic Apps runtime, so they share the same resources and provide better performance.
-* You can deploy the **Logic App (Standard)** resource type directly to the single-tenant Azure Logic Apps environment or anywhere that Azure Functions can run, including containers, due to the Azure Logic Apps containerized runtime.
+* You can locally create, run, and test workflows in the Visual Studio Code development environment. Due to the Azure Logic Apps containerized runtime, you can deploy your logic app locally; to Azure, which includes the single-tenant Azure Logic Apps environment or App Service Environment v3 (ASEv3, Windows plans only); or on premises using containers.
-For more information about the single-tenant Azure Logic Apps offering, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+For more information about single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
While the example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
To find and confirm these settings, follow these steps:
![Screenshot that shows Azure pane and selected filter icon.](./media/create-single-tenant-workflows-visual-studio-code/filter-subscription-list.png)
- Or, in the Visual Studio Code status bar, select your Azure account.
+ Or, in the Visual Studio Code status bar, select your Azure account.
1. When another subscriptions list appears, select the subscriptions that you want, and then make sure that you select **OK**.
To locally run webhook-based triggers and actions in Visual Studio Code, you nee
} ```
+ > [!NOTE]
+ >
+ > If your project is NuGet package-based (.NET), not extension bundle-based (Node.js),
+ > `"FUNCTIONS_WORKER_RUNTIME"` is set to `"dotnet"`. However, to use **Inline Code Operations**,
+ > you must have `"FUNCTIONS_WORKER_RUNTIME"` set to `"node"`.
+ The first time when you start a local debugging session or run the workflow without debugging, the Logic Apps runtime registers the workflow with the service endpoint and subscribes to that endpoint for notifying the webhook operations. The next time that your workflow runs, the runtime won't register or resubscribe because the subscription registration already exists in local storage. When you stop the debugging session for a workflow run that uses locally run webhook-based triggers or actions, the existing subscription registrations aren't deleted. To unregister, you have to manually remove or delete the subscription registrations.
When you stop the debugging session for a workflow run that uses locally run web
## Manage breakpoints for debugging
-Before you run and test your logic app workflow by starting a debugging session, you can set [breakpoints](https://code.visualstudio.com/docs/editor/debugging#_breakpoints) inside the **workflow.json** file for each workflow. No other setup is required.
+Before you run and test your logic app workflow by starting a debugging session, you can set [breakpoints](https://code.visualstudio.com/docs/editor/debugging#_breakpoints) inside the **workflow.json** file for each workflow. No other setup is required.
At this time, breakpoints are supported only for actions, not triggers. Each action definition has these breakpoint locations:
Deployment for the **Logic App (Standard)** resource type requires a hosting pla
"IsEncrypted": false, "Values": { "AzureWebJobsStorage": "UseDevelopmentStorage=true",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "FUNCTIONS_WORKER_RUNTIME": "node",
"APPINSIGHTS_INSTRUMENTATIONKEY": <instrumentation-key> } }
To debug a stateless workflow more easily, you can enable the run history for th
"IsEncrypted": false, "Values": { "AzureWebJobsStorage": "UseDevelopmentStorage=true",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "FUNCTIONS_WORKER_RUNTIME": "node",
"Workflows.{yourWorkflowName}.OperationOptions": "WithStatelessRunHistory" } }
To debug a stateless workflow more easily, you can enable the run history for th
"Values": { "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=fabrikamstorageacct; \ AccountKey=<access-key>;EndpointSuffix=core.windows.net",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "FUNCTIONS_WORKER_RUNTIME": "node",
"Workflows.{yourWorkflowName}.OperationOptions": "WithStatelessRunHistory" } }
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
Last updated 08/20/2022
When you create a single-tenant Standard logic app resource, you're required to have a storage account for storing logic app artifacts. You can restrict access to this storage account so that only the resources inside a virtual network can connect to your logic app workflow. Azure Storage supports adding private endpoints to your storage account.
-This article describes the steps to follow for deploying such logic apps to protected private storage accounts. For more information, review [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md).
+This article describes the steps to follow for deploying such logic apps to protected private storage accounts.
+
+For more information, review the following documentation:
+
+- [Secure traffic between Standard logic apps and Azure virtual networks using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md)
+- [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md)
<a name="deploy-with-portal-or-visual-studio-code"></a>
This deployment method requires that temporary public access to your storage acc
1. Deploy your logic app resource by using either the Azure portal or Visual Studio Code.
-1. After deployment finishes, enable VNet integration between your logic app and the private endpoints on the virtual network that connects to your storage account.
+1. After deployment finishes, enable virtual network integration between your logic app and the private endpoints on the virtual network that connects to your storage account.
1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
This deployment method requires that temporary public access to your storage acc
This deployment method doesn't require public access to the storage account. For an example ARM template, review [Deploy logic app using secured storage account with private endpoints](https://github.com/VeeraMS/LogicApp-deployment-with-Secure-Storage). The example template creates the following resources: - A storage account that denies the public traffic-- An Azure VNet and subnets
+- An Azure virtual network and subnets
- Private DNS zones and private endpoints for Blob, File, Queue, and Table services - A file share for the Azure Logic Apps runtime directories and files. For more information, review [Host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). - An App Service plan (Workflow Standard WS1) for hosting Standard logic app resources-- A Standard logic app resource with a network configuration that's set up to use VNet integration. This configuration enables the logic app to access the storage account through private endpoints.
+- A Standard logic app resource with a network configuration that's set up to use virtual network integration. This configuration enables the logic app to access the storage account through private endpoints.
## Troubleshoot common errors
The following errors commonly happen with a private storage account that's behin
As the logic app isn't running when these errors occur, you can't use the Kudu console debugging service on the Azure platform to troubleshoot these errors. However, you can use the following methods instead: -- Create an Azure virtual machine (VM) inside a different subnet within the same VNet that's integrated with your logic app. Try to connect from the VM to the storage account.
+- Create an Azure virtual machine (VM) inside a different subnet within the same virtual network that's integrated with your logic app. Try to connect from the VM to the storage account.
- Check access to the storage account services by using the [Storage Explorer tool](https://azure.microsoft.com/features/storage-explorer/#overview).
As the logic app isn't running when these errors occur, you can't use the Kudu c
`C:\psping {storage-account-host-name}.blob.core.windows.net:443`
- `C:\psping {storage-account-host-name}.file.core.windows.net:443`
`C:\psping {storage-account-host-name}.queue.core.windows.net:443`
`C:\psping {storage-account-host-name}.table.core.windows.net:443`
+ `C:\psping {storage-account-host-name}.file.core.windows.net:445`
+ 1. If the queries resolve from the VM, continue with the following steps: 1. In the VM, find the DNS server that's used for resolution. 1. In your logic app, [find and set the `WEBSITE_DNS_SERVER` app setting](edit-app-settings-host-settings.md?tabs=azure-portal?tabs=azure-portal#manage-app-settingslocalsettingsjson) to the same DNS server value that you found in the previous step.
- 1. Check that the VNet integration is set up correctly with the appropriate VNET and subnet in your logic app.
+ 1. Check that the virtual network integration is set up correctly with the appropriate virtual network and subnet in your logic app.
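If psping isn't available on the test VM, a short TCP probe like the following sketch checks the same endpoints from the steps above; the storage account name is a placeholder:

```python
# Minimal sketch: probe the storage endpoints listed in the steps above.
import socket

STORAGE_ACCOUNT = "<storage-account-name>"   # placeholder
ENDPOINTS = [
    (f"{STORAGE_ACCOUNT}.blob.core.windows.net", 443),
    (f"{STORAGE_ACCOUNT}.file.core.windows.net", 445),
    (f"{STORAGE_ACCOUNT}.queue.core.windows.net", 443),
    (f"{STORAGE_ACCOUNT}.table.core.windows.net", 443),
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as error:
        print(f"{host}:{port} failed: {error}")
```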
## Next steps
logic-apps Logic Apps Deploy Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-deploy-azure-resource-manager-templates.md
Here are the general high-level steps for using Azure Pipelines:
1. Choose the resources you need for the pipeline, such as your logic app template and template parameters files, which you generate manually or as part of the build process.
-1. For your agent job, find and add the **Azure Resource Group Deployment** task.
-
- ![Add "Azure Resource Group Deployment" task](./media/logic-apps-deploy-azure-resource-manager-templates/add-azure-resource-group-deployment-task.png)
+1. For your agent job, find and add the **ARM Template deployment** task.
1. Configure with a [service principal](/azure/devops/pipelines/library/connect-to-azure).
logic-apps Logic Apps Diagnosing Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-diagnosing-failures.md
ms.suite: integration Previously updated : 08/20/2022 Last updated : 09/02/2022 # Troubleshoot and diagnose workflow failures in Azure Logic Apps
To help with debugging, you can add diagnostic steps to a logic app workflow, al
1. To review how Azure Logic Apps generates and forms a request, run the logic app workflow. You can then revisit the Webhook Tester site for more information.
+## Performance - frequently asked questions (FAQ)
+
+### Why is the workflow run duration longer than the sum of all the workflow action durations?
+
+Running each action incurs scheduling overhead, and waiting time between actions can occur when the backend system is under load. A workflow's run duration includes these scheduling and waiting times along with the sum of all the action durations.
+
+### Usually, my workflow completes within 10 seconds. But, sometimes, completion can take much longer. How can I make sure the workflow always finishes within 10 seconds?
+
+* No SLA guarantee exists on latency.
+
+* Consumption workflows run on multi-tenant Azure Logic Apps, so other customers' workloads might negatively affect your workflow's performance.
+
+* For more predictable performance, you might consider creating [Standard workflows](single-tenant-overview-compare.md), which run in single-tenant Azure Logic Apps. You'll have more control to scale up or out to improve performance.
+
+### My action times out after 2 minutes. How can I increase the timeout value?
+
+The action timeout value can't be changed and is fixed at 2 minutes. If you're using the HTTP action, and you own the service called by the HTTP action, you can change your service to avoid the 2-minute timeout by using the asynchronous pattern. For more information, review [Perform long-running tasks with the polling action pattern](logic-apps-create-api-app.md#perform-long-running-tasks-with-the-polling-action-pattern).
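As an illustration of the polling pattern, the following hedged sketch uses Flask, an assumed framework that isn't part of Azure Logic Apps. The service immediately returns `202 Accepted` with a `location` header, and the HTTP action polls that URL until the work reports `200 OK`:

```python
# Hedged sketch: a service that answers the HTTP action with the polling
# (202 Accepted) pattern so the caller isn't held open past the timeout.
import threading
import time
import uuid

from flask import Flask, jsonify, url_for

app = Flask(__name__)
jobs = {}  # job_id -> "running" | "done"

def long_running_work(job_id):
    time.sleep(300)  # simulate work that outlasts the 2-minute timeout
    jobs[job_id] = "done"

@app.route("/start", methods=["POST"])
def start():
    job_id = str(uuid.uuid4())
    jobs[job_id] = "running"
    threading.Thread(target=long_running_work, args=(job_id,)).start()
    headers = {
        "location": url_for("status", job_id=job_id, _external=True),
        "retry-after": "20",
    }
    return "", 202, headers  # tells the HTTP action where and when to poll

@app.route("/status/<job_id>")
def status(job_id):
    if jobs.get(job_id) == "done":
        return jsonify({"status": "succeeded"}), 200
    headers = {
        "location": url_for("status", job_id=job_id, _external=True),
        "retry-after": "20",
    }
    return "", 202, headers  # still running; keep polling

if __name__ == "__main__":
    app.run(port=5000)
```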
+ ## Common problems - Standard logic apps ### Inaccessible artifacts in Azure storage account
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-maps.md
Title: Add XSLT maps to transform XML in workflows
-description: Add XSLT maps to transform XML in workflows with Azure Logic Apps and the Enterprise Integration Pack.
+ Title: Add maps to use with workflows
+description: Add maps for transform operations in workflows with Azure Logic Apps.
ms.suite: integration
Last updated 08/22/2022
-# Add XSLT maps to transform XML in workflows with Azure Logic Apps
+# Add maps for transformations in workflows with Azure Logic Apps
-To convert XML between formats, your logic app workflow can use maps with the **Transform XML** action. A map is an XML document that uses Extensible Stylesheet Language Transformation (XSLT) language to describe how to convert data from XML to another format. The map consists of a source XML schema as input and a target XML schema as output. You can define a basic transformation, such as copying a name and address from one document to another. Or, you can create more complex transformations using the out-of-the-box map operations. You can manipulate or control data by using different built-in functions, such as string manipulations, conditional assignments, arithmetic expressions, date time formatters, and even looping constructs.
+Workflow actions such as **Transform XML** and **Liquid** require a map to perform their tasks. For example, the **Transform XML** action requires a map to convert XML between formats. A map is an XML document that uses [Extensible Stylesheet Language Transformation (XSLT)](https://www.w3.org/TR/xslt/) language to describe how to convert data from XML to another format and has the .xslt file name extension. The map consists of a source XML schema as input and a target XML schema as output. You can define a basic transformation, such as copying a name and address from one document to another. Or, you can create more complex transformations using the out-of-the-box map operations. You can manipulate or control data by using different built-in functions, such as string manipulations, conditional assignments, arithmetic expressions, date time formatters, and even looping constructs.
For example, suppose you regularly receive B2B orders or invoices from a customer who uses the YearMonthDay date format (YYYYMMDD). However, your organization uses the MonthDayYear date format (MMDDYYYY). You can define and use a map that transforms the YYYYMMDD format to the MMDDYYYY format before storing the order or invoice details in your customer activity database.
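As a rough local illustration of that scenario (outside Azure Logic Apps), the following sketch applies a minimal XSLT map with Python's `lxml` package; the element names and the map itself are hypothetical:

```python
from lxml import etree

# A minimal XSLT map that reorders a YYYYMMDD date into MMDDYYYY.
xslt_doc = etree.XML(b"""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/order">
    <order>
      <date>
        <xsl:value-of select="concat(substring(date, 5, 2), substring(date, 7, 2), substring(date, 1, 4))"/>
      </date>
    </order>
  </xsl:template>
</xsl:stylesheet>
""")
transform = etree.XSLT(xslt_doc)

source = etree.XML(b"<order><date>20220905</date></order>")
result = transform(source)
print(etree.tostring(result))  # b'<order><date>09052022</date></order>'
```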
-> [!NOTE]
-> Azure Logic Apps allocates finite memory for processing XML transformations. If you create logic apps based on the
-> [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences),
-> and your map or payload transformations have high memory consumption, such transformations might fail, resulting in
-> out of memory errors. To avoid this scenario, consider these options:
->
-> * Edit your maps or payloads to reduce memory consumption.
->
-> * Create your logic apps using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences) instead.
->
-> These workflows run in single-tenant Azure Logic Apps, which offers dedicated and flexible options for compute and memory resources.
-> However, the Standard logic app resource type currently doesn't support referencing external assemblies from maps. Also, only Extensible
-> Stylesheet Language Transformation (XSLT) 1.0 is currently supported.
-
-If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overview.md)? For more information about B2B enterprise integration, review [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md).
+This article shows how to add a map to your integration account. If you're working with a Standard logic app workflow, you can also add a map directly to your logic app resource.
## Prerequisites * An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* To create maps, you can use the following tools:
+* The map that you want to add. To create maps, you can use the following tools with the [Enterprise Integration SDK](https://aka.ms/vsmapsandschemas):
* Visual Studio 2019 and the [Microsoft Azure Logic Apps Enterprise Integration Tools Extension](https://aka.ms/vsenterpriseintegrationtools). * Visual Studio 2015 and the [Microsoft Azure Logic Apps Enterprise Integration Tools for Visual Studio 2015 2.0](https://aka.ms/vsmapsandschemas) extension.
- > [!IMPORTANT]
+ > [!NOTE]
> Don't install the extension alongside the BizTalk Server extension. Having both extensions might > produce unexpected behavior. Make sure that you only have one of these extensions installed.
- > [!NOTE]
+ >
> On high resolution monitors, you might experience a [display problem with the map designer](/visualstudio/designers/disable-dpi-awareness) > in Visual Studio. To resolve this display problem, either [restart Visual Studio in DPI-unaware mode](/visualstudio/designers/disable-dpi-awareness#restart-visual-studio-as-a-dpi-unaware-process), > or add the [DPIUNAWARE registry value](/visualstudio/designers/disable-dpi-awareness#add-a-registry-entry).
-* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
+ For more information, review the [Create maps](#create-maps) section in this article.
+
+* Based on whether you're working on a Consumption or Standard logic app workflow, you'll need an [integration account resource](logic-apps-enterprise-integration-create-integration-account.md). Usually, you need this resource when you want to define and store artifacts for use in enterprise integration and B2B workflows.
+
+ > [!IMPORTANT]
+ >
+ > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
+
+ * If you're working on a Consumption logic app workflow, you'll need an [integration account that's linked to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account).
+
+ * If you're working on a Standard logic app workflow, you can link your integration account to your logic app resource, upload maps directly to your logic app resource, or both, based on the following scenarios:
+
+ * If you already have an integration account with the artifacts that you need or want to use, you can link your integration account to multiple Standard logic app resources where you want to use the artifacts. That way, you don't have to upload maps to each individual logic app. For more information, review [Link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account).
+
+ * The **Liquid** built-in connector lets you select a map that you previously uploaded to your logic app resource or to a linked integration account, but not both. You can then use this artifact across all child workflows within the same logic app resource.
+
+    So, if you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option.
+
+## Limitations
+
+* Limits apply to the number of artifacts, such as maps, per integration account. For more information, review [Limits and configuration information for Azure Logic Apps](logic-apps-limits-and-config.md#integration-account-limits).
+
+* Based on whether you're working on a Consumption or Standard logic app workflow, the following limitations apply:
- * Is associated with the same Azure subscription as your logic app resource.
+ * Standard workflows
- * Exists in the same location or Azure region as your logic app resource where you plan to use the **Transform XML** action.
+ * Only XSLT 1.0 is supported.
- * If you use the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you have to [link your integration account to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md#link-account) before you can use your artifacts in your workflow.
+ * References to external assemblies from maps aren't supported.
- To create and add maps for use in **Logic App (Consumption)** workflows, you don't need a logic app resource yet. However, when you're ready to use those maps in your workflows, your logic app resource requires a linked integration account that stores those maps.
+ * No limits apply to map file sizes.
- * If you use the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you need an existing logic app resource because you don't store maps in your integration account. Instead, you can directly add maps to your logic app resource using either the Azure portal or Visual Studio Code. Only XSLT 1.0 is currently supported. You can then use these maps across multiple workflows within the *same logic app resource*.
+ * Consumption workflows
- You still need an integration account to store other artifacts, such as partners, agreements, and certificates, along with using the [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), and [EDIFACT](logic-apps-enterprise-integration-edifact.md) operations. However, you don't need to link your logic app resource to your integration account, so the linking capability doesn't exist. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * Supports references to external assemblies from maps, which enable direct calls from XSLT maps to custom .NET code with the following requirements:
- > [!NOTE]
- > Currently, only the **Logic App (Consumption)** resource type supports [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
- > The **Logic App (Standard)** resource type doesn't include [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations.
+ * You need a 64-bit assembly. The transform service runs a 64-bit process, so 32-bit assemblies aren't supported. If you have the source code for a 32-bit assembly, recompile the code into a 64-bit assembly. If you don't have the source code, but you obtained the binary from a third-party provider, get the 64-bit version from that provider. For example, some vendors provide assemblies in packages that have both 32-bit and 64-bit versions. If you have the option, use the 64-bit version instead.
-* While **Logic App (Consumption)** supports referencing external assemblies from maps, **Logic App (Standard)** currently doesn't support this capability. Referencing an assembly enables direct calls from XSLT maps to custom .NET code.
+ * You have to upload *both the assembly and the map* in a specific order to your integration account. Make sure you [*upload your assembly first*](#add-assembly), and then upload the map that references the assembly.
- * You need a 64-bit assembly. The transform service runs a 64-bit process, so 32-bit assemblies aren't supported. If you have the source code for a 32-bit assembly, recompile the code into a 64-bit assembly. If you don't have the source code, but you obtained the binary from a third-party provider, get the 64-bit version from that provider. For example, some vendors provide assemblies in packages that have both 32-bit and 64-bit versions. If you have the option, use the 64-bit version instead.
+ * If your assembly or map is [2 MB or smaller](#smaller-map), you can add your assembly and map to your integration account *directly* from the Azure portal.
- * You have to upload *both the assembly and the map* in a specific order to your integration account. Make sure you [*upload your assembly first*](#add-assembly), and then upload the map that references the assembly.
+ * If your assembly is bigger than 2 MB but not bigger than the [size limit for assemblies](logic-apps-limits-and-config.md#artifact-capacity-limits), you'll need an Azure storage account and blob container where you can upload your assembly. Later, you can provide that container's location when you add the assembly to your integration account. For this task, the following table describes the items you need:
- * If your assembly is [2 MB or smaller](#smaller-map), you can add your assembly and map to your integration account *directly* from the Azure portal. However, if your assembly or map is bigger than 2 MB but not bigger than the [size limit for assemblies or maps](logic-apps-limits-and-config.md#artifact-capacity-limits), you can use an Azure blob container where you can upload your assembly and that container's location. That way, you can provide that location later when you add the assembly to your integration account. For this task, you need these items:
+ | Item | Description |
+ ||-|
+ | [Azure storage account](../storage/common/storage-account-overview.md) | In this account, create an Azure blob container for your assembly. Learn [how to create a storage account](../storage/common/storage-account-create.md). |
+ | Blob container | In this container, you can upload your assembly. You also need this container's content URI location when you add the assembly to your integration account. Learn how to [create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md). |
+ | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, [download and install Azure Storage Explorer](https://www.storageexplorer.com/), and then connect it to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md). <br><br>Or, in the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. |
- | Item | Description |
- ||-|
- | [Azure storage account](../storage/common/storage-account-overview.md) | In this account, create an Azure blob container for your assembly. Learn [how to create a storage account](../storage/common/storage-account-create.md). |
- | Blob container | In this container, you can upload your assembly. You also need this container's content URI location when you add the assembly to your integration account. Learn how to [create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md). |
- | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, either [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md). <p>Or, in the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. |
- |||
+ To add larger maps, you can use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate). For Standard workflows, the Azure Logic Apps REST API is currently unavailable.
- * To add larger maps for the **Logic App (Consumption)** resource type, you can also use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate). However, for the **Logic App (Standard)** resource type, the Azure Logic Apps REST API is currently unavailable.
+ * Azure Logic Apps allocates finite memory for processing XML transformations. If you create Consumption workflows, and your map or payload transformations have high memory consumption, such transformations might fail, resulting in out of memory errors. To avoid this scenario, consider these options:
-## Limits
+ * Edit your maps or payloads to reduce memory consumption.
-* With **Logic App (Standard)**, no limits exist for map file sizes.
+ * Create [Standard logic app workflows](logic-apps-overview.md#resource-type-and-host-environment-differences) instead.
-* With **Logic App (Consumption)**, limits exist for integration accounts and artifacts such as maps. For more information, review [Limits and configuration information for Azure Logic Apps](logic-apps-limits-and-config.md#integration-account-limits).
+ These workflows run in single-tenant Azure Logic Apps, which offers dedicated and flexible options for compute and memory resources. However, Standard workflows support only XSLT 1.0 and don't support referencing external assemblies from maps.
+
+<a name="create-maps"></a>
+
+## Create maps
+
+To create an XSLT document to use as a map, create an integration project in Visual Studio 2019 or 2015 using the [Enterprise Integration SDK](https://aka.ms/vsmapsandschemas). In the integration project, you can build an integration map file, which lets you visually map items between two XML schema files. These tools offer the following map capabilities:
+
+* You work with a graphical representation of the map, which shows all the relationships and links you create.
+
+* You can make a direct data copy between the XML schemas that you use to create the map. The Enterprise Integration SDK for Visual Studio includes a mapper that makes this task as simple as drawing a line that connects the elements in the source XML schema with their counterparts in the target XML schema.
+
+* Many map operations and functions are available, including string functions, date time functions, and so on.
+
+* You can add a sample XML message and use the map testing capability to test the map you created and review the generated output.
+
+* After you build your project, you get an XSLT document.
+
+If your map references an external assembly, the map must have the following attributes and a `CDATA` section that contains the call to the assembly code:
+
+* `name` is the custom assembly name.
+
+* `namespace` is the namespace in your assembly that includes the custom code.
+
+The following example shows a map that references an assembly named `XsltHelperLib` and calls the `circumference` method from the assembly.
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:user="urn:my-scripts">
+<msxsl:script language="C#" implements-prefix="user">
+ <msxsl:assembly name="XsltHelperLib"/>
+ <msxsl:using namespace="XsltHelpers"/>
+ <![CDATA[public double circumference(int radius){ XsltHelper helper = new XsltHelper(); return helper.circumference(radius); }]]>
+</msxsl:script>
+<xsl:template match="data">
+<circles>
+ <xsl:for-each select="circle">
+ <circle>
+ <xsl:copy-of select="node()"/>
+ <circumference>
+ <xsl:value-of select="user:circumference(radius)"/>
+ </circumference>
+ </circle>
+ </xsl:for-each>
+</circles>
+</xsl:template>
+</xsl:stylesheet>
+```
<a name="add-assembly"></a>
-## Add referenced assemblies (Consumption resource only)
+## Add referenced assemblies (Consumption workflows only)
1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
To add larger assemblies, you can upload your assembly to an Azure blob containe
After your assembly finishes uploading, the assembly appears in the **Assemblies** list. On your integration account's **Overview** page, under **Artifacts**, your uploaded assembly also appears.
-<a name="create-maps"></a>
-
-## Create maps
-
-To create an Extensible Stylesheet Language Transformation (XSLT) document that you can use as a map, you can use Visual Studio 2015 or 2019 to create an integration project by using the [Enterprise Integration SDK](https://aka.ms/vsmapsandschemas). In this project, you can build an integration map file, which lets you visually map items between two XML schema files. After you build this project, you get an XSLT document. For limits on map quantities in integration accounts, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md#artifact-number-limits).
-
-Your map must have the following attributes and a `CDATA` section that contains the call to the assembly code:
-
-* `name` is the custom assembly name.
-
-* `namespace` is the namespace in your assembly that includes the custom code.
-
-The following example shows a map that references an assembly named `XslUtilitiesLib` and calls the `circumference` method from the assembly.
-
-```xml
-<?xml version="1.0" encoding="UTF-8"?>
-<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:user="urn:my-scripts">
-<msxsl:script language="C#" implements-prefix="user">
- <msxsl:assembly name="XsltHelperLib"/>
- <msxsl:using namespace="XsltHelpers"/>
- <![CDATA[public double circumference(int radius){ XsltHelper helper = new XsltHelper(); return helper.circumference(radius); }]]>
-</msxsl:script>
-<xsl:template match="data">
-<circles>
- <xsl:for-each select="circle">
- <circle>
- <xsl:copy-of select="node()"/>
- <circumference>
- <xsl:value-of select="user:circumference(radius)"/>
- </circumference>
- </circle>
- </xsl:for-each>
-</circles>
-</xsl:template>
-</xsl:stylesheet>
-```
-
-## Tools and capabilities for maps
+<a name="add-map"></a>
-* When you create a map using Visual Studio and the [Enterprise Integration SDK](https://aka.ms/vsmapsandschemas), you work with a graphical representation of the map, which shows all the relationships and links you create.
+## Add maps
-* You can make a direct data copy between the XML schemas that you use to create the map. The [Enterprise Integration SDK](https://aka.ms/vsmapsandschemas) for Visual Studio includes a mapper that makes this task as simple as drawing a line that connects the elements in the source XML schema with their counterparts in the target XML schema.
+* If you're working with a Consumption workflow, you must add your map to a linked integration account.
-* Operations or functions for multiple maps are available, including string functions, date time functions, and so on.
+* If you're working with a Standard workflow, you have the following options:
-* To add a sample XML message, you can use the map testing capability. With just one gesture, you can test the map you created, and review the generated output.
+ * Add your map to a linked integration account. You can share the map and integration account across multiple Standard logic app resources and their child workflows.
-<a name="add-map"></a>
+ * Add your map directly to your logic app resource. However, you can only share that map across child workflows in the same logic app resource.
-## Add maps
+<a name="add-map-integration-account"></a>
-### [Consumption](#tab/consumption)
+### Add map to integration account
1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
The following example shows a map that references an assembly named `XslUtilitie
1. On the integration account's navigation menu, under **Settings**, select **Maps**.
-1. On the **Maps** pane, select **Add**.
+1. On the **Maps** pane toolbar, select **Add**.
-1. Continue to add either a map [up to 2 MB](#smaller-map) or [more than 2 MB](#larger-map).
+For Consumption workflows, based on your map's file size, now follow the steps for uploading a map that's either [up to 2 MB](#smaller-map) or [more than 2 MB](#larger-map).
<a name="smaller-map"></a>
-#### Add maps up to 2 MB
+### Add maps up to 2 MB
1. On the **Add Map** pane, enter a unique name for your map.
The following example shows a map that references an assembly named `XslUtilitie
<a name="larger-map"></a>
-#### Add maps more than 2 MB
+### Add maps more than 2 MB
-Currently, to add larger maps, use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate).
+To add larger maps for Consumption workflows, use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate).
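For example, the following sketch calls that REST API with the `requests` package to upload a map. The subscription, resource group, account, file, and map names are placeholders, and the bearer token must come from Azure Active Directory:

```python
import requests

# Placeholder values; replace with your own.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
integration_account = "<integration-account>"
map_name = "<map-name>"
token = "<azure-ad-bearer-token>"

url = (f"https://management.azure.com/subscriptions/{subscription_id}"
       f"/resourceGroups/{resource_group}/providers/Microsoft.Logic"
       f"/integrationAccounts/{integration_account}/maps/{map_name}"
       "?api-version=2016-06-01")

# Read the XSLT map content from a local file (placeholder file name).
with open("MyLargeMap.xslt", "rb") as f:
    xslt_content = f.read().decode("utf-8")

body = {
    "properties": {
        "mapType": "Xslt",
        "content": xslt_content,
        "contentType": "application/xml",
    }
}

response = requests.put(url, json=body,
                        headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
```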
-### [Standard](#tab/standard)
++
+### Add map to Standard logic app resource
+
+The following steps apply only if you want to add a map directly to your Standard logic app resource. Otherwise, [add the map to your integration account](#add-map-integration-account).
#### Azure portal
logic-apps Logic Apps Enterprise Integration Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-schemas.md
Previously updated : 08/30/2022 Last updated : 08/30/2022 # Add schemas to use with workflows with Azure Logic Apps
-Workflow actions such as **Flat File** and **XML Validation** require a schema to perform their tasks. For example, the **XML Validation** action requires an XML schema to check that documents use valid XML and have the expected data in the predefined format. This schema is a business document that's represented in XML using the [XML Schema Definition (XSD)](https://www.w3.org/TR/xmlschema11-1/) and uses the .xsd file name extension. The **Flat File** actions use a schema to encode and decode XML content.
+Workflow actions such as **Flat File** and **XML Validation** require a schema to perform their tasks. For example, the **XML Validation** action requires an XML schema to check that documents use valid XML and have the expected data in the predefined format. This schema is an XML document that uses [XML Schema Definition (XSD)](https://www.w3.org/TR/xmlschema11-1/) language and has the .xsd file name extension. The **Flat File** actions use a schema to encode and decode XML content.
This article shows how to add a schema to your integration account. If you're working with a Standard logic app workflow, you can also add a schema directly to your logic app resource.
This article shows how to add a schema to your integration account. If you're wo
* Visual Studio 2015 and the [Microsoft Azure Logic Apps Enterprise Integration Tools for Visual Studio 2015 2.0](https://aka.ms/vsmapsandschemas) extension.
- > [!IMPORTANT]
+ > [!NOTE]
> Don't install the extension alongside the BizTalk Server extension. Having both extensions might > produce unexpected behavior. Make sure that you only have one of these extensions installed.
- > [!NOTE]
+ >
> On high resolution monitors, you might experience a [display problem with the map designer](/visualstudio/designers/disable-dpi-awareness) > in Visual Studio. To resolve this display problem, either [restart Visual Studio in DPI-unaware mode](/visualstudio/designers/disable-dpi-awareness#restart-visual-studio-as-a-dpi-unaware-process), > or add the [DPIUNAWARE registry value](/visualstudio/designers/disable-dpi-awareness#add-a-registry-entry).
This article shows how to add a schema to your integration account. If you're wo
* If your schema is [2 MB or smaller](#smaller-schema), you can add your schema to your integration account *directly* from the Azure portal.
- * If your schema is bigger than 2 MB but not bigger than the [size limit for schemas](logic-apps-limits-and-config.md#artifact-capacity-limits), you'll need an Azure storage account where you can upload your schema. Then, to add that schema to your integration account, you can then link to your storage account from your integration account. For this task, the following table describes the items you need:
+ * If your schema is bigger than 2 MB but not bigger than the [size limit for schemas](logic-apps-limits-and-config.md#artifact-capacity-limits), you'll need an Azure storage account and a blob container where you can upload your schema. Then, to add that schema to your integration account, you can then link to your storage account from your integration account. For this task, the following table describes the items you need:
| Item | Description | ||-|
This article shows how to add a schema to your integration account. If you're wo
| Blob container | In this container, you can upload your schema. You also need this container's content URI later when you add the schema to your integration account. Learn how to [create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md). | | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, choose a step: <br><br>- In the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. <br><br>- For the desktop version, [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md). |
- To add larger schemas, you can also use the [Azure Logic Apps REST API - Schemas](/rest/api/logic/schemas/create-or-update). However, for Standard workflows, the Azure Logic Apps REST API is currently unavailable.
+ To add larger schemas, you can also use the [Azure Logic Apps REST API - Schemas](/rest/api/logic/schemas/create-or-update). For Standard workflows, the Azure Logic Apps REST API is currently unavailable.
* Usually, when you're using an integration account with your workflow, you add the schema to that account. However, if you're referencing or importing a schema that's not in your integration account, you might receive the following error when you use the element `xsd:redefine`:
For Consumption workflows, based on your schema's file size, now follow the step
### Add schemas more than 2 MB
-To add larger schemas for Consumption workflows to use, you can upload your schema to an Azure blob container in your Azure storage account. Your steps for adding schemas differ based whether your blob container has public read access. So first, check whether or not your blob container has public read access by following these steps: [Set public access level for blob container](../vs-azure-tools-storage-explorer-blobs.md#set-the-public-access-level-for-a-blob-container)
+To add larger schemas for Consumption workflows to use, you can either use the [Azure Logic Apps REST API - Schemas](/rest/api/logic/schemas/create-or-update) or upload your schema to an Azure blob container in your Azure storage account. Your steps for adding schemas differ based on whether your blob container has public read access. So first, check whether your blob container has public read access by following these steps: [Set public access level for blob container](../vs-azure-tools-storage-explorer-blobs.md#set-the-public-access-level-for-a-blob-container)
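As one example of the blob upload step, the following sketch uses the `azure-storage-blob` package; the connection string, container, and file names are placeholders:

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string; replace with your storage account's value.
connection_string = "<storage-account-connection-string>"
service = BlobServiceClient.from_connection_string(connection_string)

# Upload the schema file to a blob container (placeholder names).
blob = service.get_blob_client(container="schemas", blob="MyLargeSchema.xsd")
with open("MyLargeSchema.xsd", "rb") as data:
    blob.upload_blob(data, overwrite=True)

print(blob.url)  # the content URI to reference when adding the schema
```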
#### Check container access level
After your schema finishes uploading, the schema appears in the **Schemas** list
### Add schema to Standard logic app resource
-These steps apply only if you want to add a schema directly to your Standard logic app resource. Otherwise, [add the schema to your integration account](#add-schema-integration-account).
+The following steps apply only if you want to add a schema directly to your Standard logic app resource. Otherwise, [add the schema to your integration account](#add-schema-integration-account).
#### Azure portal
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
Title: Secure traffic between single-tenant workflows and virtual networks
+ Title: Secure traffic between Standard workflows and virtual networks
description: Secure traffic between Standard logic app workflows and virtual networks in Azure using private endpoints. ms.suite: integration
Last updated 08/08/2022
-# As a developer, I want to connect to my single-tenant logic app workflows with virtual networks using private endpoints and virtual network integration.
+# As a developer, I want to connect to my Standard logic app workflows with virtual networks using private endpoints and virtual network integration.
-# Secure traffic between single-tenant Standard logic apps and Azure virtual networks using private endpoints
+# Secure traffic between Standard logic apps and Azure virtual networks using private endpoints
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
This article shows how to set up access through private endpoints for inbound tr
For more information, review the following documentation: -- [What is Azure Private Endpoint?](../private-link/private-endpoint-overview.md) and [Private endpoints - Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md#private-endpoints)
+- [What is Azure Private Endpoint?](../private-link/private-endpoint-overview.md)
+- [Private endpoints - Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md#private-endpoints)
- [What is Azure Private Link?](../private-link/private-link-overview.md) - [Regional virtual network integration?](../app-service/networking-features.md#regional-vnet-integration)
To secure inbound traffic to your workflow, complete these high-level steps:
1. Make test calls to check access to the endpoint. To call your logic app workflow after you set up this endpoint, you must be connected to the virtual network.
+### Considerations for inbound traffic through private endpoints
+
+- If accessed from outside your virtual network, the monitoring view can't access the inputs and outputs from triggers and actions.
+
+- Managed API webhook triggers (*push* triggers) and actions won't work because they run in the public cloud and can't call into your private network. They require a public endpoint to receive calls. For example, such triggers include the Dataverse trigger and the Event Grid trigger.
+
+- If you use the Office 365 Outlook trigger, the workflow is triggered only hourly.
+
+- Deployment from Visual Studio Code or Azure CLI works only from inside the virtual network. You can use the Deployment Center to link your logic app to a GitHub repo. You can then use Azure infrastructure to build and deploy your code.
+
+  For GitHub integration to work, remove the `WEBSITE_RUN_FROM_PACKAGE` setting from your logic app or set the value to `0`. For one way to make this change, see the sketch after this list.
+
+- Enabling Private Link doesn't affect outbound traffic, which still flows through the App Service infrastructure.
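As a sketch of the `WEBSITE_RUN_FROM_PACKAGE` change mentioned above, the following example uses the `azure-mgmt-web` package; the subscription, resource group, and logic app names are placeholders, and removing the key is shown as one option alongside setting it to `0`:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the current app settings for the Standard logic app (an App Service-based site).
settings = client.web_apps.list_application_settings(
    "<resource-group>", "<logic-app-name>")

# Remove the setting; alternatively, set it to "0".
settings.properties.pop("WEBSITE_RUN_FROM_PACKAGE", None)

client.web_apps.update_application_settings(
    "<resource-group>", "<logic-app-name>", settings)
```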
+ ### Prerequisites for inbound traffic through private endpoints
-In addition to the [virtual network setup in the top-level prerequisites](#prerequisites), you need to have a new or existing single-tenant based logic app workflow that starts with a built-in trigger that can receive requests.
+Along with the [virtual network setup in the top-level prerequisites](#prerequisites), you need to have a new or existing Standard logic app workflow that starts with a built-in trigger that can receive requests.
For example, the Request trigger creates an endpoint on your workflow that can receive and handle inbound requests from other callers, including workflows. This endpoint provides a URL that you can use to call and trigger the workflow. For this example, the steps continue with the Request trigger.
-For more information, review the following documentation:
--- [Create single-tenant logic app workflows in Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)-- [Receive and respond to inbound HTTP requests using Azure Logic Apps](../connectors/connectors-native-reqres.md)
+For more information, review [Receive and respond to inbound HTTP requests using Azure Logic Apps](../connectors/connectors-native-reqres.md).
### Create the workflow
For more information, review the following documentation:
For more information, review [Create single-tenant logic app workflows in Azure Logic Apps](create-single-tenant-workflows-azure-portal.md).
-#### Copy the endpoint URL
+### Copy the endpoint URL
1. On the workflow menu, select **Overview**.
For more information, review [Create single-tenant logic app workflows in Azure
1. To make sure the connection is working correctly, create a virtual machine in the same virtual network that has the private endpoint, and try calling the logic app workflow.
-### Considerations for inbound traffic through private endpoints
--- If accessed from outside your virtual network, monitoring view can't access the inputs and outputs from triggers and actions.
+<a name="set-up-outbound"></a>
-- Managed API webhook triggers (*push* triggers) and actions won't work because they run in the public cloud and can't call into your private network. They require a public endpoint to receive calls. For example, such triggers include the Dataverse trigger and the Event Grid trigger.
+## Set up outbound traffic using virtual network integration
-- If you use the Office 365 Outlook trigger, the workflow is triggered only hourly.
+To secure outbound traffic from your logic app, you can integrate your logic app with a virtual network. First, create and test an example workflow. You can then set up virtual network integration.
-- Deployment from Visual Studio Code or Azure CLI works only from inside the virtual network. You can use the Deployment Center to link your logic app to a GitHub repo. You can then use Azure infrastructure to build and deploy your code.
+### Considerations for outbound traffic through virtual network integration
- For GitHub integration to work, remove the `WEBSITE_RUN_FROM_PACKAGE` setting from your logic app or set the value to `0`.
+- Setting up virtual network integration affects only outbound traffic. To secure inbound traffic, which continues to use the App Service shared endpoint, review [Set up inbound traffic through private endpoints](#set-up-inbound).
-- Enabling Private Link doesn't affect outbound traffic, which still flows through the App Service infrastructure.
+- You can't change the subnet size after assignment, so use a subnet that's large enough to accommodate the scale that your app might reach. To avoid any issues with subnet capacity, use a `/26` subnet with 64 addresses. If you create the subnet for virtual network integration with the Azure portal, you must use `/27` as the minimum subnet size.
-<a name="set-up-outbound"></a>
+- For the Azure Logic Apps runtime to work, you need to have an uninterrupted connection to the backend storage. If the backend storage is exposed to the virtual network through a private endpoint, make sure that the following ports are open:
-## Set up outbound traffic using virtual network integration
+ | Destination port | Direction | Protocol | Source / Destination | Purpose |
+ |-|--|-|-||
+ | 443 | Outbound | TCP | Private endpoint / Storage account | Storage account |
+ | 445 | Outbound | TCP | Private endpoint / Subnet integrated with Standard logic app | Server Message Block (SMB) File Share |
-To secure outbound traffic from your logic app, you can integrate your logic app with a virtual network. First, create and test an example workflow. You can then set up virtual network integration.
+- For Azure-hosted managed connectors to work, you need to have an uninterrupted connection to the managed API service. With virtual network integration, make sure that no firewall or network security policy blocks these connections. If your virtual network uses a network security group (NSG), user-defined route table (UDR), or a firewall, make sure that the virtual network allows outbound connections to [all managed connector IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in the corresponding region. Otherwise, Azure-managed connectors won't work.
-> [!IMPORTANT]
-> You can't change the subnet size after assignment, so use a subnet that's large enough to accommodate
-> the scale that your app might reach. To avoid any issues with subnet capacity, use a `/26` subnet with 64 addresses.
-> If you create the subnet for virtual network integration with the Azure portal, you must use `/27` as the minimum subnet size.
+For more information, review the following documentation:
+- [Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md)
+- [Network security groups](../virtual-network/network-security-groups-overview.md)
+- [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md)
### Create and test the workflow
To secure outbound traffic from your logic app, you can integrate your logic app
1. In the Azure portal, on the logic app resource menu, under **Settings**, select **Networking**.
-1. On the **Networking** pane, on the **Outbound traffic** card, select **VNet integration**.
+1. On the **Networking** pane, on the **Outbound traffic** card, select **VNet integration**.
1. On the **VNet Integration** pane, select **Add Vnet**.
To secure outbound traffic from your logic app, you can integrate your logic app
The HTTP action now runs successfully.
-> [!IMPORTANT]
-> For the Azure Logic Apps runtime to work, you need to have an uninterrupted connection to the backend storage.
-> If the backend storage is exposed to the virtual network through a private endpoint, make sure that the following port is open:
->
-> | Source port | Direction | Protocol | Source / Destination | Purpose |
-> |-|--|-|-||
-> | 443 | Outbound | TCP | Private endpoint / Storage account | Storage account |
-> | 445 | Outbound | TCP | Private endpoint / Subnet integrated with Standard logic app | Server Message Block (SMB) File Share |
-> ||||||
->
->
-> For Azure-hosted managed connectors to work, you need to have an uninterrupted connection to the managed API service.
-> With virtual network integration, make sure that no firewall or network security policy blocks these connections.
-
-### Considerations for outbound traffic through virtual network integration
-
-If your virtual network uses a network security group (NSG), user-defined route table (UDR), or a firewall, make sure that the virtual network allows outbound connections to [all managed connector IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in the corresponding region. Otherwise, Azure-managed connectors won't work.
-
-Setting up virtual network integration affects only outbound traffic. To secure inbound traffic, which continues to use the App Service shared endpoint, review [Set up inbound traffic through private endpoints](#set-up-inbound).
-
-For more information, review the following documentation:
--- [Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md)-- [Network security groups](../virtual-network/network-security-groups-overview.md)-- [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md)- ## Next steps - [Logic Apps Anywhere: Networking possibilities with Logic Apps (single-tenant)](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
By default, when the `trailingSlash` configuration is omitted, Static Web Apps a
```json {
+ "trailingSlash": "auto",
"routes": [ { "route": "/profile*",
synapse-analytics Apache Spark Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-sql-connector.md
This article covers how to use the DataFrame API to connect to SQL databases usi
In this example, we will use the Microsoft Spark utilities to facilitate acquiring secrets from a pre-configured Key Vault. To learn more about Microsoft Spark utilities, please visit [introduction to Microsoft Spark Utilities](../microsoft-spark-utilities.md). ```python
+# The servername is in the format "jdbc:sqlserver://<AzureSQLServerName>.database.windows.net:1433"
servername = "<< server name >>"
dbname = "<< database name >>"
url = servername + ";" + "databaseName=" + dbname + ";"
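# The following sketch (an editorial assumption, not part of the original
# article) completes the example: acquire the SQL password from Key Vault with
# Microsoft Spark utilities, then load a table through the Apache Spark
# connector for SQL Server. The Key Vault, secret, login, and table names
# below are placeholders.
from notebookutils import mssparkutils

password = mssparkutils.credentials.getSecret("myKeyVault", "mySqlPassword")

df = (spark.read
      .format("com.microsoft.sqlserver.jdbc.spark")
      .option("url", url)               # built from servername and dbname above
      .option("dbtable", "dbo.MyTable")
      .option("user", "sqladmin")
      .option("password", password)
      .load())
df.show(5)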
virtual-network Virtual Network Multiple Ip Addresses Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal.md
Title: Multiple IP addresses for Azure virtual machines - Portal | Microsoft Docs
-description: Learn how to assign multiple IP addresses to a virtual machine using the Azure portal | Resource Manager.
+ Title: Multiple IP addresses for Azure virtual machines - Portal
+description: Learn how to assign multiple IP addresses to a virtual machine using the Azure portal
- Previously updated : 11/30/2016 Last updated : 09/05/2022 # Assign multiple IP addresses to virtual machines using the Azure portal
+An Azure Virtual Machine (VM) has one or more network interfaces (NICs) attached to it. Any NIC can have one or more static or dynamic public and private IP addresses assigned to it.
-This article explains how to create a virtual machine (VM) through the Azure Resource Manager deployment model using the Azure portal. Multiple IP addresses cannot be assigned to resources created through the classic deployment model. To learn more about Azure deployment models, read the [Understand deployment models](../../azure-resource-manager/management/deployment-models.md) article.
+Assigning multiple IP addresses to a VM enables the following capabilities:
+* Hosting multiple websites or services with different IP addresses and TLS/SSL certificates on a single server.
-## <a name = "create"></a>Create a VM with multiple IP addresses
+* Serving as a network virtual appliance, such as a firewall or load balancer.
-If you want to create a VM with multiple IP addresses, or a static private IP address, you must create it using PowerShell or the Azure CLI. To learn how, click the PowerShell or CLI options at the top of this article. You can create a VM with a single dynamic private IP address and (optionally) a single public IP address. Use the portal by following the steps in the [Create a Windows VM](../../virtual-machines/windows/quick-create-portal.md) or [Create a Linux VM](../../virtual-machines/linux/quick-create-portal.md) articles. After you create the VM, you can change the IP address type from dynamic to static and add additional IP addresses using the portal by following steps in the [Add IP addresses to a VM](#add) section of this article.
+* Adding any of the private IP addresses for any of the NICs to an Azure Load Balancer back-end pool. In the past, only the primary IP address for the primary NIC could be added to a back-end pool. For more information about load balancing multiple IP configurations, see [Load balancing multiple IP configurations](../../load-balancer/load-balancer-multiple-ip.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-## <a name="add"></a>Add IP addresses to a VM
+Every NIC attached to a VM has one or more IP configurations associated to it. Each configuration is assigned one static or dynamic private IP address. Each configuration may also have one public IP address resource associated to it. To learn more about IP addresses in Azure, read the [IP addresses in Azure](../../virtual-network/ip-services/public-ip-addresses.md) article.
-You can add private and public IP addresses to an Azure network interface by completing the steps that follow. The examples in the following sections assume that you already have a VM with the three IP configurations described in the [scenario](#scenario), but it's not required.
+> [!NOTE]
+> All IP configurations on a single NIC must be associated to the same subnet. If multiple IPs on different subnets are desired, multiple NICs on a VM can be used. To learn more about multiple NICs on a VM in Azure, read the [Create VM with Multiple NICs](../../virtual-machines/windows/multiple-nics.md) article.
+
+There's a limit to how many private IP addresses can be assigned to a NIC. There's also a limit to how many public IP addresses can be used in an Azure subscription. See the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) article for details.
+
+This article explains how to add multiple IP addresses to a virtual machine using the Azure portal.
+
+> [!NOTE]
+> If you want to create a virtual machine with multiple IP addresses, or a static private IP address, you must create it using [PowerShell](virtual-network-multiple-ip-addresses-powershell.md) or the [Azure CLI](virtual-network-multiple-ip-addresses-cli.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure virtual machine. For more information about creating a virtual machine, see [Create a Windows VM](../../virtual-machines/windows/quick-create-portal.md) or [Create a Linux VM](../../virtual-machines/linux/quick-create-portal.md).
+
+ - The example used in this article is named **myVM**. Replace this value with your virtual machine name.
+
+> [!NOTE]
+> Though the steps in this article assign all IP configurations to a single NIC, you can also assign multiple IP configurations to any NIC in a multi-NIC VM. To learn how to create a VM with multiple NICs, see [Create a VM with multiple NICs](../../virtual-machines/windows/multiple-nics.md).
-### <a name="coreadd"></a>Core steps
-1. Browse to the Azure portal at https://portal.azure.com and sign into it, if necessary.
-2. In the portal, click **More services** > type *virtual machines* in the filter box, and then click **Virtual machines**.
-3. In the **Virtual machines** pane, click the VM you want to add IP addresses to. Navigate to **Networking** Tab. Click **Network interface** on the page. As shown in the picture below:
+## Add public and private IP address to a VM
+You can add a private and public IP address to an Azure network interface by completing the following steps.
- ![Add a public IP address to a VM](./media/virtual-network-multiple-ip-addresses-portal/figure200319.png)
-4. In the **Network interface** pane, click the **IP configurations**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-5. In the pane that appears for the NIC you selected, click **IP configurations**. Click **Add**, complete the steps in one of sections that follow, based on the type of IP address you want to add, and then click **OK**.
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-### Add a private IP address
+3. In **Virtual machines**, select **myVM** or the name of your virtual machine.
-Complete the following steps to add a new private IP address:
+4. Select **Networking** in **Settings**.
-1. Complete the steps in the [Core steps](#coreadd) section of this article and ensure you are on the **IP configurations** section of the VM Network Interface. Review the subnet shown as default (such as 10.0.0.0/24).
-2. Click **Add**. In the **Add IP configuration** pane that appears, create an IP configuration named *IPConfig-4* with a new *Static* private IP address by picking a new number for the final octet, then click **OK**. (For the 10.0.0.0/24 subnet, an example IP would be *10.0.0.7*.)
+5. Select the name of the network interface of the virtual machine. In this example, it's named **myvm889_z1**.
- > [!NOTE]
- > When adding a static IP address, you must specify an unused, valid address on the subnet the NIC is connected to. If the address you select is not available, the portal displays an X for the IP address and you must select a different one.
-3. Once you click OK, the pane closes and you see the new IP configuration listed. Click **OK** to close the **Add IP configuration** pane.
-4. You can click **Add** to add additional IP configurations, or close all open blades to finish adding IP addresses.
-5. Add the private IP addresses to the VM operating system by completing the steps in the [Add IP addresses to a VM operating system](#os-config) section of this article.
+6. In the network interface, select **IP configurations** in **Settings**.
-### Add a public IP address
+7. The existing IP configuration is displayed. This configuration is created when the virtual machine is created. To add a private and public IP address to the virtual machine, select **+ Add**.
-A public IP address is added by associating a public IP address resource to either a new IP configuration or an existing IP configuration.
+8. In **Add IP configuration**, enter or select the following information.
+
+| Setting | Value |
+| - | -- |
+| Name | Enter **ipconfig2**. |
+| **Private IP address settings** | |
+| Allocation | Select **Static**. |
+| IP address | Enter an unused address in the network for your virtual machine. </br> For the 10.1.0.0/24 subnet in the example, an IP would be **10.1.0.5**. |
+| **Public IP address** | Select **Associate** |
+| Public IP address | Select **Create new**. </br> Enter **myPublicIP-2** in **Name**. </br> Select **Standard** in **SKU**. </br> Select **OK**. |
+
+9. Select **OK**.
+ > [!NOTE]
-> Public IP addresses have a nominal fee. To learn more about IP address pricing, read the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page. There is a limit to the number of public IP addresses that can be used in a subscription. To learn more about the limits, read the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) article.
->
+> When adding a static IP address, you must specify an unused, valid address on the subnet the NIC is connected to. If the address you select is not available, the portal displays an X for the IP address and you must select a different one.
+
+> [!IMPORTANT]
+> After you change the IP address configuration, you must restart the VM for the changes to take effect in the VM.
-### <a name="create-public-ip"></a>Create a public IP address resource
+## Add private IP address to a virtual machine
-A public IP address is one setting for a public IP address resource. If you have a public IP address resource that is not currently associated to an IP configuration that you want to associate to an IP configuration, skip the following steps and complete the steps in one of the sections that follow, as you require. If you don't have an available public IP address resource, complete the following steps to create one:
+You can add a private IP address to a virtual machine by completing the following steps.
-1. Browse to the Azure portal at https://portal.azure.com and sign into it, if necessary.
-3. In the portal, click **Create a resource** > **Networking** > **Public IP address**.
-4. In the **Create public IP address** pane that appears, enter a **Name**, select an **IP address assignment** type, a **Subscription**, a **Resource group**, and a **Location**, then click **Create**, as shown in the following picture:
+1. Sign in to the [Azure portal](https://portal.azure.com).
- ![Create a public IP address resource](./media/virtual-network-multiple-ip-addresses-portal/figure5.png)
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-5. Complete the steps in one of the sections that follow to associate the public IP address resource to an IP configuration.
+3. In **Virtual machines**, select **myVM** or the name of your virtual machine.
-#### Associate the public IP address resource to a new IP configuration
+4. Select **Networking** in **Settings**.
-1. Complete the steps in the [Core steps](#coreadd) section of this article.
-2. Click **Add**. In the **Add IP configuration** pane that appears, create an IP configuration named *IPConfig-4*. Enable the **Public IP address** and select an existing, available public IP address resource from the **Choose public IP address** pane that appears.
+5. Select the name of the network interface of the virtual machine. In this example, it's named **myvm889_z1**.
- Once you've selected the public IP address resource, click **OK** and the pane closes. If you don't have an existing public IP address, you can create one by completing the steps in the [Create a public IP address resource](#create-public-ip) section of this article.
-3. Review the new IP configuration. Even though a private IP address wasn't explicitly assigned, one was automatically assigned to the IP configuration, because all IP configurations must have a private IP address.
-4. You can click **Add** to add additional IP configurations, or close all open blades to finish adding IP addresses.
-5. Add the private IP address to the VM operating system by completing the steps for your operating system in the [Add IP addresses to a VM operating system](#os-config) section of this article. Do not add the public IP address to the operating system.
+6. In the network interface, select **IP configurations** in **Settings**.
-#### Associate the public IP address resource to an existing IP configuration
+7. The existing IP configuration is displayed. This configuration is created when the virtual machine is created. To add a private IP address to the virtual machine, select **+ Add**.
-1. Complete the steps in the [Core steps](#coreadd) section of this article.
-2. Click the IP configuration you want to add the public IP address resource to.
-3. In the IPConfig pane that appears, click **IP address**.
-4. In the **Choose public IP address** pane that appears, select a public IP address.
-5. Click **Save** and the panes close. If you don't have an existing public IP address, you can create one by completing the steps in the [Create a public IP address resource](#create-public-ip) section of this article.
-3. Review the new IP configuration.
-4. You can click **Add** to add additional IP configurations, or close all open blades to finish adding IP addresses. Do not add the public IP address to the operating system.
+8. In **Add IP configuration**, enter or select the following information.
+
+| Setting | Value |
+| - | -- |
+| Name | Enter **ipconfig3**. |
+| **Private IP address settings** | |
+| Allocation | Select **Static**. |
+| IP address | Enter an unused address in the network for your virtual machine. </br> For the 10.1.0.0/24 subnet in the example, an IP would be **10.1.0.6**. |
+
+9. Select **OK**.
+ > [!NOTE]
-> After you change the IP address configuration, you must restart the VM for the changes to take effect in the VM.
+> When adding a static IP address, you must specify an unused, valid address on the subnet the NIC is connected to. If the address you select is not available, the portal displays an X for the IP address and you must select a different one.
+> [!IMPORTANT]
+> After you change the IP address configuration, you must restart the VM for the changes to take effect in the VM.
[!INCLUDE [virtual-network-multiple-ip-addresses-os-config.md](../../../includes/virtual-network-multiple-ip-addresses-os-config.md)]