This example demonstrates how HashiCorp tools run on AWS, including:
- Boundary
- HashiCorp Cloud Platform Vault
- HashiCorp Cloud Platform Consul
- Terraform Cloud
It uses the following AWS services:
- Amazon ECS
- AWS KMS
To run this example, you need to set up a series of Terraform Cloud workspaces. Set them up as follows, with the appropriate working directory, variables, and remote state sharing.
| Workspace Name | Working Directory for VCS | Variables | Remote State Sharing |
|---|---|---|---|
| hcp | hcp/ | name, trusted_role_arn, bootstrap AWS access keys, HCP credentials | infrastructure, consul, boundary, vault-aws |
| vault-aws | vault/aws/ | name, AWS access keys (for AWS secrets engine) | |
| infrastructure | infrastructure/ | name, client_cidr_block, HCP service principal credentials, database_password, boundary_database_password, key_pair_name. [FROM VAULT] AWS access keys | boundary, apps, vault-products |
| vault-products | vault/products/ | name, HCP service principal credentials | boundary |
| boundary | boundary/ | name. [FROM VAULT] db_password, db_username, AWS access keys | |
| apps | apps/ | name, client_cidr_block. [FROM VAULT] db_password, db_username, AWS access keys | |
You need to run `terraform plan` and `terraform apply` for each workspace in the order indicated.
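Remote state sharing lets a downstream workspace read outputs published by an upstream one. As a rough sketch (the organization name and output name here are assumptions, not taken from this repo), the `infrastructure` workspace could consume the `hcp` workspace's outputs like this:

```hcl
# Hypothetical sketch -- organization and output names are assumptions.
data "terraform_remote_state" "hcp" {
  backend = "remote"

  config = {
    organization = "example-org"
    workspaces = {
      name = "hcp"
    }
  }
}

# Shared outputs then become available as attributes, for example:
# data.terraform_remote_state.hcp.outputs.hvn_id
```

The upstream workspace must explicitly allow the consumer under its remote state sharing settings, which is what the last column of the table above records.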
Imagine you want to issue AWS access keys for each group that runs Terraform. You can use Vault's AWS secrets engine to generate access keys for each group.

For example, you set up an initial AWS access key and secret key that Vault uses to issue new credentials. The issued credentials assume a role with sufficient permissions for Terraform to configure infrastructure on AWS.
1. Run `terraform apply` for the `hcp` workspace. It creates:

   - HCP network
   - HCP Vault cluster
   - HCP Consul cluster
   - AWS IAM role for Terraform

2. Set the Vault address, token, and namespace so you can get a new set of AWS access keys from Vault in your CLI:

   ```shell
   source set.sh
   ```
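The script itself is the source of truth; as a hedged sketch, `set.sh` likely exports Vault connection details along these lines (the exact variable values are assumptions):

```shell
# Illustrative only -- the real set.sh in this repository is authoritative.
export VAULT_ADDR="https://example-cluster.vault.hashicorp.cloud:8200"
export VAULT_NAMESPACE="admin"  # HCP Vault scopes admin access to this namespace
export VAULT_TOKEN="example-admin-token"
```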
3. Next, generate a set of AWS access keys for the Vault secrets engine. These should be different from the ones you used to bootstrap HCP and the AWS IAM role!

4. Add the new AWS access keys to the `vault-aws` workspace.

5. Run `terraform apply` for the `vault-aws` workspace. It creates:

   - Path for the AWS secrets engine in Vault at `terraform/aws`
   - Role for your team (e.g., `hashicups`)
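A minimal sketch of what that Vault configuration could look like with the Terraform Vault provider (the variable names and the `assumed_role` credential type are assumptions; check the `vault/aws/` directory for the actual configuration):

```hcl
# Hedged sketch -- arguments are illustrative, not copied from this repo.
resource "vault_aws_secret_backend" "terraform" {
  path       = "terraform/aws"
  access_key = var.aws_access_key # bootstrap key Vault uses to issue credentials
  secret_key = var.aws_secret_key
}

resource "vault_aws_secret_backend_role" "hashicups" {
  backend         = vault_aws_secret_backend.terraform.path
  name            = "hashicups"
  credential_type = "assumed_role"
  role_arns       = [var.trusted_role_arn]
}
```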
6. Run `make vault-aws`. This retrieves a new set of AWS access keys from Vault via the secrets engine and saves them to the `secrets/` directory locally.

   ```shell
   make vault-aws
   ```
7. Use the AWS access key and secret key from `secrets/aws.json` and add them to the `infrastructure`, `boundary`, and `apps` workspaces.
8. Run `terraform apply` for the `infrastructure` workspace. It creates:

   - AWS VPC, peered to the HCP network
   - HashiCups database (PostgreSQL)
   - Boundary cluster (1 worker, 1 controller, database)
   - Amazon ECS cluster (1 EC2 container instance)
We need to generate a few things for the products API (and Boundary):

- Database secrets engine for HashiCups data (used by `product-api` and Boundary)
- AWS IAM auth method for `vault-agent` in the HashiCups `product-api`

To configure this, you need to add HCP service principal credentials with the Vault address, token, and namespace to the `vault-products` workspace.
You have two identities that need to access the application's database:

- Application (`product-api`), to read from the database
- Human user (`ops` or `dev` team), to update the database using Boundary
Configure the following.
1. Run `terraform apply` for the `vault-products` workspace. It creates:

   - Path for database credentials in Vault at `hashicups/database`
   - Role for the application that will access it (e.g., `product`)
   - Role for the Boundary user to access it (e.g., `boundary`)
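As a hedged sketch with the Terraform Vault provider, the database secrets engine could be wired up roughly like this (the connection URL, variable names, and SQL statements are assumptions; see `vault/products/` for the real configuration):

```hcl
# Illustrative sketch of a PostgreSQL database secrets engine.
resource "vault_mount" "database" {
  path = "hashicups/database"
  type = "database"
}

resource "vault_database_secret_backend_connection" "hashicups" {
  backend       = vault_mount.database.path
  name          = "hashicups"
  allowed_roles = ["product", "boundary"]

  postgresql {
    connection_url = "postgresql://{{username}}:{{password}}@${var.db_host}:5432/products"
    username       = var.db_username
    password       = var.db_password
  }
}

resource "vault_database_secret_backend_role" "product" {
  backend = vault_mount.database.path
  name    = "product"
  db_name = vault_database_secret_backend_connection.hashicups.name
  creation_statements = [
    "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";",
  ]
}
```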
Boundary needs a set of organizations and projects. You have two projects:

- `core_infra`: ECS container instance. Allow the `ops` team to SSH into it.
- `product_infra`: Application database. Allow the `ops` or `dev` team to configure it.
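The scope layout above can be sketched with the Terraform Boundary provider (names and descriptions are assumptions):

```hcl
# Rough sketch of the Boundary scope hierarchy: one org, two projects.
resource "boundary_scope" "org" {
  scope_id    = "global"
  name        = "hashicups"
  description = "HashiCups organization"
}

resource "boundary_scope" "core_infra" {
  scope_id    = boundary_scope.org.id
  name        = "core_infra"
  description = "ECS container instances"
}

resource "boundary_scope" "product_infra" {
  scope_id    = boundary_scope.org.id
  name        = "product_infra"
  description = "Application database"
}
```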
Configure the following.
1. Run `terraform apply` for the `boundary` workspace. It creates:

   - Two projects, one for `core_infra` and the other for `product_infra`
   - Three users: `jeff` for the `ops` team, `rosemary` for the `dev` team, and `taylor` for the `security` team
   - Two targets:
     - ECS container instance (not yet added)
     - Application database, brokered by Vault credentials
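The Vault-brokered database target could look roughly like this with the Terraform Boundary provider; every name, variable, and path below is an illustrative assumption:

```hcl
# Hedged sketch of a database target whose credentials come from Vault.
resource "boundary_credential_store_vault" "hashicups" {
  name     = "hashicups-vault"
  scope_id = var.product_infra_scope_id # hypothetical: ID of the product_infra project
  address  = var.vault_address
  token    = var.vault_token
}

resource "boundary_credential_library_vault" "database" {
  name                = "database"
  credential_store_id = boundary_credential_store_vault.hashicups.id
  path                = "hashicups/database/creds/boundary"
}

resource "boundary_target" "database" {
  name         = "hashicups-database"
  type         = "tcp"
  scope_id     = var.product_infra_scope_id
  default_port = 5432

  # Attribute name varies by provider version; recent releases use
  # brokered_credential_source_ids (older ones, application_credential_source_ids).
  brokered_credential_source_ids = [boundary_credential_library_vault.database.id]
}
```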
2. Run `source set.sh` to set your Boundary address.
3. Run `make boundary-host-catalog` to configure the host catalog for the ECS container instances. This uses dynamic host catalog plugins in Boundary to auto-discover AWS EC2 instances with the cluster tag.
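A dynamic host catalog can be sketched like this with the Boundary provider's AWS plugin; the region, tag filter, and variable names are assumptions, not this repo's values:

```hcl
# Illustrative AWS dynamic host catalog that discovers tagged EC2 instances.
resource "boundary_host_catalog_plugin" "ecs" {
  name        = "ecs-instances"
  scope_id    = var.core_infra_scope_id # hypothetical: ID of the core_infra project
  plugin_name = "aws"

  attributes_json = jsonencode({ region = "us-east-1" })
  secrets_json = jsonencode({
    access_key_id     = var.aws_access_key
    secret_access_key = var.aws_secret_key
  })
}

resource "boundary_host_set_plugin" "ecs" {
  name            = "ecs-cluster-instances"
  host_catalog_id = boundary_host_catalog_plugin.ecs.id

  # Discover EC2 instances carrying the ECS cluster tag (filter is an example).
  attributes_json = jsonencode({
    filters = ["tag:cluster=hashicups"]
  })
}
```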
4. SSH into the ECS container instance as the `ops` team by running `make ssh-ecs`.

5. Run `make configure-db` to log into Boundary as the `dev` team and configure the database, all without knowing the username or password! Boundary uses Vault as a credential store to retrieve a new set of database credentials.
You may need to control network policy between services on ECS and other services registered to Consul. You can use intentions to secure service-to-service communication.
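In Consul, an intention is an explicit allow or deny rule between a source service and a destination service. A hedged sketch using the Terraform Consul provider (the service names match the ECS services in this example; whether this repo manages intentions exactly this way is an assumption):

```hcl
# Illustrative intentions; adjust source/destination pairs to your services.
resource "consul_intention" "frontend_to_public_api" {
  source_name      = "frontend"
  destination_name = "public-api"
  action           = "allow"
}

resource "consul_intention" "public_api_to_product_api" {
  source_name      = "public-api"
  destination_name = "product-api"
  action           = "allow"
}
```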
1. Run `terraform apply` for the `apps` workspace. It creates three ECS services:

   - `frontend` (Fargate launch type)
   - `public-api` (Fargate launch type)
   - `product-api` (EC2 launch type)
2. Run `terraform apply` for the `vault-products` workspace. It adds:

   - AWS IAM auth method for the ECS task to authenticate to Vault
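With the Terraform Vault provider, the IAM auth method could be sketched as follows; the role name, policy name, and ARN variable are assumptions:

```hcl
# Hedged sketch of an AWS IAM auth method for the product-api ECS task.
resource "vault_auth_backend" "aws" {
  type = "aws"
}

resource "vault_aws_auth_backend_role" "product_api" {
  backend                  = vault_auth_backend.aws.path
  role                     = "product-api"
  auth_type                = "iam"
  bound_iam_principal_arns = [var.ecs_task_role_arn] # hypothetical task role ARN
  token_policies           = ["hashicups-database"]  # policy granting DB creds access
}
```

With this in place, `vault-agent` running alongside `product-api` can log in using the task's IAM identity instead of a static token.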
3. Run `make products` to mark the `product-api` for recreation.

4. Run `terraform apply` for the `apps` workspace. It should redeploy the `product-api`.

5. Try to access the frontend via the ALB. You might get an error! You need to enable traffic between the services registered to Consul.
6. Try to access the frontend via the ALB again. You'll get a `Packer Spiced Latte`!