I have been using AWS CloudFormation as my main IaC tool for managing AWS resources for the better half of a decade. CloudFormation is an excellent tool for managing resources. However, it is rather low-level, so most of the time I have ended up using a wrapper tool to generate and deploy CloudFormation templates. While good, these tools have usually lacked some functionality. With this in mind, my colleague Henri has created Takomo (English: forge). Takomo is inspired by Cloudreach’s excellent Sceptre, a CloudFormation wrapper tool built with Python, and by HashiCorp’s Terraform.
Built with large-scale deployments in mind, Takomo brings some cool extra features on top of your standard CloudFormation tooling.
A full list of features can be found at [1].
Installing Takomo is easy, thanks to the NPM package. Following the quick-start guide [2], I am up to speed in no time. A stack deployment is launched with a simple tkm stacks deploy command. Below is an example of the command's output.
ws-lsten-mbp:takomo lauri$ tkm stacks deploy
2020-05-18 09:11:48 +0300 [info ] - Build configuration
2020-05-18 09:11:48 +0300 [info ] - Prepare context
2020-05-18 09:11:48 +0300 [info ] - Load existing stacks

Review stacks deployment plan:
------------------------------
A stacks deployment plan has been created and is shown below.
Stacks will be deployed in the order they are listed, and in parallel when possible.

Stack operations are indicated with the following symbols:

  + create    Stack does not exist and will be created
  ~ update    Stack exists and will be updated
  ± recreate  Previous attempt to create stack has failed, it will be first removed, then created

Following 1 stack(s) will be deployed:

  + /vpc.yml/eu-west-1:
      name:         vpc
      status:       PENDING
      account id:   123456789012
      region:       eu-west-1
      credentials:
        user id:    AROA12982FVG5L5TQV2:someone@somewhere.com
        account id: 123456789012
        arn:        arn:aws:sts::123456789012:assumed-role/administrator/someone@somewhere.com
      dependencies: []

? Continue to deploy the stacks? Yes

2020-05-18 09:12:09 +0300 [info ] - /vpc.yml/eu-west-1 - Create stack

Changes to stack: /vpc.yml/eu-west-1

  + VPC
      type:        AWS::EC2::VPC
      physical id: undefined
      replacement: undefined
      scope:
      details:

  Add: 1, Modify: 0, Remove: 0

? Deploy stack? yes

2020-05-18 09:12:41 +0300 [info ] - /vpc.yml/eu-west-1 - vpc vpc AWS::CloudFormation::Stack CREATE_IN_PROGRESS User Initiated
2020-05-18 09:12:42 +0300 [info ] - /vpc.yml/eu-west-1 - vpc VPC AWS::EC2::VPC CREATE_IN_PROGRESS
2020-05-18 09:12:44 +0300 [info ] - /vpc.yml/eu-west-1 - vpc VPC AWS::EC2::VPC CREATE_IN_PROGRESS Resource creation Initiated
2020-05-18 09:12:59 +0300 [info ] - /vpc.yml/eu-west-1 - vpc VPC AWS::EC2::VPC CREATE_COMPLETE
2020-05-18 09:13:00 +0300 [info ] - /vpc.yml/eu-west-1 - vpc vpc AWS::CloudFormation::Stack CREATE_COMPLETE
2020-05-18 09:13:00 +0300 [info ] - /vpc.yml/eu-west-1 - Stack deploy completed with status: CREATE_COMPLETE

Stack path          Stack name  Status   Reason          Time   Message
------------------  ----------  -------  --------------  -----  -------
/vpc.yml/eu-west-1  vpc         SUCCESS  CREATE_SUCCESS  51.8s  Success

Completed in 1m 12.9s with status: SUCCESS
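For reference, the stack deployed above could come from a minimal stack configuration and CloudFormation template along these lines. The file contents here are a hypothetical sketch loosely following the quick-start guide, not the exact files I used; I am also assuming an individual stack config can set regions the same way the stack group config does later in this post.

# Hypothetical minimal stack configuration
template: vpc.yml
regions:
  - eu-west-1

Example (hypothetical): stacks/vpc.yml

# Hypothetical minimal template with a single VPC resource
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16   # illustrative CIDR

Example (hypothetical): templates/vpc.yml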
When I deploy my basic VPC stack for the first time, I see something I haven’t seen with other tools before: I am prompted twice whether I want to proceed. The first prompt confirms that the role and account I am using are correct. The second prompt comes from the CloudFormation Change Set. Once approved, I am able to deploy the stack. After a while I have my freshly created VPC up and running.
I personally use Change Sets quite frequently. Whenever I am changing production resources I want to verify that I am doing the right thing. Having Change Sets built into the tool is rather useful, at least for me.
Modifying my existing stack is easy. After making changes to the configuration and the template, I simply relaunch the tool. The Change Set prompts me with the changes and I can proceed with the deployment. Cleaning up stacks is just as easy; this time I use tkm stacks undeploy to remove my environment.
After my initial test I want to proceed to deploying something a bit more useful. I am currently running a few Lightsail instances hosting my personal website. I want to migrate the website into a VPC. My aim is to containerize the CMS and refactor some of the resources into Lambda functions. The first step in the process is to migrate the existing environment into a VPC. To accomplish this I need to create the environment for EC2 instances, including a VPC, subnets and other mandatory resources.
In the quick start I simply created a stack configuration file and a stack template, and launched the CloudFormation stack. This time I want to have multiple environments: I need at least separate development and production environments. In each environment I will have stacks related to infrastructure and applications. To achieve this, I can use Stack Groups to organize infrastructure into logical groups.
The idea of stack groups is rather simple. I can create subdirectories beneath the main stack configuration directory, and each subdirectory is treated as a separate stack group. For example, I can create a stack group for the development environment by creating a subdirectory named dev. Inside the dev directory I can create separate subdirectories, e.g. for infrastructure and application stacks. Once done, I can also provide a base configuration for the whole stack group. For example, I can provide a list of tags or define a project name (which will be used as a prefix for all the stacks). These properties are used by all the stacks inside the stack group. The configuration cascades: if needed, you can override parts or all of it in the nested directories.
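To make the layout concrete, here is roughly what my stack configuration directory ends up looking like. The app directory is only an illustration; the examples later in this post live under stacks/dev/infra/.

stacks/
  config.yml        base configuration shared by all stacks
  dev/              stack group for the development environment
    config.yml      dev-specific overrides
    infra/          infrastructure stacks (vpc.yml, subnets.yml)
    app/            application stacks (illustrative)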
As your environment grows, you want to centralize your configuration into as few files as possible. While parameters for some resources can be provided in the stack configuration files, this can sometimes be rather cumbersome.
Takomo has extensive support for passing variables [3]. For example, I can pass variables via the command line, environment variables or config.yml files.
In my example, I want to define a name for my environment. I create a new entry in my dev/config.yml and call it “Environment”. After this I can refer to the variable in my stack configuration files.
project: web
regions:
  - eu-north-1
Example: stacks/config.yml
data:
  Environment: dev
Example: stacks/dev/config.yml
template: infra/vpc.yml
parameters:
  CidrBlock: 192.168.1.0/18
  Project: {{ stackGroup.project }}
  Environment: {{ stackGroup.data.Environment }}
Example: stacks/dev/infra/vpc.yml
Now I can pass the parameter into CloudFormation and use it, e.g. when naming resources. This makes creating new environments easy: I can just duplicate my dev folder (let’s call it prod) and modify the Environment value in my prod/config.yml. Now I am ready to launch a separate production environment.
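As a sketch of what the template referenced by the stack configuration above could look like, here is a minimal CloudFormation template that accepts the parameters and uses them in a Name tag. The property values and the templates directory path are assumptions on my part; only the parameter names come from the stack configuration.

AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  CidrBlock:
    Type: String
  Project:
    Type: String
  Environment:
    Type: String
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref CidrBlock
      Tags:
        - Key: Name
          # Name resources per environment, e.g. web-dev-vpc
          Value: !Sub "${Project}-${Environment}-vpc"

Example (hypothetical sketch): templates/infra/vpc.yml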
One of the most important features of CloudFormation is the ability to refer to the outputs of other stacks. I want to split my infrastructure into small, logical stacks instead of monolithic ones. This way I can make small, controlled changes more easily and efficiently.
One of many features of Takomo is the ability to use your own resolvers. The idea of resolvers is simple: they allow you to fetch stack input parameters from various sources. For example, you might want to create an API key for your proprietary monitoring system before launching an EC2 instance and pass the key to the instance upon launch. All you need to do is create a custom resolver that connects to your monitoring system, creates the API key and passes it to CloudFormation as a parameter.
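As far as I understand, a custom resolver is then referenced by name in the stack configuration, in the same way as the built-in resolvers shown further below. The resolver name and parameter name here are purely hypothetical:

parameters:
  MonitoringApiKey:
    resolver: monitoring-api-key   # hypothetical custom resolver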
On top of custom resolvers, Takomo ships with a few built-in resolvers, including the stack output resolver.
Using the resolver, I don’t need to manually copy and paste properties of resources that might change. Let me give you an example. Above, I created a VPC stack. The stack consists of a VPC and nothing else, and it outputs the ID of the VPC. I create the subnets in a separate subnet stack, and I need to provide the VPC ID as a parameter to the subnet stack. I could provide this information manually: all I need to do is copy the output value and paste it into the configuration file. However, if I recreate the VPC, the new VPC will have a new ID that I need to copy and paste again. The stack output resolver automates this process.
template: infra/subnets.yml
parameters:
  VpcId:
    resolver: stack-output
    stack: /{{ stackGroup.data.Environment }}/infra/vpc.yml
    output: VpcId
  IgwId:
    resolver: stack-output
    stack: /{{ stackGroup.data.Environment }}/infra/vpc.yml
    output: IgwId
  PublicSubnet1Cidr: 192.168.1.0/24
  PublicSubnet2Cidr: 192.168.2.0/24
  PrivateSubnet1Cidr: 192.168.3.0/24
  PrivateSubnet2Cidr: 192.168.4.0/24
  Project: {{ stackGroup.project }}
  Environment: {{ stackGroup.data.Environment }}
Example: stacks/dev/infra/subnets.yml
With the example above I can launch my subnet stack and create the required resources without manual steps.
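The subnet template itself could consume these parameters roughly as follows. This is a trimmed, hypothetical excerpt showing only one of the four subnets:

Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  PublicSubnet1Cidr:
    Type: String
Resources:
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VpcId               # resolved from the VPC stack output
      CidrBlock: !Ref PublicSubnet1Cidr

Example (hypothetical sketch): templates/infra/subnets.yml (excerpt)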
Now I can re-launch my VPC stack and add the outputs. Using these new outputs and the parameters I added to my main config.yml, I can also launch my subnet stack and have a base environment ready to be used.
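The outputs the subnet stack resolves (VpcId and IgwId) would be declared in the VPC template along these lines. The Igw logical ID is my own assumption; it presumes the internet gateway is created in the same VPC stack:

Outputs:
  VpcId:
    Value: !Ref VPC   # the VPC resource defined earlier in the template
  IgwId:
    Value: !Ref Igw   # assumes an AWS::EC2::InternetGateway resource named Igw

Example (hypothetical sketch): outputs section of templates/infra/vpc.yml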
Takomo is something we have been using internally at Webscale for a while now. With the release of Takomo 1.0.0 we feel like the tool is now mature enough to be launched externally.
This post is just a brief introduction to the core features of the tool. I will write another post where we will take a look at more advanced features of the tool.
If you have any questions or requests regarding the tool, feel free to comment or participate in the development on GitHub [4].