Using modules in Terraform
05 Jan 2016

Lately, I’ve been using Terraform quite a bit, both on an open source project I am working on (shhh… soon) and for Gogobot.
Since Gogobot’s infrastructure is quite complex, it’s not easy to describe it in a single, self-contained Terraform file that won’t get out of hand.
I like infrastructure to be context-bound, for example: web context, search context, monitoring context, etc.
Describing each of these “contexts” in the infrastructure is easy enough, but they all share a base context.
For example, the monitoring context has its own instances, security groups, and connections, but it needs to share the cluster security group and the external_connections security group.
external_connections is a security group used to reach services from the outside; any server that is allowed to accept outside connections needs to carry this security group.
There are, of course, other shared pieces as well, such as the vpc_id, that all servers need.
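To make this concrete, here is a minimal sketch of what such a shared security group could look like when it lives in the base context and is exposed as an output. This is illustrative only; the name, port, and rule below are my own placeholders, not Gogobot’s actual configuration.

# Illustrative only: a shared security group defined once in the base context
resource "aws_security_group" "external_connections" {
  name        = "external_connections"
  description = "Allows selected traffic from the outside world"
  vpc_id      = "${aws_vpc.default.id}"

  # Example rule: HTTPS from anywhere (placeholder)
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Expose the id so other contexts can attach the group to their instances
output "external_connections_id" {
  value = "${aws_security_group.external_connections.id}"
}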
To explain this best, let’s look at the Terraform example here.
# Specify the provider and access details
provider "aws" {
  region = "${var.aws_region}"
}

# Create a VPC to launch our instances into
resource "aws_vpc" "default" {
  cidr_block = "10.0.0.0/16"
}

# Create an internet gateway to give our subnet access to the outside world
resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
}

# Grant the VPC internet access on its main route table
resource "aws_route" "internet_access" {
  route_table_id         = "${aws_vpc.default.main_route_table_id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.default.id}"
}

# Create a subnet to launch our instances into
resource "aws_subnet" "default" {
  vpc_id                  = "${aws_vpc.default.id}"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# A security group for the ELB so it is accessible via the web
resource "aws_security_group" "elb" {
  name        = "terraform_example_elb"
  description = "Used in the terraform"
  vpc_id      = "${aws_vpc.default.id}"

  # HTTP access from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Our default security group to access
# the instances over SSH and HTTP
resource "aws_security_group" "default" {
  name        = "terraform_example"
  description = "Used in the terraform"
  vpc_id      = "${aws_vpc.default.id}"

  # SSH access from anywhere
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from the VPC
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_elb" "web" {
  name = "terraform-example-elb"

  subnets         = ["${aws_subnet.default.id}"]
  security_groups = ["${aws_security_group.elb.id}"]
  instances       = ["${aws_instance.web.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

resource "aws_key_pair" "auth" {
  key_name   = "${var.key_name}"
  public_key = "${file(var.public_key_path)}"
}

resource "aws_instance" "web" {
  # The connection block tells our provisioner how to
  # communicate with the resource (instance)
  connection {
    # The default username for our AMI
    user = "ubuntu"
    # The connection will use the local SSH agent for authentication.
  }

  instance_type = "m1.small"

  # Lookup the correct AMI based on the region
  # we specified
  ami = "${lookup(var.aws_amis, var.aws_region)}"

  # The name of our SSH keypair we created above.
  key_name = "${aws_key_pair.auth.id}"

  # Our Security group to allow HTTP and SSH access
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  # We're going to launch into the same subnet as our ELB. In a production
  # environment it's more common to have a separate private subnet for
  # backend instances.
  subnet_id = "${aws_subnet.default.id}"

  # We run a remote provisioner on the instance after creating it.
  # In this case, we just install nginx and start it. By default,
  # this should be on port 80
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
      "sudo apt-get -y install nginx",
      "sudo service nginx start",
    ]
  }
}
This code will look familiar if you’ve ever read a blog post about Terraform or gone through any of the official examples. But if you try to use it as the basis for describing your own infrastructure, you’ll soon find it doesn’t scale when you want to work on just one part of the infrastructure without running a full-blown plan or apply.
What if you now have a SOLR context and you want to launch or change it? The promise of Terraform is that you can manage everything together, but I found it much more convenient to modularize it.
So, let’s get started.
This is what the project directory will look like now (all files are still placeholders at this point).
├── base
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── web
    ├── main.tf
    ├── outputs.tf
    └── variables.tf
We divided the project into two modules: base will be the basis for our infrastructure and will include the VPC, security groups, etc., while web will hold the instance and the load balancer.
The basics of modules
Before diving too deep into the code, let’s go over the basics of Terraform modules first.
- Modules are logical parts of your infrastructure
- Modules can have outputs
- It is better to “inject” variables into a module when you define it; all variables should come from the root.
- You can only use a module’s outputs as inputs for other parts of your infrastructure. For example, if a module contains an aws_vpc resource, you can’t reference that resource directly from outside the module; you output its vpc_id and use that output in another module or resource.
That last point isn’t obvious at first, but the short sketch below (and the real code later on) should make it clearer.
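Here is a minimal, self-contained sketch of that rule; the module and resource names are illustrative and not part of the project we are about to build.

# Inside the module, e.g. network/outputs.tf -- expose only what consumers need
output "vpc_id" {
  value = "${aws_vpc.main.id}"
}

# In the configuration that consumes the module
module "network" {
  source = "./network"
}

resource "aws_subnet" "extra" {
  # Works: referencing the module's declared output
  vpc_id     = "${module.network.vpc_id}"
  cidr_block = "10.0.2.0/24"

  # Does NOT work: reaching into the module's resources directly,
  # e.g. "${module.network.aws_vpc.main.id}", is not possible.
}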
Diving into the code
base/main.tf
# Specify the provider and access details
provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_key_pair" "auth" {
  key_name   = "${var.key_name}"
  public_key = "${file(var.public_key_path)}"
}

# Create a VPC to launch our instances into
resource "aws_vpc" "default" {
  cidr_block = "10.0.0.0/16"
}

# Create an internet gateway to give our subnet access to the outside world
resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
}

# Grant the VPC internet access on its main route table
resource "aws_route" "internet_access" {
  route_table_id         = "${aws_vpc.default.main_route_table_id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.default.id}"
}

# Create a subnet to launch our instances into
resource "aws_subnet" "default" {
  vpc_id                  = "${aws_vpc.default.id}"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# Our default security group to access
# the instances over SSH and HTTP
resource "aws_security_group" "default" {
  name        = "terraform_example"
  description = "Used in the terraform"
  vpc_id      = "${aws_vpc.default.id}"

  # SSH access from anywhere
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from the VPC
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
base/outputs.tf
output "default_vpc_id" {
value = "${aws_vpc.default.id}"
}
output "default_security_group_id" {
value = "${aws_security_group.default.id}"
}
output "default_subnet_id" {
value = "${aws_subnet.default.id}"
}
output "aws_key_pair_id" {
source "${aws_key_pair.id.auth.id}"
}
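The variables.tf files are not shown here, but base/main.tf references var.key_name, var.public_key_path, and var.aws_region, so base/variables.tf needs to declare roughly the following (the descriptions and the default region are my own; adjust to taste):
base/variables.tf
variable "public_key_path" {
  description = "Path to the SSH public key to use for the key pair"
}

variable "key_name" {
  description = "Name of the AWS key pair"
}

variable "aws_region" {
  description = "AWS region to launch resources in"
  default     = "us-east-1"
}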
web/main.tf
module "base" {
source = "../base"
public_key_path = "${var.public_key_path}"
key_name = "${var.key_name}"
aws_region = "${var.aws_region}"
}
# A security group for the ELB so it is accessible via the web
resource "aws_security_group" "elb" {
name = "terraform_example_elb"
description = "Used in the terraform"
vpc_id = "${module.base.default_vpc_id}"
# HTTP access from anywhere
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# outbound internet access
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elb" "web" {
name = "terraform-example-elb"
subnets = ["${module.base.default_subnet_id}"]
security_groups = ["${module.base.default_security_group_id}"]
instances = ["${aws_instance.web.id}"]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
resource "aws_instance" "web" {
# The connection block tells our provisioner how to
# communicate with the resource (instance)
connection {
# The default username for our AMI
user = "ubuntu"
# The connection will use the local SSH agent for authentication.
}
instance_type = "m1.small"
# Lookup the correct AMI based on the region
# we specified
ami = "${lookup(var.aws_amis, var.aws_region)}"
# The name of our SSH keypair we created above.
key_name = "${module.base.aws_key_pair_id}"
# Our Security group to allow HTTP and SSH access
vpc_security_group_ids = ["${module.base.default_security_group_id}"]
# We're going to launch into the same subnet as our ELB. In a production
# environment it's more common to have a separate private subnet for
# backend instances.
subnet_id = "${module.base.default_subnet_id}"
# We run a remote provisioner on the instance after creating it.
# In this case, we just install nginx and start it. By default,
# this should be on port 80
provisioner "remote-exec" {
inline = [
"sudo apt-get -y update",
"sudo apt-get -y install nginx",
"sudo service nginx start"
]
}
}
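web/main.tf needs matching variable declarations too, since it passes them through to the base module and uses var.aws_amis for the AMI lookup. Something along these lines would work; the AMI ids below are placeholders, not real images:
web/variables.tf
variable "public_key_path" {}
variable "key_name" {}

variable "aws_region" {
  default = "us-east-1"
}

# Map of region -> AMI id used by the lookup() in aws_instance.web.
# Replace the placeholder ids with real Ubuntu AMIs for your regions.
variable "aws_amis" {
  default = {
    "us-east-1" = "ami-xxxxxxxx"
    "us-west-2" = "ami-yyyyyyyy"
  }
}
One practical note: with this layout you run Terraform from inside each context directory, and (at the time of writing) you need to run terraform get in web/ first so Terraform fetches the base module before plan or apply.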
As you can see, the code in the web context is much cleaner and really only includes the things you need for the web; the other parts of your infrastructure are created in the base context.
Now, you can “include” that base context in other infrastructure contexts and expand on it.
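For example, a hypothetical monitoring context would start exactly the same way and then only add its own pieces on top; everything below is illustrative, not part of the repository:
monitoring/main.tf
module "base" {
  source          = "../base"
  public_key_path = "${var.public_key_path}"
  key_name        = "${var.key_name}"
  aws_region      = "${var.aws_region}"
}

# The monitoring context adds only its own resources on top of the shared base
resource "aws_security_group" "monitoring" {
  name        = "monitoring"
  description = "Security group for the monitoring context"
  vpc_id      = "${module.base.default_vpc_id}"

  # Example rule: a dashboard port reachable from inside the VPC (placeholder)
  ingress {
    from_port   = 3000
    to_port     = 3000
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }
}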
I created a repository with the code Here