Infrastructure as Code: From the Iron Age to the Cloud Age
Have you ever had an automated configuration tool push broken /etc/hosts files to every server in your non-production estate, making it impossible to ssh into any of them, or to run the tool again to fix the error? I have.

Consider a Terraform definition like the following, which describes a staging environment and a production environment:
# STAGING ENVIRONMENT

resource "aws_vpc" "staging_vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "staging_subnet" {
  vpc_id     = "${aws_vpc.staging_vpc.id}"
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "staging_access" {
  name   = "staging_access"
  vpc_id = "${aws_vpc.staging_vpc.id}"
}

resource "aws_instance" "staging_server" {
  instance_type          = "t2.micro"
  ami                    = "ami-ac772edf"
  vpc_security_group_ids = ["${aws_security_group.staging_access.id}"]
  subnet_id              = "${aws_subnet.staging_subnet.id}"
}

# PRODUCTION ENVIRONMENT

resource "aws_vpc" "production_vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "production_subnet" {
  vpc_id     = "${aws_vpc.production_vpc.id}"
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "production_access" {
  name   = "production_access"
  vpc_id = "${aws_vpc.production_vpc.id}"
}

resource "aws_instance" "production_server" {
  instance_type          = "t2.micro"
  ami                    = "ami-ac772edf"
  vpc_security_group_ids = ["${aws_security_group.production_access.id}"]
  subnet_id              = "${aws_subnet.production_subnet.id}"
}
In this layout, each environment's definition lives in its own file within the project:

./our-project/staging/main.tf
./our-project/production/main.tf
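Each change now has to be made in both copies, and the two files can drift apart. As a rough sketch of how the duplication could be factored out (this refactoring is not part of the original example; the ./our-project/modules/base path, the base module name and the environment variable are hypothetical), the shared resources could move into a module that each environment's main.tf calls with its own settings:

# ./our-project/modules/base/main.tf -- hypothetical shared module
variable "environment" {}

resource "aws_vpc" "vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "subnet" {
  vpc_id     = "${aws_vpc.vpc.id}"
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "access" {
  # Security group named after the environment, e.g. "staging_access"
  name   = "${var.environment}_access"
  vpc_id = "${aws_vpc.vpc.id}"
}

resource "aws_instance" "server" {
  instance_type          = "t2.micro"
  ami                    = "ami-ac772edf"
  vpc_security_group_ids = ["${aws_security_group.access.id}"]
  subnet_id              = "${aws_subnet.subnet.id}"
}

# ./our-project/staging/main.tf -- calls the shared module
module "base" {
  source      = "../modules/base"
  environment = "staging"
}

# ./our-project/production/main.tf -- calls the shared module
module "base" {
  source      = "../modules/base"
  environment = "production"
}

Terraform is still run separately against each environment's directory, so this does not change how the project is versioned and promoted between environments.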
The project can then be published to a repository as a versioned artifact, and each environment synced from that version. For example, to publish version 1.0.123 and promote it to staging:

aws s3 sync ./our-project/ s3://our-project-repository/1.0.123/

aws s3 sync --delete \
  s3://our-project-repository/1.0.123/ \
  s3://our-project-repository/staging/
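Once the change has been tested in staging, the same versioned artifact can be promoted to production. The production path below is an assumption that follows the same naming pattern as the staging commands above:

aws s3 sync --delete \
  s3://our-project-repository/1.0.123/ \
  s3://our-project-repository/production/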
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.