I'm using Terraform to write a script that builds some EC2 servers and a MySQL RDS instance (using the AWS provider).
Is there a way to execute a SQL script on the created RDS instance (I want to create users, tables, etc.)?
Thanks in advance.
Like this solution, you can also avoid instance setup time/cost by running local-exec from your own machine, IF your RDS database is publicly accessible and you have set up ingress rules to allow your machine to connect. Then, with credentials stored securely in your environment, you would do something like:
resource "null_resource" "db_setup" {
# runs after database and security group providing external access is created
depends_on = ["aws_db_instance.your_database_instance", "aws_security_group.sg_allowing_external_access"]
provisioner "local-exec" {
command = "database connection command goes here"
environment = {
# for instance, postgres would need the password here:
PGPASSWORD = "${var.database_admin_password}"
}
}
}
Keep in mind that passwords and other sensitive variables can be passed to Terraform separately, for example via a .tfvars file or TF_VAR_* environment variables.
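For instance, a minimal sketch of marking the password variable as sensitive (needs Terraform 0.14+) so it is redacted from plan output; the value can then be supplied via the TF_VAR_database_admin_password environment variable or a .tfvars file:

variable "database_admin_password" {
  type      = string
  sensitive = true # redacted from plan/apply output
}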
Building upon ecoe's answer:
For future readers using a Postgres RDS instance, this is what worked for me (you must have psql installed on your machine):
variable "db_username" {
type = string
}
variable "db_password" {
type = string
}
resource "null_resource" "db_setup" {
provisioner "local-exec" {
command = "psql -h host_name_here -p 5432 -U \"${var.db_username}\" -d database_name_here -f \"path-to-file-with-sql-commands\""
environment = {
PGPASSWORD = "${var.db_password}"
}
}
}
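One caveat: a null_resource only runs once. If you want the script to re-run whenever the SQL file changes, you can key a triggers block off the file's hash; a minimal sketch, assuming the commands live in a setup.sql next to the module (hypothetical path):

resource "null_resource" "db_setup" {
  # re-run the provisioner whenever the SQL file changes
  triggers = {
    sql_file_hash = filesha1("${path.module}/setup.sql") # hypothetical path
  }

  provisioner "local-exec" {
    command = "psql -h host_name_here -p 5432 -U \"${var.db_username}\" -d database_name_here -f \"${path.module}/setup.sql\""

    environment = {
      PGPASSWORD = var.db_password
    }
  }
}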
You'll want to use Terraform's remote-exec provisioner. Getting the DB credentials to the instance is a different story: you can use something like Consul, or pull the credentials from S3 and have the script parse them.
resource "aws_instance" "web" {
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/script.sh",
"/tmp/script.sh args",
]
}
}
https://www.terraform.io/docs/provisioners/remote-exec.html#script
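Note that remote-exec needs a connection block to reach the instance, and the script has to be copied there first; a minimal sketch, with a hypothetical key path and script name:

resource "aws_instance" "web" {
  # ... AMI, instance type, etc.

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file("./key.pem") # hypothetical key path
  }

  # copy the script onto the instance before remote-exec runs it
  provisioner "file" {
    source      = "script.sh" # hypothetical local script
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "/tmp/script.sh args",
    ]
  }
}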
Elaborating on the previous answers with a MySQL version.
MySQL version with file
resource "null_resource" "db_setup" {
depends_on = [module.db, aws_security_group.rds_main, aws_default_security_group.default]
provisioner "local-exec" {
command = "mysql --host=${module.db.this_db_instance_address} --port=${var.dbport} --user=${var.dbusername} --password=${var.dbpassword} --database=${var.dbname} < ${file(${path.module}/init/db_structure.sql)}"
}
}
MySQL version with local_file
data "local_file" "sql_script" {
filename = "${path.module}/init/db_structure.sql"
}
resource "null_resource" "db_setup" {
depends_on = [module.db, aws_security_group.rds_main, aws_default_security_group.default]
provisioner "local-exec" {
command = "mysql --host=${module.db.this_db_instance_address} --port=${var.dbport} --user=${var.dbusername} --password=${var.dbpassword} --database=${var.dbname} < ${data.local_file.sql_script.content}"
}
}
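Note that --password on the command line shows up in the local process list. A sketch of the same command with the password moved into MySQL's MYSQL_PWD environment variable instead:

provisioner "local-exec" {
  command = "mysql --host=${module.db.this_db_instance_address} --port=${var.dbport} --user=${var.dbusername} --database=${var.dbname} < ${path.module}/init/db_structure.sql"

  environment = {
    MYSQL_PWD = var.dbpassword # kept out of the process list
  }
}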
My database is in private subnets and the security groups allow only the bastion to connect.
I have an SQL file that creates the different users and schemas.
There may be better ways, but this works for my use case.
The SQL file contains passwords, which I pass through Terraform variables:
data "template_file" "init_db" {
template = file("./init-db.sql")
vars = {
account_service_db_password = var.account_service_db_password
account_service_db_admin_password = var.account_service_db_admin_password
question_service_db_password = var.question_service_db_password
question_service_db_admin_password = var.question_service_db_admin_password
}
}
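As an aside, on Terraform 0.12+ the built-in templatefile() function can replace the template_file data source; a minimal sketch:

locals {
  init_db = templatefile("./init-db.sql", {
    account_service_db_password        = var.account_service_db_password
    account_service_db_admin_password  = var.account_service_db_admin_password
    question_service_db_password       = var.question_service_db_password
    question_service_db_admin_password = var.question_service_db_admin_password
  })
}

The file provisioner below would then use content = local.init_db instead of the rendered attribute.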
Now I copy the SQL file to the bastion and execute the commands:
resource "aws_instance" "bastion" {
# ...
instance_type = "t2.micro"
# ...
connection {
type = "ssh"
host = self.public_ip
user = "ec2-user"
private_key = file("./n_virginia.pem")
}
provisioner "file" {
content = data.template_file.init_db.rendered
destination = "init-db.sql"
}
provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo yum install mysql -y",
"mysql -u ${var.db_root_username} -h ${aws_db_instance.main.address} -p${var.db_root_password} < init-db.sql"
]
}
}
I answered a similar question; this answer works for creating users, databases, roles, etc.
Terraform's AWS provider only has functionality to provision the platform that hosts the MySQL DBMS (Aurora). Creating schemas, databases, roles, etc. has to be done through the DBMS itself (by running SQL queries).
In order to do this through Terraform, you'll have to use a provider created for MySQL (a quick Google search found petoju/mysql; please complete the necessary due diligence when using open-source libraries in a production environment!).
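As a sketch of what that can look like with the petoju/mysql provider (resource and attribute names follow that provider's docs; the cluster reference, database name, and user are hypothetical):

terraform {
  required_providers {
    mysql = {
      source = "petoju/mysql"
    }
  }
}

provider "mysql" {
  # endpoint is host:port; aws_rds_cluster.main is a hypothetical cluster resource
  endpoint = "${aws_rds_cluster.main.endpoint}:3306"
  username = var.db_root_username
  password = var.db_root_password
}

resource "mysql_database" "app" {
  name = "app" # hypothetical database name
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_db_password # hypothetical variable
}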
Best practices for running database clusters in the cloud call for isolating the cluster in its own private network (AWS VPC) with tightly controlled rules for incoming and outgoing traffic. This poses a problem, as Terraform will need to connect directly to MySQL in order to provision the resources.
If you're running Terraform from a CI/CD pipeline (GitHub Actions, CircleCI, etc.) with limited networking configuration, or on a local machine, you can create a 'bastion' EC2 instance that lives in a subnet with access to the outside internet. You can then set up an SSH tunnel to that instance from the CI runner and proxy the MySQL connection through it, using the provider's proxy option or Terraform's ALL_PROXY environment variable.
If you have full control of the CI/CD runners (e.g., GitHub Actions self-hosted runners) and they are running in another AWS VPC, you can set up VPC peering, which will allow the runners to communicate directly with the Aurora clusters.
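A sketch of the peering resource, with hypothetical VPC names; route table entries and security group rules are still needed on both sides:

resource "aws_vpc_peering_connection" "runners_to_db" {
  vpc_id      = aws_vpc.runners.id  # hypothetical runner VPC
  peer_vpc_id = aws_vpc.database.id # hypothetical database VPC
  auto_accept = true # only valid when both VPCs are in the same account and region
}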
Creating/modifying tables should be controlled at the application layer; there are many platforms/libraries that let you create and manage migrations properly (such as Liquibase, or more framework-specific tools like Umzug for Node.js).
Managing table migrations through Terraform is going to lead to a lot of headaches.