/r/Terraform
Terraform discussion, resources, and other HashiCorp news.
This subreddit is for Terraform (IaC - Infrastructure as Code) discussions to get help, educate others and share the wealth of news.
Feel free to reach out to mods to make this subreddit better.
Rules:
Be nice to each other!
MultiLink_Spam == Perm.Ban
We have a system where we spin up a Terraform project on the fly for different vendors. When a vendor is done with the project, we run terraform destroy to remove it completely. But as part of creation we create Cloud Composer instances, which in turn create persistent disks and buckets. terraform destroy does not delete these, which then causes the project deletion to fail as well. How can we handle this better? Is there a particular way to make sure any related resources are first cleaned up so the project can be deleted successfully?
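One possible approach (a sketch, not a definitive fix — the bucket name and project variable below are hypothetical) is to let Terraform force-empty the buckets it manages, and to run a destroy-time cleanup for the disks Composer creates outside of Terraform's state:

```hcl
# Buckets Terraform manages can be force-emptied on destroy.
resource "google_storage_bucket" "vendor_data" {
  name          = "vendor-data-bucket"   # hypothetical name
  location      = "US"
  force_destroy = true                   # delete remaining objects so destroy succeeds
}

# Destroy-time cleanup for disks Composer created outside of Terraform's state.
# Assumes the gcloud CLI is available where Terraform runs; destroy provisioners
# may only reference self, hence the triggers map.
resource "null_resource" "composer_disk_cleanup" {
  triggers = {
    project = var.project_id   # hypothetical variable
  }

  provisioner "local-exec" {
    when    = destroy
    command = "gcloud compute disks list --project=${self.triggers.project} --format='value(name,zone)' | while read name zone; do gcloud compute disks delete \"$name\" --zone=\"$zone\" --project=${self.triggers.project} --quiet; done"
  }
}
```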
I am trying to use the Terraform CDKTF tool to deploy a C#-based AWS Lambda to the AWS cloud. Here is some info about our technical development environment:
The AWS Lambda project contains the following:
In my Visual Studio 2022 Solution, I have the following projects:
I build the AWSSrvlessHelloWorldApp Lambda Project.
I want to supply argument parameters when I run cdktf deploy.
While researching, I came across the following webpage, which describes how to use Terraform variables as input parameters:
https://developer.hashicorp.com/terraform/cdktf/concepts/variables-and-outputs
Here is an excerpt from the aforementioned webpage that shows sample declaration & instantiation of Terraform variables as input parameters in C#:
TerraformVariable imageId = new TerraformVariable(this, "imageId", new TerraformVariableConfig
{
    Type = "string",
    Default = "ami-abcde123",
    Description = "What AMI to use to create an instance",
});

new Instance(this, "hello", new InstanceConfig
{
    Ami = imageId.StringValue,
    InstanceType = "t2.micro",
});
In MyTerraformStack (i.e., the TerraformStack project) I have the following files:
In Program.cs, I have the following code (note that I call Debugger.Launch() so that I can bring up MyTerraformStack in Visual Studio 2022 whenever I run cdktf from the PowerShell command line):
class Program
{
    public static void Main(string[] args)
    {
        Debugger.Launch();
        App app = new App();
        MainStack stack = new MainStack(app, "aws_instance");
        app.Synth();
    }
}
In MainStack.cs, the code contents are:
class MainStack : TerraformStack
{
    public MainStack(Construct scope, string id) : base(scope, id)
    {
        TerraformVariable imageId = new TerraformVariable(this, "imageId", new TerraformVariableConfig
        {
            Type = "string",
            Default = "ami-abcde123",
            Description = "What AMI to use to create an instance",
        });

        new Instance(this, "hello", new InstanceConfig
        {
            Ami = imageId.StringValue,
            InstanceType = "t2.micro",
        });

        Console.Error.WriteLine("imageId.ToString()");
        Console.Error.WriteLine(imageId.ToString());
        Console.Error.WriteLine("imageId.StringValue");
        Console.Error.WriteLine(imageId.StringValue);

        // ...Blah Blah configuring ITerraformAssetConfig, IS3BucketObjectConfig and IIamRoleConfig Blah Blah...
        // ...Blah Blah instantiating S3BucketObject, IamRole, LambdaFunction Blah Blah Blah
    }
}
In Windows PowerShell, I've run the following:
$Env:TF_VAR_imageId="testing"
cdktf synth
However, when the Console.Error.WriteLine code executes, it gives the following output:
imageId.ToString()
[2024-04-08T14:59:56.904] [ERROR] default - ${TfToken[TOKEN.1]}
imageId.StringValue
[2024-04-08T14:59:56.904] [ERROR] default - ${TfToken[TOKEN.2]}
The imageId value fails to get applied during the cdktf synth execution. Why, and could someone please tell me how to resolve this problem?
The backend config doesn't allow variables (I tried).
How are you masking the sensitive credentials without exposing them in your repos?
I want to see how you are passing these values.
Cheers!
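One common pattern is a partial backend configuration (a sketch assuming an S3 backend; the bucket name is hypothetical): keep only non-sensitive settings in the block and inject credentials at init time.

```hcl
# backend.tf - credentials intentionally omitted; supplied via -backend-config
terraform {
  backend "s3" {
    bucket = "my-state-bucket"   # hypothetical
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Then in CI: terraform init -backend-config="access_key=$AWS_ACCESS_KEY_ID" -backend-config="secret_key=$AWS_SECRET_ACCESS_KEY", with the environment variables coming from masked pipeline variables. For the S3 backend specifically, plain AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables also work with no -backend-config flags at all.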
I'm trying to set up a module so that we can define an IP space for an Azure virtual network and create subnets automatically based on that space. We're defining the variable as such in variables.tf:
variable "testProject" {
  type = map(object({
    teamName     = string
    addressSpace = list(string)
    homeIPs      = list(string)
  }))
}
and populating them from auto.tfvars:
testProject = {
  bigProject = {
    teamName     = "FakeName"
    homeIPs      = ["192.168.1.2"]
    addressSpace = ["10.99.0.0/16"]
  }
}
We want to use the cidrsubnets function to create incrementally increasing subnet ranges for multiple subnets within the vnet:
resource "azurerm_virtual_network" "test-vnet" {
  for_each            = var.testProject
  name                = "${each.value.teamName}-test-vnet"
  address_space       = each.value.addressSpace
  resource_group_name = azurerm_resource_group.test-rg[each.key].name
  location            = azurerm_resource_group.test-rg[each.key].location

  lifecycle {
    ignore_changes = [tags]
  }
}
resource "azurerm_subnet" "test-subnet-storage" {
  for_each             = var.testProject
  name                 = "${each.value.teamName}-test-subnet-storage"
  resource_group_name  = azurerm_resource_group.test-rg[each.key].name
  virtual_network_name = azurerm_virtual_network.test-vnet[each.key].name
  address_prefixes     = cidrsubnets(azurerm_virtual_network.test-vnet[each.key].address_space, 8, 0)
  service_endpoints    = ["Microsoft.AzureCosmosDB", "Microsoft.KeyVault", "Microsoft.Storage", "Microsoft.CognitiveServices"]
}
When I run tf plan, I get an error that a string is required. I tried using tostring:
address_prefixes = cidrsubnets(tostring(azurerm_virtual_network.test-vnet[each.key].address_space),8,0)
but this throws an error that it can't convert a list of strings to a string.
How should I go about getting cidrsubnets to just take the address space as an input here?
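A sketch of one way out, assuming you want the first /24 carved from the vnet's range: address_space is a list(string), so index into it, and use the singular cidrsubnet (which returns one string) wrapped in a list, since address_prefixes expects a list:

```hcl
# address_space is a list(string); take the first CIDR, carve one /24 out of
# the /16, and wrap the resulting string in a list for address_prefixes.
address_prefixes = [
  cidrsubnet(azurerm_virtual_network.test-vnet[each.key].address_space[0], 8, 0)
]
```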
I'm trying to deploy a two-tier architecture using Terraform. I'm trying to automate the process, but I'm unable to dynamically grab the name of the provisioned RDS instance in the wp-config file. All other variables get populated, but not the RDS DNS name. I'm currently using a template file approach, but no luck yet. I would appreciate any feedback on how I can do this.
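For reference, a minimal templatefile sketch (the resource names and template file are hypothetical); the RDS DNS name comes from the aws_db_instance's address attribute:

```hcl
# wp-config.php.tpl would contain a line like: define('DB_HOST', '${db_host}');
resource "aws_instance" "web" {
  # ...
  user_data = templatefile("${path.module}/wp-config.php.tpl", {
    db_host = aws_db_instance.wordpress.address  # RDS DNS name, without port
  })
}
```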
Thanks in advance! I'm very, very new to DevOps, and I'm trying to set up an AWS infrastructure with Terraform. I already used
resource "tls_private_key" "ssh_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
to generate the key pair and use it with aws_key_pair. However, I also need to automate the deployment via GitLab CI/CD, i.e., connect to the EC2 instance using SSH. How should I approach reusing the key pair to pass to 'ssh -i /path/key-pair-name.pem instance-user-name@instance-public-dns-name'? At the same time, how can I reuse the public_dns output? Should I store them as CI/CD variables?
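One approach (a sketch; the output names and the aws_instance resource name are hypothetical): expose the key and DNS name as outputs, then feed them into CI.

```hcl
output "ssh_private_key_pem" {
  value     = tls_private_key.ssh_key.private_key_pem
  sensitive = true   # keeps the key out of plain CLI output and logs
}

output "instance_public_dns" {
  value = aws_instance.example.public_dns  # hypothetical resource name
}
```

In a GitLab CI job you could then run `terraform output -raw ssh_private_key_pem > key.pem && chmod 600 key.pem` before the ssh step, or store the key once as a masked, file-type CI/CD variable instead of reading it from state on every run.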
I've uploaded an image to ECR and am trying to deploy 3 containers on ECS Fargate using target groups. The tasks keep coming up, failing health checks, and then draining. I'm new to this and can't find any log or valuable error message anywhere except "Task failed ELB health checks" in the target group. Any help would be appreciated. I don't know which part of the code to provide, as I can't find the exact issue.
resource "aws_ecs_task_definition" "Gitlab_task" {
  family                = "Gitlab-task"
  container_definitions = <<DEFINITION
[
  {
    "name": "Gitlab-task",
    "image": "${aws_ecr_repository.my_first_ecr_repo.repository_url}",
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ],
    "memory": 512,
    "cpu": 256
  }
]
DEFINITION
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  memory                   = 512
  cpu                      = 256
  execution_role_arn       = aws_iam_role.ecsGitlabTaskExecutionRole.arn
}
resource "aws_ecs_service" "Gitlab_service" {
  name            = "test-gitlab-service"
  cluster         = aws_ecs_cluster.Gitlab_cluster.id
  task_definition = aws_ecs_task_definition.Gitlab_task.arn
  launch_type     = "FARGATE"
  desired_count   = 3

  load_balancer {
    target_group_arn = aws_lb_target_group.target_group.arn
    container_name   = aws_ecs_task_definition.Gitlab_task.family
    container_port   = 80
  }

  network_configuration {
    subnets          = [aws_subnet.Gitlab-subnet-1.id, aws_subnet.Gitlab-subnet-2.id]
    assign_public_ip = true
    security_groups  = [aws_security_group.lb_sg.id]
  }
}
I need suggestions on the more efficient way to do this. I created custom modules; the example is for our VMs.
Let's say I need to create 3 VMs.
Should I do a for_each loop inside the module, or should I loop when calling the module (outside)?
I'm leaning towards looping outside, but need some suggestions.
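Looping outside keeps the module single-purpose, and since Terraform 0.13 for_each can go directly on the module block. A sketch (the module path and variable shape are hypothetical):

```hcl
variable "vms" {
  type = map(object({ size = string }))
  default = {
    vm1 = { size = "Standard_B2s" }
    vm2 = { size = "Standard_B2s" }
    vm3 = { size = "Standard_B4ms" }
  }
}

# One module instance per map entry; the module itself stays free of loops.
module "vm" {
  source   = "./modules/vm"   # hypothetical module path
  for_each = var.vms

  name = each.key
  size = each.value.size
}
```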
Hey guys, I am relatively new to Terraform, so I'm looking for support here. I have a .env file with the variables for my project, and also variables.tf for using similar variables in main.tf.
It would be very convenient to keep referring to the variables directly from the .env file, leading to fewer duplication errors.
I have seen in the documentation that it might be possible using TF_VAR_. However, the documentation isn't quite clear to me.
How do you solve this problem? Or is it normal to use two different files for project variables?
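One pattern (a sketch; the variable names are hypothetical): name the entries in .env with the TF_VAR_ prefix and export them before running Terraform — Terraform reads TF_VAR_project_name as the value of variable "project_name", so the .env file stays the single source of truth.

```shell
# Hypothetical .env file using TF_VAR_-prefixed names.
cat > .env <<'EOF'
TF_VAR_project_name=demo
TF_VAR_region=us-east-1
EOF

# set -a marks every assignment for export while sourcing the file.
set -a
. ./.env
set +a

echo "$TF_VAR_project_name"   # expected: demo
```

After this, `terraform plan` in the same shell sees the values; you still declare each variable in variables.tf, but no longer duplicate the values there.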
To start with I have this defined:
vpc_cidr = "10.0.0.0/16"
availability_zone_count = 3
subnet_size_private = 22
subnet_size_public_inbound = 24
subnet_size_public_outbound = 27
What I want to do is take my three subnet types (private, inbound, outbound), multiply them by the number of availability zones I define, and pass the whole thing into cidrsubnets() to do the grunt work of chopping up my VPC CIDR. That should look like this:
locals {
  sub_priv_bits    = var.subnet_size_private - split("/", var.vpc_cidr)[1]
  sub_pub_bits_in  = var.subnet_size_public_inbound - split("/", var.vpc_cidr)[1]
  sub_pub_bits_out = var.subnet_size_public_outbound - split("/", var.vpc_cidr)[1]
  subnet_bits      = flatten([for az in range(var.availability_zone_count) : [local.sub_priv_bits, local.sub_pub_bits_in, local.sub_pub_bits_out]])
  subnets          = cidrsubnets(var.vpc_cidr, local.subnet_bits)
}
That last line, cidrsubnets(var.vpc_cidr, local.subnet_bits), fails with: Invalid value for "newbits" parameter: number required. I've found old issues describing this problem, but no fixes, and of course, as modern projects like to do, the issue bots just auto-closed them. :/ OpenTofu doesn't work either. I'm very tempted to use this as an excuse to get into Go, try fixing it myself, and submit a PR, but that's for another day.
I can't seem to figure out a mechanism to dereference local.subnet_bits such that it can be passed as the list of numbers that cidrsubnets() is asking for. I've tried tolist(), etc., all with the same results. It seems to only accept explicit arguments, and so far as I'm aware there's no deref equivalent in HCL that would allow something like **local.subnet_bits to pass this as separate args rather than a list object?
I've worked around the issue with a big if/then tree, but since Terraform doesn't support any sane conditional syntax, I'm limited to the ?: form. And because Terraform can't parse a ?: that spans multiple lines, the monstrosity becomes the absolutely bonkers 1825-character-long "one-liner" shown below. This only works up to 6 availability zones, because of course it's an if/then block and has to end somewhere. That limitation is annoying, but not really a problem in practice for this example. The problem I have is that this is a gigantic hack, and I'd like a better solution for general use. Even my "error handling" for az > 6 is an ugly hack using file(), because Terraform lacks proper exception features too.
Please, someone tell me I'm using this all wrong and there's something easy I'm stupidly missing, because this is ridiculous for a seemingly "normal" ask.
Thanks,
Here's the fugly but working line:
subnets = var.availability_zone_count > 1 ? var.availability_zone_count > 2 ? var.availability_zone_count > 3 ? var.availability_zone_count > 4 ? var.availability_zone_count > 5 ? var.availability_zone_count == 6 ? cidrsubnets(var.vpc_cidr, local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out) : file("ERROR: var.availability_zone_count must be <= 6") : cidrsubnets(var.vpc_cidr, local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out) : cidrsubnets(var.vpc_cidr, local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out) : cidrsubnets(var.vpc_cidr, local.sub_priv_bits,local.sub_priv_bits,local.sub_priv_bits,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_out, local.sub_pub_bits_out, local.sub_pub_bits_out) : cidrsubnets(var.vpc_cidr, local.sub_priv_bits,local.sub_priv_bits,local.sub_pub_bits_in,local.sub_pub_bits_in,local.sub_pub_bits_out, local.sub_pub_bits_out) : cidrsubnets(var.vpc_cidr, local.sub_priv_bits,local.sub_pub_bits_in,local.sub_pub_bits_out)
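For what it's worth, HCL does have an argument-expansion operator: appending ... to the final argument of a function call spreads a list into separate arguments, which would replace the whole conditional tree above with one line:

```hcl
locals {
  # The `...` symbol expands the list into individual newbits arguments.
  subnets = cidrsubnets(var.vpc_cidr, local.subnet_bits...)
}
```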
Dear Seniors,
I was asked to move this statement, which sits inside most of the resources in my main.tf, into a file like variables.tf. It goes something like this:
resource "ec2" "some_name" {
  count = aws_vpc_name == "some string" ? "another string" : 0
}
Do I turn it into a variable like this?
variable "some_count" {}
resource "ec2" "some_name" {
  count = var.some_count
}
The logic is that "some string" is repeated throughout the script.
Thanks.
Is passing a whole module as an input to another module an anti-pattern? If so, why?
And if it isn't, how do I properly type the input variable in the receiving module to avoid using just `{}`?
Example:
# ./main.tf
module "a" {
  source = "./modules/a"
}

module "b" {
  source   = "./modules/b"
  module_a = module.a
}

# ./modules/b/variables.tf
variable "module_a" {}
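If you keep the pattern, one option is to type the variable with an object() that names only the outputs of module a that module b actually uses (the output names below are hypothetical) — this documents the contract and fails fast if it drifts:

```hcl
# ./modules/b/variables.tf
variable "module_a" {
  type = object({
    vpc_id     = string        # hypothetical output of module a
    subnet_ids = list(string)  # hypothetical output of module a
  })
}
```

Terraform ignores extra attributes beyond those listed only if you wrap them with optional() or pass a narrowed object, so many teams instead pass the individual outputs as separate variables.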
I spun up a new PostgreSQL database, added the connection info, and ran a simple terraform init, and got the error below:
Initializing the backend...
╷
│ Error: parse "postgres://username:password@PGSERVER.name.com/terraform_backend": net/url: invalid userinfo
│
My password is a combination of alphanumeric and special characters. Is this what might be causing the issue?
What am I missing here, people?
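Most likely, yes: the userinfo part of the backend conn_str is a URL, so characters like @, :, /, and # in the password must be percent-encoded before they go into the string. A quick sketch (the password here is made up):

```python
from urllib.parse import quote

# Hypothetical password with special characters that break URL parsing.
password = "p@ss:w/rd#1"

# safe="" encodes everything non-alphanumeric, including '/', ':' and '@',
# which are the characters that confuse the userinfo parser.
encoded = quote(password, safe="")
conn_str = f"postgres://username:{encoded}@PGSERVER.name.com/terraform_backend"
print(encoded)  # p%40ss%3Aw%2Frd%231
```

With the encoded form in conn_str, net/url can parse the userinfo again; alternatively, rotate to a password without URL-reserved characters.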
Is anybody managing services in systemd with Terraform? Or any other init system, for that matter?
I've got a map that uses a list of strings as one of its entries:
variable "projectName" {
  type = map(object({
    teamName     = string
    homeIPs      = list(string)
    addressSpace = string
  }))
}
I'd been using coalesce for other entries from the map, since they each hold a single string, but since homeIPs contains multiple values and is a list of strings, it throws an error that it's an inappropriate value when I call it as follows:
ip_rules = coalesce(each.value.homeIPs, each.key)
How should I call the list of strings from the map so that it stays a list of strings?
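coalesce requires all of its arguments to share one type, which is why mixing a list with a string fails. For lists there's coalescelist, which returns the first non-empty list. A sketch assuming you want the key as a single-element fallback:

```hcl
# coalescelist takes lists, so wrap the fallback key in a list.
ip_rules = coalescelist(each.value.homeIPs, [each.key])
```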
Hi,
Instances in the target group are unhealthy when I make a change to the configuration code and apply. When I destroy and apply, they're healthy. Do you know what the cause could be?
Hello everyone,
I am currently getting into Kubernetes and playing around with EKS. I have seen that when you define a node group with the aws_eks_node_group resource, you are a bit restricted if you don't spin up instances from launch templates, as you can't specify which EBS volume to use. My question: what is the best practice here, or what are you generally using? Do you always create node groups from launch templates, or, if you are happy with the root EBS volume, do you use the parameters of aws_eks_node_group, like instance_types, disk_size, capacity_type, etc. (stuff you can also specify in launch templates)? If I am getting anything wrong, please feel free to correct me.
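For reference, a minimal sketch of the launch-template route (names, sizes, and the referenced cluster/role resources are hypothetical); note that once a launch template is attached, settings like disk_size move into the template instead of the node group:

```hcl
resource "aws_launch_template" "eks_nodes" {
  name = "eks-nodes"   # hypothetical

  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_size = 50      # GiB - the knob you can't reach without a template
      volume_type = "gp3"
    }
  }
}

resource "aws_eks_node_group" "default" {
  cluster_name    = aws_eks_cluster.this.name   # hypothetical cluster resource
  node_group_name = "default"
  node_role_arn   = aws_iam_role.node.arn       # hypothetical role
  subnet_ids      = var.subnet_ids

  launch_template {
    id      = aws_launch_template.eks_nodes.id
    version = aws_launch_template.eks_nodes.latest_version
  }

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}
```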
I have a preexisting Terraform state (local backend) that I want to use for a new stack on Spacelift. I tried to upload the preexisting state file while creating the stack; however, the proposed plan shows creation of all resources.
I am open to either Spacelift managing my state or using S3 in my own account. I would appreciate it if you could guide me to the proper documentation. Thanks!
We are in AWS. In the past few months I've been hearing at work that terraform apply sometimes fails because we've hit an IAM policy size limit. I don't have the exact errors, but maybe you've experienced this before. What is the proper way to address this issue?
I need to build our own private TF registry in AWS. I’m thinking I could accomplish this with API gateway -> Lambda -> S3 bucket. Are there any good docs out there that could assist me? Thanks.
Everything should be bone stock defaults. I have Vault deployed (following the deploy section of the getting started guide) to a VM on the same network as the host running the Terraform templates. The Vault VM is deployed with the following configs
storage "raft" {
  path    = "/var/lib/vault"
  node_id = "node1"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"
}
api_addr = "http://localhost:8200"
cluster_addr = "https://localhost:8201"
ui = true
log_requests_level = "info"
log_level = "info"
Terraform tfvars file
vault_address = "http://10.69.69.145:8200"
vault_token = "hvs.<redacted but this is the original root token>"
And here's the TF template itself
variable "vault_address" {}
variable "vault_token" {}

provider "vault" {
  address = var.vault_address
  token   = var.vault_token
}

resource "vault_generic_secret" "store_access_keys_in_vault" {
  path = "cubbyhole/secret/terraformTest"

  data_json = <<EOT
{
  "Hello": "world"
}
EOT
}
When I apply the template, it says it successfully saved the secret
PS C:\Users\Levantine\PycharmProjects\terraform> terraform apply --var-file=vars/production.tfvars
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
+ create
Terraform will perform the following actions:
# vault_generic_secret.store_access_keys_in_vault will be created
+ resource "vault_generic_secret" "store_access_keys_in_vault" {
+ data = (sensitive value)
+ data_json = (sensitive value)
+ delete_all_versions = false
+ disable_read = false
+ id = (known after apply)
+ path = "cubbyhole/secret/terraformTest"
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
vault_generic_secret.store_access_keys_in_vault: Creating...
vault_generic_secret.store_access_keys_in_vault: Creation complete after 0s [id=cubbyhole/secret/terraformTest]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
But when I look in the vault UI, I don't see any secret being saved, and when I run terraform destroy, it's not able to destroy what it created either.
PS C:\Users\Levantine\PycharmProjects\terraform> terraform destroy --var-file=vars/production.tfvars
vault_generic_secret.store_access_keys_in_vault: Refreshing state... [id=cubbyhole/secret/terraformTest]
No changes. No objects need to be destroyed.
Either you have not created any objects yet or the existing objects were already deleted outside of Terraform.
Destroy complete! Resources: 0 destroyed.
BTW, this worked when I started Vault on my local machine in dev mode. But after I deployed Vault to a VM (and I'm pretty sure I followed the getting started guide instructions closely), the only way I can read/write secrets seems to be via the UI. What could I be missing?
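One detail worth checking: the cubbyhole mount is scoped to the token that wrote it, so a secret written with Terraform's token is invisible to a UI session logged in with a different token (and in dev mode both sides typically share the one root token, which is why it appeared to work). Writing to a shared KV mount instead would make it visible everywhere — a sketch assuming the default secret/ KV mount exists:

```hcl
resource "vault_generic_secret" "store_access_keys_in_vault" {
  path = "secret/terraformTest"   # KV mount, shared across tokens

  data_json = <<EOT
{
  "Hello": "world"
}
EOT
}
```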
Could someone explain to me why deploying to on-prem / private infrastructure is much harder with Terraform Cloud than with Enterprise? Is there anything they can even do to fix this?
Hi guys.
I am writing a Terraform module to create an AWS Organization in my company's AWS root account. I would like to automatically create (like a setup step) an IAM role in each child account that is created within the organization's scope. Could you please share some thoughts on the best approach to accomplish this?
Is it through a remote-exec provisioner block with aws iam create-role CLI commands within the resource "aws_organizations_account" block?
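For the common case, no provisioner is needed: aws_organizations_account has a role_name argument, and AWS creates that role in the new account with administrator access trusted by the management account. A sketch (account name and email are hypothetical):

```hcl
resource "aws_organizations_account" "child" {
  name      = "dev-account"                    # hypothetical
  email     = "aws+dev@example.com"            # hypothetical
  role_name = "OrganizationAccountAccessRole"  # created inside the child account
}
```

For any roles beyond that one, a common pattern is a second provider alias that assumes OrganizationAccountAccessRole in the child account and manages further IAM resources there directly.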
Following https://developer.hashicorp.com/terraform/language/import/generating-configuration, we are trying to generate a config file.
We're running "terraform plan -generate-config-out=generated_resources.tf" in an Azure pipeline. Import blocks are defined with the necessary details of the resources to be imported.
When the pipeline runs, the plan stage shows some errors in the generated config, the terraform plan stage fails, and I do not find a generated_resources.tf file.
Has anybody tried this? If the log shows errors in the generated config, is the file deleted because the terraform plan failed? Any way to debug this further?
Appreciate your time and help. Thank you.
I am studying VPCs, subnets, security groups, and routes.
When I place EC2/ECS/RDS on each resource and then modify the structure, I end up stuck in a "Still destroying..." state.
How should I solve this?
I'm still learning, so I have to change the subnets a lot, and every time I change the code I end up waiting on "Still destroying..." for 30 minutes.