/r/Terraform
Terraform discussion, resources, and other HashiCorp news.
This subreddit is for Terraform (IaC - Infrastructure as Code) discussions to get help, educate others and share the wealth of news.
Feel free to reach out to mods to make this subreddit better.
Rules:
Be nice to each other!
MultiLink_Spam == Perm.Ban
I have the AWS CCP and SAA certificates and I'm planning to take the Terraform Associate exam next. Any Udemy courses or practice exams that actually helped you pass?
I want to create an AWS Glue table with two ordered partition keys. The DDL for such a table should look like:
CREATE TABLE firehose_iceberg_db.iceberg_partition_ts_hour (
eventid string,
id string,
customername string,
customerid string,
apikey string,
route string,
responsestatuscode string,
timestamp timestamp)
PARTITIONED BY (month(`timestamp`),
customerid)
I'm trying to create the same table with Terraform, using this resource: https://registry.terraform.io/providers/hashicorp/aws/4.2.0/docs/resources/glue_catalog_table
However, I cannot find a way to do the same under the partition_keys block. Regarding the partition keys, I tried to configure:
partition_keys {
name = "timestamp"
type = "timestamp"
}
partition_keys {
name = "customerId"
type = "string"
}
Per the docs of this resource, glue_catalog_table, I cannot find a way to do the same for the timestamp field (month(timestamp)). Second, the timestamp partition should be the primary (first) one and the customerId partition the secondary one, exactly as configured in the SQL query above. Is that order guaranteed to be preserved if I declare them in the same order in the partition_keys blocks? As you can see in my TF configuration, timestamp comes before customerId.
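On the ordering question, a hedged note: repeated partition_keys blocks are sent to the Glue API as an ordered list, so the declaration order below is what Glue stores (timestamp primary, customerId secondary). The month(`timestamp`) transform is a different matter: partition_keys only accepts plain column name/type pairs, so as far as I can tell an Iceberg partition transform cannot be expressed through it.

partition_keys {
  name = "timestamp"   # first block = primary partition key
  type = "timestamp"
}
partition_keys {
  name = "customerId"  # second block = secondary partition key
  type = "string"
}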
We've been using Atlantis with GitLab, and it worked really well. But after upgrading GitLab to version 15.11.13 earlier this week, autoplan no longer triggers right after a merge request is submitted. However, when I manually comment 'atlantis plan', the plan runs just fine and its output is displayed in the merge request. Interestingly, if I push changes to the merge request, autoplan works as expected. It's really weird, to be honest! I've been going back and forth after every brand-new merge request submission for hours without any luck. Has anyone experienced this issue?
This is what I use:
alias tfapply="terraform apply -var-file=/home/mypath/terraform/terraform.tfvars --auto-approve"
Although this works for me, I can't use extra flags in the apply command, and I need a separate tfdestroy alias too, just to pass the var file.
There does not seem to be any global variable for the var file; how are we supposed to do this?
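A hedged alternative that keeps extra flags usable: Terraform reads TF_CLI_ARGS_<subcommand> environment variables and inserts their contents into the matching command, so one export each covers apply and destroy without baking the var file into an alias:

export TF_CLI_ARGS_apply="-var-file=/home/mypath/terraform/terraform.tfvars"
export TF_CLI_ARGS_destroy="-var-file=/home/mypath/terraform/terraform.tfvars"
terraform apply --auto-approve   # extra flags still work alongside the env var

Alternatively, a file named terraform.tfvars (or anything ending in .auto.tfvars) is loaded automatically when it sits in the working directory, with no flag needed at all.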
Hi, I'm using the Telmate provider = "telmate/proxmox", version = "3.0.1-rc4".
It creates the VM, but only when I set the disk and cloud-init storage to local-lvm. When I change it to my "storage" storage (a bigger disk added to the Proxmox server), I have problems with resizing.
I tried all combinations, like local-lvm + local-lvm, storage + local-lvm, etc.,
and still got the same error.
When I create a new VM manually and set the disk storage to "STORAGE", everything works fine.
Only with Terraform can I not create the disks properly.
Again: it works fine if I set the cloudinit and scsi0 disks to local-lvm.
Hello, we are evaluating an approach where we build opinionated modules (mainly key-value) and let our customers (internal teams) create their infra through them. E.g. we couple a few AWS components into one module, and the team that needs this use case just references our module with params and gets its infra created. I assume this is the "terraservices" pattern. The tricky part is how we define providers with secrets, environments, and use-case-bound providers, and how we design the overall architecture.
Does anyone have any examples or experience?
Thanks in advance
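One building block that may help, a minimal sketch of keeping credentials out of the module: providers are configured (and aliased) only in the consuming team's root, then passed in through the providers meta-argument (the URL and names below are hypothetical):

provider "aws" {
  alias  = "team_a"
  region = "eu-west-1"
  # assume_role / credentials live here in the caller, never in the module
}

module "kv_store" {
  source = "git::https://example.com/modules/kv-store.git?ref=v1.0.0"
  providers = {
    aws = aws.team_a
  }
  name = "team-a-store"
}

The module itself only declares required_providers with configuration_aliases, so each internal team binds its own environment- or use-case-specific provider at the call site.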
tfkonf allows you to generate Terraform configuration files using TypeScript.
As a heavy user of CDKTF, I’ve found its API to feel awkward and overly complex due to its multi-language code generation design. Many of you may already know that CDKTF is no longer well-maintained, and CDK8s is effectively on life support.
With tfkonf, my goal is to create a lightweight spiritual successor to these tools.
At the moment, tfkonf is not quite ready for daily use. Features like native Terraform functions, meta-arguments, and others are still under development, but they're coming soon!
I’m excited to announce this project, gather feedback from the community, and collaboratively build a strong foundation for tfkonf.
I’d love to hear your thoughts and ideas! Whether it’s features you’d like to see, improvements to the API, or general feedback, your input will help shape the future of this project.
I am trying to apply a terragrunt.hcl file. It gives the plan output as normal, but when I type "yes" and hit enter it gives me errors like this:
│ Error: Can't change variable when applying a saved plan
│
│ The variable private_subnets cannot be set using the -var and -var-file
│ options when applying a saved plan file, because a saved plan includes the
│ variable values that were set when it was created. The saved plan specifies
│ "[\"10.0.11.0/24\"]" as the value whereas during apply the value tuple with
│ 1 element was set by an environment variable. To declare an ephemeral
│ variable which is not saved in the plan file, use ephemeral = true.
I don't use any variable file or pass variables with the -var flag (though Terragrunt passes its inputs to Terraform as TF_VAR_* environment variables, which may be where the value "set by an environment variable" comes from). I also tried terragrunt plan -out=planfile and then terragrunt apply planfile, but I got the same error.
I have a cloud run service deployed on GCP.
In order to deploy it, I first build the Dockerfile, push the image to the GCP Artifact Registry, and then redeploy the service.
The problem is that when I run terraform apply, it doesn't automatically redeploy the service with the new image; I guess it cannot track the change of the image in the local Docker repository.
What is the best practice to handle this? I guess I could add a new version number to the image every time I build and pass it as an argument to Terraform, but I'm not sure if there is a better way to handle it.
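One approach, sketched under the assumption that the image lives in Artifact Registry and the kreuzwerker/docker provider is available: read the image digest at plan time and bake it into the Cloud Run image reference, so any newly pushed image changes the digest and forces a new revision (project, repo, and region below are hypothetical):

data "docker_registry_image" "app" {
  name = "europe-west1-docker.pkg.dev/my-project/my-repo/app:latest"
}

resource "google_cloud_run_v2_service" "app" {
  name     = "app"
  location = "europe-west1"

  template {
    containers {
      # pinning the digest makes the config change whenever the image does
      image = "europe-west1-docker.pkg.dev/my-project/my-repo/app@${data.docker_registry_image.app.sha256_digest}"
    }
  }
}

Passing an explicit version tag as a variable, as you describe, works too; the digest approach just removes the manual bump.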
Hello Everyone
I am creating an ACM certificate and Route 53 records using Terraform in AWS. My code works perfectly for a domain, a subdomain, and another distinct domain, but I have a requirement to add multiple distinct domains, each with a different hosted zone, to a single ACM certificate. I'm able to add one main domain with multiple subdomains of it, plus another distinct subdomain, but not multiple distinct alternative domains.
Without Terraform, via the AWS Console, it is possible, and I am able to do it there.
When I try to use for_each or distinct I get many errors saying the syntax is invalid or not supported in Terraform.
Can anyone please help me?
Note: we have only one AWS account, and we created separate hosted zones for each distinct domain.
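For what it's worth, a minimal sketch of the usual multi-zone pattern: put the extra domains in subject_alternative_names, then fan the validation records out with for_each over domain_validation_options, looking up each domain's own hosted zone (domain names and zone references below are hypothetical):

resource "aws_acm_certificate" "this" {
  domain_name               = "example-one.com"
  subject_alternative_names = ["example-two.net", "example-three.org"]
  validation_method         = "DNS"
}

locals {
  # map each domain to the hosted zone that serves it
  zone_ids = {
    "example-one.com"   = aws_route53_zone.one.zone_id
    "example-two.net"   = aws_route53_zone.two.zone_id
    "example-three.org" = aws_route53_zone.three.zone_id
  }
}

resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options :
    dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id         = local.zone_ids[each.key]
  name            = each.value.name
  type            = each.value.type
  ttl             = 60
  records         = [each.value.record]
  allow_overwrite = true
}

resource "aws_acm_certificate_validation" "this" {
  certificate_arn         = aws_acm_certificate.this.arn
  validation_record_fqdns = [for r in aws_route53_record.validation : r.fqdn]
}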
Hi folks,
I joined a new company recently, and in my very first week I've been asked to enhance their Terraform scripts and automate a few of the manual tasks being done. I'm not that familiar with Terraform beyond the basics and an understanding of the code. What would be the best resource to get started? Are there any tools or sites that help with understanding Terraform flow via code, which I could use to understand the automation aspect of certain manual tasks?
Ps: manual tasks details can be discussed in comments if anyone is interested. Or please DM me.
Thanks!!
Hello,
I am working on creating an Azure Linux Function App using Python as the runtime and the Flex Consumption App Service Plan, implemented through Terraform.
However, I am encountering the following error. Could someone please provide guidance?
Thank you!
Error:
{"Code": "BadRequest", "Message":"Site. Func tionAppConfig is invalid. The FunctionAppConfig section was not specified in the request, which is required for Flex | Consumption sites. To proceed, please add the FunctionAppConfig section in your request.", "Target": null," Details": [{"Message":"Site.FunctionAppConfig is linvalid. The FunctionAppConfig section was not specified in the request, which is required for Flex Consumption sites. To proceed, please add the FunctionAppConfig section in your request.",{"Code": "BadRequest",, {"ErrorEntity": {"ExtendedCode": "51021", "MessageTemplate ":"{O} is invalid. |{1}" "Parameters": ["Site.FunctionAppConfig", "The FunctionAppConfig section was not specified in the request, which is required for Flex Consumption sites. To I proceed, please add the FunctionAppConfig section in your request."],"Code": "BadRequest", "Message". " Site.FunctionAppConfig is invalid. The FunctionAppConfig I section was not specified in the request, which is required for Flex Consumption sites. To proceed, please add the FunctionAppConfig section in your request.")," nererror": nully
Hello. I have two S3 buckets created for a static website, and each of them has an aws_s3_bucket_website_configuration resource. As I understand it, if I want to redirect incoming traffic from bucket B to bucket A, then in the website configuration resource of bucket B I need to use the redirect_all_requests_to {} block with the host_name argument, but I do not know what to put in that argument.
What should be used in the host_name argument below? Where do I retrieve the hostname of the first S3 bucket hosting my static website?
resource "aws_s3_bucket_website_configuration" "b_bucket" {
bucket = "B"
redirect_all_requests_to {
host_name = ???
}
}
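A hedged sketch, assuming bucket A's website configuration is declared in the same configuration as aws_s3_bucket_website_configuration.a_bucket: that resource exports a website_endpoint attribute (e.g. a-bucket.s3-website-us-east-1.amazonaws.com), which is exactly the kind of value host_name expects:

resource "aws_s3_bucket_website_configuration" "b_bucket" {
  bucket = "B"
  redirect_all_requests_to {
    # the S3 website endpoint of bucket A, with no protocol prefix
    host_name = aws_s3_bucket_website_configuration.a_bucket.website_endpoint
  }
}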
What are your thoughts and how do you foresee this improving your current workflows? Since I work with Vault a lot, this seems to help solve issues with seeding Vault, retrieving and using static credentials, and providing credentials to resources/platforms that might otherwise end up in state.
It also supports providing unique values for each Terraform phase, like plan and apply. Where do you see this improving your environment?
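For context, a minimal sketch of the feature being discussed (assuming this refers to ephemeral values, introduced in Terraform 1.10): the value is usable during plan and apply but is never persisted, which is what keeps credentials out of state:

variable "vault_token" {
  type      = string
  ephemeral = true  # available at plan/apply time, never written to the plan file or state
}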
I am in the process of designing an end-to-end infrastructure and deployment structure for a product and would appreciate your input on the best practices and approaches currently in use.
For this project, I plan to utilize the following tools:
Question 1: Should Kubernetes (K8s) addon dependencies (e.g., ALB ingress controller, Karpenter, Velero, etc.) be managed within Terraform or outside of it? Some of these dependencies require role ARNs to be passed as values to the addons' Helm charts (see the sketch below).
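For Question 1, a hedged sketch of what the coupling looks like when Terraform owns the addon: the IAM role is created alongside the helm_release and its ARN is injected as a chart value (resource names are hypothetical; the set block assumes helm provider 2.x syntax):

resource "helm_release" "alb_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    # IRSA annotation; dots in the key are escaped for Helm
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.alb_controller.arn
  }
}

Managing the addon outside Terraform instead means exporting the role ARN (e.g. via outputs) into whatever GitOps tool renders the chart.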
Question 2: If the dependencies are managed outside of Terraform, should the application Helm chart and the addon dependencies be managed together or separately? I aim to implement a GitOps approach for infrastructure, applications, and addon updates alike.
I would appreciate any insights on best practices for implementing a structure like this; any reference would be very helpful.
Thank you.
Hello all !
I'm looking to take this exam. Could someone suggest the most appropriate materials to prepare for it?
Many thanks in advance!
There is a WIP for Terragrunt v1.0 which I am interested in; however, if OpenTofu and Terraform stacks are already working on this approach, would companies begin to migrate off of Terragrunt?
I am happy with Terragrunt and what it has given me. Many people have a hard time with its setup in companies, but I actually like it for complicated infrastructures that deploy to many cloud regions and have state broken into units. Nevertheless, the number of `terragrunt.hcl` files is a PITA to manage.
I dislike Terraform workspaces and the branching methodology the MOST compared to Terragrunt. Hell, I prefer having directories like so:
terraform-repo/
├── modules/ # Reusable modules
│ ├── network/ # Example module: Network resources
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ ├── outputs.tf
│ │ └── README.md
│ ├── compute/ # Example module: Compute resources
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ ├── outputs.tf
│ │ └── README.md
│ └── ... # Other reusable modules
├── environments/ # Environment-specific configurations
│ ├── dev/
│ │ ├── main.tf # Root module for dev
│ │ ├── variables.tf
│ │ ├── outputs.tf
│ │ ├── backend.tf # Remote state configuration (specific to dev)
│ │ └── terraform.tfvars
│ ├── qa/
│ │ ├── main.tf # Root module for QA
│ │ ├── variables.tf
│ │ ├── outputs.tf
│ │ ├── backend.tf # Remote state configuration (specific to QA)
│ │ └── terraform.tfvars
│ └── prod/
│ ├── main.tf # Root module for prod
│ ├── variables.tf
│ ├── outputs.tf
│ ├── backend.tf # Remote state configuration (specific to prod)
│ └── terraform.tfvars
└── README.md # Documentation for the repository
Would like to know what you all think of this.
I have a question about resource counts in Terraform. Our group has a very specific EKS cluster requirement, and to run our app we deploy a very specific set of components. For example: 2 VPCs, 1 EKS cluster, 1 EC2 instance, 2 RDS instances, and 5-6 buckets.
The total number of resources created comes out to around 180 or so. What would be the best practice here, given that I'm mostly working with modules?
Should I count the logical resources (which come out to about 10) or keep the total resource count in mind?
Please note that our environment is very specific: for it to work, it needs a fixed set of resources, and we only change things like instance size, count, etc. The total length of the main.tf is a bit less than 200 lines.
This keeps the pipelines we use to deploy the infrastructure simple enough, without needing additional scripts to cycle through directories, but I'm wondering what I can do to improve it.
Hi,
We are moving from Terragrunt to Terraform and have encountered a problem: when a child module is called, it needs to know which git version of itself is being called.
We always pin our child module versions in separate repos in Azure DevOps, and child modules are called with the git version x.y.z as the source ref.
Each child module has some code that needs to know, for accurate tagging, which git version of the child module has been called. Is this possible without any extra code in the root module? Or does Terraform not store the module version at all, so it has to be passed manually from the root module into the child module?
Appreciate any help
ETA, as I was unclear:
When the child module is called, is there a way for the child module to know which git version tag of itself is being called?
E.g., if root calls child module A from a git repo using the git ref version 1.1.6, is there a way for child module A to know it's version 1.1.6 being called?
This is because child module A then calls child module B and needs to tell child module B which version of itself (child module A) is in use (1.1.6), to create a tag. See the pass-through sketch below.
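A minimal sketch of the manual pass-through, since as far as I know Terraform does not expose the git ref a module was resolved from (repo URLs and variable names below are hypothetical):

# root module: the ref and the variable must be kept in sync by convention
module "child_a" {
  source         = "git::https://dev.azure.com/org/project/_git/module-a?ref=1.1.6"
  module_version = "1.1.6"
}

# inside child module A: receive the version and forward it for tagging
variable "module_version" {
  type = string
}

module "child_b" {
  source         = "git::https://dev.azure.com/org/project/_git/module-b?ref=2.0.0"
  caller_version = var.module_version
}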
"Hi Terraform community! I'm looking for a Terraform lab environment to practice and learn more about infrastructure as code. Could you please share any resources, tutorials, or GitHub repositories that provide a Terraform lab setup? Any help would be greatly appreciated!"
Per Terraform docs, "Provider configurations can be defined only in a root Terraform module." If you violate this and define a provider in a sub-module, you'll probably get what you want at first, but later on you'll run into a variety of issues. One of which is that you can't just remove the module after it's been created. If you try to remove a module call that has provider configurations in it, you'll get an error. The docs say, "you must ensure that all resources that belong to a particular provider configuration are destroyed before you can remove that provider configuration's block from your configuration", but you can't do that if you're, in effect, removing the resource and its provider at the same time. So don't do it. Don't define provider configurations in a module that is intended to be called by another module.
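A minimal sketch of the supported pattern, with hypothetical names: configure and alias providers only in the root, and hand them to modules through the providers meta-argument:

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

module "certs" {
  source = "./modules/certs"
  # the module declares its need via configuration_aliases in
  # required_providers and receives the root's configuration here
  providers = {
    aws = aws.us_east_1
  }
}

With this arrangement, removing the module call later destroys its resources while the provider configuration stays behind in the root, so the teardown ordering problem described above never arises.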
Hello! I'm new to working with AWS and Terraform, and I'm a little lost as to how to tackle this problem. I have a global RDS cluster that I want to access via a Terraform file; however, this resource is not managed by this Terraform setup. I've been looking for a data source equivalent of the aws_rds_global_cluster resource with no luck, so I'm not sure how to go about this, if there's even a good way to. Any help/suggestions appreciated.
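A hedged workaround: if, as it appears, the provider offers no aws_rds_global_cluster data source, the global cluster's regional member clusters can still be read with the aws_rds_cluster data source (the identifier below is hypothetical):

data "aws_rds_cluster" "member" {
  cluster_identifier = "my-regional-cluster"
}

output "member_cluster_arn" {
  value = data.aws_rds_cluster.member.arn
}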
Hi all,
The company I'm working at is getting stricter on the Azure Policy side of things, with the knock-on effect that TF pipelines run fine through the test/verification stages but fail when trying to apply, because that's when a policy clash happens.
We've spoken to our Microsoft team lead, but they don't have any suggestions on how to verify a plan against Azure Policies, so I was wondering how other companies handle this.
Thanks.
Hi All
I need to copy a .ps1 script from my git repo to an Azure VM via Terraform.
Will this code work?
provisioner "file" {
source = "path/to/your/local/file.txt"
destination = "C:\\path\\to\\destination\\file.txt"
}
provisioner "remote-exec" {
inline = [
"echo 'File has been copied!'"
]
connection {
type = "winrm"
user = "adminuser"
password = "Password1234!"
host = self.public_ip_address
port = 5986
https = true
insecure = true
}
Hello, I'm working with loops in Terraform to create multiple resources within a resource group, but I'm stuck at a certain point.
I need to create two resource groups and four key vaults: two key vaults in each resource group. The naming convention for the resource groups and key vaults should follow this pattern:
example-resource-group1 should contain two key vaults:
  kv-example-resource-group1-dev
  kv-example-resource-group1-test
example-resource-group2 should contain two key vaults:
  kv-example-resource-group2-dev
  kv-example-resource-group2-test
I've been able to get as far as creating the resource groups and a single key vault, but now I'm stuck when trying to create both the dev and test key vaults in each resource group.
I also understand that key vault names are limited to 24 characters, so the names I provided above are just examples, but they adhere to the character limit.
Any help on how to modify my Terraform code to achieve this would be greatly appreciated!
module "key_vault" {
 for_each = {
  for rg_name, rg_data in var.resource_groups :
  rg_name => {
   dev  = { name = "${rg_name}-dev" }
   test = { name = "${rg_name}-test" }
  }
 }
 source = "./modules/key_vault"
 name         = each.value.dev.name # or use `test.name` for test Key Vaults
 location       = module.resource_groups[each.key].location
 resource_group_name = module.resource_groups[each.key].name
 sku_name       = "standard"
 tenant_id      = data.azurerm_client_config.current.tenant_id
}
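A hedged sketch of one way to get unstuck: flatten the resource-group × environment combinations first, so for_each yields one module instance per key vault (kv name prefix follows the convention above):

locals {
  key_vaults = merge([
    for rg_name, rg_data in var.resource_groups : {
      for env in ["dev", "test"] :
      "${rg_name}-${env}" => {
        rg_key = rg_name
        name   = "kv-${rg_name}-${env}"
      }
    }
  ]...)
}

module "key_vault" {
  for_each = local.key_vaults
  source   = "./modules/key_vault"

  name                = each.value.name
  location            = module.resource_groups[each.value.rg_key].location
  resource_group_name = module.resource_groups[each.value.rg_key].name
  sku_name            = "standard"
  tenant_id           = data.azurerm_client_config.current.tenant_id
}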
Hi all,
I'm a week into my first DevOps position and was assigned a task to organize and tag our Terraform modules, which have been developed over the past few months. The goal is to version them properly so they can be easily referenced going forward.
Our code is hosted on Bitbucket, and I have the flexibility to decide how to approach this. Right now, I'm considering whether to keep all modules in a single repository or give each module its own repo.
The team lead leans toward a single repository for simplicity, but I've noticed tagging and referencing individual modules might be a bit trickier in that setup (see the sketch below).
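For the single-repo route, a hedged sketch of the usual referencing convention: the double slash selects the module subdirectory and ref pins a tag whose name encodes the module (repo URL and tag names below are hypothetical):

module "network" {
  source = "git::https://bitbucket.org/myorg/terraform-modules.git//modules/network?ref=network-v1.2.0"
}

Per-module tags like network-v1.2.0 and compute-v0.4.1 can then coexist in the one repository.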
I'm curious to hear how others have approached this, and would appreciate any input on repo layout and tagging/versioning strategy.
If you’ve handled something similar, I’d appreciate your perspective.
Thanks!