/r/Terraform


Terraform discussion, resources, and other HashiCorp news.

This subreddit is for Terraform (IaC - Infrastructure as Code) discussions to get help, educate others and share the wealth of news.

Feel free to reach out to mods to make this subreddit better.

Rules:

  • Be nice to each other!

  • MultiLink_Spam == Perm.Ban

/r/Terraform

58,007 Subscribers

3

Terraform Associate BEST Udemy Course?

I have the AWS CCP and SAA certificates. Planning to take the Terraform Associate next. Any Udemy courses or practice exam suggestions that actually helped you pass?

1 Comment
2024/12/01
11:01 UTC

2

How to create an AWS Glue table with a timestamp partition key, using the "month" function?

I want to create an AWS Glue table with two ordered partition keys. The SQL to generate such a table looks like:

CREATE TABLE firehose_iceberg_db.iceberg_partition_ts_hour (
  eventid string,
  id string,
  customername string,
  customerid string,
  apikey string,
  route string,
  responsestatuscode string,
  timestamp timestamp)
PARTITIONED BY (month(`timestamp`),
  customerid)

I am trying to create the table the same way, but with Terraform, using this resource: https://registry.terraform.io/providers/hashicorp/aws/4.2.0/docs/resources/glue_catalog_table

However, I cannot find a way to do the same under the partition_keys block.

Regarding the partition keys, I tried to configure:

  partition_keys {
    name = "timestamp"
    type = "timestamp"
  }

  partition_keys {
    name = "customerId"
    type = "string"
  }

Per the docs of this resource, glue_catalog_table, I cannot find a way to do the same for the timestamp field (month(timestamp)). The second point is that the timestamp partition should be the primary (first) one, and the customerId partition should be the secondary, exactly as configured in the SQL query above. Is it guaranteed that this order is preserved if I keep the same order of partition_keys blocks? As you can see in my TF configuration, timestamp comes before customerId.
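
For reference, a minimal sketch of the ordered partition keys (hedged, based on my reading of the linked aws_glue_catalog_table docs; the month() transform itself does not seem to be expressible through partition_keys, so it is omitted here):

resource "aws_glue_catalog_table" "iceberg_partition_ts_hour" {
  name          = "iceberg_partition_ts_hour"
  database_name = "firehose_iceberg_db"

  # partition_keys blocks form an ordered list, so the block order below
  # (timestamp first, customerId second) is the order Glue stores them in.
  partition_keys {
    name = "timestamp"
    type = "timestamp"
  }

  partition_keys {
    name = "customerId"
    type = "string"
  }

  storage_descriptor {
    columns {
      name = "eventid"
      type = "string"
    }
    # remaining columns omitted for brevity
  }
}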

0 Comments
2024/12/01
09:31 UTC

0

It's behaving differently after upgrade

We've been using Atlantis with GitLab, and it worked really well. But after upgrading GitLab to version 15.11.13 earlier this week, autoplan no longer seems to trigger right after a merge request is submitted. However, when I manually type 'atlantis plan', it runs the plan just fine and the plan output is displayed in the merge request. Interestingly, if I make changes to the merge request, autoplan works as expected. It's really weird, to be honest! I've been back and forth watching Eiffel Tower after every brand new merge request submission for hours without any luck. Has anyone experienced this issue?

2 Comments
2024/12/01
09:23 UTC

1

Terraform plan, apply, destroy - when running them I have to pass the same tfvars file. I use the same file in every project. Is it not possible to set this globally? I use a bash alias at the moment

This is what I use:

alias tfapply="terraform apply -var-file=/home/mypath/terraform/terraform.tfvars --auto-approve"

Although this works for me, I can't use extra flags in the apply command - and I need to have a tfdestroy alias too to pass the var file.

There does not seem to be any global variable for the "var-file" - how are we supposed to do this?
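
One workaround worth trying (a sketch, not from the post): Terraform honours TF_CLI_ARGS_<subcommand> environment variables, which prepend arguments to that subcommand, so the shared var file can be set once in your shell profile and plan, apply, and destroy all pick it up while still accepting extra flags:

# Sketch using the path from the post; put these in ~/.bashrc or similar.
export TF_CLI_ARGS_plan="-var-file=/home/mypath/terraform/terraform.tfvars"
export TF_CLI_ARGS_apply="-var-file=/home/mypath/terraform/terraform.tfvars"
export TF_CLI_ARGS_destroy="-var-file=/home/mypath/terraform/terraform.tfvars"

# Extra flags still work as normal, e.g.:
terraform apply -auto-approve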

33 Comments
2024/11/30
10:06 UTC

1

Proxmox provider problem

Hi, I'm using the Telmate provider (provider = "telmate/proxmox", version = "3.0.1-rc4").
It creates the VM, but only when I set the disk and cloud-init storage as local-lvm. When I change it to my "storage" storage (a bigger disk added to the Proxmox server), I get problems with resizing.

I tried all combinations, like local-lvm + local-lvm, storage + local-lvm, etc., and still got the errors shown in the screenshots below.

When I create a new VM manually and set the disk storage as "STORAGE", everything works fine.

It's only with Terraform that I can't create the disks properly;
again, it works fine if I set the cloud-init and scsi0 disks as local-lvm.

https://preview.redd.it/doerr5fcuw3e1.png?width=1982&format=png&auto=webp&s=aa7415aed5e44dfb8be8fb5d68544a89fb430349

https://preview.redd.it/5ovv3cu7uw3e1.png?width=552&format=png&auto=webp&s=3cc498325f2a791b7b62c0b8436762982656dd3b

0 Comments
2024/11/29
21:42 UTC

4

Terraservices example

Hello, we are evaluating an approach where we build opinionated modules (mainly key-value) and let our customers (internal teams) create their infrastructure through them. E.g., we can couple a few AWS components into one module, and when a team needs this use case, it just references our module with parameters and gets its infrastructure created. I assume this is the "terraservices" pattern. The tricky part is how we define providers with secrets, environments, and use-case-bound providers, and how we design the overall architecture.

Does anyone have any examples or experience?

Thanks in advance

6 Comments
2024/11/29
12:29 UTC

6

Introducing tfkonf. TypeScript library for defining infrastructure configurations! 🚀

tfkonf allows you to generate Terraform configuration files using TypeScript.

As a heavy user of CDKTF, I’ve found its API to feel awkward and overly complex due to its multi-language code generation design. Many of you may already know that CDKTF is no longer well-maintained, and CDK8s is effectively on life support.

With tfkonf, my goal is to create a lightweight spiritual successor to these tools.

At the moment, tfkonf is not quite ready for daily use. Features like native Terraform functions, meta arguments, and others are still under development—but they’re coming soon!

I’m excited to announce this project, gather feedback from the community, and collaboratively build a strong foundation for tfkonf.

I’d love to hear your thoughts and ideas! Whether it’s features you’d like to see, improvements to the API, or general feedback, your input will help shape the future of this project.

https://github.com/konfjs/tfkonf

12 Comments
2024/11/29
11:34 UTC

3

"Can't change variable when applying a saved plan"

I am trying to apply a terragrunt.hcl file. It gives the plan output as normal, but when I type "yes" and hit enter it gives me errors like this for some variables:

│ Error: Can't change variable when applying a saved plan
│
│ The variable private_subnets cannot be set using the -var and -var-file
│ options when applying a saved plan file, because a saved plan includes the
│ variable values that were set when it was created. The saved plan specifies
│ "[\"10.0.11.0/24\"]" as the value whereas during apply the value tuple with
│ 1 element was set by an environment variable. To declare an ephemeral
│ variable which is not saved in the plan file, use ephemeral = true.

I don't use any variable file or pass variables with the -var flag. I also tried using terragrunt plan -out=planfile and then applying it with terragrunt apply planfile, but I got the same error.
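
For reference, the ephemeral = true hint in the error refers to a variable declaration like the sketch below (Terraform/OpenTofu 1.10+). Note this only suits values that never need to be persisted; subnet CIDRs that feed resource arguments generally cannot be ephemeral, so in this case the more likely culprit is that something (e.g. a TF_VAR_private_subnets environment variable) is being set at apply time but not at plan time:

# Hedged sketch of an ephemeral variable; its value is not saved in plan files.
variable "private_subnets" {
  type      = list(string)
  ephemeral = true
}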

2 Comments
2024/11/29
09:40 UTC

4

How can I trigger the redeploy of a cloud run service on GCP when the image changes?

I have a cloud run service deployed on GCP.

In order to deploy it, I first build the Docker image from the Dockerfile, then push the image to the GCP Artifact Registry, and then redeploy the service.

The problem is, when I run terraform apply, it doesn't automatically redeploy the service with the new image, since I guess it cannot track the change of the image in the local docker repository.

What is the best practice to handle this? I guess I can add a new version number to the image every time I build, and pass this as an argument to terraform, but not sure if there is a better way to handle it.
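
One common pattern (a hedged sketch with hypothetical names, not the poster's setup) is to make the image reference depend on a tag or digest that changes on every build, so terraform apply sees a diff and rolls out a new revision:

variable "image_tag" {
  type = string # e.g. the git SHA or build number passed in from CI
}

resource "google_cloud_run_v2_service" "app" {
  name     = "my-service"
  location = "europe-west1"

  template {
    containers {
      # Changing the tag (or using an image digest) changes this attribute,
      # which is what triggers a new Cloud Run revision on apply.
      image = "europe-west1-docker.pkg.dev/my-project/my-repo/my-app:${var.image_tag}"
    }
  }
}

Passing the image digest instead of a mutable tag is the stricter variant of the same idea.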

9 Comments
2024/11/28
22:20 UTC

1

Issue with AWS ACM and an alternative distinct domain

Hello Everyone

I am creating an ACM certificate and Route 53 records using Terraform in AWS. My code works perfectly for a domain, a subdomain, and another distinct domain, but I have a requirement to add multiple distinct domains to a single ACM certificate, each with a different hosted zone. I am able to add one main domain and multiple subdomains of it, and also another distinct subdomain, but I am not able to add multiple distinct alternative domains to it.

Without Terraform, through the AWS Console, it is possible, and I am able to do it there.

When I try to use for_each or distinct, I get many errors saying the syntax is invalid or not supported in Terraform.

Can anyone please help me?

Note: we have only one AWS account. We created separate hosted zones for each distinct domain.
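
A hedged sketch of one way this is commonly wired up (hypothetical domain names and zone map, not the poster's code): put the extra distinct domains in subject_alternative_names and create the DNS validation records with for_each over domain_validation_options, looking up the right hosted zone per domain:

variable "zone_ids" {
  # One entry per domain name on the certificate, including SANs,
  # e.g. { "example.com" = "Z111...", "www.example.com" = "Z111...", "other.io" = "Z222..." }
  type = map(string)
}

resource "aws_acm_certificate" "this" {
  domain_name               = "example.com"
  subject_alternative_names = ["www.example.com", "other.io"]
  validation_method         = "DNS"
}

resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options :
    dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id = var.zone_ids[each.key]
  name    = each.value.name
  type    = each.value.type
  ttl     = 60
  records = [each.value.record]
}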

8 Comments
2024/11/28
10:39 UTC

0

TERRAFORM HELP!!

Hi folks,

I have joined a new company recently, and in my very first week I've been asked to enhance their Terraform scripts and automate a few of the manual tasks being done. I'm not so familiar with Terraform, apart from the basics and being able to understand the code. What would be the best resources to get started? Are there any tools or sites that help with understanding the Terraform flow via the code, which I could use to understand the automation aspect for certain manual tasks?

Ps: manual tasks details can be discussed in comments if anyone is interested. Or please DM me.

Thanks!!

14 Comments
2024/11/28
06:51 UTC

1

Flex Consumption Azure Function App error

Hello,

I am working on creating an Azure Linux Function App using Python as the runtime and the Flex Consumption App Service Plan, implemented through Terraform.

However, I am encountering the following error. Could someone please provide guidance?

Thank you!

Error:

{"Code": "BadRequest", "Message":"Site. Func tionAppConfig is invalid. The FunctionAppConfig section was not specified in the request, which is required for Flex | Consumption sites. To proceed, please add the FunctionAppConfig section in your request.", "Target": null," Details": [{"Message":"Site.FunctionAppConfig is linvalid. The FunctionAppConfig section was not specified in the request, which is required for Flex Consumption sites. To proceed, please add the FunctionAppConfig section in your request.",{"Code": "BadRequest",, {"ErrorEntity": {"ExtendedCode": "51021", "MessageTemplate ":"{O} is invalid. |{1}" "Parameters": ["Site.FunctionAppConfig", "The FunctionAppConfig section was not specified in the request, which is required for Flex Consumption sites. To I proceed, please add the FunctionAppConfig section in your request."],"Code": "BadRequest", "Message". " Site.FunctionAppConfig is invalid. The FunctionAppConfig I section was not specified in the request, which is required for Flex Consumption sites. To proceed, please add the FunctionAppConfig section in your request.")," nererror": nully

1 Comment
2024/11/27
20:48 UTC

0

Wanting to create an AWS S3 static website bucket that redirects all requests to another bucket. What value do I need to set for the `host_name` argument in the `redirect_all_requests_to{}` block?

Hello. I have two S3 buckets created for a static website, and each of them has an aws_s3_bucket_website_configuration resource. As I understand it, if I want to redirect incoming traffic from bucket B to bucket A, then in the website configuration resource of bucket B I need to use the redirect_all_requests_to{} block with the host_name argument, but I do not know what to use for this argument.

What should be used in the host_name argument below? Where should I retrieve the hostname of the first S3 bucket hosting my static website from?

resource "aws_s3_bucket_website_configuration" "b_bucket" {
  bucket = "B"

  redirect_all_requests_to {
    host_name = ???
  }
}
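
A hedged sketch of one possibility (resource names hypothetical): point host_name at the target bucket's S3 website endpoint, which the target's own website configuration resource exports; if the site is actually served via a custom domain or CloudFront, use that hostname instead:

resource "aws_s3_bucket_website_configuration" "a_bucket" {
  bucket = "A"

  index_document {
    suffix = "index.html"
  }
}

resource "aws_s3_bucket_website_configuration" "b_bucket" {
  bucket = "B"

  redirect_all_requests_to {
    # The website endpoint of bucket A, e.g. A.s3-website-<region>.amazonaws.com
    host_name = aws_s3_bucket_website_configuration.a_bucket.website_endpoint
  }
}
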
2 Comments
2024/11/27
18:16 UTC

4

KubeCon OpenTofu Day - Mutually Assured Development

1 Comment
2024/11/27
16:23 UTC

51

Terraform 1.10 is out with Ephemeral Resources and Values

What are your thoughts and how do you foresee this improving your current workflows? Since I work with Vault a lot, this seems to help solve issues with seeding Vault, retrieving and using static credentials, and providing credentials to resources/platforms that might otherwise end up in state.

It also supports providing unique values for each Terraform phase, like plan and apply. Where do you see this improving your environment?
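
As a point of reference, a minimal hedged sketch of an ephemeral input variable feeding a provider configuration (variable name and address are hypothetical); the value is usable during plan and apply but is never written to the plan file or state:

variable "vault_token" {
  type      = string
  sensitive = true
  ephemeral = true
}

provider "vault" {
  address = "https://vault.example.com"
  token   = var.vault_token
}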

37 Comments
2024/11/27
13:32 UTC

2

Best Practices for Infrastructure and Deployment Structure

I am in the process of designing an end-to-end infrastructure and deployment structure for a product and would appreciate your input on the best practices and approaches currently in use.

For this project, I plan to utilize the following tools:

  • Terraform for infrastructure provisioning, anything related to cloud
  • Helm for deploying three microservices (app1, app2, and app3) and managing Kubernetes dependencies (e.g., AWS ALB Controller, Karpenter, Velero, etc.)
  • GitHub Actions for CI/CD pipelines
  • ArgoCD for application deployment

Question 1: Should Kubernetes (K8s) addon dependencies (e.g., ALB Ingress Controller, Karpenter, Velero, etc.) be managed within Terraform or outside of Terraform? Some of these dependencies require role ARNs to be passed as values to the Helm charts for the addons.
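
Regarding question 1, if the addons stay in Terraform, a common shape (hedged sketch, hypothetical module and chart references) is a helm_release whose values receive the role ARN created elsewhere in the same configuration:

resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = module.eks.cluster_name # hypothetical EKS module output
  }

  set {
    # IRSA role ARN created by Terraform, handed to the chart's service account
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.lb_controller_irsa.iam_role_arn # hypothetical IRSA module output
  }
}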

Question 2: If the dependencies are managed outside of Terraform, should the application Helm chart and the addon dependencies be managed together or separately? I aim to implement a GitOps approach for both infrastructure and application, as well as addon updates.

I would appreciate any insights on the best practices for implementing a structure like this; any references would be very helpful.

Thank you.

1 Comment
2024/11/27
12:57 UTC

0

TF associate certification exam

Hello all!
I'm looking to take this exam. Could someone perhaps suggest the most appropriate materials to prepare for it?
Many thanks in advance!

10 Comments
2024/11/27
10:13 UTC

12

With the advent of Terraform Stacks and, in the works, OpenTofu Stacks, is Terragrunt losing relevancy?

There is a WIP for Terragrunt v1.0 which I am interested in; however, if OpenTofu and Terraform Stacks are already working on this approach, would companies begin to migrate off of Terragrunt?

I am happy with Terragrunt and what it has given me. Many people have a hard time with its setup in companies, but I actually like it when it comes to complicated infrastructures that have many regions in the cloud to deploy to and state files broken into units. Nevertheless, the number of `terragrunt.hcl` files is a PITA to manage.

I hate Terraform Workspaces and branching methodology the MOST compared to Terragrunt. Hell, I prefer having directories like so:

terraform-repo/
├── modules/                # Reusable modules
│   ├── network/            # Example module: Network resources
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   └── README.md
│   ├── compute/            # Example module: Compute resources
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   └── README.md
│   └── ...                 # Other reusable modules
├── environments/           # Environment-specific configurations
│   ├── dev/
│   │   ├── main.tf         # Root module for dev
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   ├── backend.tf      # Remote state configuration (specific to dev)
│   │   └── terraform.tfvars
│   ├── qa/
│   │   ├── main.tf         # Root module for QA
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   ├── backend.tf      # Remote state configuration (specific to QA)
│   │   └── terraform.tfvars
│   └── prod/
│       ├── main.tf         # Root module for prod
│       ├── variables.tf
│       ├── outputs.tf
│       ├── backend.tf      # Remote state configuration (specific to prod)
│       └── terraform.tfvars
└── README.md               # Documentation for the repository

Would like to know what you guys think on this.

24 Comments
2024/11/27
03:32 UTC

0

Best practices and resource counts

I have a question about resource counts in Terraform. Our group has a very specific EKS cluster requirement, and to run our app we have a very specific number of components that we need to deploy. To give an example, we deploy two VPCs, one EKS cluster, one EC2 instance, two RDS instances, and 5-6 buckets.

The total number of resources created comes out to around 180 or so, but what would be the best practice in this case, since I'm mostly working with modules?

Should I count the logical resources (that comes out to about 10) or keep in mind the total resources?

Please note that our environment is very specific, meaning that to work it needs a specific set of resources, and we just change things like instance size, count, etc. The total length of the main.tf is a bit less than 200 lines.

This makes the pipelines we use to deploy the infrastructure easy enough without the need for additional scripts to cycle through directories, but I'm wondering what I can do to improve it.

6 Comments
2024/11/26
23:56 UTC

0

Output child module git version?

Hi,

We are moving from Terragrunt to Terraform and have encountered a problem. When calling a child module, the child module needs to know which git version of it is being called.

We always pin our child module versions in separate repos in Azure DevOps, and child modules are called with the git version x.y.z as the source.

Each child module has some code in which it needs to know, for accurate tagging, which git version of the child module has been called. Is it possible to do this without any extra code in the root module? Or does Terraform not store which module version is used at all, so that it has to be passed manually by the root module calling the child module?

Appreciate any help

ETA as I was unclear:

When the child module is called, is there a way for the child module to know which git version tag of itself is being called?

So e.g. if root is calling child module A from a git repo using the git ref version 1.1.6, is there a way for child module A to know that it is version 1.1.6 being called?

This is because child module A then calls child module B, and it needs to tell child module B which version of itself (child module A) is being used (1.1.6) in order to create a tag.
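
As far as I know, Terraform does not expose a module's own source ref to the module at runtime, so the usual workaround (a sketch with a hypothetical repo URL and variable name) is to pass the version in explicitly alongside the pinned ref:

module "child_a" {
  source = "git::https://dev.azure.com/org/project/_git/child-a?ref=1.1.6"

  # Duplicated by hand (or templated by CI) so child A can tag with it and
  # forward it on to child B.
  module_version = "1.1.6"
}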

10 Comments
2024/11/26
15:31 UTC

1

Question: Terraform Lab Environment

"Hi Terraform community! I'm looking for a Terraform lab environment to practice and learn more about infrastructure as code. Could you please share any resources, tutorials, or GitHub repositories that provide a Terraform lab setup? Any help would be greatly appreciated!"

3 Comments
2024/11/26
10:25 UTC

14

Providers configurations in sub-modules are not a good idea

Per Terraform docs, "Provider configurations can be defined only in a root Terraform module." If you violate this and define a provider in a sub-module, you'll probably get what you want at first, but later on you'll run into a variety of issues. One of which is that you can't just remove the module after it's been created. If you try to remove a module call that has provider configurations in it, you'll get an error. The docs say, "you must ensure that all resources that belong to a particular provider configuration are destroyed before you can remove that provider configuration's block from your configuration", but you can't do that if you're, in effect, removing the resource and its provider at the same time. So don't do it. Don't define provider configurations in a module that is intended to be called by another module.
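
A minimal sketch of the recommended alternative (names hypothetical): configure providers only in the root module and hand them to child modules through the providers meta-argument, so a module call can later be removed without orphaning its provider configuration:

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

module "network" {
  source = "./modules/network"

  # The child module declares the provider requirement but never configures it.
  providers = {
    aws = aws.us_east_1
  }
}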

8 Comments
2024/11/25
19:12 UTC

3

RDS Global Cluster Data Source?

Hello! I'm new to working with AWS and Terraform and I'm a little bit lost as to how to tackle this problem. I have a global RDS cluster that I want to access via a Terraform file. However, this resource is not managed by this Terraform setup. I've been looking for a data source equivalent of the aws_rds_global_cluster resource with no luck, so I'm not sure how to go about this – if there's even a good way to go about this. Any help/suggestions appreciated.
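
For what it's worth, a hedged sketch of one workaround (identifier hypothetical): the regional member cluster can still be read with the aws_rds_cluster data source even though the global cluster itself has no data source, which may be enough depending on which attributes you need:

data "aws_rds_cluster" "member" {
  cluster_identifier = "my-regional-cluster"
}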

4 Comments
2024/11/25
18:48 UTC

8

Testing against Azure policies before apply stage

Hi all,

The company I'm working at is starting to get stricter on the Azure Policy side of things, with the knock-on effect being that TF pipelines will run fine through the test/verification stages but fail when trying to apply, as that's when a policy clash happens.

We've spoken to our Microsoft team lead, but they don't have any suggestions on how to verify a plan against Azure Policies, so I was wondering how other companies handle this.

Thanks.

13 Comments
2024/11/25
16:14 UTC

0

copy file to vm

Hi All

I need to copy a .ps1 script from my git repo to an Azure VM via Terraform.
Will this code work?

  provisioner "file" {
    source      = "path/to/your/local/file.txt"
    destination = "C:\\path\\to\\destination\\file.txt"
  }


  provisioner "remote-exec" {
    inline = [
      "echo 'File has been copied!'"
    ]


    connection {
      type     = "winrm"
      user     = "adminuser"
      password = "Password1234!"
      host     = self.public_ip_address
      port     = 5986
      https    = true
      insecure = true
    }
13 Comments
2024/11/25
14:31 UTC

3

Iterating resource creation with loops.

Hello, I'm working with loops in Terraform to create multiple resources within a resource group, but I'm stuck at a certain point.

I need to create two resource groups and four key vaults: two key vaults in each resource group. The naming convention for the resource groups and key vaults should follow this pattern:

  • Resource Group 1: example-resource-group1 should contain two key vaults:
    • kv-example-resource-group1-dev
    • kv-example-resource-group1-test
  • Resource Group 2: example-resource-group2 should contain two key vaults:
    • kv-example-resource-group2-dev
    • kv-example-resource-group2-test

I've been able to get as far as creating the resource groups and a single key vault, but now I'm stuck when trying to create both the dev and test key vaults in each resource group.

I also understand that key vault names are limited to 24 characters, so the names I provided above are just examples, but they adhere to the character limit.

Any help on how to modify my Terraform code to achieve this would be greatly appreciated!

module "key_vault" {
  for_each = {
    for rg_name, rg_data in var.resource_groups :
    rg_name => {
      dev  = { name = "${rg_name}-dev" }
      test = { name = "${rg_name}-test" }
    }
  }

  source = "./modules/key_vault"

  name                = each.value.dev.name # or use `test.name` for test Key Vaults
  location            = module.resource_groups[each.key].location
  resource_group_name = module.resource_groups[each.key].name
  sku_name            = "standard"
  tenant_id           = data.azurerm_client_config.current.tenant_id
}
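
One way to get both vaults per resource group (a hedged sketch against the code above, keeping the same hypothetical module layout) is to flatten the resource group and environment combinations into a single map, so for_each produces one key vault per pair:

locals {
  key_vaults = {
    for pair in setproduct(keys(var.resource_groups), ["dev", "test"]) :
    "${pair[0]}-${pair[1]}" => {
      rg_name = pair[0]
      env     = pair[1]
    }
  }
}

module "key_vault" {
  source   = "./modules/key_vault"
  for_each = local.key_vaults

  # e.g. kv-example-resource-group1-dev (mind the 24-character limit)
  name                = "kv-${each.value.rg_name}-${each.value.env}"
  location            = module.resource_groups[each.value.rg_name].location
  resource_group_name = module.resource_groups[each.value.rg_name].name
  sku_name            = "standard"
  tenant_id           = data.azurerm_client_config.current.tenant_id
}
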
4 Comments
2024/11/24
22:46 UTC

19

Versioning our Terraform Modules

Hi all,

I'm a week into my first DevOps position and was assigned a task to organize and tag our Terraform modules, which have been developed over the past few months. The goal is to version them properly so they can be easily referenced going forward.

Our code is hosted on Bitbucket, and I have the flexibility to decide how to approach this. Right now, I’m considering whether to:

  1. Use a monorepo to store all modules in one place, or
  2. Create a dedicated repo for each module.

The team lead leans toward a single repository for simplicity, but I’ve noticed tagging and referencing individual modules might be a bit trickier in that setup.

I’m curious to hear how others have approached this and would appreciate any input on:

  • Monorepo vs. multiple repos for Terraform modules (especially for teams).
  • Best practices for tagging and versioning modules, particularly on Bitbucket.
  • Anything you’d recommend keeping in mind for maintainability and scalability.

If you’ve handled something similar, I’d appreciate your perspective.

Thanks!

35 Comments
2024/11/24
15:36 UTC
