/r/Terraform


Terraform discussion, resources, and other HashiCorp news.

This subreddit is for Terraform (IaC - Infrastructure as Code) discussions to get help, educate others and share the wealth of news.

Feel free to reach out to mods to make this subreddit better.

Rules:

  • Be nice to each other!

  • MultiLink_Spam == Perm.Ban


56,775 Subscribers

2

Is there an easy way to convert a public module into my local module (instead of copying all of the resource blocks into my local main.tf file)?

I am setting up my own AWS infrastructure and want to leverage the community AWS modules.

Repo: https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/master/main.tf

Instead of copying the code block (of multiple resource blocks and data blocks), is there a better way to automate (or convert) the public modules into our local child modules?

I know we can add a module block with a source and the necessary input parameters (from the public module) in my local module block, but since this repo has multiple resource blocks, I'm not sure how to do that.

module "aws-lambda" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-lambda.git"

  function_name = var.function_name
  role          = var.create_role ? aws_iam_role.lambda[0].arn : var.lambda_role
  ...
}
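One hedged suggestion (not from the post, and the version pin is illustrative): rather than vendoring the resource blocks, the registry module can be consumed directly and configured only through its documented inputs; the internal resources (the IAM role, data blocks, etc.) stay inside the module.

```hcl
module "lambda" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "~> 7.0" # pin to a release instead of tracking master

  function_name = var.function_name
  # The module creates its own IAM role when create_role is true;
  # you never reference its internal aws_iam_role resources directly.
  create_role = true

  # ...remaining inputs (handler, runtime, source_path, ...) elided
}
```

The same idea works with the git:: source from the post, with `?ref=v7.0.0` appended to pin a tag.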
2 Comments
2024/10/31
17:50 UTC

1

Sharing common modules between environments in terragrunt?

TL;DR: I want to have a way to define an infra "blueprint" once, and be able to create clones of it

Hi!

I'm setting up a new project with terragrunt. It seems promising, but I still haven't got my head around how I can take advantage of its benefits.

Previously, with pure Terraform, I used this folder structure (a minimal example):

environments
  prod
    main.tf
  stag
    main.tf
modules
  environments
    all
      aws
        eu-west-1
          main.tf
    prod
      aws
        eu-west-1
          main.tf
    stag
      aws
        eu-west-1
          main.tf
  resources
    ecs
      main.tf

Where I ran plan/apply at `/environments/[env]` level, and in there each matching env module was loaded. For example `prod` loaded as modules:

  • `all/aws/eu-west-1`
  • `prod/aws/eu-west-1`

and `stag` loaded as modules:

  • `all/aws/eu-west-1`
  • `stag/aws/eu-west-1`

Each module is "hoisting" their outputs to their parents, and this is how sibling modules are able to reference each other with their parents.

The main goal of this setup is to eliminate the need for having to replicate code between environments. There are very few resources that are needed in one environment, and not in another. Those are stored in the `all` module, and with a few lines of code I'm able to replicate the prod environment in staging, and when for example I release a new microservice, I have to write it once, put it in `all`, and it will get populated in all environments.

Reading terragrunt's documentation, it seems to have the same DRY mentality, but I haven't yet found how the above would be possible. If I understand correctly, to set up a new app, for example, I'd have to create terragrunt files in each environment.

The only way I can think of so far to achieve what I planned is basically back to square one: create Terraform modules and use them inside terragrunt, but then there's little point in moving away from the existing solution.
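For what it's worth, the usual Terragrunt answer to this is a shared "component" file that every environment includes, so the blueprint is written once. A minimal sketch, assuming Terragrunt 0.32+ (labeled includes) and made-up paths and input names:

```hcl
# _envcommon/ecs.hcl -- the blueprint, defined once for all environments
terraform {
  source = "${dirname(find_in_parent_folders())}/modules/resources/ecs"
}

inputs = {
  cluster_name = "shared-cluster"
}

# environments/prod/ecs/terragrunt.hcl -- a thin per-environment clone
include "root" {
  path = find_in_parent_folders()
}

include "envcommon" {
  path = "${dirname(find_in_parent_folders())}/_envcommon/ecs.hcl"
}

inputs = {
  instance_count = 3 # override only what differs per environment
}
```

Adding a new microservice then means one file in _envcommon plus a few-line terragrunt.hcl per environment, while each component keeps its own state.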

1 Comment
2024/10/31
17:36 UTC

1

terraform -generate-config-out is gone?

Hi, a few months ago I used the "terraform plan -generate-config-out=..." command extensively to generate configuration files for imported resources. Now I can't seem to find any documentation on it anywhere. Has this feature been removed?
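For reference, as far as I know the flag still exists as part of the config-driven import workflow (Terraform 1.5+); it only does something when there are import blocks to generate config for. A sketch (resource and bucket name are made up):

```hcl
# main.tf -- declare which existing resource to adopt
import {
  to = aws_s3_bucket.example
  id = "my-existing-bucket" # bucket name is illustrative
}

# Then run:
#   terraform plan -generate-config-out=generated_resources.tf
# which writes HCL for aws_s3_bucket.example into generated_resources.tf.
```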

3 Comments
2024/10/31
14:46 UTC

1

Make a list of object from map

Hello everyone,

I'm trying to make a list of objects to associate links to a virtual network in Azure. I wrote the following code:

locals {
  flat_pdz_configurations = merge([
    for vnet, v in var.azure_vnet :
    {
      for pdz in v["private_dns_zone_name"] :
      "${v.name}-${v["private_dns_zone_link_name"]}" =>
      {
        vnet_name                  = vnet
        private_dns_zone_link_name = v["private_dns_zone_link_name"]
        private_dns_zone_name      = pdz
      }...
    }
    ]...
  )
}

The output is the following:

flat_pdz_configurations = {
      vnet-git-prd-fc-001-pdnsl-git-prd-fc-001 = [
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "privatelink.azurecr.io"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "francecentral.data.privatelink.azurecr.io"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "privatelink.blob.core.windows.net"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "privatelink.file.core.windows.net"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "privatelink.asse.backup.windowsazure.com"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "privatelink.frc.backup.windowsazure.com"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "privatelink.use.backup.windowsazure.com"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "privatelink.vaultcore.azure.net"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "privatelink.azurewebsites.net"
              + vnet_name                  = "github"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
              + private_dns_zone_name      = "azurecontainerapps.io"
              + vnet_name                  = "github"
            },
        ]
      + vnet-pla-prd-fc-001-pdnsl-pla-prd-fc-001 = [
          + {
              + private_dns_zone_link_name = "pdnsl-pla-prd-fc-001"
              + private_dns_zone_name      = "privatelink.blob.core.windows.net"
              + vnet_name                  = "platform"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-pla-prd-fc-001"
              + private_dns_zone_name      = "privatelink.file.core.windows.net"
              + vnet_name                  = "platform"
            },
          + {
              + private_dns_zone_link_name = "pdnsl-pla-prd-fc-001"
              + private_dns_zone_name      = "privatelink.vaultcore.azure.net"
              + vnet_name                  = "platform"
            },
        ]
    }

I'd like an output that looks like this:

flat_pdz_configurations = [
  {
    private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
    private_dns_zone_name      = "privatelink.azurecr.io"
    vnet_name                  = "github"
  },
  {
    private_dns_zone_link_name = "pdnsl-git-prd-fc-001"
    private_dns_zone_name      = "privatelink.blob.core.windows.net"
    vnet_name                  = "github"
  },
  ...
]

Do you know how to do that? I've been stuck on this loop for a while and cannot figure it out.

Thanks!
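In case it helps anyone landing here: one way to get a flat list instead of a map of lists is flatten() over nested for expressions, rather than merge() with grouping mode. A sketch against the same variable shape as the post:

```hcl
locals {
  flat_pdz_list = flatten([
    for vnet, v in var.azure_vnet : [
      # One object per private DNS zone name, all collected into one flat list.
      for pdz in v["private_dns_zone_name"] : {
        vnet_name                  = vnet
        private_dns_zone_link_name = v["private_dns_zone_link_name"]
        private_dns_zone_name      = pdz
      }
    ]
  ])
}
```

If the list later feeds a for_each, it would still need unique keys, e.g. `{ for o in local.flat_pdz_list : "${o.vnet_name}-${o.private_dns_zone_name}" => o }`.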

2 Comments
2024/10/31
11:00 UTC

7

Best way to manage IAM policies in TF

Hey all,

I’m working on building out our terraform strategy at work and looking for a more efficient way to handle IAM policies.

We have a “deploy” role per account (around 50 of them) that is used to deploy all TF infra. The original idea was to create a base-level policy that allows you to create/attach IAM policies. These are made “manually” (quotes because not in TF) with an init script using the CLI. That way someone can onboard a new AWS account, have permissions to create policies out of the box, and then go ahead and start making resources.

This approach doesn’t seem to scale: we try to deploy resources without the permissions to do so, then try to add and attach the permissions to the role in Terraform, but obviously you need the permissions in place before you can apply, so it’s not very robust.

The option I’m thinking of now is to just have instructions in a readme saying don’t introduce resource and IAM changes in the same PR or it won’t work. This isn’t the best way but I’d like to avoid all policies being made manually.

Does anyone have a good solution to this problem, or suggestions?

5 Comments
2024/10/30
21:47 UTC

2

How to mark an App Runner deployment as completed?

When I create my App Runner resource in TF (a dockerised Spring Boot service producing to a Kafka topic), it seems to create without issue but the deployment never 'completes'. As it's an ongoing service, it's not performing a task and then shutting down so I'm not sure if I'm using the wrong AWS service for what I'm trying to do or if there's something missing from my TF script or something else. Could someone suggest how I might be able to do this please?

resource "aws_apprunner_service" "example" {
  service_name = "aws_app_runner_service"

  source_configuration {
    authentication_configuration {
      access_role_arn = aws_iam_role.apprunner_iam_role.arn
    }

    image_repository {
      image_configuration {
        port = "8000"
        runtime_environment_variables = {
          "CLUSTER_API_KEY"    = var.confluent_key
          "CLUSTER_API_SECRET" = var.confluent_secret
        }
      }
      image_identifier      = "999991588534.dkr.ecr.eu-west-2.amazonaws.com/ecr_docker_repo:7373737373"
      image_repository_type = "ECR"
    }

    auto_deployments_enabled = true
  }

  tags = {
    Name = "my-apprunner-service"
  }
}

The GitHub action that runs the script shows:

aws_apprunner_service.example: Still creating... [20s elapsed]
aws_apprunner_service.example: Still creating... [30s elapsed]
aws_apprunner_service.example: Still creating... [40s elapsed]
aws_apprunner_service.example: Still creating... [50s elapsed]
aws_apprunner_service.example: Still creating... [1m0s elapsed]

On repeat until I cancel.

(Sorry if this formats poorly...).
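One thing worth checking (an assumption, not a diagnosis): App Runner only reports a deployment as complete once its health checks pass, so if the container never answers on the configured port, the Terraform create hangs until it times out. An explicit health check block can make that behavior visible; the path here is hypothetical:

```hcl
resource "aws_apprunner_service" "example" {
  # ...existing source_configuration from the post...

  health_check_configuration {
    protocol            = "HTTP"
    path                = "/actuator/health" # hypothetical; whatever endpoint your app serves
    interval            = 10
    timeout             = 5
    healthy_threshold   = 1
    unhealthy_threshold = 5
  }
}
```

A long-running service is the right fit for App Runner; a task that runs and exits would not be, but a Kafka producer that also listens on port 8000 should be fine as long as that port actually responds.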

3 Comments
2024/10/30
21:23 UTC

10

Why add random strings to resource ids

I've been working on some legacy Terraform projects and noticed random strings appended to certain resource IDs. I understand why you would do that for an S3 bucket or a load balancer, or for modules that are reused in the same environment. But would you add a random string to every resource name and ID? If so, why, and what are the benefits?
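For context, the usual pattern looks like this: a random suffix keeps globally-unique names (S3 buckets, ALBs) from colliding across accounts or deployments, and pairs with create_before_destroy so a replacement can be created before the old resource is gone. A sketch with illustrative names:

```hcl
resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "logs" {
  # Bucket names are globally unique across all AWS accounts,
  # so a random suffix avoids collisions between deployments.
  bucket = "app-logs-${random_id.suffix.hex}"

  lifecycle {
    # Replacement bucket can exist alongside the old one because
    # the names never clash.
    create_before_destroy = true
  }
}
```

For resources whose names only need to be unique within an account or region, the suffix buys little and costs readability, which may be why it looks odd on every resource in a legacy project.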

16 Comments
2024/10/30
13:23 UTC

6

TF Associate exam is tomorrow, need last minute suggestions

Tomorrow is my TF Associate exam and I am a bit panicked. I am preparing for an IJP and I don't have prior working experience with TF; I learned from Udemy and then solved practice questions. I am confident, but not that confident..

I still have 24+ hours left and am revising everything I learned. Kindly give me some suggestions before the exam.

11 Comments
2024/10/30
03:55 UTC

5

Nested For Loops for Sequential Process

I'm working on a dynamic module to create resources that may have different numbers of disks.

The disks need to be created and then attached.

I have two variables I'm using to step through these, because I need to sequentially create and name the disks.

locals {
  vm_range   = range(1, var.vm_count)
  disk_range = range(1, var.disk_count - 1)
}

I've attempted both of the following structures. In the first, Terraform complains that there needs to be a name for the resource. In the second, Terraform doesn't allow me to have a for loop in the location.

resource "disk" "secondary" {
  for dsk in disk_range :
  {
    for vm in vm_range :
    {
      # Disk number is dsk+1 because the boot disk is created when the VM is created.
      name = "<sanitized>-${format("%02d", vm)}-${format("%02d", (dsk + 1))}"
      # Disk creation variables
    }
  }
}

for dsk in disk_range :
{
  for vm in vm_range :
  {
    resource "disk" "secondary" {
      # Disk number is dsk+1 because the boot disk is created when the VM is created.
      name = "<sanitized>-${format("%02d", vm)}-${format("%02d", (dsk + 1))}"
      # Disk creation variables
    }
  }
}

Because this question is about the logic and structure, I removed the actual provider details and other things that I don't believe are relevant.

My question is whether or not there's a better way to do this. I'm relatively new to using loops in Terraform. I've seen instances where there's the use of foreach and each.key, but I'm not sure how to do that in this instance because each disk has a corresponding set of values for size and other details.

Edit: I'm going to try using for_each with a map of objects. I was struggling to find a solution, but searching for that put me where I needed to be.
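The for_each approach the edit mentions can be sketched like this: setproduct() builds every VM/disk pair, and a map keyed by name gives each disk a stable address plus room for per-disk attributes. The resource type and attribute names are placeholders, since the provider was sanitized out of the post:

```hcl
locals {
  # One entry per (vm, disk) pair; the keys become stable resource addresses.
  secondary_disks = {
    for pair in setproduct(range(1, var.vm_count + 1), range(2, var.disk_count + 1)) :
    format("%02d-%02d", pair[0], pair[1]) => {
      vm_index   = pair[0]
      disk_index = pair[1] # starts at 2: disk 1 is the boot disk
      size_gb    = var.disk_size_gb
    }
  }
}

resource "example_disk" "secondary" { # placeholder resource type
  for_each = local.secondary_disks

  name = "<sanitized>-${each.key}"
  size = each.value.size_gb
}
```

Unlike count, removing one entry from the map only destroys that disk instead of renumbering the rest.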

9 Comments
2024/10/29
20:19 UTC

61

Plan and Apply with PR Automation via GitHub Actions

Thought I'd finally make an original post on Reddit, since GitHub tells me that's where most people come from. DevSecTop/TF-via-PR tackles 3 key problems. (TL;DR with working code examples at the end.)

1. Summarize plan changes with diff

It's handy to sanity-check the plan output within a PR comment, but reviewing 100s or 1000s of lines isn't feasible. On the other hand, the standard 1-line summary leaves a lot to be desired.

So why not visualize the summary of changes the same way Git does—with diff syntax highlighting (as well as including the full-phat plan output immediately below, and a link to the workflow log if it exceeds the character limit truncation).

PR comment of the plan output with "Diff of changes" section expanded.

2. Reuse plan file with encryption

Generating a plan is one thing, reusing that plan file during apply is another. We've all seen the risks of using apply -auto-approve, which doesn't account for configuration drift outside the workflow.

Even if we upload it, we still need to fetch the correct plan file for each PR branch, including on push trigger. Plus, we need to encrypt the plan file to prevent exposing any sensitive data. Let's go ahead and check off both of those, too.

Matrix-friendly workflow job summary with encrypted plan file artifact attachment.

3. Apply before or after PR merge

When we're ready to apply changes, the same GitHub Action can handle all CLI arguments—including workspace, var-file, and backend-config—to fit your needs. Plus, the apply output is added to the existing PR comment, making it easy to track changes with revision history, even for multiple parallel runs.

Revision history of the PR comment, comparing plan and apply outputs in collapsible sections.

TL;DR

The DevSecTop/TF-via-PR GitHub Action has streamlined our Terraform provisioning pipeline by outlining change diffs and reusing the plan file during apply—all while supporting the full range of CLI arguments.

This could be just what you need if you're a DevOps or Platforms engineer looking to secure your self-service workflow without the overhead of dedicated VMs or Docker.

If you have any thoughts or questions, I'll do my best to point you in the right direction with workflow examples. :)

on:
  pull_request:
  push:
    branches: [main]

jobs:
  provision:
    runs-on: ubuntu-latest

    permissions:
      actions: read        # Required to identify workflow run.
      checks: write        # Required to add status summary.
      contents: read       # Required to checkout repository.
      pull-requests: write # Required to add comment and label.

    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - uses: devsectop/tf-via-pr@v12
        with:
          # For example: plan by default, or apply with lock on merge.
          command: ${{ github.event_name == 'push' && 'apply' || 'plan' }}
          arg-lock: ${{ github.event_name == 'push' }}
          arg-var-file: env/dev.tfvars
          arg-workspace: dev-use1
          working-directory: path/to/directory
          plan-encrypt: ${{ secrets.PASSPHRASE }}
31 Comments
2024/10/29
19:44 UTC

0

Azure - Unable to create azurerm_mysql_flexible_server using private dns zone in different sub

I am trying to use this code:

resource "azurerm_mysql_flexible_server" "shared" {
  name                   = var.mysql_server_name
  resource_group_name    = azurerm_resource_group.shared.name
  location               = azurerm_resource_group.shared.location
  administrator_login    = var.mysql_server_admin_username
  administrator_password = random_password.sql_server.result
  backup_retention_days  = 7
  delegated_subnet_id    = azurerm_subnet.database.id
  private_dns_zone_id    = var.ops_private_dns_zone_id
  sku_name               = var.mysql_sku
  depends_on             = [azurerm_private_dns_zone_virtual_network_link.ops_link]
  tags                   = var.tags
}

To create a MySQL flexible server: we have a private DNS zone in a hub vnet, and from there we peer out to many spokes, this being one of them. All the other services (things like AKS clusters) have no issue connecting to the private DNS zone, but no matter what name I choose, I get this error every time:

Status: "InvalidPrivateDnsZoneName"
│ Code: ""
│ Message: "The Private DNS Zone name provided is invalid. It must end with 'mysql.database.azure.com', and shouldn't contain underscore. Currently we do not support anything.mysql.database.azure.com as the private dns zone name either."
│ Activity Id: ""
│ 
│ ---
│ 
│ API Response:
│ 
│ ----[start]----
│ {"name":"c7ec66bb-2d21-47d9-a43b-69528025a220","status":"Failed","startTime":"2024-10-29T16:22:40.423Z","error":{"code":"InvalidPrivateDnsZoneName","message":"The Private DNS Zone name provided is invalid. It must end with 'mysql.database.azure.com', and shouldn't contain underscore. Currently we do not support anything.mysql.database.azure.com as the private dns zone name either."}}
│ -----[end]-----

I have tried names with hyphens and without, short ones like "shrt", and long ones like "reallylongIncaseSomethingIsClashing", and no matter what, I get that same error every time. The value I am providing for the private DNS zone ID is the exact value I use in other places, and they all work fine.

Any insight into this would be amazing. You have to provide a private DNS zone ID if you delegate to a subnet, and as this all has to be fairly secure, I need it on that subnet so that I can access it via a jumpbox.

It's the "we do not support anything.mysql.database.azure.com as the private dns zone name either" part that is really the kicker: the name clearly ends with mysql.database.azure.com, there's no underscore, and there's just no more information.
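Not a diagnosis, but for comparison, the zone name the flexible-server API is commonly known to accept is the standard privatelink zone. A hedged sketch of the hub-side zone plus the vnet link the post's depends_on references (resource group and network names are illustrative):

```hcl
resource "azurerm_private_dns_zone" "mysql" {
  # privatelink.mysql.database.azure.com is the conventional zone name
  # for MySQL flexible server private access.
  name                = "privatelink.mysql.database.azure.com"
  resource_group_name = azurerm_resource_group.hub.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "ops_link" {
  name                  = "mysql-hub-link"
  resource_group_name   = azurerm_resource_group.hub.name
  private_dns_zone_name = azurerm_private_dns_zone.mysql.name
  virtual_network_id    = azurerm_virtual_network.spoke.id
}
```

If var.ops_private_dns_zone_id points at a zone with any other name, the API rejects it with exactly the InvalidPrivateDnsZoneName error shown above.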

10 Comments
2024/10/29
16:35 UTC

2

Assistance needed with Autoscaler and Helm chart for Kubernetes cluster (AWS)

Hello everyone,

I've recently inherited the maintenance of an AWS Kubernetes cluster that was initially created using Terraform. This change occurred because a colleague left the company, and I'm facing some challenges as my knowledge of Terraform, Helm, and AWS is quite limited (just the basics).

The Kubernetes cluster was set up with version 1.15, and we are currently on version 1.29. When I attempt to run terraform apply, I encounter an error related to the "autoscaler," which was created using a Helm chart with the following code:

resource "helm_release" "autoscaler" {
  name       = "autoscaler"
  repository = "https://charts.helm.sh/stable"
  chart      = "cluster-autoscaler"
  namespace  = "kube-system"

  set {
    name  = "autoDiscovery.clusterName"
    value = var.name
  }

  set {
    name  = "awsRegion"
    value = var.region
  }

  values = [
    file("autoscaler.yaml")
  ]

  depends_on = [
    null_resource.connect-eks
  ]
}

The error message I receive is as follows:

Error: "autoscaler" has no deployed releases

  with helm_release.autoscaler,
  on helm-charts.tf line 1, in resource "helm_release" "autoscaler":

My plan for the autoscaler looks like this:

Terraform will perform the following actions:

  # helm_release.autoscaler will be updated in-place
  ~ resource "helm_release" "autoscaler" {
        id         = "autoscaler"
        name       = "autoscaler"
      ~ repository = "https://kubernetes.github.io/autoscaler" -> "https://charts.helm.sh/stable"
      ~ status     = "uninstalling" -> "deployed"
      ~ values     = [......

I would appreciate any guidance on how to resolve this issue or any best practices for managing the autoscaler in this environment. Thank you in advance for your help!

1 Comment
2024/10/29
13:48 UTC

1

Trigger AWS Step Functions after deploy

I'm currently stuck and looking for a clean solution - I'm trying to trigger a Step Function that builds and runs an image, e.g. multiple qa-environments.

I have a solution working with a null_resource and local-exec, but that only works when deploying tf locally. When using GitHub Actions, I use OIDC, so I can call for the current user but not the current credentials.

What would be the best approach?

4 Comments
2024/10/29
13:25 UTC

2

AADDS and setting the DNS servers on the VNET

So I've deployed AADDS with Terraform, nice.

I'm now wondering how I can automatically grab the info from Azure regarding the IP addresses of the DNS servers that are created. I can then push this to the VNET config to update the DNS servers there.

https://preview.redd.it/puiem3jyqoxd1.png?width=600&format=png&auto=webp&s=a5021989d246603b8c019de4a9bd78ec4e7bab21

1 Comment
2024/10/29
12:03 UTC

1

Transit Gateway Peering Attachments

Hi, first time poster here - couldn't find anything in the searchbar for this other than 3rd party TF modules which I'd like to avoid using...

We have 6 regions in a Transit Gateway mesh, not all regions need to talk to one another so it's not a full mesh and this configuration pre-dates our Terraform configuration, so I'm looking to create the configuration and import the resources. All resources are in the same account, just using individual tfvars files per region.

Does anyone know of a way using the aws_ec2_transit_gateway_peering_attachment resource to set up peering attachments conditionally on a bool if we want that region? E.g. London needs to talk to Canada and Ohio but Ohio might not need to talk to Canada.

I'd like to be able to reference the attachment ID for the routes at a later date, but stuck on the conditional meshing without causing conflicts.

Grateful for any help!
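One possible shape for this (a sketch, with an assumed variable layout, not a known-good module): drive the native aws_ec2_transit_gateway_peering_attachment resource from a map of edges and filter on a bool, so each region's tfvars only declares the peerings it wants.

```hcl
variable "tgw_peerings" {
  # Illustrative shape: one entry per desired edge in the partial mesh.
  type = map(object({
    enabled     = bool
    peer_region = string
    peer_tgw_id = string
  }))
}

resource "aws_ec2_transit_gateway_peering_attachment" "this" {
  # Only create attachments whose edge is switched on.
  for_each = { for name, p in var.tgw_peerings : name => p if p.enabled }

  transit_gateway_id      = aws_ec2_transit_gateway.this.id
  peer_region             = each.value.peer_region
  peer_transit_gateway_id = each.value.peer_tgw_id

  tags = { Name = each.key }
}

# Routes can later reference an attachment by its map key, e.g.
#   aws_ec2_transit_gateway_peering_attachment.this["london-to-canada"].id
```

Declaring each edge only on the requester side (London declares london-to-canada; Canada only accepts) avoids the duplicate-attachment conflicts of a naive full mesh.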

2 Comments
2024/10/29
11:43 UTC

3

Terraform plan -generate-config-out is creating a TF state backup and not a config file?

I'm using terraform plan -generate-config-out=generated_resources.tf to create config for existing resources which I don't have in Terraform. I'm following the steps in the link below, but instead of the config appearing in a newly created generated_resources.tf, that file isn't being created at all; a terraform.tfstate.1234567800.backup file is being created instead. I've tried multiple times, and each run generates a new tfstate backup file rather than the resource file. Why is this happening?

https://developer.hashicorp.com/terraform/language/import/generating-configuration

9 Comments
2024/10/29
02:58 UTC

2

AWS provider throws warning when role_arn is dynamic

Hi, Terraform noob here, so bear with me.

I have a TF workflow that creates a new AWS org account, attaches it to the org, then creates resources within that account. The way I do this is to use assume_role with the generated account ID from the new org account. However, I'm getting a warning of Missing required argument. It runs fine and does what I want, so the code must be running properly:

main.tf

provider "aws" {
  profile = "admin"
}

# Generates org account
module "org_account" {
  source            = "../../../modules/services/org-accounts"
  close_on_deletion = true
  org_email         = "..."
  org_name          = "..."
}

# Warning is generated here:
# Warning: Missing required argument
# The argument "role_arn" is required, but no definition was found. This will be an error in a future release.
provider "aws" {
  alias   = "assume"
  profile = "admin"
  assume_role {
    role_arn = "arn:aws:iam::${module.org_account.aws_account_id}:role/OrganizationAccountAccessRole"
  }
}

# Generates Cognito user pool within the new account
module "cognito" {
  source    = "../../../modules/services/cognito"
  providers = {
    aws = aws.assume
  }
}
19 Comments
2024/10/28
17:17 UTC

1

Does Terraform Support Azure V2 Dashboards yet?

So I am just about to start a new project where I create a fairly complex dashboard for one of our services, and I noticed Azure has a preview of the Azure Shared Dashboards V2 available. Not quite sure how long it has been around, since I don't often create dashboards.

But has anyone used Terraform to generate these? Is it even compatible yet?

I don't want to waste time developing the dashboard in our dev tenant just to have to re-create the thing again in our prod tenant manually.

Thanks.

Edit: Thanks for all your responses. Seems this new dashboard is a no-go. It’s very restricted in terms of tiles you can add, and it’s also not possible to pin Workbooks or Workbook elements to the V2 dashboards. I assume this is something Azure will add in the future. But yeah, for now my quest to investigate a TF solution for this is over.

7 Comments
2024/10/28
12:06 UTC

1

Azure terraform issue with dependencies for IP Groups and Firewall

Hi All,

I have some terraform code that creates a set of IP groups and then uses those IP groups within a firewall rule collection group.

However, I'm struggling with Terraform's built-in dependency handling for this, and I'm not sure if it's a bug.

I am trying to remove some redundant IP groups which are no longer needed, so I removed the groups from my code. Terraform correctly detects that it needs to destroy the 2 IP groups and also amend the firewall rule collection group. However, when I run the apply, it seems to attempt the destroy first, instead of the change followed by the removal.

I don't have any manual dependencies within my code.

Has anybody come across something similar?

│ Error: deleting IP Group (Subscription: ""
│ Resource Group Name: "rg_group_name"
│ Ip Group Name: "IPG-NAME"): performing Delete: unexpected status 400 (400 Bad Request) with error: IpGroupsHasFirewallPolicyReferences: IpGroups '/subscriptions/sub_name/resourcegroups/rg_group_name/providers/Microsoft.Network/ipGroups/IPG-NAME' cannot be deleted since there are firewall policies using this resource.

1 Comment
2024/10/28
11:33 UTC

6

Can't install terraform in lebanon

I tried checking the sanctions or whatever reasons might be leading them to block Terraform in Lebanon, but can't find any. Any idea about this?

update: why is this getting downvoted? I am not stupid; I didn't post any logs or troubleshooting because the error is clear. When opening the registry I get:

This content is not currently available in your region. Please see trade controls.

Anyways. I contacted them through support to get more information. Thank you for the help :)

26 Comments
2024/10/27
21:34 UTC

0

Multi-Cloud Secure Federation: One-Click Terraform Templates for Cross-Cloud Connectivity

Tired of managing Non-Human Identities (NHIs) like access keys, client IDs/secrets, and service account keys for cross-cloud connectivity? This project eliminates the need for them, making your multi-cloud environment more secure and easier to manage.

With these end-to-end Terraform templates, you can set up secure, cross-cloud connections seamlessly between:

  • AWS ↔ Azure
  • AWS ↔ GCP
  • Azure ↔ GCP

The project also includes demo videos showing how the setup is done end-to-end with just one click.

Check it out on GitHub: https://github.com/clutchsecurity/federator

Please give it a star and share if you like it!

5 Comments
2024/10/27
06:59 UTC

3

Error when creating terraform state in localstack with Terragrunt

Hi

I am trying to create the terraform state bucket with terragrunt configured as follows:

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config  = {
    bucket  = "terraform-state-tl-tests"
    key     = "terraform-state-candidate/terraform.tfstate"
    region  = "eu-west-1"
    profile = "test"

    encrypt = false

    #localstackconfig
    force_path_style            = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    endpoint = "http://localhost:4566"
  }
}

When I run terraform apply I get the error

20:11:42.097 INFO   Downloading Terraform configurations from .. into ./.terragrunt-cache/G0AuEXBeOW0SUpZYtUNDJo1-lTM/D0Yyz5YOHZXT3XIsAvGb91gdugk
Remote state S3 bucket terraform-state-tl-tests does not exist or you don't have permissions to access it. Would you like Terragrunt to create it? (y/n) y
20:11:46.961 ERROR  error getting AWS account ID  for bucket terraform-state-tl-tests: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 5552c4f9-3b6d-4888-b0ec-4a4b9b510ac7
20:11:46.961 ERROR  Unable to determine underlying exit code, so Terragrunt will exit with error code 1

even though the profile is right.
In fact, I can create the bucket manually with the same profile or even create the infra if I do not use the S3 backend to keep the terraform state.

The error is driving me crazy. I have searched the internet and found nothing, and I am not sure if I am missing something or if it is a bug in Terragrunt, Terraform, LocalStack, or my Mac.

Any clue is welcome

Thanks in advance

9 Comments
2024/10/26
18:15 UTC

1

Specify variable in a module if it's a list(string)?

Hello! I'm calling a sub-module and passing variables. I've defined empty variables in the sub-module and then set my variable values in main. This works fine when the variable is a string, but when the variable is a list(string) I have to specify the variable when calling the module. Is this expected?

Here's what I mean if the above wasn't clear.

module/vars.tf

variable "internal_subnets" {
  type = list(string)
  default = [""]
}

main/vars.tf

variable "internal_subnets" {
  type = list(string)
  default = [
    "subnet-12344566778788888",
    "subnet-99876543323455566"
  ]
}

main/ec2.tf

module "test" {
  source           = "../mod/windows"
  internal_subnets = var.internal_subnets

If I don't specify internal_subnets = var.internal_subnets then it returns an empty list. Terraform v1.7.5. Thanks!

3 Comments
2024/10/25
15:17 UTC

1

Maximum Size for Local Variable Maps

Hi,

Does someone know what the maximum length of Terraform local maps is? I load my map from a YAML file. Is there a maximum length for parsing YAML files?

Thanks a lot

2 Comments
2024/10/25
09:05 UTC

2

how to create a pod with 2 images / containers?

hi - anyone have an example or tip on how to create a pod with two containers / images?

I have the following, but seem to be getting an error about "containers = [" being an unexpected element.

here is what I'm working with

resource "kubernetes_pod" "utility-pod" {
  metadata {
    name      = "utility-pod"
    namespace = "monitoring"
  }
  spec {
    containers = [
      {
        name  = "redis-container"
        image = "uri/to-my-redis-image/version"
        ports = {
          container_port = 6379
        }
      },
      {
        name  = "alpine-container"
        image = "....uri to alpine.../alpine"
      }
    ]
  }
}

some notes:

terraform providers shows:

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] ~> 5.31.0
├── provider[registry.terraform.io/hashicorp/helm] ~> 2.12.1
├── provider[registry.terraform.io/hashicorp/kubernetes] ~> 2.26.0
└── provider[registry.terraform.io/hashicorp/null] ~> 3.2.2

(i just tried 2.33.0 for kubernetes with an upgrade of the providers)

the error that i get is

│ Error: Unsupported argument
│
│   on utility.tf line 9, in resource "kubernetes_pod" "utility-pod":
│    9:     containers = [
│
│ An argument named "containers" is not expected here.
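For anyone hitting the same error: the kubernetes provider models containers as repeated container blocks inside spec, not as a containers list attribute, and ports likewise become port blocks. A sketch of the equivalent spec (the image URIs are placeholders, not the poster's actual images):

```hcl
resource "kubernetes_pod" "utility-pod" {
  metadata {
    name      = "utility-pod"
    namespace = "monitoring"
  }
  spec {
    # One container block per container, instead of containers = [ ... ]
    container {
      name  = "redis-container"
      image = "redis:7"   # placeholder image
      port {
        container_port = 6379
      }
    }
    container {
      name  = "alpine-container"
      image = "alpine:latest"   # placeholder image
    }
  }
}
```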
3 Comments
2024/10/24
17:28 UTC

12

Storing AWS Credentials?

Hi all,

I'm starting to look at migrating our AWS infra management to Terraform. Can I ask what you all use to manage AWS access and secret keys, as I naturally don't want to store them in my tf files.

Many thanks
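One common pattern is to keep the provider block free of credentials entirely and let the AWS SDK pick them up from the standard credential chain; a sketch (region and profile name are assumptions):

```hcl
# No access/secret keys in code: the provider falls back to the standard
# credential chain (env vars, shared credentials file, SSO, or an IAM role).
provider "aws" {
  region = "eu-west-1"   # assumed region
}

# Then, outside of Terraform:
#   export AWS_PROFILE=my-profile        # or AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
```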

27 Comments
2024/10/24
15:24 UTC

6

Indecisive with Terragrunt/Terraform + CICD

Just started on Terraform and CICD this year, but for my first project I took a straight dive by using Terragrunt.

Being indecisive, I did quite a lot of reading but haven't reached the point of convincing myself it is the "right method". I'm managing another project in vanilla Terraform now but can't decide which one to stick to.

There are a lot of posts out there telling us how to structure the project folder. What is baffling me is the CICD stage.

TLDR, am interested in how you are managing your Terraform structure and how your CICD works assuming with the following constraints:

  1. Just need to manage between max of 3 different environments, prd, stg, dev. No region is needed.
  2. Would prefer to be able to split them into many different states, to reduce the blast radius
  3. Without the need of a very complicated CI steps (Read below where I have the python scripts to generate what needs to be deployed using Terragrunt)
  4. As DRY as possible

For example, I am having the current structure in my Gitlab.

.
├── documentations
├── pipeline      
└── infrastructure
    ├── modules
    │   ├── vpc
    │   └── subnet
    ├── common
    |   └── prefixes.hcl # Used by both environments, e.g. prefixes, project name etc 
    ├── prd
    │   ├── vpc
    │   ├── subnet
    │   ├── sgrp
    │   ├── ...
    │   └── env.hcl
    └── stg
        ├── vpc
        ├── subnet
        ├── sgrp
        ├── ...
        └── env.hcl

terragrunt run-all plan wouldn't work well as there are dependencies - I understand this limitation

So, I have a stage which includes using some python script to generate child pipeline, high level overview it does the following:

# A map to store what needs to be run
directory_to_include = {
  prd = []
  stg = []
}
  • Get the repo diff between the working branch and the main branch
    • Filter out which modules have changes
      • So, if modules/vpc/ has changes, it will add "./infrastructure/prd/vpc" and "./infrastructure/stg/vpc" to prd and stg in the map respectively.
    • Filter out which environment has changes
      • For example, if subnet has changes in prd, it will just append "./infrastructure/prd/subnet" to prd
  • Run terragrunt graph-dependencies; from here I add whatever was dependent on the changes.
  • Once the 'directory_to_include' mapping is built, I do some cleanup: keep only the path up to the folder level and remove duplicates.
  • Then I further group the list in each {env} based on terragrunt output-module-groups
  • Based on the changes, I trigger a child pipeline
  • So, the pipeline for each environment will look something like this:
    • [Generate Child Pipeline] → [Plan-Group-1] → [Apply-Group-1] → [Plan-Group-2] → [Apply-Group-2]
    • Each of the plan and apply steps will look something like this:
    • terragrunt run-all <action> --terragrunt-strict-include <all paths of the groups>
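The change-to-directory mapping described above can be sketched roughly in Python; the paths and environment names below are illustrative, not the actual repository layout or script:

```python
# Rough sketch: map changed files to the Terragrunt directories that need a plan/apply.
# Folder layout and environment names are assumptions, not the author's real ones.

ENVS = ["prd", "stg"]

def directories_to_include(changed_files):
    """Return {env: [terragrunt dirs]} for a list of changed file paths."""
    include = {env: [] for env in ENVS}
    for path in changed_files:
        parts = path.split("/")
        if parts[:2] == ["infrastructure", "modules"] and len(parts) > 2:
            # A shared module changed: include it for every environment.
            module = parts[2]
            for env in ENVS:
                include[env].append(f"./infrastructure/{env}/{module}")
        elif parts[0] == "infrastructure" and parts[1] in ENVS and len(parts) > 2:
            # An environment-specific change: include only that environment.
            include[parts[1]].append(f"./infrastructure/{parts[1]}/{parts[2]}")
    # Deduplicate while keeping first-seen order.
    return {env: list(dict.fromkeys(dirs)) for env, dirs in include.items()}
```

The real pipeline would then feed each group of paths to terragrunt run-all with --terragrunt-strict-include, after walking graph-dependencies.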

As of now this kind of works, but I feel it is quite jarring/messy, and I am not sure if anything will get missed down the road; it is one more thing to manage.

A few months later, I was managing another team's IaC that was handed down to me. It is pure vanilla Terraform, with all of the .tf files contained in one folder. Making changes to it can take quite a while to run even though I am just making one change, as it is a monolithic structure.

In terms of CICD, this is less complex, which is nice. But what I miss from Terragrunt is:

  1. Different state files for each "category of resource"
  2. When applying, only the relevant state needs to be fetched → speeds up deployment

So, I looked into it and found I can break them up and achieve the same as in Terragrunt by using a data block to fetch outputs from another remote state.

I have also looked into Terraform workspaces, which I later found out HashiCorp doesn't recommend for production either. The cons are that all of the environments can only use one version of the provider and the same backend.

Thus, I am here to see how others are implementing this and whether I am missing anything.
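For reference, the cross-state lookup this kind of split relies on is the terraform_remote_state data source; a minimal sketch assuming an S3 backend (bucket, key, and output names are invented):

```hcl
# Read outputs from the VPC state so the subnet configuration can reference them
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-tf-states"              # assumed bucket name
    key    = "prd/vpc/terraform.tfstate" # assumed state key
    region = "eu-west-1"
  }
}

# Example use: vpc_id must be declared as an output in the VPC state
# vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
```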

3 Comments
2024/10/24
13:15 UTC

1

Issue with Lambda Authorizer in API Gateway (Terraform)

I'm facing an issue with a Lambda authorizer function in API Gateway that I deployed using Terraform. After deploying the resources, I get an internal server error when trying to use the API.

Here’s what I’ve done so far:

  1. I deployed the API Gateway, Lambda function, and Lambda authorizer using Terraform.
  2. After deployment, I tested the API and got an internal server error (500).
  3. I went into the AWS Console → API Gateway → [My API] → Authorizers, and when I manually edited the "Authorizer Caching" setting (just toggling it), everything started working fine.

Has anyone encountered this issue before? I’m not sure why I need to manually edit the authorizer caching setting for it to work. Any help or advice would be appreciated!
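One thing worth checking in the Terraform config: the aws_api_gateway_authorizer resource exposes authorizer_result_ttl_in_seconds to control caching explicitly, and changes to an authorizer often only take effect after a fresh aws_api_gateway_deployment. A sketch, with names assumed rather than taken from the poster's actual config:

```hcl
resource "aws_api_gateway_authorizer" "this" {
  name           = "lambda-authorizer"                      # assumed name
  rest_api_id    = aws_api_gateway_rest_api.this.id         # assumed resource
  authorizer_uri = aws_lambda_function.authorizer.invoke_arn
  type           = "TOKEN"

  # Set caching explicitly instead of toggling it in the console
  authorizer_result_ttl_in_seconds = 300
}
```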

https://preview.redd.it/850x5q6s9owd1.png?width=1545&format=png&auto=webp&s=cb4b9613fd7d3af2029fc61b5baa84833629d6de

https://preview.redd.it/ym054sjqaowd1.png?width=1549&format=png&auto=webp&s=e34a19ff1916c0e02edf1101781ec3f1e8208b92

8 Comments
2024/10/24
09:31 UTC
