/r/aws

News, articles and tools covering Amazon Web Services (AWS), including S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, AWS CDK, Route 53, CloudFront, Lambda, VPC, CloudWatch, Glacier and more.

Note: be sure to redact or obfuscate all confidential or identifying information (e.g. public IP addresses or hostnames, account numbers, email addresses) before posting!

If you're posting a technical query, please include the following details, so that we can help you more efficiently:

  • an outline of your environment
  • a description of the problem
  • things you've tried already
  • output that was displayed (if any)

6

In what scenario does using Java in Lambda make sense?

Just a curious question.

We have been using Node.js and Java (both Quarkus and Spring Boot), and we noticed Java has the cold start problem even with SnapStart.

It's not that it's causing us latency problems, but compared with Node it definitely has a delay in booting up.

As such, we have decided to build our Lambdas on pure JavaScript runtimes.

Does it make sense to use Java in a serverless context? For containers and servers I absolutely understand it.

Thoughts?

6 Comments
2024/03/28
05:45 UTC

1

AWS + PagerDuty - how to view custom data in PD

We have the traditional AWS + PagerDuty setup where CloudWatch alarms publish to an SNS topic, and PagerDuty is subscribed to that SNS topic. When an alarm triggers, we get notified. My problem is that the custom details in PagerDuty only contain alarm information, and I want to add more there. For example, I also want to show the relevant logs in PagerDuty. This would be done by firing off a Lambda function and automatically querying CloudWatch for the relevant error logs.
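
A minimal Terraform sketch of the wiring for that last part, assuming a hypothetical alarm topic and enrichment function (the Lambda itself would run a CloudWatch Logs Insights query and attach the results as custom_details on a PagerDuty Events API v2 event):

resource "aws_sns_topic_subscription" "pd_enricher" {
  topic_arn = aws_sns_topic.alarm_topic.arn        # your existing alarm topic (hypothetical name)
  protocol  = "lambda"
  endpoint  = aws_lambda_function.pd_enricher.arn  # hypothetical enrichment function
}

# Allow SNS to invoke the enrichment function
resource "aws_lambda_permission" "allow_sns" {
  statement_id  = "AllowSNSInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.pd_enricher.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.alarm_topic.arn
}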

Does anyone have any information on how to do this? Or maybe relevant articles?

Thanks!

0 Comments
2024/03/28
05:32 UTC

2

EC2 vs WorkSpaces costs

Why are WorkSpaces so much more expensive than EC2 instances?

This is the cost of a WorkSpaces machine:

https://preview.redd.it/u1klyb1mxzqc1.png?width=859&format=png&auto=webp&s=bcd5e0e8d8efa49d85486d0b9812beb43f517e94

And this is the cost of an EC2 instance with a similar configuration (g4dn.8xlarge, which is actually slightly better):

https://preview.redd.it/63ayzm5zxzqc1.png?width=857&format=png&auto=webp&s=0f3c45b361f5f558f70944730a32420828c124c6

Is there something I'm missing? I can't justify or imagine why anyone would choose WorkSpaces given such a massive cost increase.

Thanks,

6 Comments
2024/03/28
03:45 UTC

1

Migration Hub...what am I doing wrong?

Hello all. I'm really getting into AWS, as I am working with a team developing a sandbox to use for training VMs. I'm trying to get a server migrated up, but I seem to be hitting roadblock after roadblock.

I have a server, created in VMware Workstation Pro 16, that has 3 VMDKs.

In the most current iteration, I am trying to load the VMDKs into an AMI using Migration Hub. I pointed the workflow to look at the S3 bucket that has the 3 VMDKs in it, nothing else. The migration runs for a bit, then fails with the following error code:

" The step failed to complete because import-ami-0a2ae8c929dbb6d1e | deleted | ClientError: Disk validation failed [Corrupted VMDK: VMDK Descriptor does not exist.] "

I don't really understand what I am doing wrong. Is it just because I am uploading the VMDK files and not the OVF and MF files too?
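
For what it's worth, if the OVF and MF files can't be produced, one possible fallback (a sketch only; the bucket and key are hypothetical, and it assumes the standard vmimport service role already exists) is importing each VMDK as an EBS snapshot rather than as a complete image:

resource "aws_ebs_snapshot_import" "disk1" {
  role_name = "vmimport"  # the VM Import/Export service role
  disk_container {
    format = "VMDK"
    user_bucket {
      s3_bucket = "my-import-bucket"   # hypothetical bucket
      s3_key    = "server/disk1.vmdk"  # hypothetical key
    }
  }
}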

1 Comment
2024/03/28
02:34 UTC

5

VPC endpoints for ECR not working in private subnet

I've been having a terrible time with this and can't seem to find any info on why it doesn't work. My understanding is that VPC interface endpoints do not need any sort of routing, yet my ECS task cannot connect to ECR when inside a private subnet. The inevitable result of the configuration below is a series of error messages, usually a container image pull failure (an I/O timeout, so it's not connecting).

This is done in Terraform:

locals {
  vpc_endpoints = [
    "com.amazonaws.${var.aws_region}.ecr.dkr",
    "com.amazonaws.${var.aws_region}.ecr.api",
    "com.amazonaws.${var.aws_region}.ecs",
    "com.amazonaws.${var.aws_region}.ecs-telemetry",
    "com.amazonaws.${var.aws_region}.logs",
    "com.amazonaws.${var.aws_region}.secretsmanager",
  ]
}

resource "aws_subnet" "private" {
  count = var.number_of_private_subnets
  vpc_id = aws_vpc.main_vpc.id
  cidr_block = cidrsubnet(aws_vpc.main_vpc.cidr_block, 8, 20 + count.index)
  availability_zone = "${var.azs[count.index]}"
  tags = {
    Name = "${var.project_name}-${var.environment}-private-subnet-${count.index}"
    project = var.project_name
    public = "false"
  }
}

resource "aws_vpc_endpoint" "endpoints" {
  count = length(local.vpc_endpoints)
  vpc_id = aws_vpc.main_vpc.id
  vpc_endpoint_type = "Interface"
  private_dns_enabled = true
  service_name = local.vpc_endpoints[count.index]
  security_group_ids = [aws_security_group.vpc_endpoint_ecs_sg.id]
  subnet_ids = aws_subnet.private.*.id
  tags = {
    Name = "${var.project_name}-${var.environment}-vpc-endpoint-${count.index}"
    project = var.project_name
  }
}

The SG:

resource "aws_security_group" "ecs_security_group" {
    name = "${var.project_name}-ecs-sg"
    vpc_id = aws_vpc.main_vpc.id
    ingress {
        from_port = 0
        to_port = 0
        protocol = -1
        # self = "false"
        cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
        from_port = 0
        to_port = 0
        protocol = -1
        cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
      Name = "${var.project_name}-ecs-sg"
    }
}

And the ECS Task:

resource "aws_ecs_task_definition" "kgs_frontend_task" {
  cpu = var.frontend_cpu
  memory = var.frontend_memory
  family = "kgs_frontend"
  network_mode = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  execution_role_arn = aws_iam_role.ecsTaskExecutionRole.arn
  container_definitions = jsonencode([
    {
      image = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.aws_region}.amazonaws.com/${var.project_name}-kgs-frontend:latest",
      name = "kgs_frontend",
      portMappings = [
        {
          containerPort = 80
        }
      ],
      logConfiguration: {
        logDriver = "awslogs"
        options = {
          awslogs-group = aws_cloudwatch_log_group.aws_cloudwatch_log_group.name
          awslogs-region = var.aws_region
          awslogs-stream-prefix = "streaming"
        }
      }
    }
  ])
  tags = {
    project = var.project_name 
  }
}

EDIT: Thank you everyone for the great suggestions. I finally figured out the issue. Someone suggested that the S3 endpoint specifically needs to be given a route table associated with the private subnets, and that was exactly the problem.
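
In Terraform, that fix might look like the sketch below (the route table name is hypothetical). Unlike the Interface endpoints above, S3 here is a Gateway endpoint, which works through route table entries rather than ENIs; ECR needs it because image layers are served from S3:

resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main_vpc.id
  service_name      = "com.amazonaws.${var.aws_region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]  # hypothetical private route table
}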

22 Comments
2024/03/28
01:08 UTC

1

How can I do a POST using API Gateway?

I have event data in EventBridge, and I want to POST the event data to an API endpoint. I have never written IaC code before, and all the documents I found cover request parameters or the GET method. Can anyone point me in the right direction about how I can send the JSON payload as a POST using API Gateway?
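
One way to do this without writing any Lambda code is an EventBridge API destination, which POSTs the event JSON to an HTTP endpoint. A minimal Terraform sketch, where the endpoint URL, API key variable, rule, and IAM role are all hypothetical:

resource "aws_cloudwatch_event_connection" "api" {
  name               = "my-api-connection"
  authorization_type = "API_KEY"
  auth_parameters {
    api_key {
      key   = "x-api-key"
      value = var.api_key  # hypothetical variable
    }
  }
}

resource "aws_cloudwatch_event_api_destination" "post_events" {
  name                = "post-events"
  connection_arn      = aws_cloudwatch_event_connection.api.arn
  invocation_endpoint = "https://api.example.com/events"  # hypothetical endpoint
  http_method         = "POST"
}

resource "aws_cloudwatch_event_target" "to_api" {
  rule     = aws_cloudwatch_event_rule.my_events.name  # hypothetical rule
  arn      = aws_cloudwatch_event_api_destination.post_events.arn
  role_arn = aws_iam_role.events_invoke.arn            # role allowing events:InvokeApiDestination
}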

1 Comment
2024/03/28
00:56 UTC

2

Automating installations, ODBC connections, etc., during EC2 Windows Server launch

Hi guys,

I have two questions.

  1. Is it possible to trigger an AWS Lambda function when an EC2 Windows server launches, using user data or any other mechanism? (See the sketch after this list.)

  2. We are creating a Windows EC2 instance using an existing server launch template. After launching a new server, we manually install TSPrint, TSScan, the ODBC connection, and the application's DB connection. Is it possible to configure all of these things while launching a new server?
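
For the first question, one approach is an EventBridge rule that fires when an instance enters the running state and invokes a Lambda; a minimal Terraform sketch, with a hypothetical function name, follows. For the second, the installers can usually be scripted in the launch template's <powershell> user data so they run on first boot.

resource "aws_cloudwatch_event_rule" "instance_running" {
  name = "ec2-instance-running"
  event_pattern = jsonencode({
    source        = ["aws.ec2"]
    "detail-type" = ["EC2 Instance State-change Notification"]
    detail        = { state = ["running"] }
  })
}

resource "aws_cloudwatch_event_target" "invoke_lambda" {
  rule = aws_cloudwatch_event_rule.instance_running.name
  arn  = aws_lambda_function.on_launch.arn  # hypothetical function
}

# Allow EventBridge to invoke the function
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.on_launch.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.instance_running.arn
}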

3 Comments
2024/03/27
17:25 UTC

1

Sharing the Same domain

Hi, I'd like advice on migrating AWS infrastructure from monolithic to microservices.

I'm running a web service on a LAMP stack, and I want to split it into a microservice architecture. The problem is keeping the static image file URLs that are used by customers.

What I'm thinking is Nuxt.js served by Amplify as the front end, S3 for storage, and Aurora (MySQL) for the DB. That should make it possible to keep high availability.

I wonder if it's possible to set up the routes like this (for example, if the domain is "abc.com"):

  1. Nuxt.js: abc.com/
  2. S3: abc.com/files/

Since all the static files are in the "/files" folder in the LAMP stack, I have to keep that structure. I've tried to find a solution or guide, but no luck so far; maybe it's too basic to be mentioned anywhere.
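
One common pattern for this is putting CloudFront in front of both origins and routing by path. A rough Terraform sketch (the origin domain names are hypothetical, and a real setup would add the abc.com alias with an ACM certificate):

data "aws_cloudfront_cache_policy" "optimized" {
  name = "Managed-CachingOptimized"
}

resource "aws_cloudfront_distribution" "site" {
  enabled = true

  # Amplify/Nuxt front end (hypothetical domain)
  origin {
    domain_name = "main.d123example.amplifyapp.com"
    origin_id   = "nuxt"
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  # Static files bucket (hypothetical name)
  origin {
    domain_name = "my-files-bucket.s3.amazonaws.com"
    origin_id   = "files"
  }

  # Everything else goes to Nuxt
  default_cache_behavior {
    target_origin_id       = "nuxt"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]
    cache_policy_id        = data.aws_cloudfront_cache_policy.optimized.id
  }

  # abc.com/files/* keeps serving from S3
  ordered_cache_behavior {
    path_pattern           = "/files/*"
    target_origin_id       = "files"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    cache_policy_id        = data.aws_cloudfront_cache_policy.optimized.id
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true  # swap for an ACM cert to use the abc.com alias
  }
}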

Please share your knowledge or experiences.
Thank you.

1 Comment
2024/03/27
13:14 UTC

1

Closing the audit account while creating accounts with AFT

I'm using AWS Control Tower with Account Factory for Terraform (AFT) to provision accounts in my landing zone. However, the landing zone automatically creates an audit account, and I don't need it. How can I modify the AFT configuration to avoid provisioning the audit account and prevent potential errors during account creation?

0 Comments
2024/03/27
10:57 UTC

0

Help with documentation

Hi guys!

Can anyone recommend any tools that can scan an AWS environment (and Azure is a plus too) to help our engineers create environment documentation?

Thanks in advance!

Richard

1 Comment
2024/03/27
21:53 UTC

1

Docker error while creating environment in AWS Elastic Beanstalk

I am deploying an ML model using Docker in Beanstalk. First, I uploaded my Docker image (which contains my ML model) to Docker Hub. Then I deployed it to Beanstalk using docker-compose.yml. In Beanstalk, I am using Docker as the platform (platform branch: Docker running on 64bit Amazon Linux 2), and my model requires GPU support. For that, I have used the Deep Learning AMI GPU CUDA 11.5.2 (Amazon Linux 2) 20230104, which is built with NVIDIA CUDA, cuDNN, NCCL, the GPU driver, Docker, NVIDIA-Docker, and EFA support. However, when I build the environment using this configuration, I encounter the following error:

**[ERROR]** An error occurred during execution of command [app-deploy] - [Track pids in healthd]. Stop running the command. Error: update processes [docker eb-docker-compose-events eb-docker-compose-log eb-docker-events cfn-hup healthd] pid symlinks failed with error read pid source file /var/pids/docker.pid failed with error:open /var/pids/docker.pid: no such file or directory.

This means my environment builds, but it reports a message like: "Env build successfully but with some errors."

I found this error message in eb-engine.log.

Additionally, when I checked the EC2 instance over SSH, it showed that the NVIDIA drivers, NVIDIA CUDA, and Docker are already installed (verified using nvidia-smi and docker -v). I have tried multiple different Deep Learning AMIs, but I encounter the same issue with all of them. I also noticed a strange thing while trying out different AMIs: if I build the environment with default settings, i.e. the Docker platform with the default Docker AMI, it builds successfully without any errors. However, when I pass a different AMI ID in the configuration, it is not able to build the environment properly.

How can I solve this error?

0 Comments
2024/03/27
04:44 UTC

2

Dealing with aged resources?

Hey there, my organization has an internal AWS training account that isn't heavily regulated or monitored. I was looking into Cost Explorer and can see we're being billed hundreds of dollars a month for unused resources, and I would like to put automation in place to delete resources that are, say, 2 weeks old.

I can write Lambdas that run every so often to check for cost-incurring resources that are weeks old, but I'm pretty sure the script would be difficult to write, since resources need to be deleted in a specific order.

Any recommendations would be really appreciated!

5 Comments
2024/03/27
20:22 UTC

0

Am I making any PUT/COPY/POST/List requests for S3 bucket?

https://preview.redd.it/mmmef2i8lxqc1.png?width=1451&format=png&auto=webp&s=e02d2c8a6e6f497ff33c90686811be811fb75790

Hi, I'm doing a side project building a website. For this project, I have been manually uploading images to an S3 bucket via the console. I think I have uploaded only 30 images so far.

In the HTML, there are several image tags with a public S3 image link as the src, so that images are rendered on the website. There isn't any HTML element that allows a web user to upload an image to the S3 bucket. In this context, I believe there should only be GET requests so far. However, as shown in the image above, I have been charged, if only a little, for PUT/COPY/POST/LIST requests. What am I missing?

4 Comments
2024/03/27
19:55 UTC

1

Can ECS Anywhere services communicate with each other?

Hello,

I have tasks/services deployed on-prem using ECS Anywhere. I have them configured with bridge mode networking, but it doesn't seem like they're able to connect to each other. Is this supported? I haven't been able to find an answer in the documentation yet (it appears to be very sparse).

0 Comments
2024/03/27
19:40 UTC

1

Inherited AWS Project - Need Advice

Hey Everyone,

I'm new to AWS (I have much more working knowledge in Azure). I've inherited a project that I think should be manageable, but I'd like to get some advice from real AWS pros.

We have a client that has 12TB of data sitting on a bare-metal file server (SMB shares). He would like to host this data completely in AWS. He would like to spin up an EC2 instance to host the data and join the VM to his on-prem domain (would the data actually be stored in an S3 bucket and accessed via the EC2 VM?). Obviously, we would need a S2S VPN from on-prem to the EC2 machine. His remote workforce would use a P2S VPN to access the on-prem infrastructure and connect to the EC2 instance from there.

I know there is going to be a lot of granularity here, but I'm curious about everyone's opinion on this. Is this a good strategy? I was also reading about Storage Gateway, specifically the File Gateway option. What are your thoughts on that option, or any other for that matter?

Now, pleassseeee don't rip me apart here. Just doing my job...

5 Comments
2024/03/27
19:12 UTC

0

How do I get the ORIGIN of a request in a Lambda function?

I set up an API Gateway routing requests to my Lambda function. In my Lambda function, I want to read the origin of the request. Is it something I can get without anyone being able to tamper with the value?

export const handler = (event) => {
    // With a Lambda proxy integration, the caller's Origin header arrives in
    // event.headers (HTTP APIs lowercase all header names):
    const origin = event.headers?.origin;
};

I have a hardcoded list of domains (let's assume xxx.com and yyy.com) configured as the API Gateway's CORS allowed origins, so that only those origins can send requests to my Lambda function. So this API Gateway cannot be accessed using cURL, Postman, etc.

Is it a value I should access from the request headers, using:
const origin = event.headers...

Also, I don't want xxx.com to be able to modify the headers and tamper with them to claim a yyy.com origin, for example. Is that possible?

11 Comments
2024/03/27
19:01 UTC

1

Issue with cron job

Hi,

It's Aurora PostgreSQL version 15.4. We have partition maintenance scheduled through pg_partman, via pg_cron, as shown below. The same script has been executed in the dev and test environments, and we can see that the cron job is scheduled in both environments, because we see the record in the cron.job table. But somehow we are not seeing any entries in cron.job_run_details for the test database, and partitions are not getting created/dropped on the test database as we expected. So it seems it's not running in one of the databases. Why is that? Is there any way to debug and fix this issue?

select partman.create_parent(
   p_parent_table    := 'TAB1',
   p_control         := 'part_col',
   p_type            := 'native',
   p_interval        := '1 day',
   p_premake         := 5,
   p_start_partition := '2024-02-01 00:00:00'
);

update partman.part_config
set infinite_time_partitions = 'true',
    retention = '1 months',
    retention_keep_table = 'true',
    retention_keep_index = 'true'
where parent_table = 'TAB1';

-- the job command must be passed as text, e.g. with dollar quoting:
SELECT cron.schedule('@daily', $$SELECT partman.run_maintenance()$$);

3 Comments
2024/03/27
19:00 UTC

23

What do you do when something out of your control happens and AWS doesn't respond to the ticket?

We have an RDS Proxy that suddenly stopped connecting to an RDS server at exactly 9pm, without our team doing anything. We've checked everything on our side and can confirm nothing changed (passwords, security groups, ...).

We need to know what happened, so we can be prepared if this happens again, or even better, make sure this never ever happens again.

We've upgraded our support plan to Developer to try to get an answer from AWS, but it's been 3 days with no activity at all on the ticket. I'm not sure what more we can do. It's frustrating, because as far as we can tell, the issue lies within AWS.

My team and I would like to sleep a bit better at night :)

45 Comments
2024/03/27
17:59 UTC

3

Can't connect to EC2 Instance

First of all I'm brand new (like started yesterday new) so excuse my ignorance, I'm trying to learn the ropes here. Yesterday I created an EC2 instance, set up my security group, hopped on using EC2 Instance Connect, and managed to SCP a file from my PC to the instance. Great!

Today, I can't connect using EC2 Instance Connect. It tells me to try again later. Okay, whatever; I can SSH in from PowerShell and keep working, so I do. I set up Node.js and accompanying software, configured it with a basic index.js script and a page to render, checked that it was working on localhost, and tried to check it out from my browser. I copied the public IPv4 DNS for the instance into my browser, and got...

"Refused to connect." I double-checked my security group, I checked to make sure the attached subnet was public, and I've tried everything I could find online, but I still cannot 1) connect via EC2 Instance Connect or 2) view the webpage in my browser. I don't know what I haven't thought of, but I've been trying just to connect for hours. I disabled my firewall, I triple-checked my security group to make sure my HTTP and HTTPS rules were configured, and I just don't know what to try next. Any help is massively appreciated.

13 Comments
2024/03/27
17:51 UTC

1

API Gateway dropping GET request body

Hello all!

I am trying to pass through the body on a GET request via API Gateway.
The original API Gateway is set up via VPC integration and path proxy. The request goes to a Network ELB.

The first approach was to create a specific resource for the API and modify the integration request to pass the request to an HTTP integration (again pointing at the Network ELB) while passing the body along.
However, it only works if I turn on the HTTP proxy integration, which, again, drops the body. If this option is not turned on, the API responds with a 403: "status":403,"error":"Forbidden","message":"Access Denied","path":"XXX/XX/XXX"
The same issue seems to happen if I use the VPC integration without the proxy integration.

As a side note the URL endpoint is configured as:
http://testurlto.elb.region.amazonaws.com/path/to/resouce/

I cannot find any documentation about this or how to do it. Is this possible to achieve?
We asked the dev team to change the GET request into a POST, but I would like to know if there's a way to keep the body from being dropped.

thanks

2 Comments
2024/03/27
16:48 UTC

6

Which AWS certifications come up the most in job listings?

If you're looking to get an AWS certification because you want to increase your chances of being promoted, you're looking for a new job, or you just want to improve your career options in general, which certifications should you get? Which AWS certifications have you seen show up the most in job listings?

13 Comments
2024/03/27
16:24 UTC

0

Could someone go over my security group rules and tell me why I can't ping?

Hi everyone, I seem to have made some elementary mistakes with my security groups and would like some help. I am unable to ping, and commands like curl randomly fail. I do not have a custom NACL for this VPC; it's just a security group for this instance.

# Security group configuration

resource "aws_security_group" "instance_security_group_k8s" {
  name        = "instance_security_group_k8s"
  description = "SSH"
  vpc_id      = aws_vpc.aws_vpc.id

  tags = {
    Name = "instance_security_group"
  }
}

# SSH rules

resource "aws_vpc_security_group_ingress_rule" "instance_security_group_ingress_ssh_ipv4_k8s" {
  security_group_id = aws_security_group.instance_security_group_k8s.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = var.ssh_from_port
  ip_protocol       = "tcp"
  to_port           = var.ssh_to_port
}

resource "aws_vpc_security_group_ingress_rule" "instance_security_group_ingress_ssh_ipv6_k8s" {
  security_group_id = aws_security_group.instance_security_group_k8s.id
  cidr_ipv6         = "::/0"
  from_port         = var.ssh_from_port
  ip_protocol       = "tcp"
  to_port           = var.ssh_to_port
}

resource "aws_vpc_security_group_egress_rule" "instance_security_group_egress_ssh_ipv6_k8s" {
  security_group_id = aws_security_group.instance_security_group_k8s.id
  cidr_ipv6         = "::/0"
  from_port         = var.ssh_from_port
  ip_protocol       = "tcp"
  to_port           = var.ssh_to_port
}

# HTTPS rules

resource "aws_vpc_security_group_egress_rule" "instance_security_group_egress_https_ipv4_k8s" {
  security_group_id = aws_security_group.instance_security_group_k8s.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = var.https_from_port
  ip_protocol       = "tcp"
  to_port           = var.https_to_port
}

resource "aws_vpc_security_group_egress_rule" "instance_security_group_egress_https_ipv6_k8s" {
  security_group_id = aws_security_group.instance_security_group_k8s.id
  cidr_ipv6         = "::/0"
  from_port         = var.https_from_port
  ip_protocol       = "tcp"
  to_port           = var.https_to_port
}

# DNS rules

resource "aws_vpc_security_group_egress_rule" "instance_security_group_egress_dns_ipv4_k8s" {
  security_group_id = aws_security_group.instance_security_group_k8s.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = var.dns_from_port
  ip_protocol       = "udp"
  to_port           = var.dns_to_port
}

resource "aws_vpc_security_group_egress_rule" "instance_security_group_egress_dns_ipv6_k8s" {
  security_group_id = aws_security_group.instance_security_group_k8s.id
  cidr_ipv6         = "::/0"
  from_port         = var.dns_from_port
  ip_protocol       = "udp"
  to_port           = var.dns_to_port
}

I am unable to find out why I'm facing such problems, help would be appreciated!

Thanks
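
For reference, none of the rules above cover ICMP, which is what ping uses (and there is no TCP port 80 egress rule for plain-HTTP curl, which may explain those failures). A minimal ICMP egress rule in the same style might look like this sketch:

resource "aws_vpc_security_group_egress_rule" "instance_security_group_egress_icmp_ipv4_k8s" {
  security_group_id = aws_security_group.instance_security_group_k8s.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "icmp"
  from_port         = -1  # all ICMP types
  to_port           = -1  # all ICMP codes
}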

25 Comments
2024/03/27
15:22 UTC

2

Multi-account setup & preparation for eventual separation?

Asking for a friend!

It is a greenfield project and they can do whatever is best.

They are about to launch master account + control tower.

Then setup a bunch of accounts for apps & customers.

However, they want to be "prepared" to let some customers split off from the original setup and "take" their account with them, out of the Control Tower hierarchy.

Customer apps are each self-contained in their own account.

Are there any special considerations or setup necessary to be able to do so easily?

Their current understanding is that it would be sufficient to create a root user in that account and add a payment method (e.g. the customer CC) — then it is split off and done.

1 Comment
2024/03/27
15:20 UTC

1

CodePipeline deploy stage missing AWS Lambda

I have a CI/CD pipeline from GitLab to AWS ECR, and I am looking to automatically update my running image in ECS with Fargate when a new image is pushed (via GitLab) to ECR.

I researched this to ensure it is viable and I saw it is possible to automatically update the running task with the latest image from ECR.

In CodePipeline I added GitLab in the source stage. I skipped the build stage, and now I'm stuck at the deploy stage. I cannot find AWS Lambda as a deploy action, and although I copied a script (from the link below), I can't figure out how to add the script to the deploy stage.

The script essentially checks the image in ECR and updates the task when a push is made to the branch in GitLab.

I used this guide as a reference https://aws.plainenglish.io/automate-application-deployment-using-aws-codepipeline-ecr-to-ecs-122feaafcd93 but I understand I don't need a build stage as this is done with GitLab CI/CD.

0 Comments
2024/03/27
15:11 UTC

0

Battle of the Redis forks?

"Starting gun has been fired and AWS’s Madelyn Olson was one of the first out of the gate after Redis dropped its BSD licence..."

https://www.thestack.technology/battle-of-the-redis-forks-begins/

3 Comments
2024/03/27
13:15 UTC

0

Web service

I have an EC2 instance with a PHP web service that is accessed by another EC2 instance via private IP. The access uses the URL format http://1.2.3.4/webservice (private IP).

After removing the public IP of the web service EC2 instance, the other EC2 instance can no longer access the web service, despite the access occurring via the private IP. Why?

4 Comments
2024/03/27
12:17 UTC

2

CloudWatch metric stream not streaming value updates for existing datapoints

We have some custom metrics being sent to CloudWatch and then sent to a Firehose via a metric stream. We will, on occasion, update just the value of an existing datapoint, and we have found that if we do this and the timestamp doesn't change, the update does not get streamed to the Firehose. I can't find any information to suggest this is the intended behavior, and all the blogs on the subject seem to suggest that all updates should be streamed. I am just trying to understand whether this is a 'bug' or a 'feature'. Any suggestions are greatly appreciated.

0 Comments
2024/03/27
07:30 UTC

0

Hello

I'm trying to create a redirect URL to send and receive data to and from Amazon Web Services, but I'm new to AWS and don't know how to develop it. I need to create it in 3 days. I need some help. Thanks.

3 Comments
2024/03/27
07:11 UTC
