
News, articles and tools covering Amazon Web Services (AWS), including S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, AWS CDK, Route 53, CloudFront, Lambda, VPC, CloudWatch, Glacier and more.

Note: be sure to redact or obfuscate all confidential or identifying information (e.g. public IP addresses or hostnames, account numbers, email addresses) before posting!

Smokey says: avoid streaming video to fight climate change!

If you're posting a technical query, please include the following details, so that we can help you more efficiently:

  • an outline of your environment
  • a description of the problem
  • things you've tried already
  • output that was displayed (if any)


/r/aws

319,721 Subscribers

1

How can I run AZ loss simulation with a Fargate based ECS?

Hi there,

I am trying to simulate a DR scenario where an AZ is completely lost. I thought of using AWS Fault Injection Service, but it's not yet supported for Fargate-based ECS tasks, as mentioned here:
https://docs.aws.amazon.com/fis/latest/userguide/az-availability-scenario.html

So what other options do I have? Is it somehow possible through scripting?

Thanks :)

2 Comments
2025/01/03
09:27 UTC

0

Roadmap to a Senior AWS Solutions Architect Role (SAA-Associate, DevOps/DataOps 4 yr Exp)

Hey r/aws,

I'm an AWS Certified Solutions Architect - Associate with 4 years of experience, including significant DevOps/SRE and DataOps work at an early-stage startup, where I built the complete infrastructure on AWS. I'm looking for proven strategies to land a senior-level AWS Solutions Architect role. What specific steps have worked best for you in making this career jump? Share your success stories and advice!

I'm interested in advice on the points below, or anything else I've missed:

Skill development: What specific advanced skills are crucial for senior-level roles?

Networking: Best strategies for connecting with recruiters and potential employers?

Portfolio building: How can I showcase my experience effectively to potential employers (beyond my resume)?

Interview preparation: What types of questions should I expect, and how can I best prepare?

0 Comments
2025/01/03
05:32 UTC

0

Scam In Progress?

Weird title, but I just got an email from AWS for a bill, which confused me since I haven't used AWS in years. Upon logging in and checking what I was being billed for, I saw 4 EC2 instances running. They all auto-log you in as admin, but on one of them Outlook and several other tabs were open, and Outlook was signed into some bogus-looking email account related to donations.

The email had plenty of PayPal notifications about random payments received, but they all look phishy anyway, with nothing in the sent folder.

Recent activity on that Outlook account shows logins from all over the world, so clearly someone is using a VPN. But my question is: what should I do?

Open a regular support ticket with AWS? Try to get a hold of a real person over the phone? Is this a bigger issue to report to some agency? Do I need to involve a lawyer or something? I just want to sort this mess out with the least effort from my end.

I only found this out because I didn't want to pay $600 for whatever instances have been running for however long, and I'm sure as hell not paying for that if someone's been hijacking my account to run a scam lol

10 Comments
2025/01/03
03:57 UTC

2

What are the strategies to reduce Spark EMR costs?

We utilize the latest version of EMR, operate on Graviton machines, and deploy large machines whenever feasible.

2 Comments
2025/01/03
03:54 UTC

4

Best Service to Deploy a Django app on?

What’s the best way to deploy a Django app?

5 Comments
2025/01/03
03:20 UTC

0

Switching from GoDaddy cPanel to AWS - SO LOST. Can someone walk me through WordPress installation

Hey All,

I don't know Linux or any form of coding. I want a WordPress site on AWS so I can move my personal website off GoDaddy, and I just can't figure out what to do. I made a free account, got to EC2, made an instance, logged in, pasted in an arcane command I found on the AWS support page, and apparently I need to be a superuser.

Anyone have a walkthrough guide? I don't care what the server type is, as long as I have a working WordPress site on the front end.

TIA

49 Comments
2025/01/03
01:51 UTC

0

Is there no longer a small MySQL aurora instance available?

I run a couple of very small services in my personal AWS account. I usually reserve my RDS instance, and for a long time I've been on a t3.small instance.

Well, today I got my bill and it was much more than I thought it should be. I looked into it, only to find out there's now an additional service charge for being on an older version of MySQL.

I attempted to upgrade from Aurora MySQL version 2 to version 3, only to find out my instance class isn't supported.

I went to see which instance classes are supported, and as far as I can tell there are no small instance classes on the list.

I went from $0.04/hr for my instance to $0.14, and now there's no small class that will cost less than that for MySQL?

What gives? Am I missing some instance class or pattern I should be using here?

9 Comments
2025/01/02
23:02 UTC

1

403 Forbidden when uploading file to S3 bucket using pre-signed URL

I have an AWS Lambda function that generates a presigned url each time it receives a request. This is the specific code that generates the url:

import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import S3Client from '../../../clients/S3Client';

export const getS3PresignedUrlForUploadingProductListingVideo = async (
  fileName: string,
  bucketName: string,
  contentType: string
) => {
  const command = new PutObjectCommand({
    Bucket: bucketName,
    Key: `product-listing-requests-videos/${fileName}`,
    ContentType: contentType,
  });
  // @ts-ignore -- S3Client imported above is presumably a pre-configured client instance, not the class
  const url = await getSignedUrl(S3Client, command, { expiresIn: 60 * 5 }); // 5-minute expiry
  return url;
};

The Lambda function has the correct permissions to generate the presigned url:

- Effect: Allow
  Action:
    - 's3:PutObject'
    - 's3:GetBucketLocation'
  Resource: ${param:s3-bucket-arn}

I've tested the function individually and the url is being generated correctly.

The Lambda function is called by my front-end each time a user picks a file using the file input. Then, when they submit the file, the following front-end code is executed:

export const uploadVideoToS3 = async (file: File, uploadUrl: string) => {
  await axios.put(uploadUrl, file, {
    headers: {
      'Content-Type': file.type,
    },
  });
};

Regarding my bucket configuration:

All public access is blocked (this is a requirement, so disabling this is not an option)

https://preview.redd.it/tm73m0eirnae1.png?width=783&format=png&auto=webp&s=347826513c76d817613dec26cf6ca9c83672726f

This is what my bucket policy looks like:

https://preview.redd.it/wcl0wdyyrnae1.png?width=715&format=png&auto=webp&s=75cb60e4af6ea5d664e25fdb039a3dd13982c1b2

I know there are several other posts related to this specific issue, but for some reason none of the fixes is working for me.

Thank you so much for the help you can provide!
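Hard to diagnose without the exact error body, but two things worth ruling out: the Content-Type the browser sends must exactly match the one used when signing (a mismatch invalidates the signature and returns 403), and browser PUTs to a bucket also need a CORS configuration; Block Public Access does not block presigned URLs, but a missing CORS rule or an explicit Deny in the bucket policy will still fail the upload. A minimal CORS configuration for the bucket (the allowed origin is a placeholder):

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
```

Decoding the XML error body of the 403 (SignatureDoesNotMatch vs. AccessDenied) usually points at which of the two it is.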

3 Comments
2025/01/02
22:44 UTC

0

Do you agree that AWS, in terms of cost, is out of reach for anyone outside the Fortune 5000?

I have looked into several AWS products that I wanted to leverage on multiple occasions, but the cost of AWS for small, pre-revenue teams is just not feasible. Services add up quickly, and before you know it, you get a bill that hurts.

I don't know who their target market is, but I have even heard from large organizations that AWS infrastructure is extremely expensive, and that many look to leave to cut costs.

I have also had in-depth discussions with several app developers on iOS who had hit products but had to get off AWS because the fees were out of control; in their cases, it was much cheaper to own their own infrastructure.

33 Comments
2025/01/02
22:23 UTC

1

Help Needed: Issues with Manual NLB Configuration in AWS EKS

Hi everyone,

I’m having trouble configuring a Network Load Balancer (NLB) manually for my microservices running in an AWS EKS cluster. Here’s a quick breakdown of the situation:

Context:

  1. Automatic NLB Configuration:
    • When I deploy the service using Kubernetes’ default automatic NLB creation, everything works perfectly. The API Gateway forwards traffic to the microservices without issues.
    • The automatically generated NLB configures subnets, security groups, health checks, etc., automatically, and the connection works fine.
  2. Manual NLB Configuration:
    • To gain more control and overcome the 5-security group limit, I’m trying to manually configure the NLB via a custom service.yaml file.
    • However, when I test the endpoint, I get a 500 InternalServerErrorException from the API Gateway.

Details of the Issue:

  • Current YAML: I’ve specified annotations for security groups, subnets, and health checks in the manual configuration. The targetType is set to instance.
  • Logs: The logs show differences in Target Group registrations and health check statuses compared to the automatic deployment.
  • Environment:
    • The EKS cluster is deployed using eksctl with private subnets.
    • The microservices are reachable when using the automatic setup.

.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ${NLB_NAME}
  namespace: ${CLUSTER_NAME}
  labels:
    app: ${NLB_NAME}
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: ${NLB_NAME}
    service.beta.kubernetes.io/aws-load-balancer-security-groups: ${SECURITY_GROUP_IDS}
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "HTTP"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "${PORT}"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/healthcheck"
    service.beta.kubernetes.io/aws-load-balancer-subnets: ${VPC_PRIVATE_SUBNETS},${VPC_PUBLIC_SUBNETS}
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=300,stickiness.enabled=false,proxy_protocol_v2.enabled=false,stickiness.type=source_ip,deregistration_delay.connection_termination.enabled=false,preserve_client_ip.enabled=true
spec:
  type: LoadBalancer
  selector:
    app: ${DEPLOYMENT_IMAGE_NAME}
  ports:
    - port: ${PORT}
      protocol: TCP
      targetPort: ${TARGET_PORT}
      nodePort: ${NODE_PORT}

---
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: ${NLB_NAME}-tgb
  namespace: ${CLUSTER_NAME}
  labels:
    app: ${NLB_NAME}
spec:
  targetGroupARN: ${TARGET_GROUP_ARN}
  serviceRef:
    name: ${NLB_NAME}
    port: ${PORT}
  targetType: instance
  nodeSelector:
    matchLabels:
      beta.kubernetes.io/instance-type: t2.small
      alpha.eksctl.io/cluster-name: ${CLUSTER_NAME}



                          +-----------------+
                          |     Gateway     |
                          +--------+--------+
                                   |
                                   v
                          +--------+--------+
                          | Load Balancer   |
                          +--------+--------+
                                   |
          +------------------------+-------------------------+
          |                        |                         |
          v                        v                         v
 +--------+--------+      +--------+--------+       +--------+--------+
 | Cluster 1       |      | Cluster 2       |       | Cluster 3       |
 | +-------------+ |      | +-------------+ |       | +-------------+ |
 | | Microservice| |      | | Microservice| |       | | Microservice| |
 | |     A       | |      | |     B       | |       | |     C       | |
 | +-------------+ |      | +-------------+ |       | +-------------+ |
 +-----------------+      +-----------------+       +-----------------+

Questions:

  1. What configurations or steps might I be missing to replicate the automatic setup manually?
  2. Should I consider switching to targetType: ip instead of instance for better pod routing?
  3. Are there best practices for replicating the automatic security group and subnet configurations in a manual setup?

Any advice, guidance, or similar experiences would be greatly appreciated! Thank you in advance for your help 🙏
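On question 2: with the AWS Load Balancer Controller, `targetType: ip` registers pod IPs directly and skips the NodePort hop, which often resolves health-check mismatches in private-subnet clusters. A minimal variant of the annotations above (a sketch, not a verified fix; note that the controller expects `aws-load-balancer-type: "external"` when it, rather than the legacy in-tree provider, should manage the NLB):

```yaml
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"   # register pod IPs, not instances
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
```

With `ip` targets the health check hits the pod port directly, so the `nodePort` field and the node-level security group openings stop mattering.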

0 Comments
2025/01/02
13:54 UTC

0

Import Model on Bedrock

I tried using Bedrock's model import feature and have been facing some problems. I imported Llama 3.2 11B Vision Instruct (I know it's already in Bedrock, but I just wanted to experiment with a multimodal model) and it returns really awful, hallucinated output, as if it just spits out things from its training data. The output can be somewhat stabilized with extensive prompt engineering, but it definitely can't handle everyday inputs. It also doesn't generate or accept images, and can only be used with a single prompt rather than as a chat in the playground. Let me know your experience with the model import feature, or whether I'm doing something wrong.

0 Comments
2025/01/02
13:49 UTC

1

Redshift serverless auto copy

Hello! What would be the most convenient way to monitor COPY JOB success/errors on Redshift Serverless? I don't see many monitoring options in the console, and I'm not even sure whether the serverless version reports metrics to CloudWatch.

0 Comments
2025/01/02
08:09 UTC

3

AWS Governance/Compliance Execution Strategy

Our organization is working to bring app teams and workloads into compliance for standards, governance, and security. I feel this can be achieved in a few ways: some things should be handled at the pipeline level (app architecture compliance), some at the platform level (e.g. via SCPs/RCPs), and others post-deployment (e.g. AWS Config). Is this a sane strategy? One of our groups is saying it can all be achieved via our CI/CD pipeline, but we don't have a solid single-pipeline strategy yet, so some teams have their own pipeline and others are still deploying via ClickOps. Is there a model or framework that discusses a multi-tiered strategy like this? What is everyone using to achieve compliance and governance?

2 Comments
2025/01/02
19:46 UTC

16

How to reduce cold-start? #lambda

Hello!

I'd like to ask for help with ways to reduce Lambda cold starts, if possible.

I have an API endpoint that calls a Lambda on the Node.js runtime. All of this is done with Amplify.

According to CloudWatch logs, the request operation takes 6 seconds. However, the total execution time is actually 14 seconds, so that's about 8 seconds of latency; logs attached below.

  1. Cloudwatch lambda first log: 2025-01-02T19:27:23.208Z
  2. Cloudwatch lambda last log: 2025-01-02T19:27:29.128Z
  3. Cloudwatch says operation lasted 6 seconds.

However, on the client side I added a console.time and logs are:

  1. Start time client: 2025-01-02T19:27:14.882Z
  2. End time client: 2025-01-02T19:27:28.839Z

Is there a way to reduce this cold start? My app is a chat, so I need faster response times.

Thanks a lot and happy new year!
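For what it's worth, the timestamps in the post break down like this (plain arithmetic, no AWS calls): roughly 8.3 s elapse before the handler logs anything, which is the cold start plus routing/auth overhead, and the usual fixes are provisioned concurrency, smaller bundles, and initializing SDK clients outside the handler.

```typescript
// Quantify how much of the end-to-end time is spent outside the function body,
// using the timestamps from the post.
function msBetween(isoStart: string, isoEnd: string): number {
  return new Date(isoEnd).getTime() - new Date(isoStart).getTime();
}

const clientStart = "2025-01-02T19:27:14.882Z";
const firstLambdaLog = "2025-01-02T19:27:23.208Z";
const clientEnd = "2025-01-02T19:27:28.839Z";

const preHandlerMs = msBetween(clientStart, firstLambdaLog); // time before the handler logs anything
const totalMs = msBetween(clientStart, clientEnd);           // end-to-end as seen by the client
```

If `preHandlerMs` stays high even on warm invocations, the problem isn't cold start at all but something in front of the function (auth, routing, DNS).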

35 Comments
2025/01/02
19:33 UTC

1

DMS CDC Tasks (Case Sensitivity)

In case this helps anyone in the future who uses DMS with configuration requirements at target endpoints.

Note that DMS CDC tasks each create change tables at the target endpoint (Redshift in this use case) in the public schema, named "awsdms_changes" plus a unique identifier derived from each task's ARN.

For some DMS versions (3.5.2 in my case, although I don't have historical logs to determine whether previous versions are affected), each task creates these tables with the identifier in lowercase while later querying them (whether to select, truncate, drop, create, etc.) with the identifier in CAPS.

Though it's bad practice, this should be fine IF the target endpoint doesn't have limitations such as Redshift's enable_case_sensitive_identifier parameter being set to "true". Since AWS released Zero-ETL to general availability with Redshift as one of the supported services, this could affect some users, because one of the requirements for Zero-ETL is enabling case sensitivity in Redshift.

Querying the Redshift DB (you can filter by schemaname 'public' or even '%awsdms%') should let you identify these tables even if you haven't enabled CloudWatch logging for the task. Dropping the table and restarting the relevant task will recreate the DMS change table in CAPS, resolving the issue. Naturally, additional action is required based on your use case (i.e. need for uptime, SLA metrics, etc.).

Note: as briefly mentioned, I'm aware that 3.5.3 does not have this issue, but I haven't found release notes for earlier versions describing when the change-table behavior was modified; a previous version could already have addressed this.

0 Comments
2025/01/02
18:37 UTC

152

AWS Support Is Excellent.

I just wanted to say they handled my issue wonderfully and I was lucky to have the help from them that I needed.

57 Comments
2025/01/02
17:00 UTC

0

Functional difference between GuardDuty and Defender for Cloud?

It seems like there's a lot of overlap between Microsoft Defender for Cloud and GuardDuty. Is there anything that GuardDuty offers for AWS accounts that Defender for Cloud doesn't have? Thanks!

1 Comment
2025/01/02
16:19 UTC

2

Hosting a Static Personal Portfolio Site?

I'll clarify that I have very, very little knowledge of AWS. I have, however, purchased a domain through Route 53 (it's like 50 cents a month).

I basically purchased one of those website templates (I know, I know, but I suck at design). It's essentially just static files.

What's the best way to host it? It's mainly just going to serve as a resume holder/personal site, though I may add some blog posts. Either way, it's gonna be low traffic (but I don't wanna end up with some crazy bill if I get DDoS'd).

I've heard of Amplify, but I've also read about people having issues. Is an S3 bucket the best way? Is there a guide for how to do this? I'm very unfamiliar with AWS/hosting in general.

17 Comments
2025/01/02
16:12 UTC

6

Building a serverless app using SAM

I'm trying to build a serverless app consisting of multiple Lambda functions, one API Gateway with multiple APIs, and an S3 bucket. So far everything is working smoothly in terms of configuration and basic infrastructure, but the docs for SAM aren't that good, and sometimes I find them a bit misleading. As an example, there is nowhere in the docs (as far as I know) that explains how to define an S3 bucket in the Resources section of the template.yaml, though I found it briefly mentioned in one of the examples in the GitHub repo. So my question is: is there a better resource to learn more about SAM, or should I stick to the docs and Stack Overflow?
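For reference, plain CloudFormation resources work directly in a SAM template's Resources section alongside the AWS::Serverless::* types, so a bucket is just the standard resource (names are placeholders):

```yaml
Resources:
  UploadsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-sam-app-uploads   # placeholder; must be globally unique
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```

That's likely why the SAM docs barely mention it: anything SAM doesn't transform falls through to CloudFormation, so the CloudFormation resource reference is the place to look.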

16 Comments
2025/01/02
15:13 UTC

15

GitHub self hosted runner on ECS

Hello! My team is currently migrating from an EKS cluster to ECS due to some cost limits we had.
I've successfully migrated all the internal tools that were on EKS; the only thing left is the Docker-in-Docker GitHub self-hosted runners.

There seem to be a lot of solutions for deploying them to EKS, but I can't really find a way to deploy them on ECS. Is it feasible? From what I've seen, GitHub's Actions Runner Controller is limited to Kubernetes.

Thank you!!

13 Comments
2025/01/02
14:47 UTC

1

Virtual desktop (Windows Desktop + GPU)

Hi, I’m looking for a Windows desktop with an NVIDIA GPU (RTX 2060 or higher, Compute Capability 7.5+) for approximately 90 minutes. I’m considering whether an AWS instance might be a suitable option for this purpose. Could you provide any advice or recommendations? Thank you!

3 Comments
2025/01/02
13:44 UTC

2

Lambda Course Suggestion

I'm searching for a course about Lambda that walks me through a project using SQS, CDK, API Gateway, VPC, DynamoDB, and RDS. I'd prefer the language to be either Python or Java.

0 Comments
2025/01/02
10:18 UTC

2

Permissions with IAM or Organizations?

Looking for the best way to separate dev from production. Is it using IAM, utilizing Organizations, or just using entirely different accounts for dev and production?

I want to make sure the dev guys can't terminate production instances, etc.

7 Comments
2025/01/02
07:41 UTC

16

AWS SES Production Access Denied - What Am I Doing Wrong?

Hello everyone. Like many others, I got denied production access without explanation. I will be using SES in my SaaS. Here are the details I provided to the support team (the actual submission was more detailed; I used GPT to summarize since it was too long, so sorry for that):

"The service is pretty straightforward - it sends verification emails when users sign up and reminder emails for deadlines they set. I've built in several security measures like email verification, CAPTCHA, DNS verification, and rate limiting to prevent abuse. For the actual reminders, users can set custom due dates and messages. If a user marks our reminders as spam, they are immediately blocked from sending any more reminders (via SNS and webhooks), and everything's kept simple with just the essential info and action buttons.

On the technical side, I've implemented all the recommended security features like DKIM/SPF, and I'm using proper bounce/complaint monitoring. Every email includes clear unsubscribe options that stop all previously scheduled reminders and the creation of new ones, and the system automatically handles any bounces or complaints through our own blacklist. I'm strictly sticking to transactional emails - no marketing stuff at all."

I also provided the verification and reminder email templates. I just want to know what I'm doing wrong, and whether my use case is really that bad (so I can give up and just move on).

16 Comments
2025/01/02
07:35 UTC

0

Why didn't my CDK code work?

I want to create a CI/CD pipeline that pushes a Docker image of my portfolio to ECR and deploys it with App Runner. Below is what I currently have in my CDK stack in TypeScript. The bootstrap and synth commands work, but deploy does not; I get an error with App Runner. My IAM user has administrator permissions, which I'm assuming include the App Runner/ECR permissions.

import * as cdk from "aws-cdk-lib";
import * as ecr from "aws-cdk-lib/aws-ecr";
import * as iam from "aws-cdk-lib/aws-iam";
import * as apprunner from "aws-cdk-lib/aws-apprunner";
import { Construct } from "constructs";

export class AwsLowTrafficPlatformStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const user = new iam.User(this, "myInfraBuilder"); // ECR requires an IAM user for connecting Docker to ECR

    // IAM Role for App Runner
    const appRunnerRole = new iam.Role(this, "AppRunnerRole", {
      assumedBy: new iam.ServicePrincipal("tasks.apprunner.amazonaws.com"),
    });


    // ECR Repository
    const repository = new ecr.Repository(this, "Repository", {
      repositoryName: "myECRRepo",
      imageScanOnPush: true,
    }); // L2 abstraction


    // App Runner Service
    const appRunnerService = new apprunner.CfnService(this, "AppRunnerService",
      {
        serviceName: "StaticWebsiteService",
        sourceConfiguration: {
          authenticationConfiguration: {
            accessRoleArn: appRunnerRole.roleArn,
          },
          imageRepository: {
            imageIdentifier: `${repository.repositoryUri}:latest`,
            imageRepositoryType: "ECR",
          },
          autoDeploymentsEnabled: true,
        },
        instanceConfiguration: {
          cpu: "256",
          memory: "512",
        },
      }
    );

    repository.grantPull(appRunnerRole);
  }
}
2 Comments
2025/01/02
07:05 UTC

3

Does anyone use AWS Infrastructure Composer successfully?

Hello architects. I'm doing my best to utilize as many tools within AWS as possible, to reduce extraneous applications. One thing I wanted to do was diagram and map out my architecture without resorting to Visio, Google Drawings, etc. I learned that AWS Infrastructure Composer was supposed to solve this natural step in planning architecture.

I don't see how. I can only drag rectangles of AWS components; I can't draw rectangles, arrows, paths, etc., and there is no true way to save your visual work. The Composer tool doesn't have a cloud save (despite this being AWS); instead you must designate a local folder on your desktop to sync your canvas. But this doesn't save your canvas visually, it just dumps the raw configuration of each "tile" you added, and doesn't even remember how you arranged them on the canvas.

So, am I just not using the Infrastructure Composer properly, or is this indeed some kind of half-baked Beta? Thanks for reading.

6 Comments
2025/01/02
07:04 UTC

12

Not able to get CloudFront to work with a Custom Origin - Everything is a 404 - at my wits' end

[SOLVED]

Hi all,

I have been using CloudFront with S3 seamlessly for a while now. But recently I've come across a requirement where I need to use CF with a custom origin, and I can't get past this issue.

Let's say the origin is - example.com and the CF URL is cfurl.cloudfront.net

I am trying to fetch cfurl.cloudfront.net/assets/index-hash.js

And this is the error page I am getting -

A Google 404 for some reason

The response headers are -

Response headers

Here's what I have observed so far -

  1. When I go to example.com/assets/index-hash.js, I get the appropriate js file back and I get access logs on my origin.
  2. When I try cfurl.cloudfront.net/assets/index-hash.js, I get the above 404 and I don't get any access logs on my origin.
  3. The error page makes it seem like CF is trying to access google.com/assets/index-hash.js?
  4. The origin domain is correctly configured in the distribution to the best of my understanding, with no origin path.

Additional details -

  1. The origin in this case is a Google Cloud Platform server (not sure if that has anything to do with the Google 404 page)

Is there anything else I can check to figure this one out? Any help is greatly appreciated.

19 Comments
2025/01/02
05:25 UTC

2

AWS Compute Savings Plans and non continuous Fargate ECS consumption

As the AWS documentation specifies, Fargate ECS consumption can benefit from Compute Savings Plans. However, it's still not clear to me how the discounts are applied to non-continuous consumption.

For example, I have a dev task that only runs on weekdays and is auto-scaled down to 0 on weekends.

Let's say the task uses 2 vCPU and 2 GB memory. The Fargate prices are as follows:

per vCPU per hour $0.04048

per GB per hour $0.004445

Then the on-demand weekly cost is $10.782

2 vCPU * 0.04048 Unit Price * 5 days * 24 hours per day = $9.7152

2 GB * 0.004445 Unit Price * 5 days * 24 hours per day = $1.0668

Assume this is the only thing applicable to the saving plan.

If I commit to a 3-year all-upfront Compute Savings Plan, it gives a 52% discount and brings the hourly rates to:

per vCPU per hour $0.0194304

per GB per hour $0.0021336

Based on my understanding, the weekly cost is $7.245504

2 vCPU * 0.0194304 Unit Price * 7 days * 24 hours per day = $6.5286144

2 GB * 0.0021336 Unit Price * 7 days * 24 hours per day = $0.7168896

Please confirm whether my calculation is correct.

Furthermore, I would like to know how savings plans are applied to multiple tasks.

Let's assume there's another task with the same spec which is scaled down to 0 on weekdays but scaled up to 1 on weekends.

Will these two tasks consume the same savings plans credits and benefit each other?
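The arithmetic above checks out; the subtlety is that a Savings Plan is an hourly dollar commitment, billed for all 168 hours of the week whether or not the task runs, which is why the 7-day figure is the right comparison. A quick sanity check of the numbers:

```typescript
// Verify the post's figures: Fargate cost = (vCPU * vCPU rate + GB * GB rate) * hours.
const VCPU_RATE = 0.04048;   // on-demand, per vCPU-hour (from the post)
const GB_RATE = 0.004445;    // on-demand, per GB-hour
const SP_DISCOUNT = 0.52;    // 3-year all-upfront Compute Savings Plan

function weeklyCost(vcpu: number, gb: number, daysRunning: number, discount = 0): number {
  const hours = daysRunning * 24;
  const hourlyRate = (vcpu * VCPU_RATE + gb * GB_RATE) * (1 - discount);
  return hourlyRate * hours;
}

const onDemand = weeklyCost(2, 2, 5);             // task runs weekdays only
const planned = weeklyCost(2, 2, 7, SP_DISCOUNT); // commitment bills all 168 hours
```

On the second question: as I understand it, the hourly commitment is applied to aggregate eligible compute usage across the account, so two tasks with complementary schedules would draw on the same commitment rather than each needing their own; worth confirming against the Savings Plans docs.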

1 Comment
2025/01/02
01:03 UTC

2

Umm.. so I have to create a new signed URL for each file?

Is this the recommended approach for uploading multiple files to S3 from a web app (HTML input of type file):

  • Call getSignedUrl from @aws-sdk/s3-request-presigner for each file, creating a new PutObjectCommand with Key being the name of the file.
  • Do a PUT request to each signed url for each file

Is this feasible for say hundreds of files? Each request happening sequentially... How else do others handle this simple use case?
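Yes, presigned URLs are per-object, but nothing forces the requests to be sequential. A browser-side sketch that signs and uploads in capped parallel batches; `getUrl` stands in for whatever endpoint returns your presigned URL, and the batch size of 10 is an arbitrary placeholder:

```typescript
// Pure helper: split an array into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Upload files batch by batch so hundreds of files don't open hundreds of
// simultaneous connections.
async function uploadAll(files: File[], getUrl: (f: File) => Promise<string>) {
  for (const batch of chunk(files, 10)) { // cap concurrent uploads at 10
    await Promise.all(
      batch.map(async (file) => {
        const url = await getUrl(file);
        await fetch(url, {
          method: "PUT",
          body: file,
          headers: { "Content-Type": file.type },
        });
      })
    );
  }
}
```

Batching the signing itself also helps: one Lambda call that returns an array of URLs (one per filename) cuts the round trips from 2N to N+1.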

17 Comments
2025/01/02
01:20 UTC
