/r/aws
News, articles and tools covering Amazon Web Services (AWS), including S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, AWS-CDK, Route 53, CloudFront, Lambda, VPC, Cloudwatch, Glacier and more.
Hi,
Is the 3-month free Lightsail trial 750h per month overall, i.e. for Windows and Linux combined? Or can I run 750h of a Windows instance and also 750h of a Linux Lightsail instance each month?
I see knowledgeable devs advocate for DynamoDB, but I suspect it would just slow you down until you start pushing the limits of an RDBMS. Amplify's use of DynamoDB baffles me.
DynamoDB demands that you know your access patterns upfront, which you won't. You can migrate data to fit new access patterns, but migrations take a long time.
GSIs help, but they are eventually consistent, which makes them unreliable - users do not want to place a deposit and then see their balance sit at $0 for a few seconds before bouncing up and down.
Compare this to an RDBMS, where you can query anything with strong consistency and easily create an index when you need more speed.
Also, the `Scan` operation does not return a consistent snapshot, even with strongly consistent reads enabled - another gotcha.
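The GSI point can be made concrete without touching AWS. DynamoDB accepts `ConsistentRead=True` on a base-table `Query`, but rejects it when `IndexName` points at a global secondary index (the real service returns a `ValidationException`), so GSI reads are always eventually consistent. A toy sketch of that server-side check, with a hypothetical table and index name:

```python
# Mimic DynamoDB's rule: ConsistentRead is not supported on a GSI.
# (Toy check only - the real validation happens server-side.)

def validate_query(params: dict) -> None:
    """Raise if the query asks for a strongly consistent read on a GSI."""
    if params.get("ConsistentRead") and "IndexName" in params:
        raise ValueError(
            "Consistent reads are not supported on global secondary indexes"
        )

# Strongly consistent read on the base table: allowed.
base_query = {
    "TableName": "Accounts",                      # hypothetical table
    "KeyConditionExpression": "user_id = :u",
    "ConsistentRead": True,
}
validate_query(base_query)  # passes silently

# The same request against a GSI: rejected, only eventual consistency exists.
gsi_query = {
    "TableName": "Accounts",
    "IndexName": "balance-index",                 # hypothetical GSI
    "KeyConditionExpression": "balance_bucket = :b",
    "ConsistentRead": True,
}
try:
    validate_query(gsi_query)
except ValueError as e:
    print(f"rejected: {e}")
```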
Hello fellow engineers, I am having trouble finding the correct answer to this question.
I have an EC2 server with Ubuntu 22.04 installed and a Java app running on it (as a Linux service). This began as a small app, so I used a t2.medium for it, which is using the i386 architecture (I did not know that at the time). I now need to upgrade this server to a stronger one, and I was thinking of a t2.xlarge, which uses the x86_64 architecture. If I just resize the server (change the instance type), will the app and everything on the server keep working normally, or could there be problems?
What are my options here?
Thanks in advance. :)
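Whether the resize is safe mostly hinges on what the OS actually is: Ubuntu 22.04 cloud images are 64-bit, and a 64-bit OS moves between x86_64 instance types without changes. A quick sanity check you can run on the box before resizing:

```python
# Check which architecture the OS (and therefore the JVM) actually runs.
# "x86_64" means a move to another x86_64 instance type keeps the same
# architecture; "i686"/"i386" would indicate a genuinely 32-bit OS that
# 64-bit-only instance types cannot boot.
import platform

arch = platform.machine()
print(f"architecture: {arch}")
```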
.NET
The app will run on-prem or on Azure.
The queue receives ~100K messages/day.
Messages are unevenly distributed in time; the volume peaks at certain hours (see image). Processing a message takes between 200ms and 5s; ~90% finish in 1s.
I need to receive the messages as soon as they appear in the queue and process them. Minimizing delay is desirable.
My questions:
- Should I have a single `SQSConsumer` consuming in an infinite loop and pushing messages to some in-memory queue (from which I consume and fire off async processing)?
- Should I have multiple `SQSConsumer`s consuming?
- Should I have a single `AmazonSQSClient` or multiple?
- Should there be no internal queue at all, with ~20 concurrent consumers (threads), each with its own `SQSConsumer`?
- Is it recommended to persist messages on-prem?
- Should I use long or short polling?

Please advise :)
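One common shape (sketched in Python for brevity; the same structure applies with `AmazonSQSClient` in .NET) is a single long-polling receive loop feeding a bounded worker pool, rather than ~20 independent pollers. The fetch function is injected so the loop runs without AWS; in real code it would wrap `ReceiveMessage` with `WaitTimeSeconds=20` (long polling) and `MaxNumberOfMessages=10`. All numbers here are illustrative, not prescriptive:

```python
# One receive loop, many workers: keeps polling cheap while the 200ms-5s of
# per-message work happens off the loop.
from concurrent.futures import ThreadPoolExecutor

def run_consumer(fetch, handle, max_workers=20):
    """Single receive loop dispatching to a worker pool.

    fetch() returns a list of messages (empty when a long poll times out)
    or None to stop; handle(msg) processes and would then delete the message.
    """
    processed = 0
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while True:
            msgs = fetch()
            if msgs is None:
                break
            for m in msgs:
                pool.submit(handle, m)   # slow work happens off the loop
                processed += 1
    return processed

# Exercise the loop with a fake queue: two batches, then a stop signal.
fake = iter([["m1", "m2"], ["m3"], None])
done = []
count = run_consumer(lambda: next(fake), done.append, max_workers=4)
print(count)   # 3
```

With long polling the loop blocks inside `ReceiveMessage` during quiet hours, so idle cost is near zero; during peaks the pool bounds concurrency instead of an unbounded in-memory queue.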
I am trying to be PCI DSS compliant by having end-to-end encryption. I am using ECS Fargate and was wondering if anyone has managed end-to-end encryption somehow? I think Service Connect may work, but I am unsure whether I need to configure my containers with nginx, etc. Any guidance or general discussion about this would be appreciated!
Aurora DSQL was announced yesterday at re:Invent 2024 (https://aws.amazon.com/blogs/database/introducing-amazon-aurora-dsql/) - some of the most interesting features are:
- Multi-region active-active
- Strong consistency across multiple regions
- Serverless
- Low latency
Is this the true equivalent of the DynamoDB NoSQL database, but in the SQL world?
We're currently using db.m6g.2xlarge and are considering whether the more expensive db.m7g.2xlarge is worth it.
Our application is a WhatsApp marketing + customer support SaaS. Our main points of issue are at
So first, we're of course spending more time on making the architecture + queries more efficient, but we wondered whether upgrading from m6g to m7g is worth it.
I found one article which recommends it, at least for EC2: https://www.learnaws.org/2023/02/25/m7g-m6g/
Most of the time about 20 GB of our 32 GB is free, so it's definitely not a memory-hungry application.
By the way, I have just switched to db.m7g.2xlarge, so I will know soon enough, but what has been your experience if you're also using RDS? m5 vs m6g/m7g is a no-brainer because of Graviton, but I'm not sure about this one.
I'm building an app (non-profit) and for now I use AWS Amplify, S3 and DynamoDB. Honestly, I'm not really happy with that. I feel like it's way too complicated, and it's very slow. I don't know why, but for around 30 items in my DynamoDB table it takes seconds to get the items. This is not acceptable! What happens when I have more than a thousand items? Would it take multiple seconds, or worse, minutes, to fulfill the request?
So my question is: is this a general problem with AWS? And what are good or better alternatives? I've heard about Supabase. They use PostgreSQL, and it looks like I can connect Prisma too. I like working with Prisma, and it's fast as well.
But how is the pricing? Is it cheaper than AWS? What are the key points I have to look at?
Or is it better (or even possible) to host my own backend on Cloudflare? I just need an API backend, maybe with NestJS, and of course authentication, maybe OAuth2.
But I could not find any resource on how to do this. Is it possible to host a NestJS app on Cloudflare? On the pricing page I saw 100k requests/day on the free plan, which would really be enough as a first step.
Do they really mean what I understand by that, i.e. an API request to my backend? Or what is meant by 100k requests per day?
I hope someone can give me some tips.
So I have been using AWS services for almost a year now and have promptly paid every month's bills. This time, I am running low on the bucks.
The earliest I can pay is the 5th of next month, even if it means paying the aggregate of two months.
Will my account be suspended or my instances be shut down if I delay the payment?
UPDATE: I contacted support and I've gotten approval for an extension till the 10th of Jan as a one time exception. If anyone is curious, the amount is small, about $26.
Hi all,
I have an ingestion pipeline using a Glue and S3 combination. S3 buckets will be used for the Raw, Publish and Functional layers.
The Glue job between Raw and Publish will validate the schema.
I have to do this for 1,000+ files, which will run on a daily basis.
I am looking at creating a generic job to which we pass the file name and its schema as parameters. The job stays the same and only the parameters change, which will save a lot of development time.
I would be grateful for your input on this.
P.S. New to AWS.
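The generic-job idea can be sketched outside Glue. A real job would read the file name and schema from job arguments (via `getResolvedOptions`) and the data from S3; the stub below keeps both as plain Python so only the reusable validation logic is visible. All names and the schema format are illustrative assumptions:

```python
# One generic validation routine; per-file behavior comes entirely from the
# schema parameter, so 1,000+ files share one job definition.
import json

def validate_records(records, schema):
    """Return (row_index, reason) pairs for rows failing the schema."""
    bad = []
    for i, row in enumerate(records):
        if set(row) != set(schema):
            bad.append((i, "column mismatch"))
            continue
        for col, expected_type in schema.items():
            if not isinstance(row[col], expected_type):
                bad.append((i, f"{col} is not {expected_type.__name__}"))
    return bad

# These would arrive as job parameters; the job code itself never changes.
schema = {"id": int, "name": str}
records = [
    {"id": 1, "name": "ok"},
    {"id": "2", "name": "bad type"},
    {"id": 3},
]
errors = validate_records(records, schema)
print(json.dumps(errors))
```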
If you need a ticket hmu - looking for 20% or less of cost
I was practicing with an S3 static website, deleted everything, and started from scratch again. This time my ACM certificate is taking too long to validate - could it be because I used the same names for the bucket, CloudFront distribution, and Route 53 records?
The following policy blocks write access to all the buckets. How to grant read access to one bucket and full access to all other buckets?
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FullAccessToAllResources",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    },
    {
      "Sid": "DenyWriteAccessToSpecificBucket",
      "Effect": "Deny",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl",
        "s3:DeleteObjectAcl",
        "s3:PutBucketPolicy",
        "s3:DeleteBucketPolicy"
      ],
      "Resource": [
        "arn:aws:s3:::my-read-only-bucket",
        "arn:aws:s3:::my-read-only-bucket/*"
      ]
    },
    {
      "Sid": "AllowReadAccessToSpecificBucket",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-read-only-bucket",
        "arn:aws:s3:::my-read-only-bucket/*"
      ]
    }
  ]
}
```
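For what it's worth, this policy hinges on IAM's evaluation order: an explicit Deny always overrides an Allow, so the blanket `Allow *` grants full access to everything, the Deny carves out only the write actions on the one bucket, and reads on that bucket stay allowed via the first statement (which makes the third statement redundant). A toy sketch of that deny-overrides rule — not the real IAM engine; `fnmatch` only approximates IAM wildcards and conditions are ignored:

```python
# Toy deny-overrides evaluator: explicit Deny beats any matching Allow.
from fnmatch import fnmatch

def is_allowed(policy, action, resource):
    def matches(stmt):
        acts = stmt["Action"]
        acts = acts if isinstance(acts, list) else [acts]
        res = stmt["Resource"]
        res = res if isinstance(res, list) else [res]
        return (any(fnmatch(action, a) for a in acts)
                and any(fnmatch(resource, r) for r in res))
    hits = [s for s in policy["Statement"] if matches(s)]
    if any(s["Effect"] == "Deny" for s in hits):
        return False                       # explicit deny wins
    return any(s["Effect"] == "Allow" for s in hits)

policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["s3:PutObject", "s3:DeleteObject"],
         "Resource": ["arn:aws:s3:::my-read-only-bucket",
                      "arn:aws:s3:::my-read-only-bucket/*"]},
    ]
}

print(is_allowed(policy, "s3:GetObject", "arn:aws:s3:::my-read-only-bucket/a"))  # True
print(is_allowed(policy, "s3:PutObject", "arn:aws:s3:::my-read-only-bucket/a"))  # False
print(is_allowed(policy, "s3:PutObject", "arn:aws:s3:::another-bucket/a"))       # True
```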
I’ve got some layers that are over the 250 MB size limit for Lambda layers. Does anyone have recommendations for how to get around this?
I am trying to design the architecture for a website that lets users upload PDFs and choose from a variety of options for how they would like the files manipulated.
I'm using Cognito for user management and Amplify to host the site. The next and biggest hurdle is file management. My understanding is that the most common approach is to use a single S3 bucket and give users presigned URLs to upload to it. The question I run into is: where in that process do you record the information in a table to ensure users can't see each other's files? When a user logs in, what resource would you use to quickly supply them with their files and no one else's? Should I be using CloudFront instead? Should I consider an architecture where every user gets their own subdomain in S3?
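One common pattern (a sketch under assumptions, not the only design): key every object under the authenticated user's Cognito identity, e.g. `uploads/{user_id}/...`, record the key in a table at upload time, and only generate presigned URLs (via `generate_presigned_url`) for keys under the caller's own prefix. Listing a user's files then becomes a single prefix query. The helper names below are hypothetical:

```python
# Scope object keys by user so ownership is encoded in the key itself.
import uuid

def object_key_for(user_id: str, filename: str) -> str:
    """Build an upload key under the authenticated user's prefix."""
    safe = filename.replace("/", "_")   # keep user input out of the path
    return f"uploads/{user_id}/{uuid.uuid4().hex}-{safe}"

def user_owns(user_id: str, key: str) -> bool:
    """Authorization check before issuing a presigned URL for `key`."""
    return key.startswith(f"uploads/{user_id}/")

key = object_key_for("user-123", "report.pdf")
print(user_owns("user-123", key))   # True
print(user_owns("user-456", key))   # False
```

The trailing slash in the prefix check matters: without it, `user-12` would "own" keys belonging to `user-123`.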
Hey folks, we jotted down some notes from the AWS re:Invent 2024 opening keynote, led by Matt Garman in his debut as AWS CEO. If you missed it, here’s a quick rundown of the big announcements and features coming in 2025:
1. S3 Tables: Optimized for Analytics Workloads
AWS unveiled S3 Tables, a new bucket type designed to revolutionize data analytics on Apache Iceberg, building on the success of Parquet.
2. S3 Metadata: Accelerating Data Discovery
The S3 Metadata feature addresses the pain point of finding and understanding data stored in S3 buckets at scale.
AWS unveiled Nova, a new family of multimodal generative AI models designed for diverse applications in text, image, and video generation. Here's what's new:
That’s the big stuff from the keynote, but what did you think?
Hi everyone,
I'm looking for advice on selecting the right service or combination of services for my specific use case. I need to process new files stored in S3, and I'm aiming to handle them in batches with near-real-time processing.
The files should be grouped based on a particular property found in the S3 event path and timestamp (part of the file name). While there usually aren't many files to process, there are occasions when the files may be larger (up to tens of MB). I'm confident that AWS Lambda can manage the whole processing of these files, even when grouped into batches.
Typically, the files are uploaded within a few minutes, but sometimes the upload process takes longer, and unfortunately I can't modify this; at the same time, I get no signal that the uploads for a specific timestamp have finished. Each file is timestamped to the nearest minute.
In essence, I receive S3 event notifications and want to group the events by a path property and their timestamp. Once the events for a given timestamp have stopped coming in (let's say for a minute), I want to send this batch for processing. I would say that overall there will be hundreds of such batches with tens of files for each minute.
I'd appreciate any recommendations or insights on how to best accomplish this. Thanks in advance!
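The "flush a (path, timestamp) group after a quiet minute" logic can be sketched independently of the AWS wiring (events would typically arrive from S3 notifications via SQS, and flushing would invoke the Lambda batch). Names, the schedule, and the injected clock are illustrative:

```python
# Debounce-style batching: a group is flushed once no new event has arrived
# for it within the quiet period.
class Batcher:
    def __init__(self, quiet_seconds=60):
        self.quiet = quiet_seconds
        self.batches = {}   # (path_prop, timestamp) -> {"keys": [...], "last": t}

    def add(self, prop, ts, key, now):
        b = self.batches.setdefault((prop, ts), {"keys": [], "last": now})
        b["keys"].append(key)
        b["last"] = now

    def flush_ready(self, now):
        """Return and remove groups quiet for at least quiet_seconds."""
        ready = {k: v["keys"] for k, v in self.batches.items()
                 if now - v["last"] >= self.quiet}
        for k in ready:
            del self.batches[k]
        return ready

b = Batcher(quiet_seconds=60)
b.add("tenant-a", "2024-12-05T10:01", "f1.csv", now=0)
b.add("tenant-a", "2024-12-05T10:01", "f2.csv", now=30)
print(b.flush_ready(now=60))   # {} - last event only 30s ago
print(b.flush_ready(now=95))   # the full two-file batch, now quiet for 65s
```

In AWS terms the `flush_ready` sweep could be a scheduled check (e.g. an EventBridge rule every minute) over state kept in DynamoDB, or an SQS delay-based re-check; both fit this shape.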
CloudFront has a compute utilization metric for functions; a value of 71-100 means your function may suffer from throttling.
How does this behave in practice? Will the execution of the viewer_request function be skipped and the request allowed to proceed, or will CloudFront throw an error? If an error is thrown, how do you fall back from it?
hey all,
is anybody out there in the real world who has been able to set up AWS Chatbot with Slack?
maybe it's a #skillissue on my end, but I've tried twice and failed twice.
I hit a roadblock either when EventBridge "transforms" the message from CloudWatch, or SNS fails with some "unsupported event" type of error.
is there a GitHub repo I could look into? or a blog post from anybody?
thank you!
Hello everyone, I am currently learning to use AWS through a project and I am having trouble getting my app to talk with my postgres DB. So here's the setup:
My postgres db on RDS works well via pgAdmin
I was suspecting security groups, but I can't figure it out or find a way to debug it.
Speaking of SG:
| Security group | Inbound | Outbound |
|---|---|---|
| ALB | SSH/HTTP/HTTPS | to ECS, all traffic |
| RDS | 5432 my IP, 5432 EC2 SG, 5432 ECS SG | all traffic |
| ECS | 5432 RDS, 5000 ALB | 5432 RDS, all 0.0.0.0/0 |
| EC2 | SSH, 5432 RDS | 5000 0.0.0.0/0 |
Any help would be greatly appreciated. Thanks!
We have an app which uses DB triggers to update certain rows (e.g., updating a user table might then trigger an insert into another table).
We use Elasticache Serverless (valkey) to cache db queries, but where we're getting stuck is when a trigger updates a row, we need to invalidate the cache for that row, AND the cache for other rows which the trigger might have created/updated. The application itself has no knowledge of what the trigger may have done.
How would you design a caching strategy to handle this? MySQL triggers can't call lambda functions, or talk to Elasticache directly. So it seems like you'd need a service to monitor all writes to the DB, then somehow invalidate the cache...?
(It might just be, don't use DB triggers you idiot. But I'm curious if there's a way to make this work.)
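One way to make this work is change data capture: stream the binlog (e.g. with DMS or Debezium) so every write, including the rows the trigger touched, shows up as a change event, then have a small consumer map each event to the cache keys it should invalidate. A minimal sketch of that mapping step, with a plain dict standing in for Elasticache; the `table:pk` key naming is a hypothetical convention:

```python
# Map CDC row-change events to cache keys and drop them - the app never
# needs to know which rows the trigger touched, the binlog already does.

def cache_keys_for(event):
    """Map one row-change event to the cache keys it invalidates."""
    return [f"{event['table']}:{event['pk']}"]

def apply_events(events, cache):
    """Drop every cache entry touched by the change stream."""
    for e in events:
        for k in cache_keys_for(e):
            cache.pop(k, None)

cache = {"users:1": "row", "audit_log:7": "row"}
# The app updated users:1; the trigger inserted audit_log:7. Both writes
# appear in the binlog, so both keys get invalidated.
events = [
    {"table": "users", "pk": 1},
    {"table": "audit_log", "pk": 7},
]
apply_events(events, cache)
print(cache)   # {}
```

In the real version `cache.pop` would become a `DEL` against Valkey, and the consumer would read from the DMS/Debezium stream (e.g. via Kinesis or MSK) rather than a list.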
Is it possible to start a cloud-services business by specializing in AWS? I'm wondering if deep knowledge of AWS could allow me to offer services to companies already using it, or help those not yet on the cloud with implementation and optimization. I'd love to hear thoughts or experiences from others! P.S. I'm at the beginning of my journey; I'm studying for my Cloud Practitioner cert to start 😊
Hi,
I live in Europe so it's difficult to watch the talks in real-time because of the time difference. I checked out the AWS Events YouTube channel but only 2-3 talks are available.
I'm interested in a couple of the upcoming sessions, but the timing makes it impossible to watch them live. Is there a place where I can watch the recordings afterward, or are they not available?
Thanks!
All the tutorials and guides I find when googling this issue go the other way - 'how to geoblock using ACLs/WAF/other AWS things' - but I need to remove blocking so I can gain access from worldwide. I assumed it should be that way by default, and the guides imply this too.
I set up an EC2 instance and opened a port range in the security group. The ports are now open and working from my country (AU) but not from NZ; the browser just shows network/site unreachable.
Can someone point me to where I need to go to remove any blocking that may be in place, or explain why it's operating that way?
This is one EC2 instance launched with next to no config other than the Elastic IP applied. I can remote into the instance without issue, just not from other countries.
Where can I find information about the options for networking receptions tonight?
Hello, AWS community,
I'm looking for some advice and insights on how to transition into a role as an AWS Security Architect.
A bit about me:
I'm now looking to grow into a cloud architect role, specifically focusing on security architecture. My challenge is understanding the most effective roadmap to get there.
I’d love to hear your recommendations on:
I deeply appreciate your help and any guidance you can offer.
Thank you in advance!