/r/aws
News, articles and tools covering Amazon Web Services (AWS), including S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, AWS-CDK, Route 53, CloudFront, Lambda, VPC, Cloudwatch, Glacier and more.
Hi - I have a platform where I need to send SMS notifications, ideally supporting as many countries as possible. I'm not finding much information about how AWS SMS works, so I was hoping someone here would know:
- It looks like there are some countries where you are required to register a number in order to send a message, but other countries where AWS just uses a shared pool of origination numbers. Is there a list of which countries require registration versus which can use the shared pool?
- I've registered a sender ID in a few countries. If I send an SMS to another country that doesn't need registration, will it automatically use the sender ID I pass in anyway?
- Is there any way I can log/see the sent messages and failures in the AWS console? I tried CloudWatch but nothing is showing up there.
Any info at all would be helpful!
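In case a concrete starting point helps: if the sending goes through SNS (rather than End User Messaging / Pinpoint, which I can't tell from the above), a minimal sketch of a send with a sender ID plus turning on delivery-status logging looks roughly like this. The phone number, sender ID and IAM role ARN are placeholders:
# sketch, assuming SNS SMS; phone number, sender ID and role ARN are placeholders
aws sns publish \
    --phone-number "+447700900123" \
    --message "Test message" \
    --message-attributes '{
        "AWS.SNS.SMS.SenderID": {"DataType": "String", "StringValue": "MYBRAND"},
        "AWS.SNS.SMS.SMSType":  {"DataType": "String", "StringValue": "Transactional"}
    }'

# enable delivery status logging so sends/failures show up in CloudWatch Logs
aws sns set-sms-attributes \
    --attributes '{
        "DeliveryStatusIAMRole": "arn:aws:iam::111122223333:role/SNSSMSDeliveryStatusRole",
        "DeliveryStatusSuccessSamplingRate": "100"
    }'
With delivery status logging enabled, results should land in CloudWatch Logs under log groups like sns/<region>/<account-id>/DirectPublishToPhoneNumber (and a matching .../Failure group).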
Hey!
I'm setting up OIDC authentication (not Cognito) using an ALB, but I'm struggling to validate the "x-amzn-oidc-data" token in Rust.
I've followed the documentation here and here, but I always get an "Invalid padding" error.
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine as _};
use pem::parse as parse_pem;
use ring::signature::{UnparsedPublicKey, ECDSA_P256_SHA256_FIXED};
use serde::Deserialize;
use std::error::Error;

#[derive(Debug, Deserialize)]
struct Claims {
    sub: String,
    name: String,
    email: String,
}

fn parse_ec_public_key_pem(pem_str: &str) -> Result<Vec<u8>, Box<dyn Error>> {
    let pem_doc = parse_pem(pem_str)?;
    if pem_doc.tag() != "PUBLIC KEY" && pem_doc.tag() != "EC PUBLIC KEY" {
        return Err("Not an EC public key PEM".into());
    }
    // The PEM body is a DER-encoded SubjectPublicKeyInfo, but ring's
    // ECDSA_P256_SHA256_FIXED expects the raw uncompressed point
    // (0x04 || X || Y, 65 bytes). For a P-256 SPKI that point is the
    // trailing 65 bytes of the DER.
    let der = pem_doc.contents();
    if der.len() < 65 || der[der.len() - 65] != 0x04 {
        return Err("Expected an uncompressed P-256 point in the SPKI".into());
    }
    Ok(der[der.len() - 65..].to_vec())
}

fn split_jwt(token: &str) -> Result<(&str, &str, &str), Box<dyn Error>> {
    let parts: Vec<&str> = token.split('.').collect();
    if parts.len() != 3 {
        return Err("Invalid JWT format".into());
    }
    Ok((parts[0], parts[1], parts[2]))
}

// ALB pads its JWT segments with '=', which strict base64url decoders
// (and most JWT libraries) reject. Strip the padding before decoding.
fn decode_segment(segment: &str) -> Result<Vec<u8>, Box<dyn Error>> {
    Ok(URL_SAFE_NO_PAD.decode(segment.trim_end_matches('='))?)
}

fn verify_alb_jwt(token: &str, public_key_pem: &str) -> Result<Claims, Box<dyn Error>> {
    // Split into header/payload/signature
    // TODO: check the signer / ALB ARN in the header before trusting the token!
    let (header_b64, payload_b64, signature_b64) = split_jwt(token)?;

    // The ES256 signature is already in the fixed r||s form ring expects.
    let signature_bytes = decode_segment(signature_b64)?;
    let pubkey_point = parse_ec_public_key_pem(public_key_pem)?;
    let signing_input = format!("{}.{}", header_b64, payload_b64);

    // Verify the signature over "<header>.<payload>"
    let unparsed_key = UnparsedPublicKey::new(&ECDSA_P256_SHA256_FIXED, &pubkey_point);
    unparsed_key.verify(signing_input.as_bytes(), &signature_bytes)?;

    // Only parse the claims once the signature checks out
    let payload_json = decode_segment(payload_b64)?;
    let claims: Claims = serde_json::from_slice(&payload_json)?;
    Ok(claims)
}

fn get_jwt_public_key(kid: &str) -> Result<String, Box<dyn std::error::Error>> {
    // The kid comes from the JWT header; the region must match the ALB's.
    let url = format!(
        "https://public-keys.auth.elb.eu-west-1.amazonaws.com/{}",
        kid,
    );
    let response = reqwest::blocking::get(&url)?.text()?;
    Ok(response)
}

fn main() {
    let token = r#"eyJ0BUbw=="#;
    let public_key_pem = get_jwt_public_key("c6fc5187-f1fd-4052-b2aa-b845ef225362").unwrap();
    match verify_alb_jwt(token, &public_key_pem) {
        Ok(claims) => {
            println!("JWT is valid! Claims = {:?}", claims);
        }
        Err(e) => {
            eprintln!("JWT verification failed: {e}");
        }
    }
}
I'm reading the token directly from the HTTP header, and I don't really understand why AWS isn't compliant with standard libraries...
"Standard libraries are not compatible with the padding that is included in the Application Load Balancer authentication token in JWT format."
Hello, I'm looking for advice on solving a somewhat novel networking need in AWS. To put my cards on the table: I'm not a networking expert nor an AWS expert, though I'm a fairly experienced software engineer who is familiar with networking concepts.
I'm trying to implement a cloud-based application from a vendor which needs network line of sight to EC2 instances on our VPCs.
This is fairly straightforward if the networking configuration is sensible, but mine is not.
The network I'm working with consists of over 700 VPCs. Each of them may have overlapping subnets. Using cloudware I was able to determine that about 20% of them do, but coincidentally I found no actual IP address reuse.
These VPCs are totally isolated from one another and have no visibility from one to the other, meaning there is no peering.
I'm not sure this external cloud application will need to communicate with EC2 instances on all of the VPCs, but I'm moving forward with the assumption that it may.
Being new to AWS, I started out testing, and at this point have proved out that connecting via VPC and a site to site gateway is almost trivial in the simplest case, which is a single VPC with a single EC2 instance to manage.
I moved on to a more complicated test case, with two isolated VPCs and overlapping subnets. Using a transit gateway I was able to use static routes to route to VMs on the same subnets but different VPCs, but that doesn't solve the IP reuse case.
I'm looking for an architecture that can handle this. What I want is for my external application to communicate via a site-to-site gateway with a sort of NAT device. I want the NAT device to present a sensible subnet range to my cloud application and translate that sensible range to the actual devices across my VPCs. It needs to be two-way: my EC2 instances need to be able to route traffic back through this device, and that traffic needs to be presented back to the cloud application with the untranslated IP.
After looking into NAT in AWS, I see that it's unidirectional so that's not the solution I need.
I've also poked around a little bit at PrivateLink, which seems to be the way to go. I don't have it in front of me, but I seem to remember an AWS white paper on this exact use case, using PrivateLink and a Network Load Balancer to do the job. From what I can understand, though, that service is intended to connect AWS endpoints and services in this exact situation, not to support a connection from an outside application on the internet in this way.
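If I've understood PrivateLink correctly, the shape would be roughly this: each workload VPC exposes an NLB as an endpoint service, and a single "hub" VPC (the one the vendor reaches over the site-to-site VPN) creates interface endpoints to each service, so the endpoint IPs come out of the hub VPC's non-overlapping CIDR. All IDs/ARNs below are placeholders and I haven't validated this end to end:
# in a workload VPC: expose its NLB as a PrivateLink endpoint service (placeholder ARN)
aws ec2 create-vpc-endpoint-service-configuration \
    --network-load-balancer-arns arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/workload-nlb/0123456789abcdef \
    --acceptance-required

# in the hub VPC the vendor reaches over the VPN: consume that service as an interface endpoint
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0hub0000000000000 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.vpce.eu-west-1.vpce-svc-0123456789abcdef0 \
    --subnet-ids subnet-0hub0000000000000 \
    --security-group-ids sg-0hub0000000000000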
Is there a native AWS solution to routing through this wacky environment I'm dealing with? I think the answer might be to reconfigure our network to something more sensible, but making that suggestion would almost certainly get me burned at the stake...
If you're still here, thanks for sticking through the long message 😂
Hey, I made an AWS account a few years ago using my personal email, and after a while I ended up leaving it aside and forgot about it. These days I tried to log in again and it says that there is no account associated with my email, yet when I try to create an account it says that I cannot create a new account using the same email as another account.
Could anyone help me resolve this??
Would it be possible to send an email to support or something similar?
Note: sorry for any mistakes or something, I'm using Google Translate, I'm learning English but I feel like I don't know enough yet to communicate clearly.
Hello! I'm wondering how to make sure my subscription to this is cancelled. I bought it by accident thinking it was something else. They make backing out very confusing and convoluted, likely on purpose, and I can't remove the attached card. I'm a broke student. How do I know it's cancelled and won't charge me?
Every company I've been at has an overpriced CSPM tool that is just a big asset management tool essentially. They allow us to view public load balancers, insecure s3 buckets, and most importantly create custom queries (for example, let me see all public EC2 instances with a role allowing full s3 access).
Now, this is already queryable via AWS Config, but you have to have it enabled and recording, and you have to write the query yourself.
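For reference, this is the sort of Config advanced query I mean; it's a sketch, and the queryable field names depend on the resource schema:
# sketch: list running EC2 instances via Config advanced queries
aws configservice select-resource-config \
    --expression "SELECT resourceId, configuration.instanceType WHERE resourceType = 'AWS::EC2::Instance' AND configuration.state.name = 'running'"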
When Amazon Q first came out, I was excited because I thought it would allow quick questions about our environment, e.g. "How many EKS clusters do we have that do not have encryption enabled?" or "How many regional API endpoints do we have?". However, at the time it did not do this; it just pointed to documentation. Seemed pointless.
However this was years ago, and there's obviously been a ton of development from Amazon's AI services. Does anyone know if Q has this ability yet?
Guys, I'm trying to use both the client-side and server-side AWS SDK for JavaScript in my project. Can someone share links to the appropriate documentation?
I launched a WordPress website on an EC2 instance and connected it to a MySQL database. The WordPress website works fine. Then I created an AMI of that EC2 instance and launched another EC2 instance from the AMI. When I try to open the second EC2 instance via its public IP, it does not work. If I log into the machine and restart the Apache server ("sudo service httpd restart"), the public IP starts working. Then I created a target group, put it behind a load balancer, and also created an Auto Scaling group. When the ASG creates more EC2 instances from the launch template with my AMI, the public IPs do not work, same as my second EC2 instance, but when I manually log into a particular instance and restart its Apache server, it works. How do I resolve this? I want new EC2 instances to start working as they are created, without me manually starting Apache. I also tried adding the user data "sudo service httpd restart" to the launch template, checked the security groups, and even allowed all traffic in case I missed something, but it still is not working. Need advice/solutions.
Thank you!
TLDR: EC2 instances created from an AMI in the Auto Scaling group aren't automatically starting the Apache server. I tried adding "sudo service httpd restart" to the user data, but it's still not working. This issue could be related to the EC2 instance startup sequence, security settings, or a missing Apache auto-start configuration.
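If the root cause is simply that Apache isn't enabled to start at boot (which would match the symptom that a manual restart fixes it), the usual fix is either baking "sudo systemctl enable httpd" into the instance before creating the AMI, or putting something like this in the launch template user data (this assumes Amazon Linux with httpd; user data runs as root):
#!/bin/bash
# make Apache start on this and every subsequent boot
systemctl enable httpd
systemctl start httpd
One thing to check: if the user data was added as just the single "sudo service httpd restart" line without a #!/bin/bash first line, cloud-init may not have executed it as a script at all, which would explain why it appeared to do nothing.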
I have a single-user WorkSpaces requirement in a region where Simple AD is not available. The only option is to run a Microsoft AD, which essentially doubles the WorkSpace cost. We don't use any Microsoft AD features. Can anyone please suggest a way to work around this?
I'm going through this tutorial: https://upskillcourses.com/courses/essential-web-developer-course/2-turn-cloud9-into-a-server. However, at the part where it wants me to use AWS Cloud9, I checked and I think Cloud9 has been discontinued. Is there a replacement I can use?
For years, I was Amazon S3’s biggest cheerleader. As an ex-Amazonian (5+ years), I evangelized static site hosting on S3 to startups, small businesses, and indie hackers.
“It’s cheap! Reliable! Scalable!” I’d preach.
But recently, I did the unthinkable: I migrated all my projects to Cloudflare’s free tier. And you know what? I’m not looking back.
Here’s why even die-hard AWS loyalists like me are jumping ship—and why you should consider it too.
Let’s be honest: S3 static hosting was revolutionary… in 2010. But in 2024? The setup feels clunky and overpriced:
Worst of all? You’re paying for glue code. To make S3 usable, you need:
✅ CloudFront (CDN) → extra cost
✅ Route 53 (DNS) → extra cost
✅ Lambda@Edge for redirects → extra cost & complexity
I finally decided to ditch Amazon S3 for better price/performance with Cloudflare.
As a former Amazon employee, I advocated for S3 static hosting to small businesses countless times. But now? I don’t think it’s worth it anymore.
With Cloudflare, you can pretty much run for free on the free tier. And for most small projects, that’s all you need.
Hey everyone,
I'm looking for the best way to expose a public API endpoint that makes calls to an LLM. A few key requirements:
Streaming support: Responses need to be streamed for a better UX.
Security & abuse protection: Needs to be protected against abuse (rate limiting, authentication with Firebase, etc.).
Scalability: Should handle multiple concurrent requests efficiently.
I initially tried Google Cloud Run with Google API Gateway, but I couldn't get streaming to work properly. Are there better alternatives that support streaming out of the box and offer good security features?
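On the AWS side, one option I've seen for exactly this combination is a Lambda function URL in streaming mode (response streaming is supported natively on the Node.js managed runtimes), with CloudFront/WAF in front for rate limiting and the Firebase check done inside the function. A sketch, where the function name is a placeholder:
# enable response streaming on an existing function's URL (function name is a placeholder)
aws lambda create-function-url-config \
    --function-name llm-proxy \
    --auth-type NONE \
    --invoke-mode RESPONSE_STREAM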
Would love to hear what has worked for you!
I applied for Startups Activate program on Jan 14 and it says 7-10 business days. How long did it take for you to hear back?
I know S3 storage costs are based on the number of GB stored in a bucket. But I was wondering: what happens if you only need an object stored temporarily, for a few seconds or minutes, and then you delete it from the bucket? Is the cost still incurred?
I was thinking about this in the scenario of image compression to reduce size. For example, a user uploads a 200MB photo to a S3 bucket (let's call it Bucket 1). This could trigger a Lambda which applies a compression algorithm on the image, compressing it to let's say 50MB, and saves it to another bucket (Bucket 2). Saving it to this second bucket triggers another Lambda function which deletes the original image. Does this mean that I will still be charged for the brief amount of time I stored the 200MB image in Bucket 1? Or just for the image stored in Bucket 2?
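From what I understand, you do pay for the brief period, but S3 Standard prorates storage (it's metered in byte-hours) and has no minimum storage duration (unlike the IA/Glacier classes), so it rounds to essentially nothing, and the per-request charges end up dominating. Rough numbers, assuming us-east-1 pricing of about $0.023 per GB-month:
# 200 MB kept for ~5 minutes in S3 Standard, us-east-1 (~$0.023/GB-month, ~730 h/month)
echo "0.2 * (5/60) / 730 * 0.023" | bc -l    # ~0.0000005 USD of storage
# versus the PUT request itself at ~$0.005 per 1,000 requests, i.e. ~$0.000005
So the 200 MB in Bucket 1 is technically billed for the minutes it exists, but the request charges and the Lambda invocations will likely be orders of magnitude more significant.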
Hey everyone,
I’m new to AWS and trying to understand IAM policies, but I’m a bit confused about some options in the Resources section when creating a policy.
For example, in this image when setting a resource for an IAM service, there’s an option called "Any in this account" – what exactly does this do?
Also, there’s an "Add ARN to restrict access" option. Why does this only let us restrict access? Why can’t we specify a certain number of ARNs directly instead of just restricting them? I don’t fully understand how this works.
And then how is it different from choosing actions in the first step? I don't get the difference.
I’d really appreciate any help! Thanks in advance.
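In case a concrete example helps frame the question: Actions are the API calls being allowed, and Resource is which objects those calls may touch. As I understand it, "Any in this account" fills Resource with a wildcard ARN scoped to your account, while "Add ARN to restrict access" narrows Resource to specific ARNs, and you can list several. A hypothetical policy (account ID and role names are made up):
# the two listed role ARNs are the "restrict access" part; swapping Resource for
# "arn:aws:iam::111122223333:role/*" is roughly what "Any in this account" amounts to
aws iam create-policy --policy-name demo-read-two-roles --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "OnlyTheseTwoRoles",
        "Effect": "Allow",
        "Action": ["iam:GetRole", "iam:ListRolePolicies"],
        "Resource": [
            "arn:aws:iam::111122223333:role/app-role-a",
            "arn:aws:iam::111122223333:role/app-role-b"
        ]
    }]
}'
So "restrict" doesn't mean deny; it just means you're narrowing Resource down from the account-wide wildcard to the specific ARNs you list.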
Title. I was trying to deploy the latest VLM models as an endpoint for inference using a GPU instance on SageMaker, but I get an "Unsupported Model Type ... (Model Name)" error. I was able to deploy the Meta Llama 11B Vision model, but not the latest Florence / H2O Mississippi models. Any workaround/suggestions?
I just wanted to ask: is the option to select the capacity type for DB instances deprecated in 2025? I was trying to create a serverless Aurora v1 cluster (for some testing purposes), but there was no option to select the capacity type; the DB instance that was getting created was of type provisioned. Is there still some way to create serverless instances?
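For anyone hitting the same thing: as far as I know, Aurora Serverless v1 reached end of life at the end of 2024, which is presumably why the capacity-type choice is gone. The serverless option now is Serverless v2, where the cluster is created normally and the instance uses the db.serverless class. A sketch with placeholder identifiers/credentials (pick an engine version that supports Serverless v2):
# placeholder identifiers/credentials; choose an engine version that supports Serverless v2
aws rds create-db-cluster \
    --db-cluster-identifier test-serverless-v2 \
    --engine aurora-mysql \
    --master-username admin \
    --master-user-password 'REPLACE_ME' \
    --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=4

aws rds create-db-instance \
    --db-instance-identifier test-serverless-v2-writer \
    --db-cluster-identifier test-serverless-v2 \
    --db-instance-class db.serverless \
    --engine aurora-mysql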
Does Canada’s tariff response mean prices are going up by 25% soon for AWS customers in Canada? Or is it just for goods and not digital services?
Hello all,
I have a domain registered through Route 53. I've got my public-facing server set up and have created an A record for my server, server.mydomain.com, pointing at IP XX.XX.XX.XX.
The problem I am seeing is that if I do a ping -a from a remote computer, the resolved name is this:
ec2-XX-XX-XX-XX.compute-1.amazonaws.com
Any ideas on what I'm missing?
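That name looks like the default reverse-DNS (PTR) record AWS owns for the address; your A record only controls forward lookups, and ping -a does a reverse lookup. A quick way to compare the two sides, plus (if the address is an Elastic IP) the call that asks AWS to use your name for reverse DNS; the allocation ID is a placeholder and the forward record must already resolve to the EIP:
dig +short server.mydomain.com          # forward: your A record
dig +short -x XX.XX.XX.XX               # reverse: the PTR, which is what ping -a shows

# for an Elastic IP, set a custom reverse-DNS name (placeholder allocation ID)
aws ec2 modify-address-attribute \
    --allocation-id eipalloc-0123456789abcdef0 \
    --domain-name server.mydomain.com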
I meant to put EBS (Elastic Block Store) in the title, not ELB.
So, I'm reading about how ephemeral storage (instance stores) is deleted when you stop an EC2 instance. This is because the instance may well come back up on different underlying hardware, and instance stores are bound to that hardware. Makes sense. And EBS volumes are not directly bound to the hardware, so stopping an EC2 instance won't destroy EBS data. Kind of makes sense.
If I start up an EC2 instance (let's say a t2.micro), SSH in, go to /home/ubuntu, and create a file, where is it going? Is it going to an instance store that will eventually get wiped, or to an EBS volume where data will persist across a stop/start? Reading through this SO discussion (amazon web services - How to preserve data when stopping EC2 instance - Stack Overflow) clears up the differences between EBS and ephemeral storage, but it discusses a root drive and a temporary (ephemeral) drive. Upon booting an EC2 instance, what data is ephemeral and what is EBS? I have a server with code for a web server, and for the sake of conversation let's say I also have a local MySQL DB on the server, running the LAMP stack.
What data could become "lost" upon restart (the Skill Builder CCP course said a local MySQL DB can be lost upon stop and start)?
Is EBS "integrated" into the server so it "looks" like it's just available data on the server, or is it accessible via an API?
I understand the CCP cert probably doesn't expect this depth of convo, but I'm pretty confused and this relates to the work I do. Thanks for reading and any replies!
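A quick way to see this for yourself: a t2.micro has no instance store volumes at all, so /home/ubuntu (and your MySQL data) lives on the EBS root volume and survives a stop/start. You can confirm it from inside and outside the instance; the instance ID is a placeholder:
# inside the instance: the root filesystem is the EBS volume, mounted like any local disk
lsblk
df -h /

# from the CLI: confirm the instance is EBS-backed and list its EBS volumes
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].[RootDeviceType, BlockDeviceMappings[].Ebs.VolumeId]'
So, to the last question: EBS is attached as a block device and just looks like a normal disk to the OS; you only touch the EBS API for management operations (create, attach, snapshot), not for reading files.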
If you face an AWS outage that affects multiple AZs, and the issue is on the provider side rather than a human error, what's the first thing you do? Do you have a specific workflow or an internal protocol for DevOps?
I'm so stumped.
I have made a website with an API Gateway REST API so people can access data science products. Users can use the Cognito access token generated from my frontend and it all works fine. I've documented it with a Swagger UI, it's all interactive, and it feels great to have made it.
But when the access token expires, how would a user reauthenticate themselves without going to the frontend? I want long-lived tokens which can be programmatically accessed and refreshed.
I feel like such a noob.
This is how I'm getting the tokens on my frontend (the idToken, for example):
const session = await fetchAuthSession();
const idToken = session?.tokens?.idToken?.toString();
Am I doing it wrong? I know I could make some horrible hacky api key implementation but this feels like something which should be quite a common thing, so surely there's a way of implementing this.
Happy to add a POST method that expects the current token and then refreshes it via a Lambda function.
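For what it's worth, a custom Lambda may not be needed for the refresh itself: Cognito user pool clients hand out a refresh token alongside the access/ID tokens, and it can be exchanged programmatically for new tokens until it expires (the refresh token lifetime is configurable on the app client). A sketch with a placeholder client ID and token:
# exchange a refresh token for fresh access/ID tokens (client ID and token are placeholders;
# add SECRET_HASH to --auth-parameters if the app client has a secret)
aws cognito-idp initiate-auth \
    --auth-flow REFRESH_TOKEN_AUTH \
    --client-id 1example23456789 \
    --auth-parameters REFRESH_TOKEN=eyJjd...
And if the use case is really machine-to-machine rather than per-user, the client_credentials flow against the user pool's /oauth2/token endpoint might be a better fit than user tokens.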
Any help gratefully received!
I'm trying to setup canary deployments for a CloudFront UI, and am wondering if any of you have tried something like this. If you have, then please tell me if there are issues with this setup before I attempt it.
Current state:
What I'm trying to do:
Trigger a canary deployment of a website when I run sam deploy.
Setup:
Using a CI/CD tool, create a CloudFront staging distribution via a bash script
Add a Continuous Deployment Policy to the CloudFront distribution via SAM
Attach a SAM Lambda which is configured for canary deployments. This Lambda just adds a header (based on the build information) to the CloudFront request
Using the CI/CD tool, pass the staging distribution to the Continuous Deployment Policy via --parameter-overrides
Using the CI/CD tool, pass a header value based on the build artifact ID to the SAM Lambda and the Continuous Deployment Policy
After a successful SAM deploy, use the CI/CD tool and the AWS CLI to promote the staging distribution (see the sketch below)
General idea:
At deploy time, generate a unique header that the Lambda adds to the CloudFront request. Since the Lambda is set up for a canary deployment, the new header will only be on some % of requests, so some % of requests will get directed to the stage website.
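For the promotion step mentioned above, the CLI calls I'm planning to script look roughly like this; the distribution IDs and ETags are placeholders, and I haven't run this end to end yet:
# one-time: create the staging distribution as a copy of prod (placeholder IDs/ETags)
aws cloudfront copy-distribution \
    --primary-distribution-id E1PRIMARY000000 \
    --staging \
    --if-match E1PRIMARYETAG \
    --caller-reference "canary-$(date +%s)"

# promote: copy the staging distribution's config over the primary
aws cloudfront update-distribution-with-staging-config \
    --id E1PRIMARY000000 \
    --staging-distribution-id E2STAGING000000 \
    --if-match "E1PRIMARYETAG,E2STAGINGETAG"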
Possible anticipated problems:
No idea how the CloudFront stuff actually functions, so I'll possibly need a secondary S3 bucket to hold the stage website
I'm not sure if staging distributions get their own ARNs, so updating one via the CLI could cause drift
At some points I may need to figure out which distribution and which S3 bucket are prod/stage
Do you see any problems with this setup? Have you tried this before?
I have a WordPress instance on AWS Lightsail where I am hosting a website. I had to reboot this instance, and since then I am not able to log in to wp-admin. I get a "Not Found - The requested URL was not found on this server" error. When I type the static IP address, it shows the Apache2 Debian Default Page that I have attached. How can I get my WP site back?
We use Dynamo as the only data store at the company. The data is heavily relational with a well-defined linear hierarchy. Most of the time we only do id lookups, so it's been working out well for us.
However, I come from a SQL background and I miss the more flexible ad-hoc queries during development. Things like "get the customers that registered in the past week", or "list all inactive accounts that have the email field empty". This just isn't possible in Dynamo. Or rather: these things are possible if you design your tables and indexes around these access patterns, but it doesn't make sense to include access patterns that aren't used in the actual application. So: technically possible; practically not viable.
I understand Dynamo has very clear benefits and downsides. To me, not being able to just query data as I want has been very limiting. As I said, those queries aren't meant to be added to the application, they're meant to facilitate development.
How can I get used to working with Dynamo without needing to rely on SQL practices?
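The closest substitute I'm aware of for dev-time poking is PartiQL, which will happily run a SQL-ish filter for you (it does a full table scan behind the scenes when the filter isn't on a key, so it's fine for development but not for the app). Table and attribute names here are made up:
# dev-time only: this scans the whole table behind the scenes (made-up table/attribute names)
aws dynamodb execute-statement \
    --statement "SELECT * FROM \"Customers\" WHERE registeredAt >= '2026-02-01'"
For heavier ad-hoc analysis, exporting the table to S3 and pointing Athena at it gives you actual SQL without touching the table's capacity.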
I've been wrestling with this all day and tried a few solutions, so wanted to see if anyone here had any advice.
To give a quick rundown - I have some Python code within a Lambda, and a part of it is
from PIL import Image
, and I understandably get the error [ERROR] Runtime.ImportModuleError: Unable to import module 'image_processor': cannot import name '_imaging' from 'PIL' (/var/task/PIL/__init__.py)
due to the Lambda being unable to access this library.
I have tried:
This did not work, I assume because I am installing it on a Windows machine while Lambdas run on Linux, so the dependencies aren't the same.
I added the layer from here https://api.klayers.cloud/api/v2/p3.9/layers/latest/eu-west-2/html (I also tried with Python runtimes 3.10 and 3.12) - this still however gives me the same error I mentioned above.
Does anyone have any pointers on what I can do? I can give more info on the setup and code too if that helps.
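For reference, the workaround usually suggested for the Windows-wheel problem is to have pip pull the Linux (manylinux) wheel explicitly and package that as a layer; the Python version below is an assumption, so match it to whatever the Lambda runtime actually is:
# grab the Linux build of Pillow (match --python-version to the Lambda runtime)
pip install pillow \
    --platform manylinux2014_x86_64 \
    --implementation cp \
    --python-version 3.12 \
    --only-binary=:all: \
    --target python/

# a layer zip must have the packages under a top-level "python/" folder
zip -r pillow-layer.zip python/
aws lambda publish-layer-version \
    --layer-name pillow \
    --compatible-runtimes python3.12 \
    --zip-file fileb://pillow-layer.zip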
I am trying to sync a local directory to an S3 bucket, and the commands keep sending me around in a circle of errors.
(I've scrubbed the personal directory and bucket names)
Command for the simple sync function I am using:
aws s3 sync . s3://<BUCKET NAME>
Result:
An error occurred (MissingContentLength) when calling the PutObject operation: You must provide the Content-Length HTTP header.
I added the "content-length" header in the command:
DIRECTORY=.
BUCKET_NAME="BUCKET NAME"

upload_file() {
    local file=$1
    local content_length=$(stat -c%s "$file")
    local relative_path="${file#$DIRECTORY/}"

    aws s3 sync "$file" "s3://$BUCKET_NAME/$relative_path" \
        --metadata-directive REPLACE \
        --content-length "$content_length" \
        --content-type application/octet-stream \
        --content-disposition attachment \
        --content-encoding identity
}

export -f upload_file

find "$DIRECTORY" -type f -exec bash -c 'upload_file "$0"' {} \;
Result:
Unknown options: --content-length,1093865263
I tried a simple cp command:
aws s3 cp . s3://BUCKETNAME
Result:
upload failed: ./ to s3://BUCKETNAME Need to rewind the stream <botocore.httpchecksum.AwsChunkedWrapper object at 0x72351153a720>, but stream is not seekable.
Copying a single file:
aws s3 cp FILENAME s3://BUCKETNAME
Result:
An error occurred (MissingContentLength) when calling the UploadPart operation: You must provide the Content-Length HTTP header.
I am at a loss as to what exactly AWS S3 CLI is looking for from me at this point. Does anyone have any direction to point me to? Thanks!
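One possible angle, offered as a guess rather than a diagnosis: recent AWS CLI versions turned on additional checksum/integrity behaviour by default, and MissingContentLength is a symptom people report when the target is an S3-compatible endpoint (or a proxy) that doesn't support it. If that's the situation here, pinning the behaviour back to "when required" is worth a try; I believe the settings are called request_checksum_calculation and response_checksum_validation:
# only send/verify the newer checksums when the service requires them
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
aws s3 sync . s3://<BUCKET NAME>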