News, articles and tools covering Amazon Web Services (AWS), including S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, AWS-CDK, Route 53, CloudFront, Lambda, VPC, Cloudwatch, Glacier and more.
Hi there,
I am trying to simulate a DR scenario where an AZ is completely lost. I thought of using AWS Fault Injection Service, however it's not yet supported for Fargate-based ECS tasks, as mentioned here:
https://docs.aws.amazon.com/fis/latest/userguide/az-availability-scenario.html
So what other options do I have? Is it somehow possible through scripting?
Thanks :)
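One scripting option might be to fail the AZ at the network layer yourself: pre-create a network ACL with no allow rules, then swap every subnet in the target AZ onto it, which blackholes the Fargate tasks running there until you revert the associations. A rough, untested sketch with the AWS SDK (the NACL, VPC, and AZ values are placeholders):

import {
  EC2Client,
  DescribeSubnetsCommand,
  DescribeNetworkAclsCommand,
  ReplaceNetworkAclAssociationCommand,
} from '@aws-sdk/client-ec2';

const ec2 = new EC2Client({});
const DENY_ALL_NACL_ID = 'acl-0123456789abcdef0'; // placeholder: NACL with no allow rules
const VPC_ID = 'vpc-0123456789abcdef0';           // placeholder
const TARGET_AZ = 'us-east-1a';                   // placeholder

export const blackholeAz = async () => {
  // 1. Find all subnets in the target AZ.
  const { Subnets = [] } = await ec2.send(
    new DescribeSubnetsCommand({
      Filters: [
        { Name: 'vpc-id', Values: [VPC_ID] },
        { Name: 'availability-zone', Values: [TARGET_AZ] },
      ],
    })
  );
  const subnetIds = Subnets.map((s) => s.SubnetId!);

  // 2. Find the current NACL association for each of those subnets.
  const { NetworkAcls = [] } = await ec2.send(
    new DescribeNetworkAclsCommand({
      Filters: [{ Name: 'association.subnet-id', Values: subnetIds }],
    })
  );

  // 3. Re-point each association at the deny-all NACL. Keep the returned
  //    association IDs so the change can be reverted after the test.
  for (const acl of NetworkAcls) {
    for (const assoc of acl.Associations ?? []) {
      if (!subnetIds.includes(assoc.SubnetId!)) continue;
      const { NewAssociationId } = await ec2.send(
        new ReplaceNetworkAclAssociationCommand({
          AssociationId: assoc.NetworkAclAssociationId!,
          NetworkAclId: DENY_ALL_NACL_ID,
        })
      );
      console.log(`Blackholed ${assoc.SubnetId} (new association ${NewAssociationId})`);
    }
  }
};

Reverting is the same call in reverse, re-associating the original NACL IDs you recorded. This only simulates network isolation of the AZ, not capacity loss, so treat it as an approximation of a true AZ failure.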
Hey r/aws,
I'm an AWS Certified Solutions Architect - Associate with 4 years of experience, including significant DevOps/SRE and DataOps work at an early-stage startup for which I built the complete infra on AWS. I am looking for proven strategies to land a senior-level AWS Solutions Architect role. What specific steps have worked best for you in making this career jump? Share your success stories and advice!
I'm interested in advice on the below, or anything else I missed:
Skill development: What specific advanced skills are crucial for senior-level roles?
Networking: Best strategies for connecting with recruiters and potential employers?
Portfolio building: How can I showcase my experience effectively to potential employers (beyond my resume)?
Interview preparation: What types of questions should I expect, and how can I best prepare?
Weird title, but I just got an email from AWS for a bill, which got me confused as I have not used AWS in years. Upon logging in and checking what I was being billed for, I saw 4 EC2 instances running. All of them auto-log you in as admin, but on one of them Outlook and several other tabs were open, and Outlook was signed into some bogus-looking email account related to donations.
The inbox had plenty of PayPal notifications about random payments received, but they all look phishy anyway, with nothing in the sent folder.
Recent activity on that Outlook account shows logins from all over the world, so clearly someone is using a VPN. My question is: what should I do?
Open a regular support ticket with AWS? Try to get a hold of a real person over the phone? Is this a bigger issue to report to some agency? Do I need to involve a lawyer or something? I just want to sort this mess out with the least effort from my end.
I just found this out because I didn't want to pay $600 for whatever instances have been running for however long, and I'm sure as hell not paying for that if someone's been hijacking it to run a scam under my account lol
We utilize the latest version of EMR, operate on Graviton machines, and deploy large machines whenever feasible.
What’s the best way to deploy a Django app?
Hey All,
I don't know Linux or any form of coding. I want a WordPress site on AWS so I can move off GoDaddy for a personal website, and I just can't figure out what to do. I made a free account, got to EC2, made an instance, logged in, put in an arcane command I found on the AWS support page, and apparently I need to be a superuser.
Does anyone have a walkthrough guide? I don't care what the server type is, as long as I have working WordPress on the front end.
TIA
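For what it's worth, the lowest-friction route for a single WordPress site is usually Amazon Lightsail's WordPress blueprint rather than a raw EC2 instance; it can be launched from the console in a few clicks with no Linux work. The SDK equivalent looks roughly like this sketch (untested; the instance name, AZ, and bundle ID are placeholders, and GetBlueprintsCommand / GetBundlesCommand will list the valid values):

import { LightsailClient, CreateInstancesCommand } from '@aws-sdk/client-lightsail';

const lightsail = new LightsailClient({ region: 'us-east-1' });

const createWordpress = async () => {
  await lightsail.send(
    new CreateInstancesCommand({
      instanceNames: ['my-wordpress-site'], // placeholder
      availabilityZone: 'us-east-1a',       // placeholder
      blueprintId: 'wordpress',             // pre-built WordPress image
      bundleId: 'nano_3_0',                 // placeholder: smallest bundle size
    })
  );
};

createWordpress();

The same launch takes a few clicks in the Lightsail console if you'd rather avoid code entirely.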
I run a couple of very small services in my personal AWS account. I usually reserve my RDS instance, and for a long time I've been on a t3.small instance.
Well, today I got my bill and it was much more than I thought it should be. I looked into it and found out there's now an additional service charge for being on an older version of MySQL.
I attempted to upgrade from MySQL version 2 to MySQL version 3, only to find out my instance class isn't supported.
I went to see which instance classes are supported, and it looks to me like no small instance classes are supported.
I went from $0.04/hr for my instance to $0.14/hr, and now there are no small classes that will cost less than that for MySQL?
What gives? Am I missing some instance class or pattern I should be using here?
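If this is the Aurora MySQL v2 (5.7-compatible) to v3 (8.0-compatible) jump, the minimum burstable instance class does go up. One way to see exactly what's offered, instead of clicking through the console, is to ask the RDS API which instance classes are orderable for the target engine version. A rough sketch (the engine and version strings are placeholders for whatever you're upgrading to):

import { RDSClient, DescribeOrderableDBInstanceOptionsCommand } from '@aws-sdk/client-rds';

const rds = new RDSClient({});

const listInstanceClasses = async () => {
  const classes = new Set<string>();
  let Marker: string | undefined;
  do {
    // Results are paginated; follow Marker until it comes back empty.
    const page = await rds.send(
      new DescribeOrderableDBInstanceOptionsCommand({
        Engine: 'aurora-mysql',                   // placeholder engine
        EngineVersion: '8.0.mysql_aurora.3.05.2', // placeholder target version
        Marker,
      })
    );
    for (const opt of page.OrderableDBInstanceOptions ?? []) {
      classes.add(opt.DBInstanceClass!);
    }
    Marker = page.Marker;
  } while (Marker);
  console.log([...classes].sort());
};

listInstanceClasses();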
I have an AWS Lambda function that generates a presigned url each time it receives a request. This is the specific code that generates the url:
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import S3Client from '../../../clients/S3Client';

export const getS3PresignedUrlForUploadingProductListingVideo = async (
  fileName: string,
  bucketName: string,
  contentType: string
) => {
  // Build the PutObject request the presigned URL will authorize.
  const command = new PutObjectCommand({
    Bucket: bucketName,
    Key: `product-listing-requests-videos/${fileName}`,
    ContentType: contentType,
  });
  // Sign the request; the URL expires after 5 minutes.
  // @ts-ignore
  const url = await getSignedUrl(S3Client, command, { expiresIn: 60 * 5 });
  return url;
};
The Lambda function has the correct permissions to generate the presigned url:
- Effect: Allow
  Action:
    - 's3:PutObject'
    - 's3:GetBucketLocation'
  Resource: ${param:s3-bucket-arn}
I've tested the function individually and the url is being generated correctly.
The Lambda function is requested by my front-end each time a user picks a file using the file input. Then, when they submit the file, the following front-end code is executed:
import axios from 'axios';

export const uploadVideoToS3 = async (file: File, uploadUrl: string) => {
  // PUT the raw file to the presigned URL; the Content-Type must match what was signed.
  await axios.put(uploadUrl, file, {
    headers: {
      'Content-Type': file.type,
    },
  });
};
Regarding my bucket configuration:
All public access is blocked (this is a requirement, so disabling this is not an option)
This is what my bucket policy looks like:
I know there are several other posts related to this specific issue, but for some reason none of the fixes is working for me.
Thank you so much for the help you can provide!
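Not certain this is the culprit without seeing the browser error, but with presigned PUTs from a browser the most common failure (independent of Block Public Access and the bucket policy, which are authorization controls) is a missing CORS configuration on the bucket: the browser's preflight is rejected before the signed request is ever evaluated. A minimal sketch of a CORS rule, applied with the same SDK the Lambda already uses (the bucket name and front-end origin are placeholders):

import { S3Client, PutBucketCorsCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

const applyCors = async () => {
  // Allow the front-end origin to PUT directly to the bucket via presigned URLs.
  await s3.send(
    new PutBucketCorsCommand({
      Bucket: 'my-bucket', // placeholder
      CORSConfiguration: {
        CORSRules: [
          {
            AllowedOrigins: ['https://app.example.com'], // placeholder front-end origin
            AllowedMethods: ['PUT'],
            AllowedHeaders: ['Content-Type'],
            MaxAgeSeconds: 3000,
          },
        ],
      },
    })
  );
};

applyCors();

The same rules can also be set once in the console or in the IaC template instead of via code.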
I have looked into several AWS products that I have wanted to leverage on multiple occasions, but the cost of AWS for small, pre-revenue teams is just not feasible. Services add up quickly, and before you know it, you get a bill that hurts.
Again, I don't know who their target market is, but I have even heard from large organizations that AWS infrastructure is extremely expensive, and many of them look to leave in order to cut costs.
I have also had in-depth discussions with several app developers on iOS who had products that were a hit but had to get off AWS because the fees were out of control; in their cases, it was much cheaper to own their own infrastructure.
Hi everyone,
I’m having trouble configuring a Network Load Balancer (NLB) manually for my microservices running in an AWS EKS cluster. Here’s a quick breakdown of the situation:
- The NLB is configured through annotations in my service.yaml file.
- targetType is set to instance.
- The cluster was created with eksctl with private subnets.
Below is the .yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: ${NLB_NAME}
  namespace: ${CLUSTER_NAME}
  labels:
    app: ${NLB_NAME}
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: ${NLB_NAME}
    service.beta.kubernetes.io/aws-load-balancer-security-groups: ${SECURITY_GROUP_IDS}
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "HTTP"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "${PORT}"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/healthcheck"
    service.beta.kubernetes.io/aws-load-balancer-subnets: ${VPC_PRIVATE_SUBNETS},${VPC_PUBLIC_SUBNETS}
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=300,stickiness.enabled=false,proxy_protocol_v2.enabled=false,stickiness.type=source_ip,deregistration_delay.connection_termination.enabled=false,preserve_client_ip.enabled=true
spec:
  type: LoadBalancer
  selector:
    app: ${DEPLOYMENT_IMAGE_NAME}
  ports:
    - port: ${PORT}
      protocol: TCP
      targetPort: ${TARGET_PORT}
      nodePort: ${NODE_PORT}
---
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: ${NLB_NAME}-tgb
  namespace: ${CLUSTER_NAME}
  labels:
    app: ${NLB_NAME}
spec:
  targetGroupARN: ${TARGET_GROUP_ARN}
  serviceRef:
    name: ${NLB_NAME}
    port: ${PORT}
  targetType: instance
  nodeSelector:
    matchLabels:
      beta.kubernetes.io/instance-type: t2.small
      alpha.eksctl.io/cluster-name: ${CLUSTER_NAME}
+-----------------+
| Gateway |
+--------+--------+
|
v
+--------+--------+
| Load Balancer |
+--------+--------+
|
+------------------------+-------------------------+
| | |
v v v
+--------+--------+ +--------+--------+ +--------+--------+
| Cluster 1 | | Cluster 2 | | Cluster 3 |
| +-------------+ | | +-------------+ | | +-------------+ |
| | Microservice| | | | Microservice| | | | Microservice| |
| | A | | | | B | | | | C | |
| +-------------+ | | +-------------+ | | +-------------+ |
+-----------------+ +-----------------+ +-----------------+
Should I use targetType: ip instead of instance for better pod routing? Any advice, guidance, or similar experiences would be greatly appreciated! Thank you in advance for your help 🙏
I tried using Bedrock's model import feature and I've been facing some problems. I imported Llama 3.2 11B Vision Instruct (I know it's already in Bedrock, but I just wanted to experiment with a multimodal model), and it returns really awful, hallucinated output (kind of like it just spits things out of its training data). The output can be somewhat stabilized with extensive prompt engineering, but it definitely can't handle everyday, normal inputs. It also doesn't generate or accept images, and it can only be used with a single prompt rather than as a chat in the playground. Let me know your experience with the model import feature, or whether I'm doing something wrong.
Hello, what would be the most convenient way to monitor COPY job success/errors on Redshift Serverless? I don't see many monitoring options in the console, and I'm not even sure whether the serverless version reports metrics to CloudWatch.
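One option, if the console views aren't enough, is to poll the serverless system views directly; I believe sys_load_history and sys_load_error_detail cover COPY activity and load errors on Redshift Serverless. A sketch using the Redshift Data API (workgroup and database names are placeholders):

import {
  RedshiftDataClient,
  ExecuteStatementCommand,
  DescribeStatementCommand,
  GetStatementResultCommand,
} from '@aws-sdk/client-redshift-data';

const client = new RedshiftDataClient({});

const checkRecentLoadErrors = async () => {
  // Submit the query to the serverless workgroup.
  const { Id } = await client.send(
    new ExecuteStatementCommand({
      WorkgroupName: 'my-workgroup', // placeholder
      Database: 'dev',               // placeholder
      Sql: 'SELECT * FROM sys_load_error_detail ORDER BY start_time DESC LIMIT 20;',
    })
  );

  // The Data API is asynchronous: poll until the statement finishes.
  let status = 'SUBMITTED';
  while (status !== 'FINISHED' && status !== 'FAILED' && status !== 'ABORTED') {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const described = await client.send(new DescribeStatementCommand({ Id: Id! }));
    status = described.Status ?? 'FAILED';
  }

  if (status === 'FINISHED') {
    const result = await client.send(new GetStatementResultCommand({ Id: Id! }));
    console.log(result.Records);
  }
};

checkRecentLoadErrors();

Wrapped in a scheduled Lambda, the same query could push a metric or alarm into CloudWatch.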
Our organization is working to bring app teams and workloads into compliance for standards, governance, and security. I feel this can be achieved in a few ways: it seems like some things should be handled at the pipeline level (app architecture compliance), some at the platform level (e.g. via SCPs/RCPs), and others post-deployment (e.g. AWS Config). Is this a sane strategy? One of our groups is saying it can all be achieved via our CI/CD pipeline, but we don't have a solid single-pipeline strategy yet, so some teams have their own pipeline and others are still deploying via click-ops. Is there a model or framework that discusses a multi-tiered strategy like this? What is everyone using to achieve compliance and governance?
Hello!
I would like to ask for help with ways to reduce Lambda cold starts, if possible.
I have an API endpoint that calls a Lambda on the Node.js runtime. All of this is done with Amplify.
According to CloudWatch logs, the request operation takes 6 seconds. However, I wanted to attach logs because the total execution time is actually 14 seconds... that's about 8 seconds of latency.
However, on the client side I added a console.time and logs are:
Is there a way to reduce this cold start? My app is a chat, so I need faster response times.
Thanks a lot and happy new year!
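Eight seconds seems like a lot even for a cold start, so it may be worth checking what the init phase is doing. The two levers that usually matter are (1) creating SDK clients and other heavy objects outside the handler so warm invocations reuse them, and (2) provisioned concurrency if first-hit latency still isn't acceptable. A minimal sketch of the first point (the DynamoDB client is just a stand-in for whatever the function initializes):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

// Created once per execution environment and reused across warm invocations,
// instead of being re-created inside the handler on every request.
const ddb = new DynamoDBClient({});

export const handler = async (event: unknown) => {
  // ... use `ddb` here; only cold starts pay the initialization cost ...
  return { statusCode: 200, body: 'ok' };
};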
In case this helps anyone in the future that uses DMS with potential configuration requirements at target endpoints.
Note that DMS tasks each create change tables at the target endpoint (Redshift in this use case) for CDC tasks in the public schema, named "awsdms_changes" plus a unique identifier appended to the table name based on each task's ARN.
For some DMS versions (3.5.2 in my case, although I don't have historical logs to identify whether this also happens on previous versions), each task creates these tables with the identifier in lowercase while later querying them (whether to select, truncate, drop, create, etc.) with the identifier in CAPS.
While this is bad practice, it should be fine IF the target endpoint doesn't have limitations such as Redshift's enable_case_sensitive_identifier parameter being set to "true". Since AWS released Zero-ETL to general availability with Redshift as one of the supported services, this could affect some users, because one of the requirements for Zero-ETL is to enable case sensitivity for Redshift.
Querying the Redshift DB (you can filter by schemaname 'public' or table names like '%awsdms%') should allow you to identify them even if you have not enabled CloudWatch logging for the task. Dropping the table and restarting the relevant task will recreate the DMS change table in CAPS, resolving the issue. Naturally, additional action may be required based on your use case (i.e. need for uptime, SLA metrics, etc.).
Note: As briefly mentioned before, I'm aware that 3.5.3 does not have this issue, but I haven't found release notes for any previous versions identifying this, to track when the DMS change table behavior was modified. A previous version could have already addressed it.
I just wanted to say they handled my issue wonderfully and I was lucky to have the help from them that I needed.
It seems like there's a lot of overlap between Microsoft Defender for Cloud and GuardDuty. Is there anything that GuardDuty offers for AWS accounts that Defender for Cloud doesn't have? Thanks!
So, I'll clarify that I have very, very little knowledge of AWS. I have, however, purchased a domain through Route 53 (it's like 50 cents a month).
I basically purchased one of those website templates (I know, I know, but I suck at design). It's essentially just static files.
What's the best way to host it? It's mainly just going to serve as a resume holder/personal site, but I may add some blog posts. Either way, it's going to be low traffic (but I don't want to end up with some crazy bill if I get DDoS'd).
I've heard of Amplify, but I've also read about people having issues. Is an S3 bucket the best way? Is there a guide for how to do this, since I'm very unfamiliar with AWS/hosting in general?
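S3 plus CloudFront is the usual answer for a low-traffic static site: the bucket stays private, CloudFront serves and caches the files, and costs stay in the pennies range at this scale. A rough CDK sketch, assuming the template's files sit in a local ./site folder (the construct names and path are placeholders):

import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import { Construct } from 'constructs';

export class StaticSiteStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Private bucket: CloudFront is the only thing allowed to read it.
    const siteBucket = new s3.Bucket(this, 'SiteBucket', {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
    });

    const distribution = new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultBehavior: { origin: new origins.S3Origin(siteBucket) },
      defaultRootObject: 'index.html',
    });

    // Upload the template's static files and invalidate the cache on deploy.
    new s3deploy.BucketDeployment(this, 'DeploySite', {
      sources: [s3deploy.Source.asset('./site')], // placeholder path to the template files
      destinationBucket: siteBucket,
      distribution,
    });
  }
}

Amplify Hosting wraps roughly the same setup with a simpler workflow, so it's also a reasonable choice if the issues you've read about don't apply to your case.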
I'm trying to build a serverless app that consists of multiple Lambda functions, one API Gateway with multiple APIs, and an S3 bucket. So far everything is working smoothly in terms of the configuration and basic infrastructure, but the docs for SAM aren't that good and sometimes I find them a bit misleading. As an example, there is nowhere in the docs (as far as I know) that explains how to define an S3 bucket in the resources section inside the template.yaml, but I found it briefly mentioned in one of the examples in the GitHub repo. So my question is: is there a better resource to learn more about SAM, or should I stick to the docs and Stack Overflow?
Hello! Currently my team is migrating from an EKS cluster to ECS, due to some cost limits that we had.
I've successfully migrated all the internal tools that were on EKS; the only thing left is the Docker-in-Docker GitHub self-hosted runners that we had.
There seem to be a lot of solutions for deploying them to EKS, but I can't really find a way to deploy them on ECS. Is it feasible? From what I've seen, GitHub's Actions Runner Controller is limited to Kubernetes.
Thank you!!
Hi, I’m looking for a Windows desktop with an NVIDIA GPU (RTX 2060 or higher, Compute Capability 7.5+) for approximately 90 minutes. I’m considering whether an AWS instance might be a suitable option for this purpose. Could you provide any advice or recommendations? Thank you!
I'm searching for a course about Lambda that walks me through a project using SQS, CDK, API Gateway, VPC, DynamoDB, and RDS. I'd prefer the language to be either Python or Java.
Looking for the best way to separate dev from production. Is it using IAM, utilizing Organizations, or just using entirely different accounts for dev and production?
I want to make sure the dev guys can't terminate production instances, etc.
Hello everyone, like many others, I got denied prod access without explanation. I will be using SES in my SaaS, and here are the details that I provided to the support team (the actual submission was more detailed; I used GPT to summarize since it was too long, sorry for that):
"The service is pretty straightforward - it sends verification emails when users sign up and reminder emails for deadlines they set. I've built in several security measures like email verification, CAPTCHA, DNS verification, and rate limiting to prevent abuse. For the actual reminders, users can set custom due dates and messages. If a user marks our reminders as spam, they are immediately blocked from sending any more reminders (via SNS and webhooks), and everything's kept simple with just the essential info and action buttons.
On the technical side, I've implemented all the recommended security features like DKIM/SPF, and I'm using proper bounce/complaint monitoring. Every email includes clear unsubscribe options that stop all previously scheduled reminders and the creation of new ones, and the system automatically handles any bounces or complaints through our own blacklist. I'm strictly sticking to transactional emails - no marketing stuff at all."
I also provided the verification and reminder email templates. I just want to know what I'm doing wrong and whether my use case is really that bad (so I can give up and just move on).
I want to create a CI/CD pipeline that pushes a Docker image of my portfolio to ECR and deploys it with App Runner. Below is what I currently have in my CDK in TypeScript. The bootstrap and synth commands work, but deploy does not; I get an error with App Runner. My IAM user has administrator permissions, which I'm assuming includes the App Runner ECR permissions.
import * as cdk from "aws-cdk-lib";
import * as ecr from "aws-cdk-lib/aws-ecr";
import * as iam from "aws-cdk-lib/aws-iam";
import * as apprunner from "aws-cdk-lib/aws-apprunner";
import { Construct } from "constructs";

export class AwsLowTrafficPlatformStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const user = new iam.User(this, "myInfraBuilder"); // ECR requires an IAM user for connecting Docker to ECR

    // IAM Role for App Runner
    const appRunnerRole = new iam.Role(this, "AppRunnerRole", {
      assumedBy: new iam.ServicePrincipal("tasks.apprunner.amazonaws.com"),
    });

    // ECR Repository
    const repository = new ecr.Repository(this, "Repository", {
      repositoryName: "myECRRepo",
      imageScanOnPush: true,
    }); // L2 abstraction

    // App Runner Service
    const appRunnerService = new apprunner.CfnService(this, "AppRunnerService", {
      serviceName: "StaticWebsiteService",
      sourceConfiguration: {
        authenticationConfiguration: {
          accessRoleArn: appRunnerRole.roleArn,
        },
        imageRepository: {
          imageIdentifier: `${repository.repositoryUri}:latest`,
          imageRepositoryType: "ECR",
        },
        autoDeploymentsEnabled: true,
      },
      instanceConfiguration: {
        cpu: "256",
        memory: "512",
      },
    });

    repository.grantPull(appRunnerRole);
  }
}
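Without the exact error message this is a guess, but three things in this stack commonly break an App Runner deploy from ECR: the access role used to pull the image must trust build.apprunner.amazonaws.com (tasks.apprunner.amazonaws.com is the principal for the instance role), ECR repository names must be all lowercase, and the :latest image has to already exist in the repository when the service is created. A sketch of the first two changes, relative to the stack above:

// The role App Runner uses to pull from ECR is assumed by the build principal,
// not the tasks principal (which is for the runtime/instance role).
const ecrAccessRole = new iam.Role(this, "AppRunnerEcrAccessRole", {
  assumedBy: new iam.ServicePrincipal("build.apprunner.amazonaws.com"),
});

// ECR repository names may only contain lowercase letters, numbers, and . _ - /
const repository = new ecr.Repository(this, "Repository", {
  repositoryName: "my-portfolio", // placeholder, lowercase
  imageScanOnPush: true,
});
repository.grantPull(ecrAccessRole);

// ...and pass ecrAccessRole.roleArn as accessRoleArn in sourceConfiguration.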
Hello architects. I'm doing my best to utilize as many tools within AWS as possible, to reduce extraneous applications. One thing I wanted to do was diagram and map out my architecture without resorting to Visio, Google Drawings, etc. I learned that AWS Infrastructure Composer was supposed to solve this natural step in planning architecture.
I don't see how. I can only drag rectangles of AWS components, but I can't draw rectangles, arrows, paths, etc., and there is no true way to save your visual work. The Composer tool doesn't have a cloud save (despite this being AWS); instead you must designate a local folder on your desktop to sync your canvas. But this doesn't save your canvas visually: it just dumps the raw configuration of each "tile" you added, and doesn't even remember how you arranged them on the canvas.
So, am I just not using the Infrastructure Composer properly, or is this indeed some kind of half-baked Beta? Thanks for reading.
Hi all,
I have been using CloudFront with S3 seamlessly for a while now. But recently I've come across a requirement where I need to use CF with a custom origin, and I can't get past this issue.
Let's say the origin is - example.com and the CF URL is cfurl.cloudfront.net
I am trying to fetch cfurl.cloudfront.net/assets/index-hash.js
And this is the error page I am getting -
The response headers are -
Here's what I have observed so far -
Additional details -
Is there anything else I can check to figure this one out? Any help is greatly appreciated.
As the AWS documentation specifies, Fargate ECS consumption can benefit from Compute Savings Plans. However, it's still not clear to me how the discounts are applied to non-continuous consumption.
For example, I have a dev task that only runs on weekdays and is auto-scaled down to 0 on weekends.
Let's say the task uses 2 vCPU and 2 GB of memory. The Fargate prices are as follows:
per vCPU per hour $0.04048
per GB per hour $0.004445
Then the on-demand weekly cost is $10.782
2 vCPU * 0.04048 Unit Price * 5 days * 24 hours per day = $9.7152
2 GB * 0.004445 Unit Price * 5 days * 24 hours per day = $1.0668
Assume this is the only thing applicable to the saving plan.
If I commit to a 3-year all-upfront Compute Savings Plan, it gives a 52% discount and brings the hourly rates to:
per vCPU per hour $0.0194304
per GB per hour $0.0021336
Based on my understanding, the weekly cost is $7.245504
2 vCPU * 0.0194304 Unit Price * 7 days * 24 hours per day = $6.5286144
2 GB * 0.0021336 Unit Price * 7 days * 24 hours per day = $0.7168896
Please confirm whether my calculation is correct.
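For what it's worth, the arithmetic above checks out, and the switch from 5 days to 7 days in the discounted case is the right way to think about it if you commit to the task's full rate: a Compute Savings Plan is an hourly spend commitment that is billed for every hour of the term, whether or not the task happens to be running. A quick script reproducing both numbers:

// Prices per hour for a 2 vCPU / 2 GB Fargate task, as listed above.
const onDemand = { vcpu: 0.04048, gb: 0.004445 };
const savingsPlan = { vcpu: 0.0194304, gb: 0.0021336 }; // 52% discount applied

// On demand: billed only while the task runs (5 days x 24 hours).
const onDemandWeekly = 2 * onDemand.vcpu * 5 * 24 + 2 * onDemand.gb * 5 * 24;

// Savings Plan: the hourly commitment is charged for all 7 days x 24 hours,
// whether or not the task is running.
const committedWeekly = 2 * savingsPlan.vcpu * 7 * 24 + 2 * savingsPlan.gb * 7 * 24;

console.log(onDemandWeekly.toFixed(4));  // 10.7820
console.log(committedWeekly.toFixed(6)); // 7.245504

In practice the commitment is usually sized below the peak, closer to the steady baseline across all compute usage, so idle weekend hours aren't paid at the committed rate for nothing.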
Furthermore, I would like to know how savings plans are applied to multiple tasks.
Let's assume there's another task with the same spec, which is scaled down to 0 on weekdays but scaled up to 1 on weekends.
Will these two tasks consume the same Savings Plan commitment and benefit each other?
Is this the recommended approach to upload multiple files to S3 from a web app (HTML input of type file): call getSignedUrl from @aws-sdk/s3-request-presigner for each file, creating a new PutObjectCommand with Key being the name of the file? Is this feasible for, say, hundreds of files, with each request happening sequentially? How else do others handle this simple use case?
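That approach is workable; the main tweak for hundreds of files is to request the presigned URLs and do the PUTs concurrently, in bounded batches, rather than strictly sequentially, so a few slow files don't stretch the whole upload. A rough sketch (getPresignedUrl is a hypothetical helper standing in for whatever endpoint returns the signed URL for one file):

import axios from 'axios';

// Hypothetical helper: asks the backend for a presigned PUT URL for one file.
declare function getPresignedUrl(file: File): Promise<string>;

// Upload in bounded batches so hundreds of files don't open
// hundreds of simultaneous connections.
export const uploadAll = async (files: File[], batchSize = 10) => {
  for (let i = 0; i < files.length; i += batchSize) {
    const batch = files.slice(i, i + batchSize);
    await Promise.all(
      batch.map(async (file) => {
        const url = await getPresignedUrl(file);
        await axios.put(url, file, { headers: { 'Content-Type': file.type } });
      })
    );
  }
};

For very large batches, some people instead return many presigned URLs from a single backend call to cut down on round trips.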