/r/aws
News, articles and tools covering Amazon Web Services (AWS), including S3, EC2, SQS, RDS, DynamoDB, IAM, CloudFormation, AWS-CDK, Route 53, CloudFront, Lambda, VPC, Cloudwatch, Glacier and more.
We want to add a custom attribute to the access token. When a user authenticates, it goes through our API, which communicates with Cognito using the JS SDK: we issue an 'InitiateAuthCommand' and return the resulting authentication token to the user.
However, a user can belong to multiple organisations. When authenticating, the user also supplies an organisationID, and we want to add it to the access token so the token specifies which organisation the user is authenticated for.
I tried passing this in the ClientMetadata and reading it in the Pre-Token Generation trigger. However, the ClientMetadata is not available in that Lambda, so I'm unable to add the required information to the token there.
Are there any other ways to solve this problem?
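One commonly suggested workaround is sketched below, under two stated assumptions: the user pool's Pre Token Generation trigger is configured for the V2 trigger event version (access-token customization is only supported there, and may require a feature plan that includes it), and the organisation ID has been written to a hypothetical custom attribute (`custom:orgId`) before sign-in, since ClientMetadata from InitiateAuth is not delivered to this trigger.

```python
# Sketch of a Pre Token Generation Lambda, assuming the V2 trigger event
# version. "custom:orgId" is a hypothetical custom attribute assumed to be
# set before sign-in, since ClientMetadata from InitiateAuth is not passed
# to this trigger.
def lambda_handler(event, context):
    org_id = event["request"]["userAttributes"].get("custom:orgId", "")
    # V2 triggers can add claims to the *access* token
    # (the V1 event only covers the ID token).
    event["response"] = {
        "claimsAndScopeOverrideDetails": {
            "accessTokenGeneration": {
                "claimsToAddOrOverride": {"organisationId": org_id}
            }
        }
    }
    return event
```

If the organisation must be chosen per-login rather than per-user, an alternative worth exploring is a custom auth flow, since ClientMetadata passed to RespondToAuthChallenge is delivered to the Lambda triggers.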
Hey all, I have a Linux server at home which I've been using to host a web server and a couple of Discord bots, and I'm looking to upgrade it. It has quite an outdated CPU which isn't able to run some newer software so I've been looking at VPS options as a replacement for setting up another on-prem machine.
My current machine specs:
CPU: Intel Core 2 Duo E8400 (2) @ 3
Memory: 8GB
OS: Debian 12
I considered a few options (Linode, DigitalOcean, EC2) but I think I've settled on using Lightsail. I'm looking at running an instance with 2 vCPUs and 1GB of memory.
Has anyone done a similar migration from a home server to Lightsail? If so, do you have any advice about performance, automated snapshots, etc.? I'd also like to keep this server at a fixed cost per month, so are there any tips for avoiding runaway costs?
Hey, AWS newbie here!
I want to know how to deal with complex authentication models when using AWS Cognito, or more directly: what should be done on the Cognito side and what belongs on the application side.
What is the common practice there?
I can imagine two approaches:
The 1st would be to use Cognito as a pure "user database" which only handles the authentication itself; the business logic is implemented entirely on the application side.
The 2nd would be trying to reflect the business logic in Cognito alone, which, to be honest, I don't know is possible.
Background: our auth model has three levels. On the lowest level is the user itself. A user can be part of a, let's call it, company, and a company can be part of a company group. Users are assigned to a company (their main company) but are able to log in "synthetically" into other companies of the group. Besides that, there are also users who can log in to any company regardless of the group.
I have endpoints written in TS on Node.js, powered by API Gateway + Lambdas. I'm using the Serverless Framework for deployment. Kindly suggest ways (preferably in some detail) to implement endpoint versioning.
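With the Serverless Framework, the approach with the fewest moving parts is usually path-based versioning: mount each version under its own path prefix and point it at its own handler. A minimal sketch (function names and handler paths are hypothetical):

```yaml
# Illustrative serverless.yml fragment; names and paths are hypothetical.
functions:
  getUsersV1:
    handler: src/v1/users.handler
    events:
      - http:
          path: /v1/users
          method: get
  getUsersV2:
    handler: src/v2/users.handler
    events:
      - http:
          path: /v2/users
          method: get
```

Alternatives include a separate API Gateway stage per version, or Lambda aliases/versions behind a single route; path prefixes keep everything visible in one config file.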
Hello. I need help. I'm trying to edit the JSON of the AWS-AWSManagedRulesBotControlRuleSet so that it returns a custom 403 message whenever someone is blocked from using my app. The only problem is that, because it is a managed rule, I cannot edit the JSON directly. Does anyone know if there is a way around this?
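You can't edit the managed rule group itself, but WAFv2 lets you override the action of individual rules inside it and attach a custom response body defined on your own web ACL. A hedged CloudFormation sketch of the relevant pieces (the overridden rule name and body key here are illustrative):

```yaml
# Illustrative fragment of an AWS::WAFv2::WebACL; rule names are examples.
CustomResponseBodies:
  BotBlocked:
    ContentType: APPLICATION_JSON
    Content: '{"message": "Access denied"}'
Rules:
  - Name: BotControl
    Priority: 0
    OverrideAction:
      None: {}
    Statement:
      ManagedRuleGroupStatement:
        VendorName: AWS
        Name: AWSManagedRulesBotControlRuleSet
        RuleActionOverrides:
          - Name: CategoryHttpLibrary   # example rule within the rule set
            ActionToUse:
              Block:
                CustomResponse:
                  ResponseCode: 403
                  CustomResponseBodyKey: BotBlocked
    VisibilityConfig:
      SampledRequestsEnabled: true
      CloudWatchMetricsEnabled: true
      MetricName: BotControl
```

The custom response lives on your web ACL, not in the managed rule's JSON, so the managed rule set stays untouched while blocked requests get your 403 body.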
Posted On: Apr 17, 2024
Accessibility Conformance Reports (ACRs) for AWS products and services are now available on AWS Artifact, a self-service portal for AWS compliance-related information. ACRs are documents that demonstrate the accessibility of AWS services.
Through AWS Artifact, you can download ACRs on-demand to understand the accessibility of a specific AWS product or service. AWS ACRs utilize the ITI Voluntary Product Accessibility Templates (VPAT®) and reference various accessibility standards including: Section 508 (U.S.), EN 301 549 (EU), and Web Content Accessibility Guidelines (WCAG).
After signing into the AWS Management Console, you can access the ACRs by selecting view reports in AWS Artifact.
Source: AWS What's New post
I'm running an MVC C# web app in Elastic Beanstalk and I'm trying to change the instance type. After the environment update finishes and the new EC2 instance is running, I can't access my website, so I have to redeploy my app to get it working normally again.
How can I change the EC2 instance type so that everything runs normally again without having to redeploy?
My instance type change: from t2.large to t2.micro.
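For what it's worth, changing the instance type is normally an environment configuration update rather than an application deployment; via the CLI it would be something like (the environment name is made up):

```shell
# Illustrative command; the environment name is hypothetical.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=InstanceType,Value=t2.micro
```

Also worth noting: t2.large has 8 GB of RAM and t2.micro only 1 GB, so if the site only breaks after the downsize, memory pressure in the app may be the real problem rather than how the change was applied.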
I'm working on a slide deck for an interview. I have to give a fake QBR, and I chose Planet Fitness as my client because I've been working out there for the last few months.
Can anyone give a guess as to how much it would cost to host a mobile app like theirs in AWS? Gym goers need to swipe in with a QR code on the app. It hosts workout information/plans, deals from random 3rd parties like Ray-Ban, and a bunch of other stuff I've never used.
18.7M gym members. Let's assume some go 5 days a week and others go only a few times a year (but never cancel their membership). I'll convert that into 1 visit to the gym per member per week - 18.7M weekly users.
I'm thinking $2-3M monthly. Is that way off?
Do you have to back up your data on S3 to avoid losing it after a reboot?
Hi all,
I'm implementing SSO at my startup and deciding between Cognito and Auth0.
So far I've started with Auth0, and while the experience has been fine, I want to make sure I consider alternatives before I make the plunge.
Cognito has better pricing and it's my understanding Auth0 recently tripled their price.
But I've also heard a lot of hate for Cognito, that the documentation is lacking, it's not feature-rich, etc. What do you guys think? I'm especially curious how your experience with Cognito and MFA has been.
For context, much of our infrastructure is otherwise AWS, and we deploy our resources using CDK. Additionally, the use case is primarily for internal employees.
I have a YAML file that adds a couple of entries to a couple of NACLs. I want it to also add IPv6 entries that simply allow all traffic (::/0) on both ingress and egress, but I can't seem to figure out the right syntax.
The existing code is below and works fine. I need it duplicated, substituting ::/0 for 0.0.0.0/0 and RuleNumber: 101 for RuleNumber: 100. Can someone help me out?
Resources:
  NetworkAclBlue:
    Type: AWS::EC2::NetworkAcl
    DeletionPolicy: Retain
    Properties:
      VpcId: !Ref VpcId
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-nacl-blue'
  NetworkAclBlueEgress:
    Metadata:
      cfn_nag:
        rules_to_suppress:
          - id: W66
            reason: "subnets and stuff"
    Type: AWS::EC2::NetworkAclEntry
    DeletionPolicy: Retain
    Properties:
      CidrBlock: 0.0.0.0/0
      Egress: true
      NetworkAclId: !Ref NetworkAclBlue
      Protocol: -1
      RuleAction: allow
      RuleNumber: 100
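For reference, the IPv6 twin of the egress entry above uses the Ipv6CidrBlock property instead of CidrBlock (rule numbers are per direction, so 101 is free on both sides):

```yaml
  NetworkAclBlueEgressIpv6:
    Type: AWS::EC2::NetworkAclEntry
    DeletionPolicy: Retain
    Properties:
      Ipv6CidrBlock: '::/0'
      Egress: true
      NetworkAclId: !Ref NetworkAclBlue
      Protocol: -1
      RuleAction: allow
      RuleNumber: 101
```

The ingress entry is identical apart from Egress: false and its own resource name.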
What's the best way to make sure I don't get code for version x running on runtime version y, which might cause issues? Should I use IaC (e.g. CloudFormation) instead of the AWS API via awscli? Thanks!
Obviously I'm probably not allowed to store CP training data, unless I am working for the government and AWS Support knows what I am doing.
But what about publicly breached databases? Breached and leaked email addresses and passwords that are freely available on the internet. I am trying to see if I can make something similar to Have I Been Pwned.
Is there a link to the AWS policy on what we can and cannot use the services for?
Like documentation, or somewhere in the terms and conditions, on how we can use S3 or RDS?
I'm trying to set up a load balancer on Lightsail. The instances I attach to it keep failing the health check because of the HTTP-to-HTTPS redirect: the health check gets back a 302 instead of the 200 it wants.
I followed the directions on this page, but it didn't work. I don't know if I put the lines of code in the wrong spot or if I'm missing something else.
https://aws.amazon.com/tutorials/launch-load-balanced-wordpress-website/
Where do you suggest I look for what I might be missing?
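A common cause is that the blanket HTTP-to-HTTPS rewrite also redirects the load balancer's health-check request. One fix is to exclude the health-check path from the redirect; an illustrative Apache sketch, assuming a health-check path of /health-check.html (the path is an assumption, and the same path must be configured on the Lightsail load balancer):

```apache
# Illustrative vhost/.htaccess fragment; the health-check path is an assumption.
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteCond %{REQUEST_URI} !^/health-check\.html$
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R=301,L]
```

Create a static health-check.html that returns 200 and point the load balancer's health-check path at it, so the redirect never applies to the checker.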
I have 2 questions about Aurora.
Suppose a master, a read replica, and a write replica exist.
1. Is it correct that a user added on any one of the three shows up on all of them, no matter where among the three the user is added?
2. If 1 is correct, I think users with write privileges or admin users could be added via a read replica. How does Aurora prevent writes on a read replica (or a user with write privileges running a write query there)?
I have an AWS S3 bucket s3://mybucket/. Running the following command to count all files:
aws s3 ls s3://mybucket/ --recursive | wc -l
outputs: 279847
Meanwhile, the AWS console web UI clearly indicates 355,524 objects: https://i.stack.imgur.com/QsQGq.png
Why does aws s3 ls s3://mybucket/ --recursive | wc -l list fewer files than the number of objects shown in the AWS web UI for my S3 bucket?
I'm having trouble accessing the 200k context size for the AWS Bedrock Claude 3 models despite it being listed as an option.
When I try to use anthropic.claude-3-haiku-20240307-v1:0:200k as the model specifier, I get the following exception:
ERROR:__main__:A client error occurred: 1 validation error detected: Value 'anthropic.claude-3-haiku-20240307-v1:0:200k' at 'modelId' failed to satisfy constraint: Member must satisfy regular expression pattern: (arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:(([0-9]{12}:custom-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}/[a-z0-9]{12})|(:foundation-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.:]?[a-z0-9-]{1,63}))|([0-9]{12}:imported-model/[a-z0-9]{12})|([0-9]{12}:provisioned-model/[a-z0-9]{12})))|([a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.:]?[a-z0-9-]{1,63}))|(([0-9a-zA-Z][_-]?)+)
I believe this is happening because anthropic.claude-3-haiku-20240307-v1:0:200k is the model tag of the provisioned models.
However, when I use the default model ID anthropic.claude-3-haiku-20240307-v1:0, the response indicates that it's using the 48k context size model. I'm also able to access the 28k context size Sonnet model.
Is there something I'm missing in terms of accessing the 200k context size model? Or is this model not actually available despite being listed?
Any insights or guidance would be greatly appreciated. Thanks in advance!
Looking for some assistance here...
We are running a mix of the older Aurora Serverless v1 clusters and want to upgrade them to v2. The main reason is to make use of the cluster (writer/multiple reader) abilities; we also want to be able to use query Performance Insights.
I have taken a look at DMS and it doesn't appear to let me use our v1 DBs as a data source.
Another difference, not sure if relevant: our v1 clusters are on 5.7 and all of our v2s are on 8.
Is anyone able to give a clear method for upgrading/transitioning the data? If we go down the transition route, it will need to cope with live syncing if possible, to avoid data loss in production.
Yes, I have read this: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.upgrade.html#aurora-serverless-v2.upgrade-from-serverless-v1-procedure
Is this really the only way? I am trying to avoid data loss and these jumps will almost guarantee data loss
Hi everyone, hope you are all having a good day! I had an interesting experience. Today I finished my online assessment test, and based on my feeling I'd say I did quite well, in both the technical and behavioural parts. However, an hour after the assessment I got an email that I'm no longer under consideration. Does this make sense? Would it be worth it to inquire? What is even more interesting is that this happened in the evening, around 8pm.
I am running a static website on an S3 bucket. The website has a form component, which sends a POST request to a back-end running on an EC2 server. When I submit the form everything works as it should when I access the website through the S3 bucket's URL, but when I use the website's URL which I have linked through CloudFront, the POST request does not get sent.
I have already changed permissions to allow POST etc. in the CloudFront distribution, and I am at a loss for further ideas.
I've deployed a Fargate task using Terraform and everything works fine. However, locally you would usually extract GitLab's initial password with a docker exec.. command. How can I do that on a Fargate container? I tried execute-command: aws ecs execute-command --cluster Gitlab_cluster --task <task-name> --command "/bin/sh" --interactive
But it gives me: An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later.
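TargetNotConnectedException usually means ECS Exec isn't actually enabled on the service, or the task role is missing the SSM channel permissions. A hedged Terraform sketch of the two pieces (resource names are made up):

```hcl
# Illustrative fragment; resource names are hypothetical.
resource "aws_ecs_service" "gitlab" {
  # ... existing configuration ...
  enable_execute_command = true
}

# The *task role* (not the execution role) needs the SSM messages actions:
data "aws_iam_policy_document" "ecs_exec" {
  statement {
    actions = [
      "ssmmessages:CreateControlChannel",
      "ssmmessages:CreateDataChannel",
      "ssmmessages:OpenControlChannel",
      "ssmmessages:OpenDataChannel",
    ]
    resources = ["*"]
  }
}
```

Tasks started before the flag was set won't have the exec agent, so force a new deployment and retry against a fresh task.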
I'm experiencing a 504 timeout on a Lightsail site that is exclusively IPv6. It appears to be caused by the load balancer not providing an internet facing address to the origin. Creating a load balancer for the instance does not provide the documented option for creating this type of instance.
Hey guys,
I've been running Puppeteer on AWS fine for over a year now, but in the last two days a major issue popped up. Code with no new Lambda update works 100% fine, but immediately after updating the function for a new IP, I now get "Error: Navigation failed because browser has disconnected! at new LifecycleWatcher". I reached out to AWS but no help from them so far. Has anyone else had this issue? Three of my functions are now down due to it. It doesn't matter which website; it seems like Chromium/AWS isn't connecting at all and crashes in the first 1100ms of running. Again, this ran fine today and yesterday on the first call, but after updating for a new IP (with the same code that worked previously) it now immediately crashes.
SOLVED: the new Lambda update isn't compatible with some older versions of Chromium and/or Puppeteer. Updating to Puppeteer 22.6.4 and sparticuz/chromium ^123.0.1 solves the issue. Previous versions were 20.1.0 and 113.0.1 respectively.
Are your companies increasing their use of AWS services, maintaining their current level of usage, or are there instances where projects are being moved back on-premises?
I'm interested in understanding the reasons behind these decisions as well. Whether it's due to cost, security concerns, performance issues, or any other factors....
If you're comfortable sharing, I'd appreciate if you could also mention the industry your company operates in and the scale of your AWS usage. :-)
I'm seeking advice on the most suitable database solution for a matchmaking feature within my application. I've tried different solutions before but have always hit a roadblock before I can finish my stuff.
I need a database that has:
Note that the data is short-lived: when a user enters the matchmaking screen, the backend registers them in the database, and once a match has been found both users are deleted from the table. Row-level locking is also needed to make sure that the user we're querying for is untouchable by other concurrent users.
Storage size isn't that important since the data is short-lived anyway, and we're only expecting <100k rows at most.
Here are the issues I have faced before:
I have an RDS cluster configured with a minimum of 2 and a maximum of 5 read replicas, which automatically scales based on CPU resources.
In addition to the current setup, I need to adjust the desired number of replicas at specific times. For instance, at 2:00 PM, I want to increase the replica count to 3, and then revert it back to 2 at 3:00 PM.
Unfortunately, AWS only allows me to create scheduled actions to adjust the minimum and maximum capacities, not the desired count. What I aim to accomplish is to set the desired count to a specific value at certain times and then return to the minimum capacity without altering the maximum policy throughout the day.
Is it feasible to achieve this using the AWS CLI or application-autoscaling?
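Application Auto Scaling has no separate "desired" setting for scheduled actions, but raising MinCapacity forces the replica count up, and lowering it again lets the CPU policy scale back in, which achieves the same effect. Illustrative CLI commands (the cluster name is hypothetical; cron schedules are in UTC):

```shell
# Illustrative commands; "my-aurora-cluster" is a made-up cluster name.
# At 14:00, force at least 3 replicas:
aws application-autoscaling put-scheduled-action \
  --service-namespace rds \
  --scalable-dimension rds:cluster:ReadReplicaCount \
  --resource-id cluster:my-aurora-cluster \
  --scheduled-action-name scale-up-2pm \
  --schedule "cron(0 14 * * ? *)" \
  --scalable-target-action MinCapacity=3,MaxCapacity=5

# At 15:00, drop the floor back to 2 so the CPU policy can scale in:
aws application-autoscaling put-scheduled-action \
  --service-namespace rds \
  --scalable-dimension rds:cluster:ReadReplicaCount \
  --resource-id cluster:my-aurora-cluster \
  --scheduled-action-name scale-down-3pm \
  --schedule "cron(0 15 * * ? *)" \
  --scalable-target-action MinCapacity=2,MaxCapacity=5
```

MaxCapacity stays at 5 throughout, so the CPU-based policy is unchanged all day; only the floor moves.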
Hey,
I've created some basic test projects via the AWS dashboard/CLI .... great...
Then..
I decided to try to embrace the AWS CDK (v2) for my new project, thinking I could do it all from the CDK and embrace the infrastructure-as-code approach, which suits me as I prefer code anyway.
Simple Goal:
Create a single-page website with an Amplify React login/signup form, linked to a Cognito pool, which I could then expand on.
Success & Failures:
Success:
CDK successfully deploys and creates Cognito Pool.
Problems:
I did have it create the Amplify app as well, but despite making the User/Client Pool IDs available, it would not link up to the User Pool.
I could try to enable it via the backend in the AWS dashboard, but again that risks drift and breaking the CDK approach.
Approach 2:
I use the Amplify CLI (amplify init) and create the Amplify app this way.
But how do I get it to use the same pool as created by the CDK? (I've tried, and it just keeps creating new pools.)
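One way to bridge the two tools is Amplify's import flow, which points the Amplify project at an existing pool instead of generating a new one; roughly:

```shell
# Within the Amplify project directory:
amplify import auth   # select the user pool (and app client) created by the CDK stack
amplify push
```

After the import, Amplify's generated configuration references the CDK-created pool, so the React login form and the CDK stack share one source of truth instead of Amplify creating its own pools.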
Failures:
It feels like, despite watching/reading and testing a thousand things, I'm still misunderstanding how this should work, and it's probably bloody obvious. :-)
I could of course choose not to use the CDK and do it via the dashboard or CLI, but that seems to defeat the point of getting to grips with the CDK approach, making something more future-proof, and learning some new skills along the way.
I've been at this simple task for days, and despite learning lots, I'm still lost in the weeds (banging my head against a frustrating brick wall of failures). :-)