/r/serverless
No Application Servers! News, articles, books, and tools related to building "serverless" web and mobile applications.
Something weird has started happening with Serverless.com, where we are on v3 (the non-paid version).
Two of our developers have started getting an `Unzipped size must be smaller than 262144000 bytes` error when they try to deploy to their AWS sandbox accounts, and looking at the ZIP size, it's 93MB for every function in the package!
serverless.yml has:
frameworkVersion: "3"
package:
  individually: true
  patterns:
    - '!volume/**'
    - '!seeds/**'
    - '!test/**'
    - '!.github/**'
    - '!aws/**'
    - '!.serverless/**'
The two affected developers are running native Ubuntu and Windows 11 with WSL2 (Ubuntu), respectively.
The developers on macOS get a ZIP size of 28MB, while a GitHub Actions deploy on Ubuntu produces a ZIP of only 15MB (using `npm ci`).
So the size difference between macOS and Ubuntu (using `npm i` rather than `npm ci`) is already strange, but it also seems that Serverless is not excluding dependencies the way it should...
We've double-checked the ZIPs, and it's `node_modules` that's massive inside them, e.g. it contains `aws-sdk`, which is a dependency of `serverless` itself, even though our code only uses AWS SDK v3, so it's clearly pulling in dependencies that shouldn't be there...
Any clues on why this happens?
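In case it helps anyone hitting the same error: if the extra size really does come from development-only packages installed locally with `npm i` (such as `serverless` and its transitive `aws-sdk` v2), one workaround is to exclude them explicitly via `package.patterns`. This is only a minimal sketch; the paths below are assumptions based on the post and would need checking against the actual `node_modules`:

package:
  individually: true
  patterns:
    # existing excludes from above, plus:
    - '!node_modules/aws-sdk/**'     # AWS SDK v2, only pulled in transitively
    - '!node_modules/serverless/**'  # the framework itself never needs to be deployed

Alternatively, running `npm ci --omit=dev` before `serverless deploy` keeps devDependencies out of `node_modules` in the first place, which may also explain the smaller CI artifact.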
I'm just trying to work through a tutorial I found online and one of the first steps is to run this command:
serverless create --template aws-node.js --path my-lambda-function
However, I am greeted by the red message of:
Configuration file not found in directory "C:\Users\mypathhere"
I've done some googling & some binging, but I can't seem to find a solution.
If I run `serverless --version` I get 4.4.19, so I know it's installed (I did that a few weeks ago, along with installing and configuring my AWS CLI).
What do I need to check or do?
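Two guesses worth checking, hedged since I can't see the tutorial. First, the template name for the v3-style command is `aws-nodejs`, with no dot:

serverless create --template aws-nodejs --path my-lambda-function

Second, `serverless --version` reports 4.x, and the tutorial was likely written for v3; in v4 the usual way to scaffold a new service is to run the bare `serverless` command in an empty directory and follow the interactive prompts, so the v3 `create` syntax may not behave the same way.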
Hi everyone,
We have recently implemented and released our first purely serverless project (all on AWS: Lambda, API Gateway, DynamoDB, Step Functions, SQS, and EventBridge). As a DevOps team we are also responsible for running the system.
Now our customer is interested in a higher service level, requiring us to have someone on call 24x7.
Since we have not done this before, I was wondering what kind of issues we should expect to be woken up for at night. Are there any typical AWS-related problems we should be aware of?
I am thankful for any insight and tips :)
It's an autonomous DeFi agent designed to help guide you through the DeFi space with real-time insights, restaking strategies, and maximizing yield potential. They're also launching the DeFAI token soon! Super curious to see how this could change the way we approach DeFi. Check them out on their Twitter for more details.
Connecting to a Postgres database using the Serverless Framework and Sequelize takes 4 seconds, but connecting with a pg pool takes 1 second. Is there any trick to connect faster? Need help!!
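The usual first thing to check here (a guess, since the handler code isn't shown) is whether a new Sequelize instance is created inside the handler on every invocation; keeping it in module scope lets warm invocations reuse the already-open connection. A minimal sketch, assuming a standard Node.js handler and a connection string in an environment variable (names are illustrative):

import { Sequelize } from 'sequelize';

// Created once per Lambda container, not once per invocation,
// so warm invocations reuse the existing connection.
const sequelize = new Sequelize(process.env.DATABASE_URL!, {
  dialect: 'postgres',
  pool: { max: 1, min: 0, idle: 10000 }, // one connection per container is plenty
  logging: false,
});

export const handler = async () => {
  await sequelize.authenticate(); // cheap when the connection is already open
  // ... run queries here ...
  return { statusCode: 200, body: 'ok' };
};

The remaining second on a cold start is likely the actual TCP/TLS handshake to Postgres, which something like RDS Proxy or a pooler in front of the database can help with.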
I work at a fairly large tech company and have been tasked with looking into several on-prem serverless solutions for some of our tools (OpenWhisk and Knative were recommended to me).
We have a cluster that we can deploy any of these solutions into.
Has anyone here ever dealt with either of these tools, or maybe have something else to recommend?
PS: Please don't just tell me to go use a cloud solution. There are other factors that make cloud not possible for our use case.
I am new to serverless, but I work in the RPA world. We have an RPA bot on a virtual machine that runs a browser automation (logs into a website, enters some dates, downloads files, and uploads those files to an SFTP server). This runs about 5,000 times a day (with different parameters for each run), and it takes a long time on this one virtual machine.
Is it possible to do this on a serverless architecture like Lambda, where all 5,000 runs could execute at once? Right now it takes almost the whole day. Obviously we could scale with more virtual machines, but I'm wondering if serverless might be a better option.
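For what it's worth, the pattern people usually reach for here is a fan-out: enqueue one message per run and let an SQS-triggered Lambda handle each one, so the 5,000 runs execute in parallel up to the account's Lambda concurrency limit (1,000 by default, raisable). The main constraints are Lambda's 15-minute timeout per run and bundling the headless browser (e.g. as a container image). A minimal sketch of the enqueue side, assuming an existing queue and AWS SDK v3 (names are illustrative):

import { SQSClient, SendMessageBatchCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({});

// Fan out one message per run; an SQS-triggered Lambda picks each one up
// and performs the browser automation with that run's parameters.
async function enqueueRuns(queueUrl: string, runs: { id: string; params: object }[]) {
  for (let i = 0; i < runs.length; i += 10) { // SQS batch limit is 10 messages
    const batch = runs.slice(i, i + 10);
    await sqs.send(new SendMessageBatchCommand({
      QueueUrl: queueUrl,
      Entries: batch.map((r) => ({ Id: r.id, MessageBody: JSON.stringify(r.params) })),
    }));
  }
}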
Does anyone have any experience with Val Town? Came across it recently. I'm currently self-hosting a FastAPI backend for a Flutter app but would like to move to serverless so I don't have to worry about load balancing and scaling resources as we get more users. Seems like a good, straightforward way to spin up serverless in TS?
We have an AWS Lambda serverless architecture and we use the Serverless Framework. In the serverless.yml file we specify serverless-plugin-warmup,
which deploys a warmup lambda on AWS. Unfortunately this lambda always uses the Node.js 16.x runtime, and that version is about to be sunset soon.
Is there a way we can deploy the warmup lambda on 20.x or 22.x?
Tried the suggestion from this GitHub issue: "I had to disable the plugin for AWS to clean the lambda / logs etc, then I re-enabled it and it was finally able to pick up the upgraded package runtime, super weird" (https://github.com/juanjoDiaz/serverless-plugin-warmup/issues/168), but it still deploys the warmup lambda on the 16.x runtime.
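One thing worth double-checking first (a guess; I have not verified how the plugin chooses its runtime) is that `provider.runtime` in serverless.yml is already set to a supported version, since the plugin's generated function may simply be following whatever the provider default is:

provider:
  name: aws
  runtime: nodejs20.x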
I found this book and it looks interesting. Any thoughts or other recommendations?
V4 will be a paid version for my company. What challenges would we likely face if we stayed on v3 after it reaches end of life, given the framework is not used at runtime in our applications? Over what timespan might we expect to see issues?
I would like to share some of my notes and experience on making effective use of AWS Lambda:
https://medium.com/@zizhao/effective-aws-lambda-part-i-3d188d15bf8a
This blog contains two main parts:
Looking forward to your reading it; please share your thoughts in the comments section!
Hey everyone,
I just noticed that all my lambdas had their zip size increase by at least 10x compared to v3. Has anyone experienced this issue after switching to v4? On v3 I had serverless-esbuild
and everything was good without any special configuration. V4 has esbuild support built in and I was expecting it to work the same, but it obviously does not.
Tried using the same config here, but no change in the lambda zip size.
Any recommendations for change?
Edit: I don't think it's tree-shaking properly, and it's including libraries that aren't necessary for each lambda. How do you properly tree-shake with Serverless v4?
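In case it helps narrow things down, a minimal sketch of what the v4 built-in esbuild configuration might look like to roughly mirror a default serverless-esbuild setup. The exact option names here are from memory and should be verified against the v4 build docs before relying on them:

build:
  esbuild:
    bundle: true      # bundle each handler so unused code can be dropped
    minify: true
    exclude:
      - '@aws-sdk/*'  # the v3 SDK is already available in the newer Node.js runtimes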
In under an hour, I built a fully functioning site (chatgptprank.com) and used it to prank family this Thanksgiving.
Feels really easy these days:
Stack:
NextJS, pnpm (so good!)
Serverless: Vercel, Upstash (Serverless Redis)
AI: Claude, Vercel v0, Cursor, ChatGPT
Here's how I did it:
Backend:
Had a few unique constraints. I wanted something that was quick to make but could scale quickly! And there were a few traditionally tough requirements: users can generate and share their own custom links with no login, and I wanted rate limits to avoid abuse (a sketch of that below). Luckily, new serverless offerings make this easy. In addition to Vercel, I used Serverless Redis from Upstash. The entire process felt so easy, for less than $20 a month.
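The rate limiting is the kind of thing Upstash makes almost trivial. This isn't the site's actual code, just a minimal sketch of the pattern, assuming a Next.js route handler and the @upstash/ratelimit package (limits and identifiers are illustrative):

import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

// 10 link generations per minute per IP, backed by Upstash Redis.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '1 m'),
});

export async function POST(req: Request) {
  const ip = req.headers.get('x-forwarded-for') ?? 'anonymous';
  const { success } = await ratelimit.limit(ip);
  if (!success) return new Response('Too many requests', { status: 429 });
  // ... create and store the prank link ...
  return new Response('ok');
}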
Frontend/AI:
-Claude (brainstorming):
Claude is so much better than ChatGPT for brainstorming. I had two requirements: (a) center it around people this Thanksgiving, and (b) use a proven design AI can easily replicate. Claude helped me land on a prank site that mimics AI conversations - it's like LetMeGoogleThatForYou, but it can lie.
-Vercel v0 (initial design):
I used v0 for the initial design + code of my AI prank site. The UI was based on ChatGPT, so v0 gave me something almost perfectly ready. At this point, we're almost done... without having written any code.
-Cursor (final code):
Cursor is great, and helps us finish the job. I pasted the v0-generated code into Cursor and asked it to integrate ChatGPT calls (using GPT-4o through the Vercel AI SDK) and fix the styling. It did it perfectly; all I had to do was click "apply."
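For flavor, the ChatGPT integration through the Vercel AI SDK boils down to very little code. Again, this is not the site's actual implementation, just a minimal sketch assuming the `ai` and `@ai-sdk/openai` packages (the function and prompt are illustrative):

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Generate the fake "assistant" reply shown on a prank link.
export async function prankReply(question: string, plantedAnswer: string) {
  const { text } = await generateText({
    model: openai('gpt-4o'),
    system: `Answer like a helpful assistant, but steer toward: ${plantedAnswer}`,
    prompt: question,
  });
  return text;
}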
Bonus: The Domain!
I even used an AI domain search tool to find "a domain for an AI prank site." It found chatgptprank.com, which I bought and deployed via Vercel. One `vercel deploy` command later, and we were live!
And there you go!
Going viral: after deploying, I sent the site to friends, who posted it on Twitter. Since then it's been getting some traction! Insane how easy AI has made it to build and ship a good site.