/r/homelab
Welcome to your friendly /r/homelab, where techies and sysadmins from everywhere are welcome to share their labs, projects, builds, etc.
Please see the full rules page for details on the rules, but the gist of it is:
Don't be an asshole.
Post about your homelab, discussion of your homelab, questions you may have, or general discussion about transitioning your skills from the homelab to the workplace.
No memes or potato images.
We love detailed homelab builds, especially network diagrams!
Report any posts that you feel should be brought to our attention.
Please flair your posts when posting.
Please no shitposting or blogspam.
No Referral Linking.
Keep piracy discussion off of this subreddit.
All sales posts and online offers should be posted in /r/homelabsales.
Before posting please read the wiki, there is always content being added and it could save you a lot of time and hassle.
Feel like helping out your fellow labber? Contribute to the wiki! It's a great help for everybody, just remember to keep the formatting please.
/r/sysadmin - Our original home; this subreddit splintered off from there.
/r/networking - Enterprise networking.
/r/datacenter - Talk of anything to do with the datacenter here
/r/PowerShell - Learn PowerShell!
/r/linux4noobs - Newbie friendly place to learn Linux! All experience levels. Try to be specific with your questions if possible.
/r/linux - All flavors of Linux discussion & news - not for the faint of heart!
/r/linuxadmin - For Linux Sysadmins
/r/buildapcsales - Sales and deals on PC components.
/r/hardwareswap - Buy, sell, and swap used hardware. You might find things useful for a lab.
/r/pfsense - for all things pfsense ('nix firewall)
/r/HomeNetworking - Simpler networking advice.
/r/HomeAutomation - Automate your life.
So I've been wanting to build a home lab for tinkering and for running some services that I want and that might look good on a job application. This isn't my first time using VMs like this; it would just be permanent now, on a machine separate from my personal rig. My question is really just: any thoughts or helpful pointers about my planned network?
Main Server
Xeon E5-2697 v4, 128 GB RAM, 4x 12 TB HDD, 2x 512 GB NVMe SSD boot drives, 2-port 2.5 GbE NIC, Tesla card???

- Proxmox host: 2 cores, 8 GB RAM for host processes; 2x 512 GB mirrored for boot redundancy
- Kemp Load Balancer: 4 cores, 16 GB RAM
- TrueNAS: 4x 12 TB drives, 4 cores, 32 GB RAM, RAID 1+0 for 24 TB of usable storage (storage math sketched just below)
Gaming VM or local AI: which one I choose will affect whether I get a P4 or an M40 12GB.
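For my own sanity I ran the storage math. A rough sketch (TB used loosely, layouts named as ZFS would see them):

```python
# Back-of-the-envelope usable capacity for the 4x 12 TB pool in the TrueNAS VM.
drives = 4
size_tb = 12

striped_mirrors = (drives // 2) * size_tb   # RAID 1+0 style, two mirrored vdevs -> 24 TB
raidz1 = (drives - 1) * size_tb             # single RAIDZ1 vdev -> 36 TB
raidz2 = (drives - 2) * size_tb             # single RAIDZ2 vdev -> 24 TB

print(striped_mirrors, raidz1, raidz2)      # 24 36 24
```

So the 24 TB figure checks out; RAIDZ1 would buy more space, though from what I've read at the cost of slower resilvers and random I/O.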
K3s HA cluster (inside the Proxmox cluster?) - quick health-check sketch after the network notes below
3x (i7-9700 mini PC, 32-64 GB RAM, 256 GB boot drive, 1 TB SSD). Minecraft in containers, spread across all 3:
Modded (custom modpack), Java vanilla, Bedrock vanilla
Personal/Resume Website
N100-powered router to run OPNsense, Pi-hole, and WireGuard
2x 8-port 2.5 GbE switches
I've been told that you should run clusters on separate switches, especially with HA, so I might just buy a used 16-port 2.5 GbE switch instead.
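Once the three mini PCs are joined as K3s server nodes, I figure a quick health check could look something like this. A rough sketch with the kubernetes Python client, assuming the cluster's kubeconfig has been copied to my workstation (not tested on real hardware yet):

```python
# Minimal node health check for the planned 3-node K3s cluster.
# Requires: pip install kubernetes, and a valid ~/.kube/config pointing at the cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # A node is usable when its "Ready" condition reports True.
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(f"{node.metadata.name}: Ready={ready}")
```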
Also, sorry if this is flaired improperly. I didn't know if I should've had it as discussion or question, but I feel it's less of a question and more of a "how does this look".
Edit: I did not know that reddit did that list thing with the black boxes
https://www.reddit.com/r/LocalLLaMA/comments/1igpwzl/comment/marp63b/
Anyone with such a system, please check out this thread. A developer of Llama.cpp has a branch with really great performance on DeepSeek R1.
He is looking for someone to test the system with dual CPUs. This could be huge for home labs wanting to run SOTA LLMs without spending six figures on GPUs!
Hello folks, hoping for some advice on getting my new-to-me Dell Precision 5820 spun up as my new homelab. Here's the deal: when the computer has the stock (Xeon 2123, 4x 16 GB ECC RAM) platform config, I can boot into the BIOS just fine. From there, I updated to the latest BIOS (rev. 2.39.0). All my RAM is detected as expected, drives, GPU, etc.
From there, I wanted to get a bit more juice from the platform, so I checked out Dell's official website, and found this(link) page listing compatible CPUs, and this(link) listing compatible RAM. Here's where I suspect the issue is; I know that non-Xeon chips of this socket type don't support registered ECC memory, so I purchased an i9-9940X (listed as compatible), and official Dell-branded SDRAM (SNP983D4C/32G), such as this (link). With the new CPU and RAM installed, the system won't POST; it starts up, flashes the amber light behind the power button once, spins up the fans and drives for a few seconds, then shuts down and repeats. I've tried removing the CMOS battery (tried booting without it, and waiting overnight before booting with it installed), no dice. The strangest part to me is that, when I re-install the default components, it POSTs with no issue. Because the manual lists that:
- DDR4 ECC RDIMMs - Supported only with Xeon W Series CPUs
- DDR4 Non-ECC UDIMMs supported with Core X Series CPUs
I thought this config would be correct? I've also tried the non-ecc memory with an i9-7820x, same behavior.
Anyone have any ideas/things I've missed here? In following the literal Dell manual for compatibility, I thought I could avoid this type of troubleshooting. Any help is appreciated, thanks y'all!!!
Hello everyone, I recently dove into a world I’m quickly realizing I know NOTHING about. I’m here asking for some good resources and first-hand accounts that might help me make sense of all this.
Use case: NAS, obviously. Game server (Minecraft, Ark, Palworld, etc.). Plex server. Immich server (I specifically need mobile access to this). Somehow 3D print from it? Might be better off just using my personal PC for that, I don’t really know. Anything else that y’all think is a good idea.
Hardware:
- CPU: Ryzen 5950X
- RAM: 128 GB (Crucial, non-ECC)
- GPU: 2070 Super (for Plex transcoding/VMs/Immich)
- Case: Define R5 max
- PSU: Corsair 750 W
- 1x 240 GB SSD (boot drive), open to getting a 2nd for redundancy
- 1x 4 TB NVMe (VMs/game servers)
- 2x 24 TB IronWolf HDD running as a mirror
- 1x 6 TB HDD, throwaway / catch-all
- 1x 1 TB HDD (probably gonna toss it when I get more drives)
As for networking I have a 10gig card in both this machine and my personal PC as well as a multi gig switch. Home internet is 2gbps.
As it sits now I realize this is probably overkill. I specifically got stuff I hopefully wouldn’t have to upgrade for quite some time, while also not completely breaking the bank.
I’m pretty set on using TrueNAS Scale as it seems the most user-friendly, and anything I can’t run as an app I’ll run in a Windows Server VM (e.g. the Ark server).
1st real post on here, so sorry for any weird formatting and such. I know Reddit has a specific etiquette; I’m just oblivious to it. (Posted on r/homelab & r/truenas)
It's time to upgrade the old router. Requirements:
- SFP cage
- Between 4 and at most 10 RJ45 ports
- Min 1 GbE
- Hardware appliance
- Around $100 USD

The router is simply used in the home. Currently the network comprises only 26 clients, plus up to 3 remote clients.
Here are three I believe should be decent for the task. Due to the location, rack mount is not needed; Wi-Fi is not necessary, but if it's there, it's there. Open to other, better suggestions in a similar price range.
UBIQUITI ER-X-SFP Edgerouter X sfp https://a.co/d/4yx8JjP
MikroTik hEX S Gigabit Ethernet Router with SFP Port (RB760iGS) https://a.co/d/aX6vJUB
MikroTik L009UiGS-RM https://a.co/d/bo07DW0
The title says it all. I currently have an R730 that I'm attempting to spin up a Debian 12 VM on. I'm running KDE Plasma; the hypervisor is Proxmox. The GPU is a GTX 1080.
The reason for the VDI is that I want a random box I can connect to that has some 3D acceleration for smoothing out the display and silly desktop effects like wobbly windows (yes, you read that right; judge away).
I was getting close, and I've gotten an external monitor to work a few times, but I can never get 3D acceleration or nvidia-smi working.
I used to be an Ubuntu dude but I'm just looking to try some new things.
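In case it helps anyone diagnose along with me, this is the kind of check I've been running on the Proxmox host to confirm IOMMU is enabled and to see which driver the 1080 is currently bound to (it should be vfio-pci before the VM starts). A rough sketch using standard sysfs paths; vendor 0x10de is NVIDIA:

```python
# Check IOMMU status and the driver bound to each NVIDIA PCI device on the host.
import os

groups = os.listdir("/sys/kernel/iommu_groups")
print(f"IOMMU groups found: {len(groups)}")  # 0 means IOMMU is not enabled in BIOS/kernel

for dev in os.listdir("/sys/bus/pci/devices"):
    base = f"/sys/bus/pci/devices/{dev}"
    with open(f"{base}/vendor") as f:
        vendor = f.read().strip()
    if vendor == "0x10de":  # NVIDIA
        driver_link = f"{base}/driver"
        driver = os.path.basename(os.readlink(driver_link)) if os.path.exists(driver_link) else "none"
        print(f"{dev}: NVIDIA device bound to driver: {driver}")
```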
Hello all!
I just got a 2-gig fiber connection from Fidium Fiber. The ISP-supplied router only has 1-gig LAN ports, so I just got a Ubiquiti UCG-Max with 2.5-gig LAN ports. I am using 2 MoCA adapters to run the signal up to 2 bedrooms. I plan on connecting a UGREEN NAS and maybe a Pi-hole. Also going to use an unused Wi-Fi router as the Wi-Fi access point.
What should I do to optimize my setup and get the most from my unnecessarily fast internet?
Hey Team! I’m looking for a quiet-ish solution to add additional 3.5” drives.
I have a 12 Bay JBOD right now, but the PSUs are very loud.
I’m not opposed to normal fan noise, but I can’t do enterprise grade high pitched PSUs or fans.
Are there any decent Dell / Supermicro chassis that I can make quiet, or a custom JBOD solution?
So I want to preface that I am very new to all of this and just want something to get started in home labbing. I wanted to see if there was any benefit, or if I am missing something, when considering either of these devices to run a home server on.
I am looking at either the Zimaboard 832 or the Lenovo ThinkCentre M710q. Please let me know what I should go with and if you have any other suggestions on an easy machine to get going on. Thanks!
Hi guys,
Coming from Australia here,
So this is my home lab that I've slowly created over the last two years since getting into this addictive hobby.
Top to Bottom:
Ubiquiti EdgeSwitch 24 Port (ES-24-250w)
Silverstone 2u (RM23-502-Mini)
Running Proxmox as Hypervisor
Motherboard : Erying Tigerlake 11th gen ES ATX equivalent to 11600H (Ali Express)
Storage: 2 x 256gb nvme in ZFS mirror
PSU: 600w bronze out of an old PC build
Cooler: Stock intel cooler from Ebay
Ram: Crucial 32gb DDR4 3200 out of old PC build.
Services running on machine:
Proxmox Backup Server
AMP game panel for hosting game servers, e.g. Minecraft, Valheim, ATS, etc.
Pihole (Backup DNS)
The motherboard was an interesting choice; I was sold after seeing them in a Craft Computing video. I've had zero issues so far with compatibility and reliability under Proxmox. It sips power, was a good price, and has more than enough juice to handle the tasks I've given it. I ended up changing the VRM heatsink thermal pads, and the thermal paste under the copper plate they put over the mobile CPU.
WD Cloud NAS (Left):
WD Cloud NAS (Right):
Eufy Homebase 3:
-Security Cameras for home
Unifi Cloud Gateway Ultra:
Minisforum MS-01 (Main Server):
Running Proxmox as Hypervisor
1tb nvme in ZFS mirror
Services running on machine:
Portainer for Docker management
Arr Stack (Arr's, DelugeVPN, Sabnzbd, Jellyseerr)
Bitwarden
Traefik for internal DNS resolving and certificates
Nginx Proxy Manager for External Applications (Looking at changing to traefik eventually)
Pi-hole as main DNS, running Unbound and synced to the 2nd DNS using Gravity Sync (see the sketch after this list).
Jellyfin and Plex
Watchtower for updates
Homepage for dashboard
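For anyone wanting to sanity-check that Gravity Sync actually propagated, a rough sketch like this works; the IPs and hostname are placeholders for the two Pi-holes and an internal record:

```python
# Ask both Pi-hole instances to resolve the same internal name and compare answers.
# Requires: pip install dnspython
import dns.resolver

for server in ("192.168.1.10", "192.168.1.11"):   # primary and backup Pi-hole (placeholders)
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answer = resolver.resolve("jellyfin.example.lan", "A")
        print(server, [a.to_text() for a in answer])
    except Exception as exc:
        print(server, "FAILED:", exc)
```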
Silverstone 4u (RM41-H08)
Running Unraid
Motherboard: Machinist x99 MR9A (Ali Express)
CPU: Intel Xeon 2560v4
Cooler: Noctua NF-F12
Ram: 32gb 2400mhz ECC Ram
Graphics Card: GTX 1070 out of old PC build, not in use at the moment.
Storage: 3 x 2TB WD Red, 4 x 6TB WD Red and 2x250gb SSD for cache
LSI card for connecting the drives.
Network: 10 Gb Mellanox card, currently running at 1 Gb due to the switch.
PSU: Corsair 650w Gold - out of another pc build
For services I'm just running syncthing for phone camera backups for my partner and I.
Cyberpower 1600va
Amplifi Alien
Some goals this year would include migrating to TrueNAS Scale and repurposing my current storage for an offsite backup, upgrading my switch, and also getting an aggregation switch.
I am looking into creating a home lab to run some VMs on ESXi 8 because it is what my company uses.
I was thinking of getting an ASUS NUC to use because it is small and can sit on my desk. Would that work or am I missing something? If it comes with Windows on it, how would I install ESXi?
Thanks for the help!
Everyone shows off their neat homelabs. And they're impressive as hell!
But what would you say to someone who really wants to have a reason for a homelab, but can't decide what software to install and why?
It's one thing to know what you want and build a homelab around it. And obvs if you've got a work project or a specific skill you're trying to learn, it's a great reason to have your own Homelab.
But it's another thing for someone to not know what they want because they either don't have a need or aren't aware of solutions existing for things they do/could want to do.
Do any of you have a good list of "every person should want solutions to XYZ, and that's why you want a homelab"? Beyond "personal cloud" and "Pi-hole"... what else justifies the common IT nerd wanting their own setup?
I took some RAM out of an old blade server and installed it in the server, with the first stick in A1 and a gap between each stick. I am supposed to have a total of 64 GB of RAM, but it only displays as 4 GB in iDRAC.
My iDRAC is updated to iDRAC 7. I have not messed with the BIOS.
This is the RAM I am using, https://imgur.com/a/RM1ukcu
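If it helps, I can also boot a Linux live USB and dump what the OS itself sees per slot, roughly like this (run as root; assumes dmidecode is installed), and compare it against iDRAC:

```python
# List each DIMM slot dmidecode reports and the size it detects.
import subprocess

output = subprocess.run(
    ["dmidecode", "-t", "memory"], capture_output=True, text=True
).stdout

slot, size = None, None
for line in output.splitlines():
    line = line.strip()
    if line.startswith("Size:"):
        size = line.split(":", 1)[1].strip()      # e.g. "16 GB" or "No Module Installed"
    elif line.startswith("Locator:"):
        slot = line.split(":", 1)[1].strip()      # e.g. "A1"
    if slot and size:
        print(f"{slot}: {size}")
        slot, size = None, None
```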
I am planning on creating a 4-node Ceph cluster with Proxmox HA, using 4x N100 mini PCs (single NIC) with 2x SSDs each and a USB HDD for backups.
Firstly, I was thinking about making it 3 nodes, but then I saw a lot of opinions on failure tolerance and self-healing limitations in that setup, so I figured I should try 4 nodes with 2/2. I’m a total newbie with Ceph, so would anyone mind explaining to me how the 4-node setup operates if there are only 3 mons? Is the 4th node kind of like a hot spare, in RAID analogy? Also, is there anything I should know about before I start going down the rabbit hole?
For networking, I plan on using my USW Flex Mini (10 Gbps switching capacity) and adding an external NIC to each PC for a ring connection.
My main goal with this thing is purely to practice and learn more about managing distributed storage with some HA-ish capabilities… without ruining my wallet.
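For context, my back-of-the-envelope capacity math looks like this (a rough sketch, assuming 1 TB per SSD; adjust to the actual drives, and it's no substitute for the Ceph docs on size/min_size behaviour):

```python
# Usable capacity for a replicated pool across 4 nodes x 2 SSDs.
nodes, ssds_per_node, ssd_tb = 4, 2, 1.0
replica_size = 2                      # pool "size": copies kept of each object

raw_tb = nodes * ssds_per_node * ssd_tb
usable_tb = raw_tb / replica_size     # before the usual ~80% fill guideline

print(f"raw: {raw_tb} TB, usable at size={replica_size}: {usable_tb} TB")
# Note: with size=2/min_size=2, I/O to a placement group pauses as soon as one of its
# two copies goes offline, and stays paused until recovery rebuilds the second copy.
```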
I've been messing with various methods of running my lab, in preparation for an unavoidable change in approach later this year. I've had the advantage of having surplus solar power and a large enough office with rack space to run my current lab from older enterprise gear, mostly 2 and 4U rack servers. Most of the resources are dedicated to my data hoard, which I've made arrangements to co-locate for a while, but for security reasons I want to keep my actual services LAN only. This has led me to a bit of a conundrum, and I'm looking for a sense of consensus on what's the best way to finish. Any commentary is appreciated, especially when it gives me things to bounce my head against and know that I'm doing something that isn't going to inadvertently hurt me in the long run. Hopefully by around this time next year, I'll also be able to finish having all of my hardcore equipment moved over, and I can operate like I would normally, but obviously that would be difficult until I get at least a good 6-7kW of excess solar.
Scenario: cross-country move, to bare land where I'll be self-building a new home with my own personal data cave. Estimated time without solid accommodations for serious hardware = 9 months, give or take.
Desired end goal: transition non-data-intensive homelab services (e.g. no 'arr stack or web archiving) to a hybrid cloud model, or ideally to a cluster of small, energy-efficient systems carried with me to achieve high availability.
Current thought:
I would like to make a Proxmox cluster and move anything that has a system package into individual LXC containers. I already have some 1-liter PCs that could do that job, so it wouldn't be an extra cost either.
Docker isn't conducive to high availability when run in Proxmox, even in Swarm mode, because a live migration could overprovision a node. Admittedly, it's the easiest to deploy and keep updated, and most homelab-type services gravitate towards it, which means without it some things I'd have to build manually as a local package and check for updates the long way.
Perhaps it's a complex of mine, but I don't necessarily trust my skills with cloud hosting to properly secure all of my services, especially something sensitive like a password manager. So the only way I'd really want to use it would be for backup purposes, since I know you can do client-side encryption for S3 buckets. Not using cloud hosting also means I need to depend less on reverse proxies, since I'll just have easily accessible separate IP addresses for all known services that can be tied to my internal DNS. Being trapped behind CGNAT on Starlink also provides an extra layer of abstraction from people being able to break into my local network, so security becomes easier to set up this way.
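The backup idea would basically be client-side encryption before anything leaves the LAN; something like this rough sketch, where the bucket, key handling and file names are placeholders and I'd obviously want proper key management:

```python
# Encrypt a backup archive locally, then upload only the ciphertext to S3.
# Requires: pip install boto3 cryptography
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()                      # keep this key offline / in the password manager
with open("backup.tar.gz", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())

with open("backup.tar.gz.enc", "wb") as f:
    f.write(ciphertext)

boto3.client("s3").upload_file("backup.tar.gz.enc", "my-backup-bucket", "homelab/backup.tar.gz.enc")
```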
Hi guys, I live in BC, Canada. I want to build my own computing server for AI (mainly for inference). I'm thinking of using the AMD EPYC 9004 series CPUs and the latest NVIDIA GeForce GPUs (a 5090, presumably). For the GPUs, I know that I just need to wait. But I failed to find all the other necessary components on Amazon, Newegg and Canada Computers. I have also tried Supermicro, but the price is too high for me and they don't offer consumer GPUs. Where can I buy all the hardware at affordable prices? Any recommendations? I'd much appreciate it.
About to beef up my server with 128 GB of RAM and an Intel Xeon E5-2676 v4 (16 cores). What to do with all that “power”? Running a Kubernetes cluster is number one (and only) on my list right now.
Hi, I currently have a Lenovo M920q and love it. It runs Proxmox, Ubuntu images and OPNsense.
My one gripe is the storage and expandability. I run a few Docker containers and seem to run out of space every now and again.
I was looking at getting a second one to proxmox cluster or use as a proxmox backup server.
I was also looking at adding a dual 10gb card to the new machine as my ISP provides >1gbps now.
I don't have a space limitation but want the machine to be power efficient as it would become my router and be on 24/7.
Is there something in between a tiny PC and a full desktop PC which will give me some more expandability while not eating every power pixie out of the wall?
My other consideration was to get something like a 1U server which has the 10 Gb NIC built in, but that feels like it flies in the face of my energy plan.
The M920q with 32 GB of RAM, NIC and riser card came to ~£330, so anything in that ballpark would be great.
As the tiny PC would be bought used, I don't mind shopping for used parts.
Thanks for your help
Hi,
I recently picked up an HP ProLiant DL380p Gen8 from a local recycler to mess around with, and I have been trying to install Windows on it for the past couple of days. The main issue I'm running into is that the couple of drives I got for it are formatted as GPT, and Windows won't install to them. Another issue is that Secure Boot isn't on this machine, so I have to use MBR. I can't find any guides on how to install Windows Server 2016 from a USB onto this thing, and I am looking for help.
A bit of an update:
I absolutely hate when things randomly start working without any meaningful changes, but that has happened. I randomly booted back into the Windows setup and it just let me install, even without the drivers commenters were very kind to provide. Thanks for all the help!
I am building my first router and trying to find an OS to use. Other than routing Ethernet, I'd like it to be able to handle Wi-Fi and run Pi-hole. I know pfSense and OPNsense are based on BSD and can have trouble with Wi-Fi. I would prefer not to have to use a hypervisor and multiple VMs. Any suggestions?
Also check out this build - I won't post the same comprehensive pics. I'm posting this in case anyone else decides to try this case, so that they hopefully have an easier time of things. I'll be posting shots to replies to myself, because I'm too lazy to deal with imgur. Same with verbiage.
Finished product:
I decided to mostly copy Wolfgang's build, which is posted to youtube here. However, I wanted to have the following:
While I achieved this, I found that both the L2ARC and SLOG drives were not helpful (likely due to the interfaces and drive types that I use).
First, the TL;DR stuff that you should know before using this case, or going with a cheap AMD consumer APU/CPU build with ECC:
The results are great. It can definitely soak a 10GbE connection for sequential reads and writes, pretty easily. Not sure I need to muck about with tuning all that much given how fast it seems to be. I guess having a ton of RAM makes up for a lack of finesse.
If I had to do it over again, I'd stick with a straight CPU, since I don't really need video transcoding. I'd have three PCIe4 NVMe drives in the x16 slot (two for SLOG, one for L2ARC), which would likely make things faster, even with synchronous writes enabled, and still keep the data drives configured as-is.
Hi all
Today I tried to add 2x 2.5" SATA HDDs to my Intel® Server System R2208WTTYSR.
It already has 8 bays at the front, connected to the motherboard through (the only) 2 mini-SAS connectors. I wanted to expand my storage a bit more and saw that the motherboard (an Intel S2600WTTR with dual Xeon E5-2699 v4 CPUs) has 2 additional internal SATA ports (SATA 4 and SATA 5, as mentioned in the official documentation, 10.3.1). It also has 2 additional 4-pin 12V power outlets ("Optional_12V_PWR" in the docs, 10.1.4) that I thought I could convert to SATA power using a 4-pin-to-SATA power adapter.
Turns out, you're not supposed to do that. Although it physically fits, I managed to kill 2 HDDs (HGST Travelstars), probably by supplying them with too much power (the 12V header is apparently rated for extra GPU power, up to 225W). It was also doomed to fail, as I naively assumed SATA runs on 12V (it also needs 3.3V and 5V, and 12V is usually not even used, smh). That's my fault, and luckily I had nothing on those drives anyway (they were cold spares).
After a chat with ChatGPT (yes) and consulting the official documentation for the board, there's apparently a "Peripheral_PWR" header that supplies 3.3V, 5V and 12V (see 10.1.3). ChatGPT says it's a Molex Mini-Fit Jr. based on the pin diagram (the official documentation doesn't mention anything about it). A visual inspection of the connector reveals it's a very small, 6-pin, PCIe-style connector of some sort.
My goal now is to connect 2 SATA drives with power to that Peripheral_PWR header. However, I can't find a single adapter cable that converts said connector to (preferably) 2x SATA power. The only thing I can find is 'make it yourself', but after just killing 2 drives, I don't feel comfortable doing that.
There is an unconnected 4-pin male connector, probably for a front expansion bay (I looked into that, but my current mobo doesn't have any more mini-SAS HD connectors).
So, does anyone here have experience with a Mini-Fit Jr. connector? Or maybe someone who has done something similar on Intel server boards? I find it weird that the mobo has 2 additional SATA data headers, but no power for them whatsoever.
If you need any extra info, I can happily provide some!
Thanks in advance!
Hey all, I've been looking to deploy a little GPU server at home to run Ollama and possibly for games. I found the Dell PowerEdge T430, but there seems to be a lot of outdated documentation. Officially it says I can only have about 400 GB of RAM with 2 CPUs, but it seems like 1.5 TB is the actual number. How about GPUs? I was thinking about a few (2-4) 3090s; are there any limits on GPUs?
Specs: Dell PowerEdge T430
- Dual Xeon E5-2680 v4 (28 cores / 56 threads)
- 128 GB RAM (8x 16 GB DDR4, expandable)
- Dual 750 W redundant power supplies
- 1x dedicated iDRAC Ethernet
- 2x Gigabit Ethernet LAN
- NVIDIA Quadro K2200 4 GB GPU
- 1x 512 GB SSD (new) - boot drive
- 1x Dell PERC H330 RAID controller
- 8x 3.5" SATA/SAS drive caddies
- 32 TB storage (8x 4 TB enterprise SAS drives)
- Dual 10 GbE SFP+ NIC (10-gig networking)
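Once cards are in, my plan for seeing what actually fits is just to query the VRAM per GPU, roughly like this (assumes the NVIDIA driver and nvidia-smi are already working):

```python
# Print each GPU's index, name, and total VRAM to gauge which quantized models will fit.
import subprocess

query = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True,
).stdout

for line in query.strip().splitlines():
    idx, name, mem = (field.strip() for field in line.split(","))
    print(f"GPU {idx}: {name}, {mem}")
```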