/r/homelab
Welcome to your friendly /r/homelab, where techies and sysadmins from everywhere are welcome to share their labs, projects, builds, etc.
Please see the full rules page for details on the rules, but the gist of it is:
Don't be an asshole.
Post about your homelab, discussion of your homelab, questions you may have, or general discussion about transitioning your skills from the homelab to the workplace.
No memes or potato images.
We love detailed homelab builds, especially network diagrams!
Report any posts that you feel should be brought to our attention.
Please flair your posts when posting.
Please no shitposting or blogspam.
No Referral Linking.
Keep piracy discussion off of this subreddit.
All sales posts and online offers should be posted in /r/homelabsales.
Before posting, please read the wiki; content is always being added, and it could save you a lot of time and hassle.
Feel like helping out your fellow labbers? Contribute to the wiki! It's a great help for everybody; just please keep the formatting consistent.
/r/sysadmin - Our original home; this subreddit splintered off from it.
/r/networking - Enterprise networking.
/r/datacenter - Talk about anything to do with the datacenter.
/r/PowerShell - Learn PowerShell!
/r/linux4noobs - Newbie friendly place to learn Linux! All experience levels. Try to be specific with your questions if possible.
/r/linux - All flavors of Linux discussion & news - not for the faint of heart!
/r/linuxadmin - For Linux Sysadmins
/r/buildapcsales - For sales on building a PC
/r/hardwareswap - Used hardware, swap hardware. Might be able to find things useful for a lab.
/r/pfsense - for all things pfsense ('nix firewall)
/r/HomeNetworking - Simpler networking advice.
/r/HomeAutomation - Automate your life.
I have many of the standard Costco CyberPower UPS boxes around the home (1350VA).
Overall, they work fine. However, recently several boxes have failed during short power outages or blinks. Further investigation reveals that the batteries are spent. I can replace batteries as needed, no problem.
The problem is that the bad-battery UPSes report ample runtime just like the good-battery units. Therefore, I don't know when batteries need replacing until I manually visit every single UPS, pull the plug, and witness the unit instantly going dark.
Is there a better way to work around the CyberPower's inability to detect its own bad batteries?
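One workaround I'm considering, if the UPSes can be attached over USB to a machine running Network UPS Tools (NUT), is to trigger a periodic battery self-test and read back the result instead of waiting for an outage to expose a dead battery. A minimal sketch is below; the UPS name `cyberpower`, the NUT user, and the password are placeholders, and not every CyberPower model exposes the quick self-test command, so treat this as a starting point rather than a guaranteed fix.

```python
#!/usr/bin/env python3
"""Kick off a UPS battery self-test via NUT and report the result.

Assumes NUT is installed with the usbhid-ups driver and a UPS defined as
"cyberpower" in ups.conf (all names/credentials below are examples).
"""
import subprocess
import time

UPS = "cyberpower@localhost"   # hypothetical NUT UPS name
NUT_USER = "admin"             # hypothetical upsd.users entry
NUT_PASS = "secret"

def upsc(variable: str) -> str:
    """Read a single variable from upsd."""
    out = subprocess.run(["upsc", UPS, variable],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def main() -> None:
    # Start a quick battery self-test (not every model exposes this command).
    subprocess.run(["upscmd", "-u", NUT_USER, "-p", NUT_PASS,
                    UPS, "test.battery.start.quick"], check=True)
    time.sleep(60)  # give the unit a minute to run the test
    print("test result :", upsc("ups.test.result"))
    print("charge      :", upsc("battery.charge"))
    print("runtime (s) :", upsc("battery.runtime"))

if __name__ == "__main__":
    main()
```

Run weekly from cron; a failed test result, or a runtime figure that collapses as soon as the test starts, is a reasonable trigger for ordering replacement batteries.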
I'm not sure what's happened. For reference, I've been using this server for years to host my group's Minecraft servers. I just had a normal Ubuntu Server install on it. Recently, due to a couple of events, I became concerned that it had been compromised and decided to just wipe it and start fresh, since I don't keep any mission-critical data on it.
So I pull the drives, wipe 'em, put them back in, ServeRAID yells at me about my drives being gone, I set up a new RAID 5 virtual disk using them, and boot to my USB to reinstall Ubuntu.
The Ubuntu install goes mostly painlessly; it sees the RAID disk and installs no problem. I did go WITHOUT using LVM, however; not sure if that's relevant or not.
And now there's absolutely no option to boot to the RAID disk in the BIOS. No GRUB option either, which is what I had last time. It still boots off of the USB just fine into the live environment, but nothing I do will boot me into my RAID array. I'm not sure what happened or how to fix it; I've spent days trying to fix this to no avail. I've reinitialized the array multiple times, I've tried different distros of Linux. Nothing.
I've also tried not using a RAID array, but then Linux doesn't see anything to install onto and aborts the installer, which kinda makes sense, as I have a hardware RAID controller.
Oh, and I did set it as a boot drive in the RAID controller too.
I'm somewhat new to server architecture and I feel like I understand it pretty well, but this is absolutely baffling me. When I installed it the first time it was just like installing on a desktop; I'm not sure what changed. Any help is appreciated, thank you.
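One thing worth ruling out, since the symptoms fit: the reinstall may have happened in a different firmware mode (UEFI vs. legacy BIOS) than the original install, in which case GRUB ends up somewhere the firmware never looks. This is only a guess from the description, but it is cheap to check from the live USB. A minimal sketch, assuming `efibootmgr` is available in the live environment:

```python
#!/usr/bin/env python3
"""Check whether the live environment booted via UEFI or legacy BIOS,
and list the firmware boot entries if UEFI. Run from the live USB."""
import os
import subprocess

def main() -> None:
    if os.path.isdir("/sys/firmware/efi"):
        print("Booted in UEFI mode.")
        # Show what the firmware actually has in its boot menu.
        subprocess.run(["efibootmgr", "-v"], check=False)
        print("If the RAID virtual disk has no entry here, the installer "
              "likely wrote GRUB for legacy BIOS, or no ESP was created.")
    else:
        print("Booted in legacy BIOS mode.")
        print("If the first install was UEFI (or vice versa), reinstall with "
              "the USB stick booted in the same mode the server should use.")

if __name__ == "__main__":
    main()
```

If the modes don't match what the original install used, re-run the installer with the USB stick explicitly booted in the mode you want; most server boot menus list a "UEFI:" entry for the stick separately from the plain legacy entry.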
Hello everyone!
I currently have a Synology DS214Play with 2x 4TB WD Red drives, everything being 10+ years old. I recently bought 2x 8TB WD Red Plus drives and I'm now looking into possible solutions. These are some scenarios, and I'd love to hear opinions and maybe alternatives or ideas.
- Keep the DS214Play. It is not accessible from the internet but uploads backups to a cloud drive. Put the 8TB drives in there and I'm good to go.
- Additionally, I'm thinking about using one of my Dell Wyse or Fujitsu Futro boxes as a kind of always-on solution. One of the currently unused Wyse units has a 2.5G network card installed. I thought about putting a 1TB SSD into that and keeping my most recent and most accessed files on it. Once a day, or maybe once a week, the NAS would boot up, the files would be backed up onto the NAS and from there into the cloud drive, and then the NAS would shut off again, staying available via WoL if I need files (sketched below).
- Maybe ditch the DS214Play and buy something like a QNAP TS-262, which offers M.2 slots. But I would need to buy it, while the Wyse with 2.5G is already here.
Does that make any sense at all? The goal would be to drop power consumption and have faster storage at hand.
If that makes sense, what would be good software-side solutions for the Wyse? I already have another Wyse running Proxmox, hosting containers and VMs. Maybe even put everything on one device?
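To make the second option concrete, the wake, sync, shut down flow could be scripted roughly like this; the MAC address, hostname, paths, and SSH user are placeholders, and it assumes the NAS allows key-based SSH with permission to power itself off.

```python
#!/usr/bin/env python3
"""Wake the NAS, rsync the fast SSD share to it, then power it back down.

Hostnames, MAC address, paths and the SSH user are placeholders; adjust to
your environment. Meant to run from cron on the always-on thin client.
"""
import socket
import subprocess
import time

NAS_MAC = "00:11:32:AA:BB:CC"               # placeholder NAS MAC address
NAS_HOST = "nas.lan"                         # placeholder hostname
SRC = "/srv/fastshare/"                      # SSD share on the thin client
DST = f"admin@{NAS_HOST}:/volume1/backup/"   # placeholder target share

def wake(mac: str) -> None:
    """Send a Wake-on-LAN magic packet: 6x 0xFF + MAC repeated 16 times."""
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", 9))

def wait_for(host: str, timeout: int = 300) -> None:
    """Poll SSH (port 22) until the NAS answers or we give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, 22), timeout=5).close()
            return
        except OSError:
            time.sleep(10)
    raise TimeoutError(f"{host} did not come up within {timeout}s")

def main() -> None:
    wake(NAS_MAC)
    wait_for(NAS_HOST)
    subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)
    # Let the NAS shut itself down again (requires SSH key + shutdown rights).
    subprocess.run(["ssh", f"admin@{NAS_HOST}", "sudo", "poweroff"], check=False)

if __name__ == "__main__":
    main()
```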
Hope this is the right place for this question. Thanks :)
Hello everyone,
I’m planning to purchase the ASRock Rack GENOAD8X-2T/BCM motherboard but haven’t been able to find clear information about whether it supports PCIe bifurcation. The only detail I found in the manual mentions the default settings for PCIe x16 slots, but it doesn’t specify whether the bifurcation mode (e.g., x4x4x4x4) can be adjusted.
Does anyone here have experience with ASRock Rack motherboards or know if this particular model supports bifurcation? My assumption is that most modern boards should support this feature, but I’d like to confirm to avoid potential compatibility issues.
Thanks in advance for your help!
I need to make an iPXE build that works completely offline, with Windows and Linux ISOs on my server (I've used netboot.xyz, which downloads Linux from the web, but I want the Windows ISO transferred locally). Two separate files, of course: .ipxe for legacy and .efi for UEFI. The instructions on the iPXE web page are unclear to me.
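What I've gathered so far is that the menu can be baked into the binaries with `EMBED=`, so nothing gets fetched from the internet at boot. Below is a minimal sketch of the build step, assuming the iPXE source is cloned to `./ipxe` and the offline menu lives in `./boot.ipxe` (both paths are examples); the Windows and Linux ISOs themselves still need to be served from a local HTTP/SMB share that the embedded script points at.

```python
#!/usr/bin/env python3
"""Build legacy (PXE) and UEFI iPXE binaries with an embedded boot script,
so the whole menu lives in the binary and no internet access is needed.

Paths are examples: iPXE source cloned to ./ipxe, menu script in ./boot.ipxe.
"""
import pathlib
import subprocess

IPXE_SRC = pathlib.Path("ipxe/src").resolve()   # from: git clone https://github.com/ipxe/ipxe
EMBED = pathlib.Path("boot.ipxe").resolve()     # your offline menu script

TARGETS = [
    "bin/undionly.kpxe",        # legacy BIOS PXE chainload binary
    "bin-x86_64-efi/ipxe.efi",  # 64-bit UEFI binary
]

def main() -> None:
    for target in TARGETS:
        subprocess.run(
            ["make", "-j4", target, f"EMBED={EMBED}"],
            cwd=IPXE_SRC, check=True,
        )
        print("built", IPXE_SRC / target)

if __name__ == "__main__":
    main()
```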
I recently moved from the UK to Oregon. Back home there's a great ISP called Andrews and Arnold which is run by and for tech people, and one of the services they offer is a L2TP tunnel service which provides a static IPv4 address.
Is there something like this located on the US west coast (Portland - Seattle area), or would I be better off setting up WireGuard on a VPS? I'm concerned about getting an IP that's in a blacklisted block, since I will be running a mail server.
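Whichever route I go, I want to vet the assigned address against the common blocklists before pointing MX records at it. A blocklist check is just a DNS lookup of the reversed octets under each blocklist zone; a minimal sketch is below (the zones listed are common examples, and Spamhaus in particular may not answer correctly through large public resolvers).

```python
#!/usr/bin/env python3
"""Check an IPv4 address against a few common DNS blocklists.

A DNSBL lookup is an A-record query for the reversed octets under the
blocklist zone; an answer means "listed", NXDOMAIN means "not listed".
Run it from a resolver you control for reliable results.
"""
import socket
import sys

BLOCKLISTS = ["zen.spamhaus.org", "bl.spamcop.net", "b.barracudacentral.org"]

def check(ip: str) -> None:
    reversed_ip = ".".join(reversed(ip.split(".")))
    for zone in BLOCKLISTS:
        query = f"{reversed_ip}.{zone}"
        try:
            answer = socket.gethostbyname(query)
            print(f"{zone}: LISTED ({answer})")
        except socket.gaierror:
            print(f"{zone}: not listed")

if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "203.0.113.10")  # example IP
```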
Hello !
I am really hesitating between the two versions, especially because the N100 is capable of HW transcoding, and I will mainly be streaming content from Plex for 4 people at the maximum. I would normally take the N100, but I saw that it only has one RAM slot, and I was wondering if it's worth putting $100 more into the Ryzen 7 5825U one, even though I will mainly be doing Plex and storing my files.
Thanks in advance !
I'm thinking about creating a NAS server at home using HexOS, and I want a NAS that's capable of using 2.5-inch SAS enterprise SSDs.
Advice on Building My First Homelab—Budget: $5000
Hey everyone,
I've posted about this before, but now I have a clearer idea of what I need and hope you can help me out.
Given all this, I'd really appreciate any advice on building my homelab within my $5000-$8000 budget, and I hope you can focus on my specific needs. I'm not looking for advice about Proxmox or other alternatives; I've already tested everything, and this is what I'm looking for <3
CPU:
AMD EPYC 7702P
Cores/Threads: 64 cores / 128 threads
Price: Approximately $4,300 / Used: $1,400
Link: https://www.amd.com/en/products/cpu/amd-epyc-7702p
Should I go with a different CPU? Maybe an EPYC 7763, new for $1,489? Or any other option?
Motherboard:
Supermicro H11SSL-i
Price: Approximately $500
Link: https://www.supermicro.com/en/products/motherboard/H11SSL-i
Memory (RAM):
512 GB (16 x 32 GB) DDR4 ECC Registered Memory
Price: Approximately $2,400
Link (Example memory modules, such as Samsung 32GB DDR4 ECC Registered DIMMs):
https://www.samsung.com/semiconductor/dram/module/M393A4K40CB2-CTD/
Primary Storage (for OS and VMs):
2 x 2 TB NVMe SSDs (e.g., Samsung 970 EVO Plus)
Price: Approximately $400 ($200 each)
Link: https://www.samsung.com/us/computing/memory-storage/solid-state-drives/ssd-970-evo-plus-nvme-m-2-2280-2tb-mz-v7s2t0b-am/
Secondary Storage (for data and backups):
4 x 4 TB HDDs in RAID 10 configuration
Price: Approximately $600 ($150 each)
Link (Example HDDs, such as Seagate IronWolf 4TB NAS HDD):
https://www.seagate.com/products/nas-drives/ironwolf-sata-hdd/
Graphics Card:
NVIDIA Quadro P1000
Price: Approximately $400
Link: https://www.nvidia.com/en-us/design-visualization/quadro-desktop-gpus/
Power Supply:
EVGA SuperNOVA 1200W P2, 80+ Platinum
Price: Approximately $350
Link: https://www.evga.com/products/product.aspx?pn=220-P2-1200-X1
Case (Chassis):
Supermicro SuperChassis 846BE16-R920B (4U Rackmount)
Price: Approximately $800
Link: https://www.supermicro.com/en/products/chassis/4u/846/sc846be16-r920b
Cooling:
Dynatron A26 4U Active CPU Cooler for AMD EPYC
Price: Approximately $100
Link: http://www.dynatron.co/product-page/a26
Networking:
Onboard Dual 10 Gigabit Ethernet Ports (included with motherboard)
Operating System Drive:
500 GB SATA SSD (for Host OS)
Price: Approximately $60
Link (Example SSD, such as Crucial MX500 500GB):
https://www.crucial.com/ssd/mx500/ct500mx500ssd1
Hello everyone,
I have created an SMB share on my OMV 7 system. Now I am logged into Windows with my user and would like to delete an incorrectly generated folder called "Control".
However, I always get a message that the /nobody permission is missing... I can't figure out how to give my user these rights.
Does anyone know?
Hello everyone,
Could you please tell me if there is a server case that takes an ATX PSU drawing air from outside, and that can also fit an RTX 4090 with its side power connector? I'm thinking about building a gaming PC in a server case, using liquid cooling, but I want to make sure it has the best cooling possible. My PSU is a Corsair HX1000i.
Thanks in advance!
I love having storage (it's an addiction), and I also like having my ENTIRE Steam library downloaded.
I found a NAS/server for a decent deal. Is it possible to somehow put all my games on the NAS and play off of it on another computer?
I'm also not sure if this is the correct sub for this, but it popped up in a Google search, so yeah.
I'm looking to play around more with Kubernetes storage. I use Kubernetes extensively at work (DevOps engineer) and often try new tools on my local cluster, which I also use for developing my iRacing telemetry software. The cluster I have is usually overpowered for my use case, but is very helpful for stress testing, and occasional game server hosting. Currently I have a cluster that consists of the following:
* 2x EPYC 7401 24c - Proxmox - Dual 10G SFP+
* 3x Control Plane nodes
* 3x RPi 4GB worker nodes (used for low-performance client testing)
At the moment the storage for control plane nodes uses Linstor on the NVMe drives in the EPYC server, but I am interested in trying out some solutions for network storage, moving most of the NVMe drives out to another server where all Kubernetes nodes can pull volumes from.
I just picked up an AM5 X670E board for cheap from an auction, and was thinking of using that along with an EPYC 4004 chip (although they are hard to find in stock) and a 25/40/100Gb NIC. I have looked at switches, and there are some reasonably priced 40Gb options (~£450), but they are likely very loud, which I would like to avoid. I am considering getting 2 dual-port 100Gb NICs and connecting them directly between the new server and the existing EPYC server.
My question is: does this sound like a reasonable idea? What tools should I be looking at for having volumes network-attached in K8s from another machine? Will 40Gb be sufficient, or is it worth jumping straight to 100Gb?
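Whatever backend I end up with (NFS CSI, Longhorn, democratic-csi against the storage box, etc.), I'd want a quick way to smoke-test it from the cluster. Below is a minimal sketch using the official Kubernetes Python client; the StorageClass name `network-nvme`, the namespace, and the size are placeholders.

```python
#!/usr/bin/env python3
"""Smoke-test a network-backed StorageClass by creating a PVC and waiting
for it to bind. "network-nvme" is a placeholder StorageClass name.

Requires: pip install kubernetes, and a working kubeconfig.
"""
import time
from kubernetes import client, config

NAMESPACE = "default"
PVC_NAME = "storage-smoke-test"
STORAGE_CLASS = "network-nvme"   # placeholder

def main() -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=PVC_NAME),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=STORAGE_CLASS,
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    v1.create_namespaced_persistent_volume_claim(NAMESPACE, pvc)

    # Poll until the claim binds.
    for _ in range(30):
        phase = v1.read_namespaced_persistent_volume_claim(PVC_NAME, NAMESPACE).status.phase
        print("phase:", phase)
        if phase == "Bound":
            break
        time.sleep(5)

if __name__ == "__main__":
    main()
```

Note that a WaitForFirstConsumer StorageClass won't bind until a pod actually mounts the claim, so a throwaway test pod may be needed on top of this.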
I've included pictures of the current server for anybody who is interested. It runs in my old PC case alongside my NAS - the SATA card is attached in a not-so-sensible way, it works for me but I apologise... The NAS is built mostly from old PC hardware, so mini-ITX motherboard, i7-8700 and an AIO (cooler height clearance issues)
Can you call over a fiber connection with your PC, without needing an old-school telephone? Full story: I have an AirFiber connection. The ISP made an app where you connect to the Wi-Fi, open their app, it detects that it's the right Wi-Fi, sends an OTP to your phone number, and then you can place a call from your fixed-voice number using your phone. But that app is such a piece of shit, it never works when I badly need it. So I'm asking: can't I just place a call from my PC? It's connected via Ethernet, and I'm pretty sure my ISP allows fixed-voice calling if you have a telephone.
I have an extra PC lying around and I plan to use it as a server, but I am practically new to this. I want the server to do a few things: home automation, game servers (Avorion, Minecraft), and a small NAS for backups for myself or maybe for family as well. I also want to be able to access the NAS from outside.
I have questions, but I am not sure where to start. If you have any resources that I can use to start reading or watching, it would be greatly appreciated.
I have a 3-node Proxmox cluster; one of the nodes has a USB-C enclosure connected and a VM running TrueNAS with USB passthrough.
Has anyone got this to work reliably? My pool gets marked as degraded in TrueNAS and I get the errors in the image.
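Not a fix, but it may help to log exactly when the pool drops out so the events can be lined up with dmesg and the passthrough resets. A minimal sketch that polls `zpool status -x` inside the TrueNAS VM (wiring the log line into a notification is left out):

```python
#!/usr/bin/env python3
"""Poll `zpool status -x` and log whenever a pool stops being healthy.
Run inside the TrueNAS VM (cron or a tmux session); hook the log line into
whatever notification channel you prefer.
"""
import subprocess
import time

def pool_report() -> str:
    out = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True, check=False)
    return out.stdout.strip()

def main() -> None:
    while True:
        report = pool_report()
        if report != "all pools are healthy":
            # USB enclosures often drop out briefly; log the exact time so the
            # events can be correlated with dmesg / passthrough resets.
            print(time.strftime("%F %T"), "POOL NOT HEALTHY:\n", report, flush=True)
        time.sleep(60)

if __name__ == "__main__":
    main()
```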
Now my 10" rack is ready.
Everything is 3D printed to keep everything in place.
From top to bottom:
Cloud Gateway Ultra
Cable pass thru
Lite 16 PoE
Cable pass thru
6x Raspberry Pi 4 4GB
Cable pass thru
3 u guard
Power strips
Hey all. I just finished wrapping up a TrueNAS instance on an old office computer work had given me, and I started thinking about how to turn the NAS into a compact powerhouse of a machine, also running the arr suite. So I purchased an M920x Tiny to start off; the only downside is I'm not sure which PCIe card I can buy/look out for that contains M.2 slots to add to the computer. My plan was to buy 2x 2TB M.2 SSDs for the M.2 slots the computer already comes with, and potentially add 2-4 more with a PCIe card. I already purchased a riser as well, but I just need help finding the last piece of the puzzle. Thank you!
I have 5 hosts in total, each host holding 24 HDDs, and each HDD is 9.1 TiB. So that's about 1.2 PiB raw, out of which I am getting 700 TiB. I set up erasure coding 3+2 with 128 placement groups. The issue I am facing is that when I turn off one node, writes are completely disabled. Erasure coding 3+2 should handle two node failures, but it's not working in my case. I'd appreciate this community's help in tackling the issue. The min size is 3, and there are 4 pools.
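For what it's worth, with k=3 and m=2 an EC pool's min_size defaults to k+1 = 4, so losing one of five hosts (4 shards left) should still allow I/O; one common reason it doesn't is the CRUSH failure domain being osd instead of host, which lets several shards of the same PG land on the host that was switched off. A minimal sketch for dumping the settings that matter, with the pool name `ec_pool` as a placeholder:

```python
#!/usr/bin/env python3
"""Dump the settings that decide whether an EC pool keeps accepting I/O
when a host goes down: min_size, the erasure-code profile (k, m,
crush-failure-domain) and the CRUSH rule. Pool name is a placeholder.
"""
import subprocess

POOL = "ec_pool"   # placeholder: your EC data pool

def ceph(*args: str) -> str:
    out = subprocess.run(["ceph", *args], capture_output=True, text=True, check=True)
    return out.stdout.strip()

def main() -> None:
    print(ceph("osd", "pool", "get", POOL, "min_size"))
    print(ceph("osd", "pool", "get", POOL, "size"))
    profile = ceph("osd", "pool", "get", POOL, "erasure_code_profile").split()[-1]
    # The profile shows k, m and crush-failure-domain; if the failure domain
    # is "osd" instead of "host", one host can hold several shards of a PG.
    print(ceph("osd", "erasure-code-profile", "get", profile))
    print(ceph("osd", "pool", "get", POOL, "crush_rule"))

if __name__ == "__main__":
    main()
```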
I have an M.2 Wi-Fi slot but I'm not sure which card is compatible. Any help is appreciated. My mobo manual says Intel 9560NGW, E-key type. I want to avoid the hassle of returns. #wificard #m2
I currently have a few RPis, USB HDs, a switch, AP, router, and some misc RISC-V stuff. Basically just running NextCloud, Grafana, nginx, and a few game servers, nothing crazy.
Right now I'm looking at the 10" GeeekPi 8U for $144, or the 19" Vevor 12U for $84.
The 19" rack is cheaper, but it really does take up quite a bit of space, space that would be mostly empty with my current setup.
The 10" rack looks way nicer, is a lot smaller, but more expensive and it might be harder to find shelves for it.
There are a few things I might want to get in the future that are full size rack:
DIY GPU server. Throw a few Tesla P40s into a server motherboard that has a few PCIe slots, use it for Local LLM, Stable Diffusion, TTS.
DIY NAS.
Rackmount UPS.
Am I going to regret getting the smaller one? What has your experience been? Anything cool I might want to build that would require a full size rack?
Plan to use:
- Proxmox, with Jellyfin (with QSV transcoding from 1080p to lower res; the collection is 98% H.264 / 2% HEVC), AdGuard Home, Nginx Proxy Manager, maybe Tailscale/Cloudflare Zero Trust. Will run 24/7.
- 1x 120GB SATA 2.5" (or M.2) SSD for the boot drive
- 1x 1TB SATA 2.5" HDD (existing)
- In the future, maybe an additional 1x 2TB HDD or more.
The only thing that concerns me right now is the idle power consumption of the i3-7100T, and even with plenty of research I can't find any conclusive information on its idle power draw. My goal is to have the lowest power draw possible, as it'll be running 24/7. Also, the space provided for the server is really tight, only about a 1 m² 'room', as in the picture.
The server itself will just do what's above; I already have an old PC with an E3-1225 v2 for heavier workloads, to be used as another Proxmox node later.
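Since published idle numbers for this chip vary a lot with the board and PSU, it might be easier to just measure: a cheap wall-plug meter gives the number that actually matters, and on the box itself the CPU package draw can be read from the Intel RAPL counters. A minimal sketch, assuming a kernel with the powercap interface and root access; note that RAPL covers only the CPU package, not drives, the board, or PSU losses.

```python
#!/usr/bin/env python3
"""Estimate CPU package power draw from the Intel RAPL counters.

Reads /sys/class/powercap/intel-rapl:0/energy_uj twice and divides the
delta by the interval. CPU package only; a wall-plug meter is still the
real answer for total draw. Needs root and an Intel CPU.
"""
import time
from pathlib import Path

DOMAIN = Path("/sys/class/powercap/intel-rapl:0")
INTERVAL = 10  # seconds

def read_uj() -> int:
    return int((DOMAIN / "energy_uj").read_text())

def main() -> None:
    max_range = int((DOMAIN / "max_energy_range_uj").read_text())
    start = read_uj()
    time.sleep(INTERVAL)
    end = read_uj()
    delta = (end - start) % max_range   # handle counter wraparound
    watts = delta / 1_000_000 / INTERVAL
    print(f"{(DOMAIN / 'name').read_text().strip()}: {watts:.2f} W average over {INTERVAL}s")

if __name__ == "__main__":
    main()
```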
For this home server, the plan is to use either of these:
- A PC/SFF prebuilt with an i3-7100T and an H110/B150 mobo. I'd get a secondhand prebuilt with an 80+ Bronze PSU like Actel/LiteOn or similar.
- A mini PC like a Fujitsu Futro S940N/Wyse 5070 + an M.2-to-SATA adapter.
Also, yes, I will take on the hassle of jury-rigging the SATA HDD's power from a USB header if the power saving is worth it, even if it's only 10-20W (well, my parents like to complain if I plug in too many things). I do love to modify things anyway.
Well... What do you think? Any advice would be appreciated. Thanks in advance.
Hi everyone, I am new to Reddit as far as posting goes. I wanted to introduce myself and what I have going on.
My name is Brian, and I live in Orange Park, Florida. I am a wire technician for AT&T, installing and repairing fiber internet as well as old copper-based VDSL.
I have been into computers since I was a kid; to put that into perspective, I am 47 years old, so I started out with an IBM 80886/87 desktop, but I have tinkered with other systems as well. I have a spare bedroom in my house that houses all my toys. I have 2 PCs and one iMac: one PC I built for gaming, the other I use for everything else, and the iMac is a project.
Over the last year I have been putting together a home lab. I already had a server running TrueNAS which has been in service since the FreeNAS days. At the first of the year I did a complete rebuild, case and all, as well as a fresh set of SAS drives.
I wanted to build a Proxmox system, so I bought an old HP Z620 workstation cheap on eBay with dual Xeon 2650s, upgraded it to 96GB of RAM, and added 3x 2.5" 2TB SSDs for storage. I found that eBay has a lot of enterprise gear being sold cheap. So far my experience with Proxmox has been good, and I find it very easy to use.
For internet I have AT&T fiber at 5 gig, set up in a smart panel that I retrofitted into an unused coat closet in my living room. At the moment I have 2 switches: one runs a 2.5Gb network and the other a 1Gb. The 2.5Gb side runs over Cat 6 Ethernet to my PC room, and the 1Gb is used for simple things like TVs, etc. I have plans to swap the 2.5Gb switch out for a 10Gb one later on, but of course I will have to swap the 2.5Gb NICs in my PCs for 5Gb ones. Since I have access to fiber rolls, I am considering running fiber to fiber jacks in the PC room.
Since I am just starting out with the home lab experience, I want to buy a server rack/case to house my two units, so don't knock my current server setup location too much; it will change.
It won't let me add more than the one pic attachment; this is my smart panel setup. I have 7 total 2.5Gb Ethernet jacks installed in the PC room alone. I have run Cat 6e to every room in my home, including the garage, which runs off of the 1Gb switch, plus 2 wireless access points at each end of the house.
The PC room has a good-sized closet that I converted into a "server storage" room, complete with two dedicated power outlets on their own breaker and three 2.5Gb Ethernet jacks.
Long post, sorry about that. I am having fun and wanted to share my experience.
Brian
Hello,
I was offered an NVMe PCIe SSD (not M.2, an actual PCIe card), but it needs 4x4 bifurcation to work. My BIOS supports it, but I only have x8 slots left. Is my assumption correct that 4x4 bifurcation needs a PCIe x16 slot to work?
I'm planning to build my own server and am willing to spend on a solid base of components to allow for future flexibility, but I don't know what I don't know in terms of what the future may hold in this hobby. My needs now are relatively simple: a 24/7/365, power-efficient server that at minimum hosts a Bitcoin node, some sort of basic web server for my websites (so I can stop paying monthly hosting fees), and maybe Nextcloud, PhotoPrism, etc. I plan to upgrade/expand the system as needs arise, but I don't want to realize later that I should have added more to the base specs before starting. Anything worth being aware of?
I'm looking to use it as a single-drive NAS and maybe run some docker containers. Reason I don't want to use a real NAS is because I want to back the drive up to Backblaze using my personal plan.
I have looked at the Dell, HP, and Lenovo sites and I get kinda confused as to which models have space for a 3.5" HDD.
Thanks
I have a rack with a bunch of networking equipment and a single server right now - a Lenovo RD450 with 128GB DDR4 Ram, Dual Xeon E5-2630 v3 @ 2.40GHz.
I got it for free, and it has served me really well for my needs. I'm running Proxmox with ~10 different containers. I'm using it for Home Assistant, ZoneMinder (planning to switch to Frigate), Samba/SMB, media/Plex, etc. Nothing too crazy; ZoneMinder is probably the single biggest hog of CPU and memory, but otherwise I doubt I'm using more than half the available resources on average.
That brings me to my problem: I need to drastically reduce the depth of my rack, and this server is huge. It's roughly 30" deep, and ideally I'd like to have something under 20".
I have 2 empty slots in my rack, so my first thought was to look into doing a 4U shallow build, but I quickly got overwhelmed by the hardware choices, and it's starting to feel like what I have is overkill.
Unfortunately, I also have a lot less free time now to tinker with my homelab, and with increasing WAF, a lot of the tasks this server does are now "mission critical" in my household.
I'd like to keep it all contained in one machine, and I need at least 5 drives (RAID 1, an SSD, and a disk for the NVR), but now I'm wondering if I should just buy something off the shelf, like a dedicated NVR for Frigate/ZoneMinder and some HP EliteDesks or another small form factor computer. Or maybe a consumer build in a rack-mountable case?
I guess my questions boil down to this:
- Would pretty much any i5/i7 consumer grade CPU be a massive improvement over my current Xeons? Any reason to go for a newer Xeon vs an i5/i7?
- Given a budget of ~$1000 and 4 available units of space in the rack, what approach would you take? One big box? A consumer grade build? Any other off the shelf options worth looking at?
TIA
I built this project in my garage and have been working on it for a while; now I'm thinking of wrapping it up. All opinions are appreciated on new features to add to the project. Here is the website link.
https://readymag.website/u2481798807/5057562/image-n-hotspot/