/r/homelab
Welcome to your friendly /r/homelab, where techies and sysadmins from everywhere are welcome to share their labs, projects, builds, etc.
Labporn Diagrams Tutorials News
Please see the full rules page for details on the rules, but the gist of it is:
Don't be an asshole.
Post about your homelab, discussion of your homelab, questions you may have, or general discussion about transitioning your skills from the homelab to the workplace.
No memes or potato images.
We love detailed homelab builds, especially network diagrams!
Report any posts that you feel should be brought to our attention.
Please flair your posts when posting.
Please no shitposting or blogspam.
No Referral Linking.
Keep piracy discussion off of this subreddit.
All sales posts and online offers should be posted in /r/homelabsales.
Before posting please read the wiki, there is always content being added and it could save you a lot of time and hassle.
Feel like helping out your fellow labber? Contribute to the wiki! It's a great help for everybody, just remember to keep the formatting please.
/r/sysadmin - Our original home; this subreddit splintered off from it.
/r/networking - Enterprise networking.
/r/datacenter - Talk of anything to do with the datacenter here
/r/PowerShell - Learn Powershell!
/r/linux4noobs - Newbie friendly place to learn Linux! All experience levels. Try to be specific with your questions if possible.
/r/linux - All flavors of Linux discussion & news - not for the faint of heart!
/r/linuxadmin - For Linux Sysadmins
/r/buildapcsales - For sales on building a PC
/r/hardwareswap - Used hardware, swap hardware. Might be able to find things useful for a lab.
/r/pfsense - for all things pfsense ('nix firewall)
/r/HomeNetworking - Simpler networking advice.
/r/HomeAutomation - Automate your life.
/r/homelab
Hello,
I recently got a free Dell PowerEdge R720 and I'm trying to install Alpine Linux on it, but I'm having some problems with the USB ports and the iDRAC.
Here’s what’s going on:
The iDRAC should be at 192.168.1.100, but I can't access it.
It’s not pingable and doesn’t show up on the router’s list of devices.
I tried both the dedicated iDRAC port and the regular network port and changed the network settings, but nothing worked.
When I switch to DHCP, the front LCD shows an IP of 0.0.0.0.
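Not part of the original post, but if you can boot any live Linux environment on the R720, one way to inspect and reset the iDRAC network settings from the host side is via IPMI. A sketch, with the channel number and addresses as assumptions for a typical iDRAC7 setup; adjust to your own subnet:

```shell
# Load the IPMI kernel modules (module names may vary by distro)
modprobe ipmi_devintf ipmi_si

# Show the iDRAC (BMC) network configuration; channel 1 is usual on Dell
ipmitool lan print 1

# Force a static address (values here are examples for a 192.168.1.x LAN)
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.1.100
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.1.1

# Cold-reset the iDRAC itself, which often clears a hung interface
ipmitool mc reset cold
```

On an R720 the front LCD can also set the iDRAC IP directly, which is a good cross-check against what `lan print` reports.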
Additional Info:
Questions:
Thanks for any help!
Hey everyone,
So I'm building out a NAS and have decided I want to go with the HL15 from 45 Drives. I'm stuck on trying to figure out how to spec it and would love some insight if anyone has any.
The HL15 will be populated with 4x 20TB HDDs to start. I'd like a board that supports M.2 NVMe so I can install the OS on one NVMe drive and use the second as a larger cache drive.
I've never used a "NAS" as such, so my experience with network storage has always been Unraid. At the moment I'm planning on going with TrueNAS.
I also have a requirement for 10 gigabit. I don't mind having to drop in, say, an Intel X710 if there's no board you'd recommend with built-in 10 gigabit.
So basically I'm asking what CPU/motherboard/RAM/LSI HBA you'd recommend. I'm assuming something like a 750W PSU will be good enough. It will be used purely for storage; I won't be running any apps. It'll house the majority of the media for my Plex server, plus system backups and whatnot from other devices/servers on the network.
Oh I should say I'm trying to spend no more than around $2k (not including hard drives) on the build. The HL15 barebones is ~$900 so that leaves me about $1100 to spend on the rest of the hardware.
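For the 750W assumption, a rough sanity check is easy to do. All the per-component wattages below are ballpark assumptions for a storage-only build like the one described, not measured figures:

```python
# Rough PSU headroom estimate; every figure in `loads` is an assumption.
def psu_headroom(psu_watts, loads):
    """Return (total estimated draw, remaining headroom) in watts."""
    total = sum(loads.values())
    return total, psu_watts - total

loads = {
    "4x 20TB HDD (spinning, ~8 W each)": 32,
    "HDD spin-up surge allowance": 60,
    "CPU (65 W desktop-class chip)": 65,
    "Motherboard/RAM/fans": 40,
    "2x NVMe": 15,
    "Intel X710 NIC": 10,
}

total, headroom = psu_headroom(750, loads)
print(f"Estimated peak draw: {total} W, headroom: {headroom} W")
```

Even with generous numbers, a pure-storage box with four spinners and no GPU sits far below 750W, so the PSU choice is unlikely to be the constraint here.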
Hey sub! I've come across a site that sells rack mounts for various machines/network equipment. I currently have one Raspberry Pi 4 and two OptiPlex machines (one micro form factor, the other small form factor). At the moment they're all just stacked on top of each other in my rack, which isn't necessarily "bad" I don't think, but I'd like them to be properly racked instead of sitting on a shelf. I could also use the freed-up space for the monitor I use when I switch between outputs to troubleshoot without SSH (I don't have a KVM yet, but it's on my list).
Anyway, this site seems to have pretty good mounts, but they don't have any reviews, and I just want to see if anyone else has heard of them or knows someone with their stuff who could share some details. They're based in Austria, so it'll have to ship overseas and all that jazz.
My concern for the Pi setup is that it currently serves primarily as a NAS, so it has an SSD hooked up to it. I want to still be able to plug in the SSD, unless I get an M.2 HAT, which I'm 90% sure doesn't exist for the 4 but does for the 5. It would definitely be a good project and a great addition to my lab if/when I pull the trigger on the mounts I'm looking at.
What I'm planning on doing is getting one at a time: make sure everything fits and gets set up first, then continue with my other machines.
Any thoughts/information greatly appreciated!
So while I am doing a pretty big redesign of my home infrastructure I was thinking about backups.
I currently use a PBS vm that is hosted on a machine separate from my main cluster that sits on a hardware RAID 10 array.
I am in the middle of changing from a Windows cluster to a PVE cluster so I also have my dedicated physical VEEAM box for the Windows cluster.
My idea was to have my PBS server physically take the place of my Veeam server, but once this migration is finished I will have one physical server left over. It's the same spec as my backup box, so I was thinking maybe I ship it to a colo (it's 1U) and run it as an offsite backup to get my magic 3 copies. Now, getting into the nitty gritty.
I have always been a proponent of RAID 10 when you have 4 or more disks, and these 2 backup boxes have 4x 3.5" drive slots, so storage capacity cost isn't a big issue.
I was reading a blog where someone was stacking ZFS on top of RAID 10 to add checksumming, but I don't want to go down that road. So I figure if I go with ZFS RAIDZ2, I can keep my RAID 10 usable capacity while adding some extra protection (any two disks can fail).
So I would have the OS installed on a PCIe NVMe drive booting with Clover, and then 4x 20 TB drives as storage.
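A quick comparison of the two layouts being weighed here, for 4x 20 TB drives. These are raw figures only; ZFS metadata and slop space are ignored:

```python
# Usable capacity and fault tolerance: striped mirrors ("RAID 10") vs RAIDZ2.
def striped_mirrors(n_disks, size_tb):
    # Half the disks hold mirror copies; survives 1 failure per mirror pair.
    return (n_disks // 2) * size_tb

def raidz2(n_disks, size_tb):
    # Two disks' worth of parity; survives ANY two disk failures.
    return (n_disks - 2) * size_tb

print(striped_mirrors(4, 20))  # striped mirrors usable TB
print(raidz2(4, 20))           # RAIDZ2 usable TB
```

At exactly 4 disks the usable capacity comes out the same, and RAIDZ2 wins on fault tolerance; the usual trade-off is that mirrors give better random IOPS, which matters less for a backup target.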
In the professional world I have always used either dedicated backup appliances or separate backup servers, and always Windows, never Linux. Would love to hear what people think though, so send it!
Here's a full system rundown of my setup:
3x HP DL360G9 being chassis swapped to Dell R730s - this is the Windows/PVE cluster
2x HP DL60G9 being chassis swapped to Dell R430s - one is a dedicated Veeam box and the other is being used as additional iSCSI storage, since my DL360s only take SFF drives. I won't need the second DL60 since the R730s are all LFF and capacity is no longer an issue for me, so it would make a great secondary backup machine.
2x HP DL360G8 - These are my homelab machines that I am using to help convert VMs from Hyper-V to PVE; once I am done with the migration I will retire them.
FYI:
Mom would like to buy a home server as a gift for Dad, since he wanted to buy one, and she asked me for help. Here's the problem: I have no knowledge of this whole topic and I'm kind of in a hurry. I've watched multiple videos and read a few articles, but I didn't get any smarter.
She said he wanted something our family can save pictures/videos/movies/music on. I don't know if there are more things he wants the server to do, but that's beside the point.
Now I thought this might be a case for a NAS (from watching videos), but I honestly have no idea. And then the question comes: prebuilt or build it yourself? I theoretically have an old gaming PC, but I don't know if we want to go that far. I feel like this whole home server topic can get really deep and involved, judging by some Reddit posts and the videos/articles I've seen.
So my question is: what should/could I buy or do for the things we want to do with it?
I've heard some good things about Synology, but I really don't know what to buy.
I'd appreciate any advice!!
Edit: As another piece of information: I have no idea what OS, for example, we want, or what OS is good. It's just supposed to be something family friendly for pictures, videos, movies, and music. Maybe other small stuff that I'm not aware of.
I just got my hands on 4 Lenovo M720qs. I'm currently running all my services on another computer running Proxmox. I'm not sure a cluster would be useful in my case, as nothing is too intensive and I don't really need HA. What would you guys do with these things?
So I have a basic server right now: i5-3570, 16 GB RAM, Samsung 850 EVO 250 GB SSD as the OS drive, 2 TB hard drive, AMD Radeon R7 200 series.
When I set it up, I was stupid and chose Windows 10 Home rather than Windows Server or any other server OS, but by now I have downloaded too much to feel comfortable switching the OS.
It currently runs as a local NAS, a Minecraft server, and a Steam game "cache".
I attempted to host plex and a website but both of those failed.
Any fun projects I can run on Windows 10 Home?
Recently I've been investigating approaches for running a NAS at home, and ran across TrueNAS Scale in various YouTube videos, which also exposed me to things like Proxmox, Unraid, etc.
My base case is simple. I've been considering a NAS build for several years, and though initially thinking about something like a QNAP or Synology, my recent investigations have been pointing more in the direction of a TrueNAS Scale setup for my primary use cases. My wife and I currently pay for Google Drive storage for our always-increasing collection of videos and photos of our family, among other documents and such, and I'd like to move away from solely relying on cloud storage for this sort of thing, while also consolidating various collections currently spread across several home devices.
While investigating this, I came across videos describing how TrueNAS Scale can both be our NAS server with ZFS RAIDZ(2/3) redundancy and be used to run Docker images and VMs, complete with hardware passthrough (say, a video card), which got me thinking about how a powerful new machine could take over the duties of many of my existing devices.
Currently, I have the following home-made devices:
I have quite a bit of experience managing custom PC builds, Linux servers, and homebrew routers and such. My day job is software engineering, so I'm technically inclined, and I've been building PCs since I was in high school, so I'm not worried about getting my hands dirty.
My main question is: having watched many videos of TrueNAS Scale and its capabilities, I feel like if I were to build a beefier PC with 16 or more cores, several large WD Red Plus hard drives, lots of RAM, etc., I could run TrueNAS Scale / ZFS on bare metal as my NAS solution, and possibly turn one or more of my devices above into a VM on that same machine. I would likely leave my router alone, as it makes sense in its current form, but my Windows PC could potentially just become a VM with dedicated PCI passthrough of a better video card to allow for VM gaming via Parsec or equivalent.
Both my PC and my Linux server are serviceable, but the hardware is long in the tooth, and it seems as though some of the Linux services I currently use (Home Assistant, Sickbeard, minidlna) could be replaced by either a VM running Arch Linux or Docker instances natively running the same services or alternatives via the TrueNAS Scale GUI. (I did see something about TrueCharts no longer being supported, so maybe that's a problem?)
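As a sketch of what migrating one of those services to a container could look like, here is a hypothetical `docker run` for a DLNA server. The image name and paths are assumptions for illustration, not a tested TrueNAS Scale recipe:

```shell
# Hypothetical: running a DLNA server as a container instead of a native
# minidlna install. Image name, pool path, and options are assumptions.
# Host networking is the simplest way to let DLNA multicast discovery work.
docker run -d \
  --name dlna \
  --net=host \
  -v /mnt/tank/media:/media:ro \
  vladgh/minidlna
```

The same pattern (read-only bind mount of a dataset into a container) applies to most of the services mentioned above.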
If I were to move those devices into VMs or Docker equivalents, I could repurpose the old hardware as "thin clients" to access said VMs, or part them out for the new build (the case of my windows PC, for example, could be good for a TrueNAS build if there are enough drive bays in it).
I've seen many discussions about people trying to run Proxmox with TrueNAS as a VM, either with hardware passthrough or other mechanisms, but most people say it's better to run TrueNAS on bare metal and then use Scale's VM/Docker capabilities on top of that. Has anyone done this and used it as a viable VM-based light gaming / Windows desktop system? I've also seen people suggest the opposite configuration, so it's hard to tell which is better. I think I'm inclined to run TrueNAS on bare metal, though, since my primary concern is data storage/protection.
Basically, I think I'm sold on TrueNAS Scale as a ZFS-based NAS, and that's my #1 priority (but I'm open to alternative/better suggestions if they exist). My secondary goal is to run at least one Windows VM with PCI video card passthrough for remote access / remote gaming from laptops and lower-powered devices in the house. The third goal would be to migrate my Linux services to either Docker containers or another VM on the same machine. Does this sound feasible, and does anyone with a similar setup have suggestions for good hardware to do this? I'm also open to alternative OS/hypervisor suggestions, as this isn't my area of expertise and I've been out of the loop on developments in these areas. Thanks in advance!
Is it worth buying a TrueNAS Mini X+ in 2024? The CPU was released in 2017, which is really my only concern. I do have 5 servers at home: 3 PowerEdge R630s and 2 PowerEdge R430s.
I recently got the PowerEdge R630s from a client when they upgraded. I have a Dell Precision 5820 running TrueNAS Scale. It runs great and houses my ESXi/XCP-ng backups, etc.
I could also just take one of the R430s out of my ESXi cluster and turn it into a TrueNAS Scale box.
Thoughts?
Thanks!
Obligatory "English is not my first language."
I have been researching for a while, trying to figure out the best price/performance of AMD vs Intel. I'm looking for a processor/motherboard combo that will give me the best bang for the buck. I want a NAS that supports Plex/Jellyfin and the *arrs. Still debating between Unraid, TrueNAS, and Proxmox as the OS. The system will have an NVIDIA 1080 GPU. I know, not the best, but enough to transcode. I will be upgrading in the future.
I have an old tower case that can support up to 12x 3.5" drives and 2x 2.5" drives. The proc/motherboard combo I am looking for is 2-3 generations old; I don't need anything current. The main functionality will be media storage for streaming in and out of the network, plus general NAS duties. I can add PCIe SAS/SATA expansion cards if needed to get extra ports. The current network is 1Gb; my plan is to upgrade to 2.5Gb or, better yet, 10Gb in the future.
Thank you for your advice and tips.
I have a PowerEdge R620 and I want to run Proxmox on it, but it's the first enterprise machine I've ever messed with, and I have NO clue where to even start. I've put in the USB, got to the Proxmox boot screen, and clicked the graphical installer; it did the little console thing, then BAM, black screen and nothing else.
I have a GIGABYTE GA-G41MT-S2 with an Intel Core 2 Quad Q9650 @ 3.0 GHz and 8 GB of DDR3 RAM. It's just been sitting in storage for a while and I'm not sure what to do with it.
As for my current setup, I have an ASUS P9X79 Deluxe with an Intel Core i7-3930K, 44 GB of DDR3 RAM, and an NVIDIA Quadro 600 (going to replace it with an M400 once I get a new power supply) as my main home server. I'm planning to use the P9X79 for AD DS, Hyper-V, and a media server (Jellyfin).
Any suggestions on what I can do with the Gigabyte machine? Support of any kind is appreciated, thank you!
Here is a link to my part spreadsheet. I didn't go with a Xeon because of idle power consumption, and my workload is heavy VM use and transcoding. I may put a GTX 1650 Super in there; drives aren't included. Suggestions are always welcome: https://docs.google.com/spreadsheets/d/1BHoEAEXt6RIWHcVgrplZTn9mFtA5kTaAn5DVnpz8Uvk/edit (the affiliate links are just there because they are shorter :))
Hello, I am looking at a Dell R210 with a 4-core Xeon X3430 2.40 GHz CPU to build a pfSense or OPNsense machine. I am a little afraid that this machine is way too noisy for me. I have some computers/servers in my rack already, and it's still surprisingly quiet. Can the fans be throttled down in the BIOS on this server? Is there some sort of "quiet boot mode" so that reboots don't wake up the house? Can the server run with fewer than the standard number of fans?
Yeah, that was a lot of questions at once, I know, sorry xD.
Thank you for reading this far :)
Hey guys. I’m new to the homelabbing scene.
I am trying to make a homelab to get some experience with what I have been learning. Got my CCNA so I know the basics and I want to build off of that. I want to keep learning and working toward becoming a network engineer.
So my question is, what are some projects I can start working on that can give me experience working toward my goals? I want a home lab for funsies too, but I do want to use it to further my knowledge and career.
I have a Zimaboard coming in the mail to tinker on; I got the 8 GB RAM version. I also have an old Alienware laptop that guzzles power, but it's got 64 GB of RAM and a 1 TB NVMe. I have Proxmox loaded bare metal on the Alienware and have been messing around with it, making VMs and some LXCs. But I need some direction from people who know what's up about what I can do to keep learning and working toward becoming a network engineer.
I do watch quite a few YouTube videos on projects and such I can work on but I want a little more direction than “hey add this cause it’s random and cool”.
I apologize if this post isn't as detailed as it needs to be. Let me know if I need to add any details to help y'all out. Thank you so much for your help! Glad to be in a community like this.
Hi everyone,
I'm looking for a quick CPU upgrade from the crusty 2699 v3 that I got as a placeholder for dirt cheap, nothing too major, budget is around £200.
My main use is simple, Proxmox running gaming VMs with dedicated GPUs.
Since I'm on x99, I'm looking to go to the ultimate end of what this platform can do, and 2 options seem to be available, the 2679 v4 and 2699A v4. My board is Asus X99-E-10G WS which I think can handle both these CPUs just fine.
I know there's the i7-6950X, but I've got 256 GB of ECC registered DDR4, so I'm not sure how well that would work out; also, 10 cores seem a bit low for running multiple gaming VMs.
2679 v4: 20 cores, all core turbo of 3.2GHz, 50 MB L3 cache, 200W TDP.
2699A v4: 22 cores, all core turbo of 3.1GHz, 2.7GHz with AVX2 load, 55 MB L3 cache, 145W TDP.
It seems they are roughly equal for gaming. I think the 2679 will be a bit faster due to the higher TDP limit and its better AVX2 performance, but in exchange it has 5 MB less L3 cache.
They both seem roughly the same price, around £200.
What do you think? Do you guys have experience with them, which one do you recommend?
I’ve been setting up my homelab using LUKS for disk encryption, and it’s made me think about the challenges that come with it. Encrypting everything isn’t exactly easy, and I’ve realized that if someone were to steal my disks, I wouldn’t be overly concerned about the data on them.
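For readers unfamiliar with the kind of setup being described, a minimal LUKS sketch looks roughly like this. The device name is a placeholder, and `luksFormat` destroys all existing data on the target:

```shell
# Minimal LUKS2 sketch -- DESTRUCTIVE; /dev/sdX is a placeholder device.
cryptsetup luksFormat --type luks2 /dev/sdX   # prompts for a passphrase
cryptsetup open /dev/sdX cryptdata            # unlock as /dev/mapper/cryptdata
mkfs.ext4 /dev/mapper/cryptdata               # filesystem goes on the mapping
mount /dev/mapper/cryptdata /mnt/data

# Later: unmount and lock the volume again
umount /mnt/data
cryptsetup close cryptdata
```

The operational overhead the post alludes to mostly comes from the unlock step: every reboot needs a passphrase, a keyfile, or something like network-bound unlocking before the data is available.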
Do you still choose to encrypt your data on your systems, or do you think it’s unnecessary? I’d love to hear your thoughts!
With this backplane, will the 5V lights and drive status lights illuminate without all 6 Molex connectors hooked to the PSU? I have been troubleshooting a used 846-EL1 for a few days, running PCIe-to-Molex splitter cables. So far I have managed to connect up to 2 splitters (4 Molex) with the system still able to power on. Unfortunately, when I add the third splitter, the lights come on for a millisecond and the PSU shuts down. I've tested with many different cables, and it appears to be localized to the last splitter on one of the specific Molex connectors on the backplane. With 4 Molex connected I only get 1 green light on the backplane for 12V power; no other lights are illuminated. My system detects the backplane, but no drives operate.
Update: just used SATA power instead; all lights operating. lol
Hey folks,
I'm not sure where this is going; I think I just need some ideas and opinions.
The heart of my homelab is a small computer with a SoC board built around an Intel J4105. I built it about six years ago and it's been running pretty much perfectly, almost 24/7, with very low power consumption. It started as a file server (OpenMediaVault as the OS) with just a few small Docker containers, but over time more and more self-hosted stuff was added. Unfortunately the system is limited to 8 GB RAM, and I am now hitting that limit; CPU load is also becoming more and more of an issue. Storage is okay so far, but I could use more of that as well... of course.
I was almost ready to buy an entirely new rig based on a Topton N100 board with a ton of storage when I realized that used J4105 mini PCs are relatively cheap and available for me right now. Upgrading from J4105 to N100 doesn't really feel like a big step forward, so why not add two more dirt-cheap J4105s and run everything as a cluster? It would add CPU and RAM but little to no storage, so it would solve my current problems, but I would probably run into storage issues next year or so.
But that's new ground for me. How would I cluster the resources? Is that where Proxmox comes into play? But creating a virtual machine with combined CPUs from several nodes is something to avoid, isn't it?
Or would I just manually distribute the containers over all three nodes?
If so: I use a reverse proxy to put SSL in front of every service, but that would mean the traffic between the reverse proxy on one node and a service on another node would be unencrypted, right? Keeping everything on one host made me feel pretty good about that.
Or would you rather go with the new rig or maybe a totally different approach?
Thanks for reading. I hope this isn't too trivial, and maybe someone has some cool ideas for me.
I’ve been curious about the feasibility of running a 90B Llama model at home. I’m not looking for ChatGPT-level speed—just solid text generation, no vision, fine-tuning, or API calls. I want it completely offline.
I’m guessing the hardware costs might be pretty steep, and I doubt I’ll be able to afford it, but I can’t help but dream a bit! What do you think? Rough, shoot-from-the-hip numbers for the hardware I’d need?
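A rough way to size this, without committing to exact hardware: the dominant cost is fitting the weights in memory. Bytes per parameter depends on quantization, and the ~20% overhead figure for KV cache and activations below is an assumption, not a measurement:

```python
# Back-of-the-envelope memory estimate for hosting a 90B-parameter model.
# bytes_per_param: 2.0 for FP16, 1.0 for 8-bit, 0.5 for 4-bit quantization.
def model_memory_gb(params_b, bytes_per_param, overhead=0.20):
    """Estimated (V)RAM in GB; params_b is the parameter count in billions."""
    base = params_b * bytes_per_param
    return round(base * (1 + overhead), 1)

for label, bpp in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{model_memory_gb(90, bpp)} GB")
```

So even aggressively quantized, a 90B model wants on the order of 50+ GB of fast memory, which is why multi-GPU rigs or large unified-memory machines come up in these discussions.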
I found this motherboard in a PC on the side of the road a few weeks ago. The PC looked to be in perfect condition other than some cut wires coming from the power supply. I decided to upgrade my own build using the motherboard from this PC. I identified it as an HP Thimphu IPM17-TP. It has a really bizarre 2x6 front-panel pinout, and I couldn't find a technical manual for this motherboard anywhere after a few hours of searching. I also tried a good number of common pin combinations, but the PC would never boot. I don't have a multimeter, so I can't figure it out that way. I feel like I've run out of ideas, so I've come here for help.
New homelab
Got my eye on a new homelab
I currently have:
- 1x DL380 Gen10, 2x Gold, 384 GB
- 6x R740, 2x Gold and 756 GB mem, 2x NVIDIA M10
- 1x Unity 300
- 1x VNX
- 3x Brocade 6740T
- 2x FC switches
- 1x 7515, 1x EPYC, 512 GB
What would you guys run on this? I'm thinking about:
- VMware Horizon environment
- Plex server
Anyone use these in their homelab setup? If so, running what? I'm debating picking one up, but I can't really find any posts anywhere about people using them at home.
Hello! I'm hoping to get some suggestions on how to proceed with an issue I'm having, I would appreciate any advice or suggestions for further debugging. :)
Basically, I was recently given a new 2x SFP+ NIC (this is the model) and I have some RJ45 and fiber transceivers:
2x 10Gtek 10GBase-SR SFP+ LC Transceiver
2x H!Fiber 10Gb SFP+ RJ45 Module
1x SFP+ LC 10GBase-SR Multi-Mode Transceiver from... somewhere
Keep in mind I'm just messing around, I have no need for a 10GbE connection right now, but some of this stuff was basically free so why not dip my toe in it.
Anyway, my Synology NAS has the 10GbE expansion card, so I plugged one of the RJ45 transceivers into my switch (Ubiquiti Pro Max 48) and moved the NAS from a 2.5 GbE port to the transceiver. It instantly reported the 10GbE connection. Everything is good on both ends.
I installed the NIC into my desktop and started it up (Windows 11). Installed the latest drivers from the Intel site. The two ports both register in Device Manager as "Intel(R) Ethernet Controller X710 for 10GbE SFP+" and "Intel(R) Ethernet Controller X710 for 10GbE SFP+ #2". Both are enabled. Both show "This device is working properly."
I added an RJ45 transceiver to one of the two ports on the NIC. Plugged the Cat6 cable from there to the transceiver on the switch (I temporarily removed the NAS from it). Then I put the two matching fiber transceivers on the other NIC port and on the switch. Connected using this cable.
In Network Settings -> Ethernet, I now see my existing connections plus two new Ethernet connections, both DHCP as desired, both listed as "Not connected". No lights on the card at all.
I tried the NVM Update tool from the Intel site, but all I got was this:
NVMUpdate version 1.42.8.0
Copyright(C) 2013 - 2024 Intel Corporation.
...
Num Description Ver.(hex) DevId S:B Status
=== ================================== ============ ===== ====== ==============
01) Intel(R) Ethernet Controller X710 N/A(N/A) 1572 00:022 Update not
for 10GbE SFP+ available
Tool execution completed with the following status: Device not found.
Press any key to exit.
The switch just gives me the same generic Rx error on both of the SFP+ ports, but I don't think it's the switch anyway since the NAS connects just fine to the RJ45 transceiver on it.
I'm tempted to just chalk it up as a bad NIC, but I'd hate to make that assumption without exhausting any other possibilities. However, I'm basically at the limit of my network problem-solving knowledge, so I come to you all for help. :) Thanks in advance!
My parents live less than 300 feet from my house with line of sight. A cable would be impossible though.
Our local internet is a bit crap (10 Mbps up) so he can’t really view my Plex that well.
What I'd love is for him to be able to watch 4K direct streams.
I’d like somehow for us to be able to fail over our internet to each other.
My home network is all UniFi.
What’s the cheapest way to achieve this?
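For context on why the internet path fails here, a quick back-of-the-envelope with typical (assumed, not measured) Plex bitrates against the stated 10 Mbps uplink:

```python
# Why 4K direct play can't work over a 10 Mbps uplink.
# Bitrates below are typical figures for each source type, not measurements.
uplink_mbps = 10
streams = {
    "1080p Blu-ray remux": 30,
    "4K web-DL": 25,
    "4K HDR remux": 60,
}

for name, mbps in streams.items():
    verdict = "fits" if mbps <= uplink_mbps else "exceeds uplink"
    print(f"{name}: needs ~{mbps} Mbps -> {verdict}")
```

Since every 4K profile exceeds the uplink by a wide margin, a local point-to-point wireless bridge (which can carry hundreds of Mbps over 300 feet with line of sight) sidesteps the ISP entirely for both the Plex traffic and the failover idea.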
Hello, I have my Proxmox installation on a ZFS pool of 2x 1 TB SSDs (boot drives). Should it be possible to replace both disks with 2 TB drives without reinstalling PVE?
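Not from the post, but for reference: one common approach, based on the bootable-device replacement procedure in the Proxmox admin guide, is to swap one mirror disk at a time. Device names below are placeholders; verify partition numbers with `lsblk` and let each resilver finish before touching the second disk:

```shell
# Replace one disk of a ZFS mirror boot pool with a larger drive
# (then repeat the whole sequence for the second disk).

# Copy the partition layout from a remaining healthy disk to the new disk,
# then randomize the GUIDs on the copy.
sgdisk /dev/OLD_DISK -R /dev/NEW_DISK
sgdisk -G /dev/NEW_DISK

# Swap the ZFS partition into the pool and watch the resilver.
zpool replace -f rpool OLD_DISK-part3 NEW_DISK-part3
zpool status rpool

# Reinstall the bootloader on the new disk's ESP.
proxmox-boot-tool format /dev/NEW_DISK-part2
proxmox-boot-tool init /dev/NEW_DISK-part2

# After BOTH disks are swapped, let the pool grow into the new capacity.
zpool set autoexpand=on rpool
zpool online -e rpool NEW_DISK-part3
```

So yes, in principle no reinstall is needed; the pool keeps running degraded-free throughout because each replacement resilvers from the surviving mirror half.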
I've been looking for a chassis to house a small hypervisor with the following features:
2U, Short depth (<15-18")
Hot swap front drive bays
Hot swap PSU
Space for GPU
I've finally found a chassis that checks all the boxes (linked below), but it's sold as a pre-built system, and I would rather pick my own internals and build them. I can't find any details about the chassis brand or model, but I'm wondering if I can purchase it standalone from somewhere else.