/r/homelab
Welcome to your friendly /r/homelab, where techies and sysadmins from everywhere are welcome to share their labs, projects, builds, etc.
Please see the full rules page for details on the rules, but the gist of it is:
Don't be an asshole.
Post about your homelab, discussion of your homelab, questions you may have, or general discussion about transitioning your skills from the homelab to the workplace.
No memes or potato images.
We love detailed homelab builds, especially network diagrams!
Report any posts that you feel should be brought to our attention.
Please flair your posts when posting.
Please no shitposting or blogspam.
No Referral Linking.
Keep piracy discussion off of this subreddit.
All sales posts and online offers should be posted in /r/homelabsales.
Before posting, please read the wiki; there is always content being added, and it could save you a lot of time and hassle.
Feel like helping out your fellow labber? Contribute to the wiki! It's a great help for everybody, just remember to keep the formatting please.
/r/sysadmin - Our original home; this subreddit splintered off from it.
/r/networking - Enterprise networking.
/r/datacenter - Talk of anything to do with the datacenter here
/r/PowerShell - Learn PowerShell!
/r/linux4noobs - Newbie friendly place to learn Linux! All experience levels. Try to be specific with your questions if possible.
/r/linux - All flavors of Linux discussion & news - not for the faint of heart!
/r/linuxadmin - For Linux Sysadmins
/r/buildapcsales - For deals on PC parts.
/r/hardwareswap - Buy, sell, and swap used hardware. You might be able to find things useful for a lab.
/r/pfsense - for all things pfsense ('nix firewall)
/r/HomeNetworking - Simpler networking advice.
/r/HomeAutomation - Automate your life.
I don't know who needs to hear this, but 12V DC industrial PoE switches are a really bad idea. I tried 4 or 5 different ones from AliExpress and Amazon, none worked as advertised, and no seller would stand behind their product when I tried to get support. Most of them didn't even seem to understand the amperage requirements when adding PoE devices. On the better switches I could maybe get one PoE device online.
It seems far better in a 12V DC environment to just supply power and Ethernet separately, and a lot of cameras support this.
Was this a good deal?
Looking to move my home server to this.
Does it support RTX? I use my home server for AI and Plex.
Any good upgrades for this?
I have been using a (used) GTX 1070 for my workstation VM for over 5 years, but I think it's nearing the end of its life. The plastic heatsink frame has spontaneously broken at 2 of the 4 screw attachment points; it still works with some duct tape, but that's obviously not a long-term solution.
I'm hoping to replace it with a used RTX model. This VM is almost exclusively used for photo and video editing and very mild gaming (think Emberward and Bounty of One). I reckon it's the video editing (Adobe Premiere) that would dictate which GPU I should get.
The GPU must be 2-slot width or less, please (my research yielded a lot of RTX cards that are 2.5 slots, even for the mid-to-low-range models, which frustrated me quite a bit). And preferably Nvidia, because passing through AMD has traumatised me for life.
Thanks.
Hey everyone! I'm trying to find the best solution to connect two computers to my dual monitor setup. Here's my current hardware:
Mac Mini M4 Pro
Alienware X16 M2
Main Monitor (27" 4K)
Secondary Monitor (16")
With my previous laptop (Dell G5 5590), I could connect the 16" monitor via a single USB-C cable, and it worked perfectly. I suspect this was because it supported DisplayPort Alt Mode. However, I'm having trouble achieving the same setup with my current devices.
I'd like to find a solution (preferably a Thunderbolt KVM switch) that would allow me to:
Any help or guidance would be greatly appreciated. I'm open to alternative solutions as well!
Hello! I'm a video editor currently trying to build myself a portable NAS to go with my upcoming MacBook Pro.
I currently have a Mini-ITX solution with an i3-7100 and hard drives in a 20L case, but it's way too bulky for what it is, really. I'm trying to get something 4-6 bay and really tiny, like the CM3588 NAS Kit, but I was wondering if there's a solution like that for 2.5" SATA? I've seen some videos about a Raspberry Pi 5 solution, but apparently it lacks transfer speed, not to mention that I'd like a sturdy case for it.
Everything would be connected to a 2.5-gig network. I'm aiming for 16 TB (or more if there are more than 4 bays) and backing up to an offsite hard drive from time to time.
Has anyone seen this?
https://youtu.be/yxlAfS9mh2E?si=oAYEEqsYT7XIbdPo
I don't know much about Azure or cloud instances really... but would this allow me to have a local version of Azure in the homelab? Allowing us to learn a cloud service without paying for a cloud subscription?
I have a staging server, a Dell PowerEdge M630. The OS is Windows Server 2016 and the hard disks are in a RAID. The issue is that the machine has crashed, I guess.
I am a developer; we don't have any sysadmin on our team right now. Last week I installed Postgres 15.8 on my Windows Server 2016, including the pgAdmin that comes packed with the PostgreSQL installer. I already had Postgres 12.1 with pgAdmin installed on the server. After installing Postgres 15.8, its pgAdmin was not working; it showed the error 'Postgres server could not be contacted'. The old pgAdmin (from Postgres 12) was working fine, but it did not display the Postgres 15 server in its server list. I wanted to work in the new pgAdmin (the one installed with Postgres 15), so I searched for a solution online and found one: delete the pgAdmin.bak file from 'C:\Users\Username\AppData\Roaming\pgAdmin'. I did that and it worked; the new pgAdmin started to work and displayed the Postgres 15 connection in the server list. But after 2 hours my Windows Server 2016 crashed and I was not able to boot the system. After a hard restart it came back up fine, and I quickly uninstalled Postgres 15 and its pgAdmin from the machine. But the system crashed again after uninstalling Postgres 15. Now I am not able to boot the system; it's been 2 days and it still won't boot.
I at least want the data on that server's hard drives (RAID).
The hardware logs show these errors:
2) CPU 1 machine check error detected
3) CPU 2 machine check error detected
Current BIOS version: 2.6.0 (Dell PowerEdge M630)
What is the solution, without any data loss or hard drive damage?
It is EOL, so there's no support from Dell.
My current setup uses a router for inter-VLAN routing, and since my router is relatively low end, I'm experiencing slow throughput when routing between VLANs. (It was annoying to see traffic to and from my NAS be so slow, slower than the disks themselves ;-;)
I did some research and found that layer 3 switches can do routing just like routers. So I decided to purchase an L3 switch to handle my inter-VLAN routing and let the router handle only the traffic headed out to the internet.
Additionally, in the future I suspect I might need to put some ACLs in place.
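To make it concrete, what I have in mind is the usual pattern: enable IP routing on the switch, give each VLAN an SVI, and point a default route at the existing router for internet-bound traffic. A rough Cisco-style sketch (VLAN IDs and addresses here are made up, and the router would need return routes for these subnets):

    ip routing
    !
    interface Vlan10
     description Servers / NAS
     ip address 192.168.10.1 255.255.255.0
    !
    interface Vlan20
     description Clients
     ip address 192.168.20.1 255.255.255.0
    !
    ! default route toward the existing router for anything leaving the LAN
    ip route 0.0.0.0 0.0.0.0 192.168.10.254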
TL;DR
Share your L3 core switches or recommend me some!
Thanks in advance!
Looking to get hands-on experience with stuff like VLANs. I also wanna fuck with NAT. (Not in a raw-dog the Internet kind of way, but more just observe how packets move in real-time). Incidentally, my WiFi setup is shameful and I'd like to put actual thought into designing it.
I'm opting for a Mikrotik router for its advanced features. I've seen recommendations for buying an Ethernet (wired) model and an Ubiquiti AP, but that's a bit expensive off the hop for me. I'd like to stay under $300 (CDN) for all gear.
Would a combination like the L009UiGS-RM* and the TP-Link Omada Business WiFi 6 AX1800 make sense? Or is it better to just get a Mikrotik model that also handles WiFi (though I've read these boxes suck really badly at WiFi)?
My apartment is two levels. No brick, but lots of wood, drywall, and too many houseplants. Is the difference in hardware quality between Ubiquiti and TP-Link really so pronounced that it would have a noticeable impact?
*Fibre is unavailable in my current area but I'll move elsewhere eventually.
I am looking for recommendations for a cheap, low-power homelab mini PC that I can buy (third-world country - India, so every dollar counts here).
I am planning to use Proxmox and host a few web APIs and Docker containers that combined serve fewer than about 5 million requests per day at peak. Currently each app is hosted on an individual EC2 t2.micro (1 vCPU, 1 GB RAM), mostly idle. I also have a single Postgres instance that runs on the cheapest Azure PostgreSQL burstable instance (B1ms - 1 vCPU, 2 GB RAM) and uses about 10 GB of storage.
Something with idle power <=35 W.
I have no plans to host Plex/Jellyfin or run a media server, but that would be a plus if it's possible.
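For scale, 5 million requests/day averages out to roughly 58 requests/second, and the plan is basically to consolidate the EC2 apps and the Azure Postgres into containers on one box. Roughly along these lines, sketched as a docker-compose file (image names and versions are placeholders, not my actual stack):

    services:
      postgres:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: changeme
        volumes:
          - pgdata:/var/lib/postgresql/data
        restart: unless-stopped
      api:
        image: my-api:latest    # placeholder for one of the apps currently on a t2.micro
        depends_on:
          - postgres
        ports:
          - "8080:8080"
        restart: unless-stopped
    volumes:
      pgdata: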
I have no idea what equivalent mini PC can handle these requirements, but I have been researching, and these seemed to be within my budget (I'll try to save you as much time as I can and list the major details):
All of these are refurbished products. I looked into N100 mini PCs, but the only major seller here is "SKULLSAINTS" (well, idk about that name), and the base model (12th-gen N100) with no RAM or storage costs more than about $150. A Raspberry Pi 4/5 with 8 GB RAM costs about $95 here as a base.
Do you guys have any recommendations or thoughts on this? Thanks. <3
Hi,
I'm building two NAS machines, one ITX and one mATX, both with Intel Core i5-14500 processors.
I need to buy the motherboards and the RAM.
I wanted to ask you for advice on both, since I don't know which models to choose (I want something reliable from a known brand; I don't want a no-name Chinese board).
I need the motherboard to have:
Intel vPro
2.5G ethernet
DDR5
4 or 8 SATA
3 x M.2 (if the board only has 4 SATA ports, I need to use one M.2 slot for an adapter to get another 4 SATA)
It shouldn't be too expensive :) I mean, I'm not going to pay 400-600€ for a server board or something similar.
As for the memory, I don't know what the maximum speed the 14500 supports is, maybe 7000-7200 MHz?
Thanks in advance
I know this is a feature of Llama and ChatGPT, and I have found it really useful to use ChatGPT's API for computer vision to do a certain task within an operation. However, it's expensive. I'd rather reduce the money I've been spending on these subscriptions by bringing that costly operation home. This is similar to the Optical Character Recognition (OCR) technology that Microsoft has made, but ChatGPT is much more advanced at this task and makes fewer errors during the scan.
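To be clear about what I'm trying to replace: today I send an image to ChatGPT's API and get the extracted text back, and I'm hoping for a local equivalent, e.g. a vision-capable model served by something like Ollama, called roughly like this (model name and endpoint are just an example of the kind of thing I mean, not something I've verified):

    # rough sketch: send an image to a locally hosted vision model (assumes Ollama
    # is running on localhost with a vision-capable model such as "llava" pulled)
    import base64
    import requests

    with open("invoice.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llava",  # any local vision-capable model
            "prompt": "Extract all text from this image and return it verbatim.",
            "images": [image_b64],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])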
If anyone knows a downloadable model that can do this, please let me know and I will greatly appreciate it. Thank you!
Hey everyone, I'm new to homelabbing, but I want to get a rack for the equipment I have. I only have an HPE ProLiant DL360 Gen10 and a Cisco 3850 switch. Now I'm considering getting a rack, because I know I'll end up adding more, but I'm trying to decide how big to go. I don't have a big budget, but I've narrowed it down to buying a 12U, an 18U, or a free 48U rack from my school (they're giving it away). (I think it's 48U, but it could be 52U or more; I'm not sure, it's just huge.) Now, 48U feels kind of overkill, I doubt I'd ever fill it, and I also doubt I could ever fit it in my house.
I don't have much budget and I'm not too excited about shelling out nearly $300 for a rack, but a smaller one is far more practical than a giant one. What do I do?
TL;DR: help me pick between spending $250-300 on a 12U or 18U, or picking up a 48U for free.
It was a bit of a struggle, but I finally finished the hardware side of my NAS build. Still deciding on TrueNAS Core vs Scale, but I'm just happy that it all came together and it POSTs. I'm new to NAS, I'm new to AMD, and I'm new to server boards, so feel free to comment on improvements that could be made.
Case - Jonsbo N3
I've always liked, and have only ever built in, the Mini-ITX form factor. The N3 got good reviews and it looks clean, so I went with it. I tried to make pathways for airflow as best I could.
Motherboard - ASRock Rack X570D4I-2T
Searched around for a while to find a Mini-ITX mobo that would support 8 drives and also offer 10GbE. I'll admit I don't have a use in mind for 10GbE, but at least I have the option now. I liked the idea of the OCuLink -> 4x SATA, so I wanted to give it a go. Installation-wise, the motherboard caused me to purchase a different cooler than I originally ordered: despite being an AM4 socket, the board is designed for LGA 115x coolers. At the end of the day it's a nice little unit with a few extra bells and whistles that I could try to utilize later on.
CPU - AMD Ryzen 7 5700G
This is my first AMD build, so not super familiar with their processors. Just wanted something that had native graphics for some light transcoding if the need arose, and also had low TDP to extend life. Also wanted low TDP so I could downsize my cooler for noise.
CPU Cooler - NH-L9x65
Noctua has never done me wrong. Wanted to try and keep the noise down while still getting a bit of performance and not needing to cut holes in the case. I accidentally mounted the cooler in the orientation I didn't want, but I didn't want to re-paste it, and the power is so low I didn't think it would be an issue. If it does turn out to be an issue, I'll re-mount it.
RAM - OWC 32GB (2x16GB) DDR4 2666MHz PC4-21300 CL19 ECC Unbuffered SODIMM
Not super familiar with SODIMM RAM, read some reviews and reddit posts, OWC seemed like a good bet. Will add more later if needed.
SSD - Kingston NV2 250GB M.2 2280 NVMe SSD (PCIe 4.0 Gen 4x4, 3000 MB/s)
I'm typically a Samsung guy, but ended up going w/ the Kingston because it was a good deal. I've never had an SSD fail on me, and Kingston has a decent reputation, so I thought this was an okay decision.
HDD - 4 x 16TB Seagate Exos X18 Hard Drive
Refurbs from eBay. Reviews were good. Haven't decided on RAID setup yet, but I wanted some flexibility, hence the 16TB. Spaced them apart in the chassis for air flow and weight balance until the remaining 4 slots get filled up.
Power Supply - SilverStone Technology SX500-G 500W SFX Fully Modular 80 Plus Gold PSU
This part actually took me the longest to spec out. I wanted to get as small a PSU as I could so I could stay in the efficient range with such a low-power system. It seems like the trend for PSUs in general is to go balls-out on watts. I was hoping to get a PSU with PMBus support so I could play around with monitoring, but the only one I could find was the Corsair HXi1000, which was way too big. Sure, I could monitor power, but all I'd see is how poor a decision I made by oversizing so much and killing efficiency.
Case Fans - NF-R8 redux-1200
Noctua again; self-explanatory. They are case fans and they fit. Set up for exhaust, pulling in from the sides and the front of the case. The N3 came with a couple of case fans from a brand I don't recognize; I might swap them out for Noctuas just for consistency. I might also add another case fan on the front grill near the PSU for a push/pull setup, though I don't think it will be necessary.
I have 2 sets of 5PX rails for 2 of my 4 5PX UPS units. However, I've just tried to mount them into my square-holed rack and realised that the screws used to secure them to the rack are smaller than the thread of a standard cage nut. Is there a specific size of cage nut I need to purchase? What should I do?
I am attempting to build a multi-GPU gaming server with Proxmox, using Nvidia Tesla M40 GPUs. Upon setting up my first VM, everything went smoothly until I got to installing the M40's drivers. I've discovered that no matter which driver I install, the error "Insufficient system resources exist to complete the API" appears in Device Manager. Any ideas why this issue is occurring, and how to fix it?
Details:
Followed this tutorial for passthrough: https://www.youtube.com/watch?v=391GUL5sVy8
Server specs:
Ram: 64 GB (4 x 16 GB) DDR4 2133MHz
CPU: Intel Xeon E5-2696 V3
Mobo: Asus Rampage V Extreme
GPU: 4 Nvidia Tesla M40 12GB
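For reference, the passthrough follows the standard Proxmox recipe from the tutorial; the VM config ends up looking roughly like this (the PCI address and memory value here are illustrative, not necessarily my exact values):

    # excerpt of /etc/pve/qemu-server/<vmid>.conf
    bios: ovmf
    machine: q35
    cpu: host
    memory: 16384
    hostpci0: 0000:03:00,pcie=1
    # host side: IOMMU enabled (intel_iommu=on) and the vfio modules loaded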
Any and all responses are greatly appreciated!
I'm coming into this as a newbie to UPSs in general, and mostly I'm looking for a sanity check and guidance. If I'm heading in a terribly wrong direction, I'd appreciate it if someone could wave me off and tell me what I should be looking for instead.
I have a simple rack at home and I really want to put the whole thing on a proper UPS. Ideally I'd like one UPS to handle it all but that may not be possible. Here's what's on the rack right now:
Total watts: 800-900 watts if everything is at full draw.
It's worth noting I'm going to look at a 5080 (Ti?) with the new generation, which I imagine will bump my power consumption. No other immediate plans, but a bit of headroom wouldn't go amiss. Also, the PC and the laptop are never used at the same time, but I'm operating on the principle that if I assume the worst case, I should always be OK.
So my goals for a UPS are:
The current frontrunner is the Eaton 5PX G2 1440VA, based solely on the fact that people here (and elsewhere) seem to swear by Eaton, and it looks like it gives me more than enough power to run my entire rack with plenty to spare if I need it. Is this overkill? Underkill? Just... right... kill?
Or would I just be better off with a 1000W unit just for the PC and, say, a 500W unit for everything else?
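One bit of napkin math: UPS sizes are quoted in VA, and the usable wattage depends on the power factor, so assuming something like a 0.9 power factor, 1440 VA works out to roughly 1440 x 0.9 ≈ 1300 W, which is comfortably above my 800-900 W worst-case estimate (I know the rated wattage on the actual spec sheet is what really counts).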
So I am looking at moving my 2 desktops that act as servers into a rack with rackmount cases.
Ummm, I know nothing about racks. Help...
I am thinking of this https://www.titanav.co/collections/av-racks/products/titan-av-10ru-19-adjustable-open-rack
Plus 2 4U cases, hence having room for maybe a shelf for a future switch install... Does this work? Any hints, tutorials, etc.? I want it semi-compact to fit under a sit/stand desk or live under my 3D printer.
What are we running version-wise these days on these machines? I got a new server and want to update it, but back in the day updating an HP server would make iLO and its other features worse for homelabbers. Thanks guys.
I just finished setting up my new D-Link DGS-1520-28MP switch, and I guess I wasn't expecting it to be SO LOUD. I knew it was going to be somewhat loud, but man, this thing is very loud. And that's without any PoE devices even plugged in yet.
It sits directly under my workstation PC, so it makes it very difficult to think about anything other than the noise. Is there some type of quiet box I can put it in? Or would it be worth it to replace the fans? I'm pretty new to all this, so hearing it for the first time made me feel a bit discouraged, asking myself, "What did I get myself into?"
However, I do need it, so what are my options?
First off, I promise I’m not farming content for a “gifts for nerds” listicle!
I don’t really need anything for my other hobbies, and thought it might be fun to get something that can tie into the homelab.
I have a NAS and a Proxmox cluster in the closet. I host some services in containers to make my household a little less dependent on big tech. My hardware is fairly locked in for what I need - the only planned addition is a NAS expansion unit, and that’s happening next weekend hopefully.
So far I’ve got:
I’m curious as to what other ideas y’all might have. Thanks in advance!
I'm still setting things up and learning, but I can already tell I'm starting something awesome.
Raspberry Pi 4 running OctoPrint
Raspberry Pi 5 w/ 128GB NVMe running Home Assistant
Netgear GS305
Athlon II running TrueNAS (not shown)
Patch cables are a bigger pain in the ass to build than I expected.
Hi all,
I'm not an IT guy, I just play make-believe in my spare time. I know Unifi is a very clear split between love/hate here in homelab, but as a hobbyist who has neither the budget nor the knowledge to work with 'proper' gear, I've found Unifi perfect. It's helped me gain a real understanding of how things work and build stuff that I never even knew existed, at a price point that's JUST within my financial means (assuming I avoid the 'enterprise' stuff).
However, it obviously has its quirks, and today I may have hit a 'show stopper' that means I need to plan for moving away. I would be grateful if anyone with more experience could offer suggestions or advice on where I go from here.
I am moving from a single Proxmox node to a cluster, with the aim of properly separating my main 'production' network from my playground, my kids'/guest network, and an internet-facing DMZ-type zone, and adding some HA to key services.
Today I switched to a new ISP who provides me with a /29 public subnet. Unfortunately, it did not occur to me that, unlike my previous provider, they use PPPoE. Nor did it occur to me to find out that this is a problem for a UDM Pro. I wanted to be able to have a different public IP for various parts of the network, and a basic google suggested this is possible with NAT or masquerading. However, after failing to get it working, it seems the NAT function cannot be used with a PPPoE public IP. This "might" be fixed at "some point", but a lot of people seem to suggest the work required is beyond the capability of the UDM Pro.
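(For clarity, what I mean by 'NAT or masquerading' here is plain per-network source NAT, which on a generic Linux router would look something like the following; the addresses are from documentation ranges and purely illustrative.)

    # map each internal network to its own public IP from the /29 on the PPPoE interface
    iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o ppp0 -j SNAT --to-source 203.0.113.10
    iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -o ppp0 -j SNAT --to-source 203.0.113.11
    # anything else just masquerades to the PPPoE address
    iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE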
I have only just built my 2-node cluster (NUC 11 i9 / 64GB, Supermicro X10DRi with ONE CPU / 64GB), and the current plan was to replace the R-Pi q-device with a 3rd node.
Both nodes have 4 NICs each: one each for LAN, web GUI, corosync, and migration. The corosync NICs are connected to a separate (non-Unifi) 1Gb switch. The migration NICs are plugged into another separate switch; this is also 1Gb, but the NICs are 10Gb. When I add the third node I plan to have the same 4 NICs, upgrade the migration switch to 10Gb, and experiment with Ceph. This may not be a Unifi switch, as I don't see much benefit.
The LAN NIC on the second node is also 10Gb (the one on the NUC is 2.5Gb and an upgrade isn't feasible), but I was planning to get a Unifi 10Gb 'aggregation' switch to use as the core switch. As my router is the UDM Pro, which has two 10Gb ports, this seems a solid way to give some important VMs access to a fast network connection and create a path for a future expansion to a 10Gb backbone.
The issue with the use of my /29 WAN subnet is obviously a big spanner in the works. I chose not to take the ISP's router as they charge for it, but is it possible to put a modem on the ONT to do the PPPoE thingamajig and do the old bridge-mode trick? And if so, can the UDM then collect those as static IPs? And would the masquerading then work?
If not, what would you suggest I do? Preferably keeping as much of my Unifi gear as possible, but that's negotiable, especially if the answer is an 'enterprise' device that costs more than my car!
One thought was a router VM on the cluster, but that seems very brittle and prone to failure.
Ultimately, if there's no good solution I will just have to give up. I'm aware that using a different public IP does little to increase security between the LANs, but I am willing to put in a fair amount of effort if necessary, as it will make me feel like an absolute badass, and I've got 354 days to try fixing it...
I'm looking to run Plex and host game servers for my friend group. I have zero experience with Linux, TrueNAS, or Proxmox.
I'm looking for recommendations on what OS to run and whether I should run VMs or just install the SteamCMD stuff directly onto Proxmox/Linux. It seems like running TrueNAS as the main OS could be wasteful for RAM?
My server will be unattended for ~5 months every year when I leave the state, so I plan to take a backup drive with me when I leave, as I'm located in a hurricane-friendly area. I'm not sure if I'm better off running two 12TB drives in RAID 1 for this purpose and bringing one with me, or just doing a software backup of the stuff I want and bringing a separate 12TB.
Additionally, I'd like recommendations on what to use for my Plex & storage RAID/Unraid (4x 12TB drives). I'd like to be able to add drives in the future, if needed... RAID-Z1, RAID 5? (Rough sketch of what I mean after the hardware list below.)
My hardware:
Fractal Define 7
i7 7700k
32gb ram
LSI 9300-16i HBA
256gb SSD - OS/Game servers
2x 1tb WD Black - Working files (Design/Coding) - raid1/mirror
4x 12tb HGST - Mostly for Plex but also for my personal backup - RAID-Z1/RAID 5
1x 12tb HGST - Hot Swap backup of important stuff for when I leave town
1x 12tb HGST - Cold swap for drive failure
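As mentioned above, here's a rough sketch of what I mean for the 4x 12TB drives if I go the ZFS route (pool name and disk paths are placeholders):

    # single RAID-Z1 vdev across the four 12TB drives (one-disk redundancy)
    zpool create -o ashift=12 tank raidz1 \
        /dev/disk/by-id/ata-HGST_1 \
        /dev/disk/by-id/ata-HGST_2 \
        /dev/disk/by-id/ata-HGST_3 \
        /dev/disk/by-id/ata-HGST_4
    zfs create tank/media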
Earlier this year I performed a significant storage upgrade on my server. I replaced 10 8TB WD white-label drives with 11 18TB WD HC550s. My case is the Fractal Design Define 7, so I literally have it capped out at 11 HDDs right now. The thing is, while I did upgrade my storage, my old 8TB disks are still totally functional with no early signs of failure, and I'd like to keep using them. Rather than get a different PC case, I'd like to build an external JBOD enclosure for the 10 8TB disks.
The easiest and jankiest solution I've thought of would be to use a bracket like this and pair it with a used SATA backplane from eBay. I would install a SATA/SAS HBA with external SAS ports in my server (some LSI 16e card, perhaps). I did see a few JBOD enclosures that could host 4-5 drives, but they were all wildly overpriced for what they offered: basically a plastic box with some passthrough SATA ports for the cost of a brand-new PC/server chassis.
I'm hoping to find some inspiration here, or see if anybody else already has a solution for this problem. I really don't want the solution to be much larger than the footprint of the drives, backplane, and fans. Meaning buying another PC case with enough HDD slots wouldn't be a great option for me.
Here's the situation. I have an older Supermicro rack-mount server that, once upon a time, was my primary NAS running FreeNAS. Later I migrated to running a Synology DS916+, which is my current primary NAS.
My big question is whether or not it is worth investing any energy and/or money into using some or all of the Supermicro build in the present. I've low-key got the expansion/tinkering itch at the moment. I have this thing on hand and want to potentially do something with it, like either making it a mostly-cold-storage box to back up the Synology and other devices, or maybe even rebuilding it with a new board and processor.
Here's a breakdown of the case and contents of the server I have currently collecting dust in the basement:
In addition to that, I have all sorts of random disks ranging from 1-4TB lying around, and I'm not opposed to taking advantage of sales on refurbished server disks to stock the machine with larger disks. I'd prefer a JBOD approach rather than needing to buy 10 matching disks at a time, for what it's worth.
But my big question is this: is it even worth toying around with, or is the hardware so old and inefficient that I'd be better off investing my energy in something totally different? One of the reasons I moved away from it was simply that it sucked down like $30 a month in power.
If it's not a "brother, why are you even spinning up something that old?" situation then let's move onto the question I posed in the title. What would you do with the box? And what would you recommend I do with the box if my primary interest is data backup, simple home media server stuff (running an -arr stack), spinning up dockers for small projects like Home Assistant or Minecraft servers, and that kind of thing? Again, I have that DS916+ but it's feeling a little cramped and underpowered these days.
I'm open to any kind of advice based on the great experience you guys have. Do I just let it continue to rust? Do I turn it into a boot up once a month JBOD rig to use as a secondary backup? Do I throw a new board and chip in there and breathe completely new life into the machine? No idea what my best-next step would be if I want to get use out of this 16-bay case (and, potentially, the hardware in it).