/r/selfhosted


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web services, and online tools.

Welcome to /r/SelfHosted!

Google Photos Mega Thread

While you're here, please Read This First

And why not visit the Official Wiki GitHub?
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

 

For Example

 

  • Service: Dropbox - Alternative: Nextcloud

  • Service: Google Reader - Alternative: Tiny Tiny RSS

  • Service: Blogger - Alternative: WordPress

 

We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.


 

What Is SelfHosted, as it pertains to this subreddit?

The Rules

Read about our Chat Options

Related Subreddits

Useful Lists

Relevant Podcasts

  • Insight, information, and opinions
  • Relevant Interviews
  • Self-hosted tool debates

/r/selfhosted

415,710 Subscribers

1

Looking for Recommendations for My Media Server Setup (Jellyfin, Jellyseerr, arr suite, Usenet, Docker, ...)

Hi all! I’m new to the self-hosted world and working my way around building a media server setup, mainly for me and family, and could use some advice from those of you who’ve been down this road. I've done a fair bit of research so far, taking inspiration from YAMS, the Mediastack guide, Trash Guides, the *arr wiki, and a lot of posts on this and other subreddits (like the recent post about captainarr). I also work in IT, so I’m comfortable with containers, Linux, and server management. What I have so far: a simple setup with the *arr suite (Prowlarr, Radarr, Sonarr), SABnzbd, Jellyfin, and Jellyseerr, all running in Docker containers.

Here are my main questions:

  1. Dual Version Downloads (1080p and 4K): I’d like to download both 1080p and 4K versions of content to avoid transcoding in Jellyfin (not all my devices are 4k). From what I’ve gathered, it’s recommended to set up two Radarr/Sonarr instances and sync them with lists (per Trash Guides), saving completed media in separate folders like media/movies-hd and media/movies-uhd.
    • Is my understanding correct? Is this the best way to go about it?
    • Should I configure the HD Radarr instance to handle 720p–1080p upgrades, while the UHD Radarr instance is set to only grab 4K when available? Any other advice for this approach?
  2. Merging Versions in Jellyfin, is it a no-no: Initially, I wanted Jellyfin to merge the two versions of a movie in a single library, so users can select which version to play (and not have to go in two different libraries). I understand that to do this, files need to be in the same subfolder (per Jellyfin docs).
    • Is it possible for both Radarr (hd and uhd) instances to write to the same folder (e.g., without using -hd and -uhd) to allow merging by Jellyfin like the docs describe? I read that this isn’t recommended, any advice on this setup?
    • Has anyone tried the Jellyfin Merge Versions Plugin? Would this be helpful here?
    • Would you recommend sticking with separate libraries instead? Am I overthinking this approach?
    • Any other suggestion from people who have been where I am now?
  3. Automatic Cleanup Rules: I am not a big hoarder, so I’d like to automate the deletion of older or less-watched media with some rules, such as keeping content that’s been watched by X users, protecting favorites, deleting items older than X weeks/months, ...
    • Has anyone had success with any plugins for this, like MUMC or Media Cleaner? Would love any pointers!
  4. Hardware Setup: I’m also fairly new to the hardware side (currently using a spare PC I had lying around) and liked what I saw in SpaceInvader One’s YT videos. I’d like a setup where I can add HDDs incrementally, ideally of various sizes (e.g., starting with 3x12TB, then adding a 20TB next year, and so on).
    • Unraid seems like a solid choice for this approach, but are there other open-source recommendations? (I know, kind of a broad question, really open to suggestions!)
    • I'm thinking of going with a simple setup; no RAID really needed, I guess, as my main goal is not really to collect/hoard, so I won't mind if I lose "everything" someday
  5. This one is not really a question, but the next steps I have in mind are to provide external access to Jellyfin/Jellyseerr for when I am not at home (I already have a domain name for other stuff, but I'm not sure whether I should use Cloudflare or Traefik, and I'm not clear about SSL certificates).

I’ll likely have more questions down the line! I enjoy diving into the details and experimenting, so feel free to drop any links or resources, happy to read and learn from knowledgeable folks. Thanks in advance for any guidance you can offer!

And sorry for the big post ;)
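For the dual-instance approach in question 1, a minimal docker-compose sketch might look like the following. This is only an illustration of the two-instance/two-folder layout; container names, host ports, and paths are my own examples, not from any of the guides mentioned:

```yaml
services:
  radarr-hd:
    image: lscr.io/linuxserver/radarr:latest
    ports: ["7878:7878"]
    volumes:
      - ./config/radarr-hd:/config      # separate config per instance
      - /data/media/movies-hd:/movies   # HD quality profile targets here

  radarr-uhd:
    image: lscr.io/linuxserver/radarr:latest
    ports: ["7879:7878"]                # different host port, same app port
    volumes:
      - ./config/radarr-uhd:/config
      - /data/media/movies-uhd:/movies  # 4K-only profile targets here
```

Each instance then gets its own quality profile (720p–1080p upgrades on the HD one, 4K-only on the UHD one), with list sync between them as Trash Guides describes.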

0 Comments
2024/11/04
15:54 UTC

0

Warning: STRATO AG hosting - Aggressive debt collection despite automatic service termination

Hey fellow selfhosters,

Just wanted to share my terrible experience with STRATO (a German hosting company) so you don't make the same mistake I did.

So here's what happened:

My credit card expired, and like any normal hosting service they should just terminate the service, right? NOPE. Instead these guys:

  1. kept the service running (without telling me)
  2. sent some emails I probably missed
  3. started adding fees
  4. sent actual physical letters to my home threatening with debt collectors???

The best part? All this drama for 3,08€ of actual service that they turned into 9,09€ with their bs fees!!

Timeline of this joke:

  • Service cost: 3,08€
  • Random fees they added: 6,01€
  • Final amount they demanded: 9,09€

I'm not making this up, guys, they actually sent debt collection threats TO MY HOME for 9€ lmao

Look, I'm not saying their hosting is bad (it was fine I guess), but what kind of company does this? Every other provider I used just stops the service if you don't pay. Simple. These guys instead decide to play debt collector for pocket change.

Better options that won't try to scam you:

  • Hetzner
  • Contabo
  • OVH
  • (add yours in comments, just not STRATO lol)

Yes, I paid them (today actually) just to make them go away. But seriously, save yourself the headache and go with literally any other hosting company.

TLDR: STRATO will keep your service running without payment, then act surprised you didn't pay and threaten you with debt collectors. Just don't.

ps: sorry for any grammar mistakes, English isn't my first language!

4 Comments
2024/11/04
15:50 UTC

0

What private-use top-level domain do you use?

I wonder which of these domains is the most popular for local/private use on this sub.

I know you really shouldn't use .local, but some people use it, so I added it as an option.

I didn't add "custom registered TLD" on purpose; this poll is only about those "special-use" domains.

If you use another, feel free to write it in the comments.


16 Comments
2024/11/04
15:32 UTC

1

Help with Fail2ban to Jellyfin via NPM

So I am trying to protect my Jellyfin instance from brute force. I am using fail2ban, but I keep running into an issue where it blocks access from the client machine that is trying to access Jellyfin: the IP is being banned, but it is blocked from reaching the reverse proxy as well. The config I am using is below.

[jellyfin]
backend = auto
enabled = true
port = 8096
protocol = tcp
filter = jellyfin
maxretry = 3
bantime = 12h
findtime = 43200
action = iptables[name=jellyfin, port="8096", protocol=tcp, chain=DOCKER-USER]
logpath = /srv/docker/jellyfin/library/log/*.log
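A hedged guess at the cause: an iptables ban inserted into the DOCKER-USER chain drops the banned source IP for container traffic generally, and since clients reach Jellyfin through NPM rather than port 8096 directly, the ban ends up cutting them off from the proxy (and everything behind it) too. At minimum, keeping trusted ranges out of the ban avoids locking yourself out; the ranges below are examples, adjust to your LAN and Docker networks:

```ini
[jellyfin]
# never ban loopback, the LAN, or the NPM container network (example ranges)
ignoreip = 127.0.0.1/8 192.168.0.0/24 172.18.0.0/16
```

Note too that when traffic arrives via NPM, Jellyfin's log may record the proxy's IP unless NPM forwards X-Forwarded-For and Jellyfin's "Known proxies" setting trusts it, so it is worth checking which IP the jail is actually banning.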

0 Comments
2024/11/04
15:31 UTC

1

how to port forward

So I was hosting a Minecraft Java server on my PC, using a mobile hotspot for internet, and I was having problems port forwarding. Can anyone help?

1 Comment
2024/11/04
15:24 UTC

2

Is specific GPU needed?

Hey guys, I’ve got myself into a corner. We have been talking with our colleagues about asking the firm to spare some budget for machine-learning research. They have approved it and given us a budget of around $1,000 for a small “server”, and I have been chosen to pick the hardware components. My question is: I could pick a decent GPU for transcoding and some small ML-related stuff, but I have no idea about the options at this budget. Can you recommend some available GPUs? We would use it to train a model on specific types of documents, try to deploy an LLM, and maybe some Whisper, Piper, and other things to test it out.

1 Comment
2024/11/04
15:23 UTC

3

Can't torrent with protonvpn wireguard + ubuntu server 24.04.1

I need some help getting my desired functionality working with ProtonVPN, specifically Wireguard UDP. Allow me to describe my network topology: I run OPNSense as my network firewall and gateway. I run a dedicated VM (let's call this "deliverance"), which I would like to utilize as my dedicated BitTorrent client. It runs Ubuntu Server 24.04.1, and deluged as the torrent software. I have to use wireguard manually, because the protonvpn-cli client does not seem to work on ubuntu 24.04.1. I am using the configuration wizard to generate a configuration to a p2p server in us-az, with a low load (sub 30%). I'm not entirely clear on what material changes the "Moderate NAT" and "NAT-PMP" settings apply when downloading this configuration and whether or not they're relevant to my specific use-case, but given that my use-case is anonymous torrent usage, I'm assuming that I need a relatively permissive configuration, so I'm enabling both of these settings. The IP designated for my wireguard config is "10.2.0.2/32".

Now my question: I'm observing a behavior where my torrents establish some kind of initial connection when I initially start deluge, but they pretty much immediately drop to 0. I'm assuming this is because I'm unable to establish p2p connections, but I don't know what the problem is. I suspect that I may need to configure something in my OPNSense firewall to allow for this, but I don't know what that might be, or if that's even the correct place to look. I know that I can use wg-quick up <config> to establish a connection with the proton servers, and I can update the machine while connected, and reach external services, so general connectivity is established. However, I suspect it's the p2p traffic that is the problem. The machine is not running a firewall itself, to my knowledge.

Can you please advise me or point me in the right direction here? I'm unsure what to even look for or verify.
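One thing worth checking, offered as a sketch rather than a diagnosis: ProtonVPN only forwards inbound peer connections if you request a port via NAT-PMP, and the mapping expires after 60 seconds unless renewed. Assuming the usual 10.2.0.1 WireGuard gateway (verify against your config), a renewal loop with natpmpc looks roughly like:

```shell
# request/renew forwarded UDP and TCP ports from the VPN gateway,
# re-requesting well inside the 60-second lifetime
while true; do
    natpmpc -a 1 0 udp 60 -g 10.2.0.1
    natpmpc -a 1 0 tcp 60 -g 10.2.0.1
    sleep 45
done
```

The public port natpmpc reports is the one deluged should listen on. Without this (or with NAT-PMP disabled when generating the config), inbound p2p connections get dropped at Proton's end, which would match transfers stalling at 0.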

2 Comments
2024/11/04
15:03 UTC

2

Self Hosted Pure Code "Landing Page" System with WYSWIG If Needed

We do a lot of landing pages with custom code. What I would like to find is something that does the following:

  • Allows you to have one "platform or software"
  • Provision Projects
  • Each project is a landing page with a different URL or hosted on the master URL for preview.
  • Can contain completely custom code in multiple files OR can be edited with a WYSIWYG for quick changes
  • This is almost like Webflow, but for actual developers, giving me the ability to manage everything from one place, make quick changes, provision or duplicate projects, etc.

Does anything like this exist?

0 Comments
2024/11/04
14:47 UTC

7

Receipt Wrangler November Update

Hello self hosters!

For those of you who are new, welcome! Receipt Wrangler is a self-hosted, AI-powered app meant to make managing receipts easy. It can capture receipts from desktop uploads, mobile app scans, or email, or you can enter them manually. Users can itemize, categorize, and split them amongst users in the app. Check out https://receiptwrangler.io/ for more information.

Another whirlwind of a month for me in my personal life, but hey, some stuff got done! Let's jump into what got finished.

Development Highlights

Currency Formatting: In System Settings, admins can configure how currency formats are displayed across the app; this covers display as well as input masking, on both desktop and the mobile app (v1.7.0).

Documentation Updates: This month a big chunk of documentation was added about Receipts, https://receiptwrangler.io/docs/category/receipts; though I felt receipts are kind of straightforward, it was much needed since there is a lot going on. I also added contribution guidelines for those who may be interested in contributing.

Display Version Number (Desktop only): This is a small update, but on desktop there is now an about section in the avatar menu. This will display some links about the project, and also a version number if it exists, as well as a build date if there is one.

Versioned Monolithic Docker Images: For those of you who stick to using versions instead of latest, Receipt Wrangler's Docker images are now versioned. The only version currently available is v5.4.1.

Decreased Monolithic Image Size: The size decreased from ~4 GB to ~2 GB after moving to smaller base images. The image size is really only bloated due to EasyOCR.

Microservice Docker Images Deprecated: Looking at the download statistics on Docker Hub, it is pretty clear that the microservice images are not used very much. As a result, they are being deprecated and no longer updated. If these images are actually needed, they can be reimplemented, albeit in a cleaner way. For those of you who are using microservice images, head over to https://receiptwrangler.io/docs/category/configuration-examples to find the database you are using, then copy the monolithic docker-compose.yaml file and transfer over the environment variables. There will be no data loss as a result.

That's it for the highlights. As mentioned last month, I will be inactive most of November. There are leftover development items to work on that carried over from last month, and those will be continued in December. November's development will consist mostly of random bug fixes that I have time to get to.

Thanks for reading as always!

Noah

1 Comment
2024/11/04
14:35 UTC

0

Help with Nginx proxy manager routing Plex

I am using Nginx Proxy manager that routes plex.mydomain.com to my server's dockerized Plex @ 192.168.0.0:32400. It works fine.

However, to access Plex without going through their servers, you need to go through 192.168.0.0:32400/web. plex.mydomain.com/web doesn't work – it redirects to Plex's login page, even though I have my local IPs and dockerized NPM IP added to the List of IP addresses and networks that are allowed without auth both via mask and explicit: 172.24.0.0/24,192.168.50.0/24,192.168.50.73,172.31.0.2

Any ideas where I went wrong? Thanks in advance!

2 Comments
2024/11/04
14:30 UTC

1

need feedback regarding personal dashboard

Hey! I'm building a self-hosted personal dashboard. Do you guys have any feature recommendations you would like on a dashboard? I have already built the base app, I just need more features to work on.

2 Comments
2024/11/04
13:34 UTC

1

TLS handshake issue when using internal DNS?

I'm trying to get my selfhosted services to work properly with TLS certificates. I'm using a combination of:

  1. Pihole/Unbound: For DNS (incl. pointing *.mydomain.page to my servers local IP)
  2. Caddy: As a reverse proxy
  3. Cloudflare: As domain registrar, where I'm using a proxied connection.

My router is pointing all connected devices to use the Pihole/Unbound DNS combination, which works fine. All my *.mydomain.page queries are resolved to the correct IP of my home server.

The issue is that, when a client accesses abc.mydomain.page on the local network, I get a certificate error (ERR_SSL_PROTOCOL_ERROR). This does not happen, however, if the client has recently visited abc.mydomain.page from an external connection. Within x amount of time, the client can reconnect to the local network, visit abc.mydomain.page, and not get the certificate error.

This leads me to believe that there is an issue with the way that the clients validate the TLS certificates the first time they are accessed.

If possible, the goal is to keep it so that clients that are connecting from the local network will access the server without leaving the local network (access the server via the 192.168.xxx.xxx address). Clients from outside the network will access through cloudflare (proxied connection). Both to be using the mydomain.page address.

Caddyfile

...
*.mydomain.page {
        tls {
                dns cloudflare {env.CLOUDFLARE_API_TOKEN}
        }
}

abc.mydomain.page {
        reverse_proxy :13378
}
...

/etc/dnsmasq.d/98-selfhosted.conf (used by Pi-hole)
...
address=/.mydomain.page/192.168.xxx.xxx   # local IP of the server



EDIT:

I tried visiting on my desktop for the first time (to inspect the cert), and I get no errors! It seems to be an issue on mobile only (tried both Firefox and Chromium-based)... Just assumed it would work the same.
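To narrow down which certificate the failing mobile clients are actually being offered, it may help to query the LAN address directly with SNI set; the IP below is a placeholder for the server's local address:

```shell
# show the certificate Caddy serves locally for abc.mydomain.page
openssl s_client -connect 192.168.xxx.xxx:443 \
        -servername abc.mydomain.page </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```

If this shows a valid wildcard certificate, the problem is on the client side; if the handshake fails outright, Caddy may not have obtained the wildcard cert yet when hit locally.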

6 Comments
2024/11/04
13:18 UTC

1

Nginx Proxy manager and different subnets

Hello there!

I've recently found out about the beauty of reverse proxies and I've been poking around with them for a few days, until I faceplanted into a problem wall.

Here is my problem:

I have a minecraft server in subnet 192.168.10.0/24 with the dynmap plugin that exposes a web server with the map of the game world on port 8123

I have nginx proxy manager on the same physical server in subnet 192.168.0.0/24, that is exposed on the internet on ports 80 and 443.

I set up the host forwarding and it works flawlessly, but only with services on the SAME subnet, like my Jellyfin for example.

As soon as I tried to forward to the aforementioned Minecraft server, it manages to find it, but the speed is so incredibly low that the website times out and it's impossible to use. I get the page title and, if I'm very lucky, one or two tiles of the map (which are simple images at the end of the day).

I tried moving the Minecraft server to the same subnet as NPM and the problem disappears, but I would much prefer to have my game server in a different subnet for various reasons.

So Is there something I'm missing? Is a reverse proxy server not supposed to work with different subnets?

I can share more information about the setup if needed,

Thanks in advance.
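Not a definitive answer, but "connects fine, then large responses crawl" is the classic signature of an MTU/fragmentation problem on the inter-subnet path, so that might be worth ruling out first. A rough check from the NPM host (the dynmap IP is a placeholder):

```shell
# time a direct cross-subnet fetch from the NPM host to dynmap
curl -o /dev/null -s \
     -w 'connect=%{time_connect}s total=%{time_total}s\n' \
     http://192.168.10.x:8123/

# probe path MTU: 1472 bytes of payload + 28 bytes of headers = 1500,
# with the don't-fragment bit set
ping -M do -s 1472 -c 3 192.168.10.x
```

If the direct curl is fast but the proxied request is slow, look at how the router handles inter-VLAN traffic; if the 1472-byte ping fails, the path MTU between subnets is smaller than expected.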

8 Comments
2024/11/04
12:55 UTC

14

Proxmox or Linux?

TLDR: When almost every self-hosted app is Docker-friendly / made to run with Docker, does Proxmox still make sense if you are only running Docker containers?

Hi Everyone,

I'm in the process of migrating the "compute" off my NAS to a new server. The main reason for the new server was to run some local AI apps like Ollama, so I did a new build with two 4090 GPUs.

First thing, I jumped on the Proxmox boat and installed it. Right away I created two LXCs, one for Open WebUI + Ollama (with a tteck script) and one for Stable Diffusion WebUI (did this one myself). Though it was a bit complicated with the GPUs, and I had never used Proxmox before, I figured it out. All good, but now I've got to the part of wanting to migrate everything else off the NAS.

The setup on my NAS was running everything with docker (using portainer), to include nextcloud, all *arrs, jellyfin, authentik, home assistant, watchtower, and many other containers, exposed to the internet with traefik.

Looking at how to do this in proxmox, I came to the conclusion I probably should do the same, meaning a VM with docker and portainer (I have seen you can do lxc instead of VM, but proxmox recommends VM and from what I've read, especially when exposed, it provides better security).

The main reasons for not using individual LXCs for each application are:

  • maintenance: with Watchtower I can keep applications up to date and know there has been some testing by maintainers. With LXCs, automation and keeping everything up to date seems harder and riskier
  • Almost every self-hosted app these days seems built to run on Docker, and setting them up in an LXC seems more complex

But now I've come to the conclusion, if I run every application in docker (could also move the current lxcs to docker), what is the point of 1 VM in proxmox running everything (proxmox > Linux VM > docker), rather than directly installing linux and running docker on it (Linux > docker)?

Before I format again, am I missing something? I do want to still use the NAS, but for data only. Does it make sense to instead use ubuntu or rocky linux, or even something else? or is proxmox still the option for some reason I'm missing? I am not super techy but know enough to get everything running somehow.

22 Comments
2024/11/04
12:48 UTC

0

accessing home samba shares with cloudflare tunnel

There is SMB in the tunnel options; I tried it, but it didn't work. They said you need to run cloudflared on the Samba client, but there is no such option for Android. Is there another way?
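For what it's worth, the "run cloudflared on the client" mode looks like this on a desktop; the hostname and local port are examples, and the catch, as you found, is that there's no equivalent client for Android:

```shell
# open a local listener that tunnels traffic to the remote SMB service
cloudflared access tcp --hostname smb.example.com --url localhost:8445
# then point an SMB client at localhost:8445
```

On Android the usual workarounds are a VPN-style alternative (e.g. Tailscale or WireGuard into the home network) rather than Cloudflare Tunnel, since SMB over a raw tunnel needs that client-side listener.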

1 Comment
2024/11/04
12:40 UTC

0

Blog CMS recommendations?

Hello. I am looking for a fast and reliable CMS to start a blog. I don't really want to use WordPress, as I had a bad experience with it in the past. What else can you recommend? Thanks.

24 Comments
2024/11/04
12:30 UTC

1

Container resource monitoring and insights

I was looking for a way to monitor the resources used by my containers. I read that I could do that with Grafana, Prometheus, and Cadvisor, but I was interested in more off-the-shelf open-source solutions. Commercially, there are a lot of software solutions available, but that goes against my open-source mentality.

I am currently using Netdata, but I didn't find a straightforward way to see how the available resources per container are used. I only see that my Docker VM is using 90% of the processor load, but not how that is divided between the containers.
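For a quick per-container breakdown without standing up a full Grafana/Prometheus stack, the Docker CLI itself can show how the load is divided (run on the Docker VM):

```shell
# one-shot snapshot of CPU and memory usage per container
docker stats --no-stream \
    --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'
```

It's not historical monitoring, but it answers "which container is eating the 90%" immediately; cAdvisor or Netdata's cgroup charts cover the over-time view.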

7 Comments
2024/11/04
12:26 UTC

525

Self-hosting my blog on a 10 year old raspberry pi

I've self-hosted my blog on a Raspberry Pi with 174 MiB of RAM and a BCM2835 @ 700 MHz CPU. I've covered it in a blog post; read it and tell me your reviews. Also, follow the blog, self-host something yourself, and share it with me.

https://blog.kanishkk.me/?action=view&url=self-hosted-101

67 Comments
2024/11/04
11:39 UTC

1

accessing Jellyfin container from WSL

Hello, I decided to have a go at self-hosting with Jellyfin running in a Docker container. I'm on Windows 10, but out of habit I ran docker compose from inside WSL (Windows Subsystem for Linux). As a result, I can access the application by typing the server IP + port in a browser, but only on the laptop that is running the container. On any other computer on the same WiFi network it doesn't work. I went through the usual troubleshooting: firewall inbound/outbound rules, checking Jellyfin settings... I'm wondering if the issue is that I'm using WSL. Did anyone have a similar experience?
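A likely culprit, offered as a guess: WSL2 puts Linux behind its own NAT, so ports published inside it aren't reachable from the LAN by default. A commonly used workaround is a Windows portproxy rule from an elevated prompt; the WSL address below is an example and changes across reboots (get the current one with `wsl hostname -I`):

```shell
:: forward LAN-facing port 8096 into the WSL network
netsh interface portproxy add v4tov4 listenport=8096 listenaddress=0.0.0.0 connectport=8096 connectaddress=172.20.48.2

:: and allow inbound 8096 through the Windows firewall
netsh advfirewall firewall add rule name="Jellyfin WSL" dir=in action=allow protocol=TCP localport=8096
```

Running Docker Desktop on the Windows side (which handles this forwarding itself) is the alternative that avoids the moving WSL address.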

3 Comments
2024/11/04
11:19 UTC

4

Reducing power consumption

So my homelab has expanded over a year or two, and now it consumes 3 kWh of electricity a day, or about 125 W continuous. I know this isn't the highest figure out there, but it's the highest energy consumption in my house, so I'd like to try to reduce it.

https://imgur.com/a/RTnotb6
Here's a picture of my rack. I have a Draytek VDSL Modem, N100 mini PC router/firewall, POE Gigabit 24 port Switch, 2.5Gbe 5 port switch, Reolink NVR, Synology NAS, and Framework Laptop mainboard as a server.

I will be getting fibre soon so the Draytek Modem can go. I may well use my NAS as an NVR and use Frigate for my cameras rather than the Reolink app, so the NVR can go.

It does seem a little wasteful to me to have the Synology NAS, Framework board, and N100 PC. I used to run my containers on the NAS, but I moved them to the Framework board. The N100 PC is just a bare-metal router, no virtualisation.

What would you do to reduce power consumption?

4 Comments
2024/11/04
11:13 UTC

0

Which is the best hosting for starting a new blog?

5 Comments
2024/11/04
11:12 UTC

0

Reduce dependency on pfSense, haProxy, Traefik or [insert other]?

I've run pfSense for years (probably 10+), and I'm perfectly satisfied with the way it works.
There is however 1 caveat, and that is performance.
I have a 4Gb symmetric FTTH Internet connection, with which pfSense struggles a lot.
I'm running a Minisforum MS-01 (with the 13900H), with Proxmox and a virtualised pfSense using VirtIO.
This is better for power usage than running BSD native, since power saving features of pfSense are pretty bad.

VirtIO is OK-ish during download (I can get my max download, with fair CPU usage), but it fails during high-upload scenarios. pfSense starts to throttle because of IRQ broadcast storms. This is probably not going to be fixed soon, since VirtIO multiqueue support is probably not going to be implemented in the near future.

I've tested OpenWRT, which works way faster with a lot less CPU (probably due to PPPoE being properly handled).

One of the things I'm going to miss a lot in pfSense is the haProxy plugin. I used that for my local k8s cluster's API loadbalancer and the internet facing part of my k8s cluster.
I use the Let's Encrypt integration, and have strict SNI validation on (which means you can't get past my haProxy without the proper public DNS / hostname in the request).

This has worked fine for several years, and I'm not really trying to come up with something else, but...
When I switch to OpenWRT, I have to, or at least, it seems the right choice, since running pfSense just as a loadbalancer next to OpenWRT seems... I dunno, weird.

I'm thinking of running haProxy native in an LXC container on the Proxmox node, next to the OpenWRT system, but it would require a lot of manual Let's Encrypt integration.
Another option would be to run Traefik on the node, which I already run on my k8s cluster, which would also be fine.

Just run a loadbalancer in your k8s cluster (which I already do), and forward the HTTP & HTTPS ports to the cluster. But Traefik requires an Enterprise License for builtin Let's Encrypt validation.
I could try to connect Cert Manager to Traefik for the Let's Encrypt validation, but that might seem overkill.

TL;DR: Any good tips on running a Let's Encrypt-enabled loadbalancer in Proxmox which kind of resembles the haProxy plugin in pfSense?
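One sketch of the "haProxy in an LXC" route that keeps the Let's Encrypt part automated: acme.sh can do DNS-01 validation and ships a haproxy deploy hook, so renewals land in haProxy without manual glue. Domain, DNS provider, and paths below are assumptions for illustration:

```shell
# issue a cert via DNS-01 (Cloudflare API credentials assumed in the env)
acme.sh --issue --dns dns_cf -d lb.example.com

# write the combined PEM where haproxy expects it, and reload on renewals
DEPLOY_HAPROXY_PEM_PATH=/etc/haproxy/certs \
    acme.sh --deploy -d lb.example.com --deploy-hook haproxy
```

Strict SNI then carries over via haProxy's `strict-sni` option on the bind line, which roughly matches the behaviour of the pfSense plugin setup described above.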

5 Comments
2024/11/04
11:03 UTC

2

Linking Self-hosted Calibre to local Calibre client

Is it possible to link my self-hosted Calibre library to the Calibre client on my laptop, allowing changes to be made and synced on both ends?
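Not full two-way sync, but if the hosted side is calibre's own content server, the desktop calibredb tool can operate on the remote library directly, so changes made from the laptop land server-side. The URL, library id, and credentials below are placeholders:

```shell
# list (or add/remove/edit) books in the hosted library via the content server
calibredb list --with-library 'http://server:8080/#library-id' \
    --username user --password 'secret'
```

The main calibre GUI itself only opens local libraries, so true bidirectional sync of a normal desktop library is a different problem.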

1 Comment
2024/11/04
10:33 UTC

91

I created shflix.com a website to browse self hosted applications

Dear self-hosted community,

I created https://shflix.com

Based on the awesome-selfhosted.net project, I created a user interface to browse and visualize apps and get tips on how to deploy them.

I hope you like my effort to make self-hosted apps more accessible to a broader audience.

Happy to get your feedback and support here or on Product Hunt:
https://www.producthunt.com/posts/shflix

This is a free website; the business model should rely on partnerships with hosting providers.

To visit the website: https://shflix.com

Happy self-hosting!

14 Comments
2024/11/04
09:17 UTC

10

selfhosted Guacamole Docker

Hi,

I love Guacamole as a way of remote access to my homelab, but I always thought it's a bummer to run it on a virtual machine. Last week I was a bit bored, so I created a small GitHub repo that you can clone to get Guacamole up and running in a minute or two.

https://github.com/GerryCrooked/guacamole

I hope this helps someone, and please let me know what you think about it (it's my first public repo, so I would appreciate any feedback :) ).

8 Comments
2024/11/04
08:36 UTC

8

Dell Optiplex 3050 SFF - mainboard burned out

Hi, I planned to set up a used Dell Optiplex 3050 SFF as a home server for Paperless-ngx and smaller projects like Pi-hole.

It ran just fine while I was waiting for the HDDs to arrive, and I was able to install Ubuntu, Docker, etc. on the installed M.2 SSD.

After connecting two 4TB Seagate IronWolf HDDs with a 6-pin-to-2xSATA cable I bought from Amazon, plus the two data cables to ports SATA1 and SATA0, the PC wouldn't turn on (no light on the power button, no fans, nothing). I disconnected the SATA power; still nothing. I disconnected all peripherals and the power cable. After reconnecting just the power cable, I saw a flame and smelled burned electronics; the mainboard burned out as shown in the picture.

Was this just an unlucky freak accident, or could the HDDs be the cause of this damage? Does anyone have any experience with this? I would like to know if it is worthwhile just replacing the motherboard, or if I should switch to a Synology NAS instead.

Thank you very much in advance.

5 Comments
2024/11/04
08:18 UTC

0

Black Friday - What to Look for?

Hi all,

I’ve been self hosting on some junk that I inherited from various sources, all the usual suspects: *arr, plex, home assistant and paperless. I absolutely love it and am hooked.

Having said that, I never really bought dedicated hardware (beyond the odd peripheral here and there).

I want to improve my setup, as per the following (roughly in order of importance):

  • set up proxmox on a dedicated machine and put home assistant OS on it. I want to benefit from the advantages of a non-docker version, as well as use the backup manager from proxmox. HA is now officially mission critical in my house and needs to be more recoverable than in the current docker setup.

  • potentially rebuild my media server setup and move to jellyfin. I’m agnostic as to if this should be on a VM or not, but as much as I love plex I want to be a little more open source reliant

  • gain more experience with local LLMs. I have a server with an OK processor, internal GPU and 32 GB of RAM, but all my LLM stuff is super super slow

Obviously I also just like the tinkering aspect and who doesn’t want an excuse to buy more hardware??

Price is a factor as I really don’t have a huge amount to spare, this is why I’m waiting for Black Friday. Based on my current very limited experience it looks like I’d need one power efficient lower spec machine for HASS and media serving (if I do that via proxmox) and potentially one higher spec one for LLM - is that roughly correct? If so, the higher spec is really a nice to have and not super important.

What sort of processors/brands/memory/specs should I look for on e.g. AliExpress? Also, I've watched some videos saying that a lot of the external drives on AliExpress are scams; is that true? If so, what's the best or cheapest way to upgrade storage on my media server? I currently have 5 TB, which is honestly fine, but I do need to delete some series every so often.

Thanks a lot for your help, this is always a nice and pleasant community (my wife refers to it as my online server friends which is awesome).

21 Comments
2024/11/04
08:17 UTC

1

How to back up a folder on an NTFS drive to another NTFS drive using Debian 12?

Hi all,

I have a Pi 4 sharing some folders on my main NTFS drive via SMB.

I would like to backup certain content of the folder to another ntfs drive by setting up filters.

The backup needs to have restore points, so that if my main NTFS drive fails and some important files get corrupted, I can still restore the broken files.

The restore-point feature is the part I'm not sure about, since I don't use Linux very much and the combination of Linux and NTFS drives makes me nervous.

I found some articles saying restic could do the backup with restore points, but I'm not familiar with restic. Could someone point my research in the right direction?

Thanks.
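Since restic came up: its snapshots are exactly the "restore points" described above, and include/exclude filters handle the "certain content" part. A sketch with placeholder mount points and filenames:

```shell
# one-time: create a repository on the backup drive
restic -r /mnt/backup-ntfs/restic-repo init

# each run adds a snapshot (a restore point) of the filtered content
restic -r /mnt/backup-ntfs/restic-repo backup /mnt/main-ntfs/shared \
       --exclude '*.tmp' --exclude '**/cache/**'

# list restore points, then pull a single damaged file back out
restic -r /mnt/backup-ntfs/restic-repo snapshots
restic -r /mnt/backup-ntfs/restic-repo restore latest \
       --target /tmp/restored --include /mnt/main-ntfs/shared/important.odt
```

Snapshots are deduplicated, so keeping many restore points is cheap; the NTFS side is mostly a non-issue for restic itself since it just reads and writes files through the mounted filesystem.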

2 Comments
2024/11/04
07:10 UTC

2

First site deployment using Apache2 on Raspberry Pi

Hey everyone!

I just deployed my first site, developed with PHP and JavaScript, using Apache2 on my Raspberry Pi (running Raspberry Pi OS Lite 64-bit). I’m interested in learning more about Apache2, which is why I chose it as my web server.

Here’s the setup so far:

The site is accessible via the public IP provided by my ISP. For this, I had to set up port forwarding on my router, configure the firewall on my Raspberry Pi, and adjust ports.conf in Apache2 along with custom .conf files in sites-available. This setup allows my website to load at http://public-ip:80. When accessing http://public-ip:80, the browser removes the :80 at the end (as expected). However, if I configure the application to use a different port, accessing http://public-ip redirects me to my router's settings login page. In all cases, canyouseeme.org shows port 80 as closed with a "Connection timed out" error, even though it works fine when I expose my website on other ports. My ISP confirmed to me that ports 80 and 443 aren't blocked. For non-standard ports (e.g., 8080), I have to specify the port in the URL.

Next, I bought a domain from GoDaddy, set it up on Cloudflare, and updated the nameservers on GoDaddy. I’m trying to avoid Cloudflare’s zero-trust tunnels because I want to point my domain directly to my public IP using the traditional method of DNS records.

I'm finding it challenging to configure DNS records along with a custom self-hosting environment with Apache2. Since DNS records don’t allow specifying ports directly, the setup relies on serving the website over default ports. This means configuring the DNS to point to http://public-ip and allowing the server to handle redirection through standard ports for web traffic, but this approach isn’t working as expected.

The thing I cannot get my head around is why exposing my website on port 80 and accessing it through http://public-ip works, but the DNS records and canyouseeme.org do not.

At this point, I’m stuck. Has anyone experienced something similar or have suggestions on what to try next?

Thanks in advance!

P.S.: I am planning to add SSL once I figure out DNS.
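A couple of checks that might untangle the DNS part (the domain below is a placeholder). One thing to keep in mind: with Cloudflare's proxy (orange cloud) enabled, the A record resolves to Cloudflare's edge addresses, not your public IP, and only a fixed set of ports (80, 443, and a handful of others) is forwarded, so port-checking tools pointed at the domain are really testing Cloudflare, not your router:

```shell
# what the world resolves for the domain (Cloudflare edge IPs if proxied)
dig +short yourdomain.example A

# does the request make it through Cloudflare to the origin on port 80?
curl -sI http://yourdomain.example/ | head -n 5
```

If dig shows Cloudflare IPs while direct http://public-ip access works, the DNS record itself is fine and the remaining question is whether Cloudflare can reach the origin on port 80 (grey-clouding the record temporarily takes Cloudflare out of the equation for testing).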

8 Comments
2024/11/04
06:42 UTC

0

Could tteck's Plex LXC script be used for Frigate LXC?

I was just reading this guide, which explains that tteck's Plex LXC script installs all the drivers etc. needed for Intel iGPU passthrough for Plex transcoding. I was wondering: when creating a Frigate LXC that uses Intel iGPU passthrough for object detection, would it make sense to use that script as a base and then just install Docker and Frigate on top, to save time doing everything from scratch?

https://www.derekseaman.com/2023/04/proxmox-plex-lxc-with-alder-lake-transcoding.html

2 Comments
2024/11/04
05:22 UTC
