/r/selfhosted
A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web services, and online tools.
What is self-hosted, as it pertains to this subreddit?
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Service: Dropbox - Alternative: Nextcloud
Service: Google Reader - Alternative: Tiny Tiny RSS
Service: Blogger - Alternative: WordPress
We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.
Hi all! I'm new to the self-hosted world and working my way through building a media server setup, mainly for me and my family, and I could use some advice from those of you who've been down this road. I've done a fair bit of research so far, taking inspiration from YAMS, the Mediastack guide, TRaSH Guides, the *arr wiki, and a lot of posts on this and other subreddits (like the recent post about captainarr). I also work in IT, so I'm comfortable with containers, Linux, and server management. What I have so far: a simple setup with the *arr suite (Prowlarr, Radarr, Sonarr), SABnzbd, Jellyfin, and Jellyseerr, all running in Docker containers.
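For context, here's roughly the folder layout I'm leaning towards (just a sketch; the names are my own working assumption, not taken from any of those guides):

/data
  /media
    /movies-hd     (Radarr root folder for 1080p)
    /movies-uhd    (second Radarr instance / root folder for 4K)
  /downloads       (SABnzbd incomplete/complete)

The idea would be to add both movie folders to a single "Movies" library in Jellyfin, which is how I understand the merging described in the docs.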
Here are my main questions:
Should I split my movie library into media/movies-hd and media/movies-uhd (i.e. with -hd and -uhd suffixes) to allow merging by Jellyfin like the docs describe? I read that this isn't recommended; any advice on this setup?
I'll likely have more questions down the line! I enjoy diving into the details and experimenting, so feel free to drop any links or resources; I'm happy to read and learn from knowledgeable folks. Thanks in advance for any guidance you can offer!
And sorry for the big post ;)
Hey fellow selfhosters,
I just wanted to share my terrible experience with STRATO (a German hosting company) so you don't make the same mistake I did.
So here's what happened:
My credit card expired, and like any normal hosting service they should just have terminated the service, right? NOPE. Instead, these guys:
The best part? All this drama over 3,08€ of actual service that they turned into 9,09€ with their BS fees!!
Timeline of this joke:
I'm not making this up, guys. They actually sent debt collection threats TO MY HOME for 9€, lmao.
Look, I'm not saying their hosting is bad (it was fine, I guess), but what kind of company does this? Every other provider I've used just stops the service if you don't pay. Simple. These guys instead decide to play debt collector for pocket change.
Better options that won't try to scam you:
Yes, I paid them (today, actually), just to make them go away. But seriously, save yourself the headache and go with literally any other hosting company.
TL;DR: STRATO will keep your service running without payment, then act surprised you didn't pay and threaten you with debt collectors. Just don't.
P.S.: sorry for any grammar mistakes, English isn't my first language!
I wonder which of these domains is the most popular for local/private use on this sub.
I know you really shouldn't use .local, but some people use it, so I added it as an option.
I deliberately didn't add "custom registered TLD"; this poll is only about the "special-use" domains.
If you use another one, feel free to write it in the comments.
So I am trying to protect my Jellyfin instance against brute-force attempts. I am using fail2ban, but I keep running into an issue where it blocks access from the client machine that is trying to reach Jellyfin: the IP does get banned, but it is blocking access to the reverse proxy as well. The config below is what I am using:
[jellyfin]
# let fail2ban pick the best log backend automatically
backend = auto
enabled = true
port = 8096
protocol = tcp
filter = jellyfin
# ban after 3 failed attempts within findtime
maxretry = 3
# ban length: 12 hours (in seconds, to match findtime)
bantime = 43200
# window in which the failed attempts must occur: 12 hours
findtime = 43200
# insert bans into the DOCKER-USER chain so they apply to the dockerized Jellyfin
action = iptables[name=jellyfin, port="8096", protocol=tcp, chain=DOCKER-USER]
logpath = /srv/docker/jellyfin/library/log/*.log
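The only workaround I can think of so far is whitelisting the reverse proxy's own address in the jail, which obviously defeats the purpose whenever the attempts come in through the proxy (the address below is just a placeholder for my proxy container):

# stopgap added to the [jellyfin] jail above; not a real fix
ignoreip = 172.18.0.2

Is there a way to make fail2ban see (and ban) the real client IP instead?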
So I was hosting a Minecraft Java server on my PC, using a mobile hotspot for internet access, and I was having problems with port forwarding. Can anyone help?
Hey guys, I've got myself into a corner. My colleagues and I have been talking about asking the firm to spare some budget for machine-learning research. They approved it and gave us a budget of around $1,000 for a small "server", and I have been chosen to pick the hardware components. My question is: I can pick a decent GPU for transcoding and some small ML-related stuff on a small budget, but I have no idea about the options at this price point. Can you recommend some available GPUs? We would use it to train a model on specific types of documents, try to deploy an LLM, and maybe run Whisper, Piper, and other things to test it out.
I need some help getting my desired functionality working with ProtonVPN, specifically WireGuard over UDP. Allow me to describe my network topology: I run OPNsense as my network firewall and gateway. I run a dedicated VM (let's call it "deliverance"), which I would like to use as my dedicated BitTorrent client. It runs Ubuntu Server 24.04.1 and deluged as the torrent software. I have to set up WireGuard manually, because the protonvpn-cli client does not seem to work on Ubuntu 24.04.1. I used the configuration wizard to generate a configuration for a P2P server in us-az with a low load (sub 30%). I'm not entirely clear on what material changes the "Moderate NAT" and "NAT-PMP" settings apply when downloading this configuration, or whether they're relevant to my specific use case, but given that my use case is anonymous torrent usage, I'm assuming I need a relatively permissive configuration, so I'm enabling both. The IP designated in my WireGuard config is "10.2.0.2/32".
Now my question: I'm observing behavior where my torrents establish some kind of initial connection when I start deluge, but they pretty much immediately drop to 0. I'm assuming this is because I'm unable to establish P2P connections, but I don't know what the problem is. I suspect I may need to configure something in my OPNsense firewall to allow for this, but I don't know what that might be, or if that's even the right place to look. I know that I can use wg-quick up <config> to establish a connection with the Proton servers, and I can update the machine while connected and reach external services, so general connectivity is established. However, I suspect the P2P traffic is the problem. The machine is not running a firewall itself, to my knowledge.
Can you please advise me or point me in the right direction here? I'm unsure what to even look for or verify.
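One thing I haven't tried yet: from what I've read, with NAT-PMP enabled on the config, the forwarded port has to be requested from the gateway after the tunnel is up, roughly like this (my understanding of Proton's docs, not something I've verified myself yet):

# ask the VPN gateway (10.2.0.1) for a port mapping; has to be repeated every ~50s to stay alive
natpmpc -a 1 0 udp 60 -g 10.2.0.1
natpmpc -a 1 0 tcp 60 -g 10.2.0.1
# the "Mapped public port NNNNN" in the output would then go into deluge's incoming port setting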
We do a lot of landing pages with custom code. What I would like to find is something that does the following:
Does anything like this exist?
Hello self hosters!
For those of you who are new, welcome! Receipt Wrangler is a self-hosted, AI-powered app that is meant to make managing receipts easy. Receipt Wrangler can scan your receipts from desktop uploads, mobile app scans, or via email, or you can enter them manually. Users can itemize, categorize, and split them amongst the users in the app. Check out https://receiptwrangler.io/ for more information.
Another whirlwind of a month for me in my personal life, but hey, some stuff got done! Let's jump into what got finished.
Development Highlights
Currency Formatting: In System Settings, admins can now configure how currencies are formatted across the app. This applies to display as well as input masking, in both the desktop and the mobile app (v1.7.0).
Documentation Updates: This month a big chunk of documentation was added about receipts: https://receiptwrangler.io/docs/category/receipts. Though I felt receipts were fairly straightforward, it was much needed since there is a lot going on. I also added contribution guidelines for those who may be interested in contributing.
Display Version Number (desktop only): This is a small update, but on desktop there is now an About section in the avatar menu. It displays some links about the project, a version number if one exists, and a build date if there is one.
Versioned Monolithic Docker Images: For those of you who stick to using versions instead of latest, Receipt Wrangler's Docker images are now versioned from here on. The only version currently available is v5.4.1.
Decreased Monolithic Image Size: The image size dropped from ~4 GB to ~2 GB after moving to smaller base images. The image is really only bloated because of EasyOCR.
Microservice Docker Images Deprecated: Looking at the download statistics on Docker Hub, it is pretty clear that the microservice images are not used very much. As a result, they are deprecated and will no longer be updated. If these images are actually needed, they can be reimplemented, albeit in a cleaner way. For those of you who are using the microservice images, head over to https://receiptwrangler.io/docs/category/configuration-examples to find the database you are using, then copy the monolithic docker-compose.yaml file and transfer over your environment variables. There will be no data loss as a result.
That's it for the highlights. As mentioned last month, I will be inactive for most of November. There are leftover development items that carried over from last month, and those will be continued in December. November's development will consist mostly of random bug fixes as I find time.
Thanks for reading as always!
Noah
I am using Nginx Proxy Manager to route plex.mydomain.com to my server's dockerized Plex at 192.168.0.0:32400. It works fine.
However, to access Plex without going through their servers, you need to go through 192.168.0.0:32400/web. plex.mydomain.com/web doesn't work: it redirects to Plex's login page, even though I have my local IPs and the dockerized NPM IP added to the "List of IP addresses and networks that are allowed without auth", both via mask and explicitly: 172.24.0.0/24,192.168.50.0/24,192.168.50.73,172.31.0.2
Any ideas where I went wrong? Thanks in advance!
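For reference, this is how I looked up the NPM container's address that I added to that list (nginx-proxy-manager is just what my container happens to be called):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' nginx-proxy-manager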
Hey!, I'm building a self hosted personal dashboard, do you guys have any feature recommendation you would like on a dashboard, i have already built the base app , i just need more features to work on
I'm trying to get my selfhosted services to work properly with TLS certificates. I'm using a combination of:
My router is pointing all connected devices to use the Pihole/Unbound DNS combination, which works fine. All my *.mydomain.page queries are resolved to the correct IP of my home server.
The issue is that when a client accesses abc.mydomain.page on the local network, I get a certificate error (ERR_SSL_PROTOCOL_ERROR). This does not happen, however, if the client has recently visited abc.mydomain.page from an external connection: for some amount of time afterwards, the client can reconnect to the local network, visit abc.mydomain.page, and not get the certificate error.
This leads me to believe that there is an issue with the way clients validate the TLS certificate the first time the site is accessed.
If possible, the goal is to keep it so that clients that are connecting from the local network will access the server without leaving the local network (access the server via the 192.168.xxx.xxx address). Clients from outside the network will access through cloudflare (proxied connection). Both to be using the mydomain.page address.
Caddyfile
...
*.mydomain.page {
tls {
dns cloudflare {env.CLOUDFLARE_API_TOKEN}
}
}
abc.mydomain.page {
reverse_proxy :13378
}
...
/etc/dnsmasq.d/98-selfhosted.conf (used by Pi-hole)
...
address=/.mydomain.page/192.168.xxx.xxx   # local IP of the server
EDIT:
I tried visiting on my desktop for the first time (to inspect the cert), and I get no errors! It seems to be an issue on mobile only (I tried both Firefox and a Chromium-based browser)... I just assumed it would work the same.
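For completeness, this is what I used on the desktop to check what the name resolves to locally and which certificate is being served (standard tooling, nothing exotic):

dig +short abc.mydomain.page @192.168.xxx.xxx
openssl s_client -connect 192.168.xxx.xxx:443 -servername abc.mydomain.page </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates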
Hello there!
I've recently found out about the beauty of reverse proxies and I've been poking around with them for a few days, until I faceplanted into a problem wall.
Here is my problem:
I have a Minecraft server in subnet 192.168.10.0/24 with the Dynmap plugin, which exposes a web server with the map of the game world on port 8123.
I have Nginx Proxy Manager on the same physical server in subnet 192.168.0.0/24, exposed to the internet on ports 80 and 443.
I set up the host forwarding and it works flawlessly, but only with services on the SAME subnet, like my Jellyfin for example.
As soon as I tried to forward to the aforementioned Minecraft server, it manages to find it, but the speed is so incredibly low that the website times out and is impossible to use. I get the page title and, if I'm very lucky, one or two tiles of the map (which are simple images at the end of the day).
I tried moving the Minecraft server into the same subnet as NPM and the problem disappears, but I would much prefer to keep my game server in a different subnet for various reasons.
So is there something I'm missing? Is a reverse proxy not supposed to work across different subnets?
I can share more information about the setup if needed.
Thanks in advance.
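If it helps with diagnosing, I can also time the Dynmap backend directly from the NPM host versus through the proxy with something like this (the address and hostname are placeholders):

# direct to the Dynmap web server in the other subnet
curl -s -o /dev/null -w 'direct:  %{time_total}s\n' http://192.168.10.X:8123/
# through NPM / the public hostname
curl -s -o /dev/null -w 'via NPM: %{time_total}s\n' https://map.example.com/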
TL;DR: When almost every self-hosted app is Docker-friendly / made to run with Docker, does Proxmox still make sense if you are only running Docker containers?
Hi Everyone,
I'm in the process of migrating the "compute" off my NAS to a new server. The main reason for the new server was to run some local AI apps like Ollama, so I did a new build with two 4090 GPUs.
First, I jumped on the Proxmox boat and installed it. Right away I created two LXCs: one for Open WebUI + Ollama (with a tteck script) and one for Stable Diffusion WebUI (did that one myself). It was a bit complicated with the GPUs, and I had never used Proxmox before, but I figured it out. All good, but now I've gotten to the part of wanting to migrate everything else off the NAS.
The setup on my NAS was running everything with Docker (using Portainer), including Nextcloud, all the *arrs, Jellyfin, Authentik, Home Assistant, Watchtower, and many other containers, exposed to the internet with Traefik.
Looking at how to do this in Proxmox, I came to the conclusion that I should probably do the same, meaning a VM with Docker and Portainer (I have seen that you can use an LXC instead of a VM, but Proxmox recommends a VM, and from what I've read it provides better security, especially when exposed to the internet).
The main reason for not using an individual LXC for each application is:
But now I've come to the conclusion: if I run every application in Docker (I could also move the current LXCs to Docker), what is the point of one VM in Proxmox running everything (Proxmox > Linux VM > Docker), rather than installing Linux directly and running Docker on it (Linux > Docker)?
Before I format again, am I missing something? I do want to keep using the NAS, but for data only. Does it make sense to instead use Ubuntu or Rocky Linux, or even something else? Or is Proxmox still the better option for some reason I'm missing? I'm not super techy, but I know enough to get everything running somehow.
There is SMB in the tunnel options; I tried it, but it didn't work. They said we need to run cloudflared on the Samba client, but there is no such option for Android. Is there another way?
Hello. I am looking for a fast and reliable CMS to start a blog. I don't really want to use WordPress, as I had a bad experience with it in the past. What else can you recommend? Thanks.
I was looking for a way to monitor the resources used by my containers. I read that I could do that with Grafana, Prometheus, and cAdvisor, but I was interested in a more off-the-shelf open-source solution. Commercially there are a lot of options available, but that goes against my open-source mentality.
I am currently using Netdata, but I haven't found a straightforward way to see how the available resources are used per container. I only see that my Docker VM is using 90% of the processor, but not how that load is divided between the containers.
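For anyone with the same question: the closest built-in thing I've found so far is Docker's own stats command, which at least shows the per-container split (just not historical data):

docker stats --no-stream   # one-shot table of CPU %, memory, network and block I/O per running container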
I've self-hosted my blog on a Raspberry Pi with 174 MiB of RAM and a BCM2835 CPU @ 700 MHz. I've covered it in a blog post; read it and tell me what you think. Also, follow the blog, self-host something yourself, and share it with me.
Hello, I decided to have a go at self-hosting with Jellyfin running in a Docker container. I'm on Windows 10, but out of habit I ran docker compose from inside WSL (Windows Subsystem for Linux). As a result, I can access the application by typing the server IP + port in a browser, but only on the laptop that is running the container. On any other computer in the same Wi-Fi network it doesn't work. I went through the usual troubleshooting: firewall inbound/outbound rules, checking Jellyfin settings... I'm wondering if the issue is that I'm using WSL. Did anyone have a similar experience?
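From what I've read so far, WSL2 sits behind its own NAT, so apparently you have to forward the port on the Windows side, something like this from an admin PowerShell (untested on my machine yet; 8096 being Jellyfin's default port):

# find the WSL VM's current address
wsl hostname -I
# forward the Windows host's port 8096 to that address
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=8096 connectaddress=<WSL-IP> connectport=8096
# plus an inbound Windows firewall rule for TCP 8096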
So my homelab has expanded over a year or two, and it now consumes 3 kWh of electricity a day, or about 125 W continuous. I know this isn't the highest figure out there, but it's the biggest energy consumer in my house, so I'd like to try to reduce it.
https://imgur.com/a/RTnotb6
Here's a picture of my rack. I have a DrayTek VDSL modem, an N100 mini PC as router/firewall, a 24-port gigabit PoE switch, a 5-port 2.5GbE switch, a Reolink NVR, a Synology NAS, and a Framework laptop mainboard as a server.
I will be getting fibre soon, so the DrayTek modem can go. I may well use my NAS as an NVR and use Frigate for my cameras rather than the Reolink app, so the NVR can go too.
It does seem a little wasteful to me to have the Synology NAS, the Framework board, and the N100 PC. I used to run my containers on the NAS, but I moved them to the Framework board. The N100 PC is just a bare-metal router, no virtualisation.
What would you do to reduce power consumption?
I've run pfSense for years (probably 10+), and I'm perfectly satisfied with the way it works.
There is, however, one caveat: performance.
I have a 4Gb symmetric FTTH Internet connection, with which pfSense struggles a lot.
I'm running a Minisforum MS-01 (with the 13900H) with Proxmox, running a virtualised pfSense with VirtIO.
This is better for power usage than running BSD natively, since pfSense's power-saving features are pretty bad.
VirtIO is ok-ish during download (I can reach my max download speed with fair CPU usage), but it fails in high-upload scenarios: pfSense starts to throttle because of IRQ broadcast storms. This is probably not going to be fixed any time soon, since VirtIO multiqueue support is unlikely to be implemented in the near future.
I've tested OpenWrt, which works way faster with a lot less CPU (probably because PPPoE is handled properly).
One of the things I'm going to miss a lot from pfSense is the HAProxy plugin. I used it as the load balancer for my local k8s cluster's API and for the internet-facing part of my k8s cluster.
I use the Let's Encrypt integration and have strict SNI validation on (which means you can't get past my HAProxy without the proper public DNS hostname in the request).
This has worked fine for several years, and I'm not really trying to come up with something else, but...
When I switch to OpenWrt I have to, or at least it seems like the right choice, since running pfSense just as a load balancer next to OpenWrt seems... I dunno, weird.
I'm thinking of running HAProxy natively in an LXC container on the Proxmox node, next to the OpenWrt system, but that would require a lot of manual Let's Encrypt integration.
Another option would be to run Traefik on the node, which I already run on my k8s cluster; that would also be fine.
Or I could just run a load balancer in my k8s cluster (which I already do) and forward the HTTP and HTTPS ports to the cluster. But as far as I know, Traefik requires an Enterprise license for built-in Let's Encrypt validation when it runs highly available.
I could try connecting cert-manager to Traefik for the Let's Encrypt validation, but that might be overkill.
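For reference, the behaviour I'm trying to reproduce is basically this (a rough HAProxy sketch from memory; hostnames and addresses are placeholders, not my real config):

frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/ strict-sni
    # strict-sni: refuse the TLS handshake unless the SNI matches one of the loaded certs
    use_backend k8s_ingress if { ssl_fc_sni -i apps.example.com }

backend k8s_ingress
    server ingress1 10.0.10.20:443 ssl verify none check

The part the pfSense plugin handles for me today is keeping that certs directory populated and renewed via Let's Encrypt; replicating that renewal piece is really what I'm asking about.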
TL;DR: Any good tips on running a Let's Encrypt-enabled load balancer on Proxmox that resembles the HAProxy plugin in pfSense?
Is it possible to link my self-hosted Calibre library to the Calibre client on my laptop, allowing changes to be made and synced on both ends?
Dear self-hosted community,
I created https://shflix.com
Based on the awesome-selfhosted.net project, I created a user interface to browse self-hosted apps, visualize them, and get tips on how to deploy them.
I hope you like my effort to make self-hosted apps more accessible to a broader audience.
Happy to get your feedback and support here or on Product Hunt:
https://www.producthunt.com/posts/shflix
This is a free website; the business model should rely on partnerships with hosting providers.
To visit the website: https://shflix.com
Happy self-hosting!
Hi,
I love Guacamole as a way of getting remote access to my homelab, but I always thought it was a bummer to run it on a virtual machine. Last week I was a bit bored, so I created a small GitHub repo that you can clone to get Guacamole up and running in a minute or two.
https://github.com/GerryCrooked/guacamole
I hope this helps someone, and please let me know what you think about it (it's my first public repo, so I would appreciate any feedback :) ).
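The idea is that spinning it up is basically just (assuming Docker and the compose plugin are already installed; see the repo for the exact steps):

git clone https://github.com/GerryCrooked/guacamole
cd guacamole
docker compose up -d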
Hi, I planned to set up a used Dell Optiplex 3050 SFF as a home server for Paperless-ngx and smaller projects like Pi-hole.
It ran just fine while I was waiting for the HDDs to arrive, and I was able to install Ubuntu, Docker, etc. on the installed M.2 SSD.
After connecting two 4 TB Seagate IronWolf HDDs with a 6-pin-to-2x-SATA power cable I bought from Amazon, and the two data cables to ports SATA1 and SATA0, the PC wouldn't turn on (no light on the power button, no fans, nothing). I disconnected the SATA power: still nothing. I disconnected all peripherals and the power cable. After reconnecting just the power cable, I saw a flame and smelled burned electronics; the mainboard burned out, as shown in the picture.
Was this just an unlucky freak accident, or could the HDDs be the cause of this damage? Does anyone have experience with this? I would like to know whether it is worthwhile to just replace the motherboard, or if I should switch to a Synology NAS instead.
Thank you very much in advance.
Hi all,
I've been self-hosting on some junk that I inherited from various sources, running all the usual suspects: *arr, Plex, Home Assistant, and Paperless. I absolutely love it and am hooked.
Having said that, I never really bought dedicated hardware (beyond the odd peripheral here and there).
I want to improve my setup as follows (roughly in order of importance):
set up Proxmox on a dedicated machine and put Home Assistant OS on it. I want to benefit from the advantages of a non-Docker version, as well as use the backup manager from Proxmox. HA is now officially mission-critical in my house and needs to be more recoverable than in the current Docker setup.
potentially rebuild my media server setup and move to Jellyfin. I'm agnostic as to whether this should be in a VM or not, but as much as I love Plex, I want to lean a little more on open source.
gain more experience with local LLMs. I have a server with an OK processor, an integrated GPU, and 32 GB of RAM, but all my LLM stuff is super, super slow.
Obviously I also just like the tinkering aspect, and who doesn't want an excuse to buy more hardware?
Price is a factor as I really don't have a huge amount to spare, which is why I'm waiting for Black Friday. Based on my current, very limited experience it looks like I'd need one power-efficient, lower-spec machine for HASS and media serving (if I do that via Proxmox) and potentially one higher-spec machine for LLMs. Is that roughly correct? If so, the higher-spec one is really a nice-to-have and not super important.
What sort of processors/brands/memory/specs should I look for on, e.g., AliExpress? Also, I've watched some videos saying that a lot of the external drives on AliExpress are scams; is that true? If so, what's the best or cheapest way to upgrade storage on my media server? I currently have 5 TB, which is honestly fine, but I do need to delete some series every so often.
Thanks a lot for your help; this is always a nice and pleasant community (my wife refers to it as my online server friends, which is awesome).
Hi all,
I have a Pi 4 sharing some folders on my main NTFS drive via SMB.
I would like to back up certain content from those folders to another NTFS drive by setting up filters.
The backup needs to have restore points, so that if my main NTFS drive fails and some important files get corrupted, I can still restore the broken files.
The restore point feature is the part I'm not sure how to handle; I don't use Linux very much, and the combination of Linux and NTFS drives is making me nervous.
I found some articles saying restic could do backups with restore points, but I'm not familiar with restic. Could someone point my research in the right direction?
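From what I've gathered so far, the restic workflow would look roughly like this (paths are made up, just to illustrate; I'd love confirmation this is the right direction):

# one-time: create a repository on the backup drive
restic init --repo /mnt/backup/restic-repo
# back up the selected content; --exclude acts as the filter
restic -r /mnt/backup/restic-repo backup /mnt/main/share --exclude '*.tmp'
# every run creates a snapshot, i.e. a restore point
restic -r /mnt/backup/restic-repo snapshots
# restore a single broken file from the latest snapshot
restic -r /mnt/backup/restic-repo restore latest --target /tmp/restored --include /mnt/main/share/important.docx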
Thanks.
Hey everyone!
I just deployed my first site, developed with PHP and JavaScript, using Apache2 on my Raspberry Pi (running Raspberry Pi OS Lite 64-bit). I’m interested in learning more about Apache2, which is why I chose it as my web server.
Here’s the setup so far:
The site is accessible via the public IP provided by my ISP. For this, I had to set up port forwarding on my router, configure the firewall on my Raspberry Pi, and adjust ports.conf in Apache2 along with custom .conf files in sites-available. This setup allows my website to load at http://public-ip:80. When accessing http://public-ip:80, the browser removes the :80 at the end (as expected). However, if I configure the application to use a different port, accessing http://public-ip redirects me to my router's settings login page. In all cases, canyouseeme.org shows port 80 as closed with a "Connection timed out" error, even though it works fine when I expose my website on other ports. My ISP confirmed that ports 80 and 443 aren't blocked. For non-standard ports (e.g., 8080), I have to specify the port in the URL.
Next, I bought a domain from GoDaddy, set it up on Cloudflare, and updated the nameservers on GoDaddy. I’m trying to avoid Cloudflare’s zero-trust tunnels because I want to point my domain directly to my public IP using the traditional method of DNS records.
I'm finding it challenging to configure DNS records for a custom self-hosting environment with Apache2. Since DNS records don't allow specifying ports, the setup relies on serving the website over the default ports. This means pointing the DNS at my public IP and letting the server handle web traffic on the standard ports, but this approach isn't working as expected.
The thing I cannot get my head around is why exposing my website on port 80 and accessing it through http://public-ip works, but the DNS records and canyouseeme.org do not.
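For reference, these are the sanity checks I'm planning to run next (domain replaced with a placeholder):

dig +short mydomain.example        # should print the public IP the A record points at
curl -s https://ifconfig.me        # the public IP my connection actually has right now
curl -v http://mydomain.example/   # run from outside my network, to see where it stalls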
At this point, I’m stuck. Has anyone experienced something similar or have suggestions on what to try next?
Thanks in advance!
P.S.: I am planning to add SSL once I figure out DNS.
I was just reading this guide, which explains that tteck's Plex LXC script installs all the drivers etc. needed for Intel iGPU passthrough for Plex transcoding, and I was wondering: when creating a Frigate LXC that uses Intel iGPU passthrough for object detection, would it make sense to use that script as a base and then just install Docker and Frigate on top, to save time doing everything from scratch?
https://www.derekseaman.com/2023/04/proxmox-plex-lxc-with-alder-lake-transcoding.html
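Either way, I'd assume the sanity check before layering Docker and Frigate on top is the same as for Plex: confirm the iGPU is actually visible inside the LXC, e.g.

ls -l /dev/dri    # card0 / renderD128 should show up inside the container
vainfo            # from the libva-utils package; should list the iGPU's VA-API profiles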