/r/selfhosted
A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web services, and online tools.
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Service: Dropbox - Alternative: Nextcloud
Service: Google Reader - Alternative: Tiny Tiny RSS
Service: Blogger - Alternative: WordPress
We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.
Good day! I wanted to introduce Meelo. It's an alternative to Plex/Jellyfin tailored for music collectors. It currently supports:
As of today, there is no mobile app. Only a web client is available. The next features on the roadmap are: gapless playback, labels, scrobbling and synced lyrics.
It's free and open-source! Check it out on GitHub: github.com/Arthi-chaud/Meelo
I'm also looking for feature ideas. What other features would make Meelo great for music collectors? I've been thinking of adding support for extra media, like digital booklets.
Greetings. So I've been setting up my homelab, and I was wondering: is there a way to monitor the bandwidth used by a server? I'd like to know how many GB were used over a given period, etc.
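One common way to do this (my suggestion, not something from the original post) is vnStat, which samples interface counters in the background and can report per-day or per-month totals:

# install and enable the vnStat daemon (Debian/Ubuntu)
sudo apt install vnstat
sudo systemctl enable --now vnstat

# daily and monthly totals for a given interface (eth0 is an assumption)
vnstat -i eth0 -d
vnstat -i eth0 -m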
Hey, so I'd like to be clear that all this web stuff is new to me lol, I only tend to mess with game servers.
I am setting up a website for my game servers. On it, I plan to host a hiscores page served by an app on port 8000. I can access it via http://sub.domain:8000 and http://ip:8000, but not via HTTPS, even though https://sub.domain and https://ip work fine.
I either need access to it via HTTPS with the port, or via HTTPS and all traffic sent to port 8000.
The app and servers are hosted on a Windows VPS. I set up SSL and the web server using XAMPP Apache, and I have a web host and a wildcard SSL certificate with Ionos.
If anybody could help with this I would be so appreciative lol, I have been pulling my hair out over this for nearly 48 hours. Thank you.
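For anyone in the same spot, the usual fix (a sketch, not a tested config; the hostname and certificate paths are placeholders) is to let Apache terminate TLS and reverse-proxy to the app on port 8000, so https://sub.domain serves the hiscores page without exposing the port:

# httpd-ssl.conf sketch; mod_proxy and mod_proxy_http must be enabled
<VirtualHost *:443>
    ServerName sub.domain
    SSLEngine on
    SSLCertificateFile "conf/ssl.crt/wildcard.crt"
    SSLCertificateKeyFile "conf/ssl.key/wildcard.key"

    # forward all HTTPS traffic to the hiscores app on port 8000
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>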
Hey, r/selfhosted!
HortusFox is a collaborative plant management and tracking system for green thumb (and non-green thumb) self-hosters.
The application's current feature set includes:
The project launched in 2023 and just released its milestone v4.0 update, which introduces:
Release | Website | Source Code
Disclaimer: I am not the developer of HortusFox, but am a big fan of the project and am posting on their behalf.
Hey all, I am trying to run Immich on my Hetzner Cloud instance and combine it with Traefik. But I am running into issues setting up Traefik with Docker.
My Traefik docker-compose is here: https://pastebin.com/1EJtdPWn
I tried a lot of different configurations, but even after removing all Docker networks and containers and running only the Traefik container, I still always get these kinds of errors…
2025-02-03T23:46:32Z ERR error="accept tcp [::]:8080: use of closed network connection" entryPointName=traefik
2025-02-03T23:46:32Z ERR error="accept tcp [::]:80: use of closed network connection" entryPointName=web
2025-02-03T23:46:32Z ERR Error while starting server error="accept tcp [::]:80: use of closed network connection" entryPointName=web
2025-02-03T23:46:32Z ERR Error while starting server error="accept tcp [::]:8080: use of closed network connection" entryPointName=traefik
2025-02-03T23:46:32Z ERR error="accept tcp [::]:443: use of closed network connection" entryPointName=websecure
2025-02-03T23:46:32Z ERR error="close tcp [::]:443: use of closed network connection" entryPointName=websecure
I don't know what to do… I can access the dashboard on 8080, but when I then also start Immich, I get a 404 or a Gateway Timeout. It never binds…
The errors occur even when I don't run Immich. I'm running Immich with this docker-compose (https://pastebin.com/4MigQ18x) and this docker-compose.override.yml (https://pastebin.com/VS5fBhV0).
So it seems that Traefik can't reach Immich, even though the Traefik panel at 8080 is accessible… No other container is running, and nothing else is listening on 80 or 443… If I stop Traefik, the ports are not claimed by anything else…
I wondered whether this could be an issue related to Hetzner's IPv6?
I really appreciate any help! I am now kind of lost…
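A few checks that might help narrow this down (generic Docker troubleshooting, not specific to this setup):

# confirm what is actually bound on the host
sudo ss -tlnp | grep -E ':80|:443|:8080'

# confirm Traefik and Immich share a Docker network ('proxy' is an assumed name)
docker network inspect proxy

# watch Traefik discover (or fail to discover) the Immich router
docker logs -f traefik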
What is Gokapi?
Gokapi is a lightweight, self-hosted file-sharing platform designed for secure and efficient sharing with optional end-to-end encryption, expiring files and download restrictions. An API is also available. https://github.com/Forceu/Gokapi/
Hello /r/selfhosted! A lot of work has been put into the project in recent months to make Gokapi more usable for companies and individuals that require proper user management.
As a lot of things have changed under the hood with significant changes to the database structure and API, we would like you to be our guinea pigs so we can find any major bugs before we release it to the public. You can find the pre-release here: https://github.com/Forceu/Gokapi/releases/tag/v2.0.0-beta1
New users
docker run -v gokapi-data:/app/data -v gokapi-config:/app/config -p 127.0.0.1:53842:53842 -e TZ=UTC docker.io/f0rc3/gokapi:latest-dev
(change TZ to your timezone if required)
Existing users
If you are already running an instance, make sure to have a backup of all data - there might be things that break your installation, and once you upgrade, there is no going back. Also, please read the notes on upgrading: https://github.com/Forceu/Gokapi/releases/tag/v2.0.0-beta1
To try the beta, replace the gokapi:latest tag with gokapi:latest-dev.
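For example, pulling the beta image before recreating the container (image path taken from the command above):

docker pull docker.io/f0rc3/gokapi:latest-dev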
We would love any bug reports, feedback or pull requests! And thank you for being such a great community :)
Hi everyone,
I'm trying to set up Paperless-ngx in an LXC container on Proxmox, but I’m running into an issue when changing the storage locations. Hoping someone here can help!
What I’m Trying to Do
I installed Paperless-ngx using a Proxmox Helper Script in an LXC container. By default, Paperless stores its data inside /opt/paperless/, but I want to store everything on my ZFS pool (WDRed) instead of the SSD where the container is running.
What I Did:
I created the following directories on my ZFS pool:
/WDRed/data/paperless/consume/
/WDRed/data/paperless/media/
/WDRed/data/paperless/data/
/WDRed/data/paperless/static/
Then, I mounted them inside the LXC container like this:
pct set 100 -mp0 /WDRed/data/paperless/consume,mp=/mnt/paperless/consume
pct set 100 -mp1 /WDRed/data/paperless/media,mp=/mnt/paperless/media
pct set 100 -mp2 /WDRed/data/paperless/data,mp=/mnt/paperless/data
pct set 100 -mp3 /WDRed/data/paperless/static,mp=/mnt/paperless/static
Inside the container, these directories exist and seem to be mounted correctly. I also updated /opt/paperless/paperless.conf with the new paths:
PAPERLESS_CONSUMPTION_DIR=/mnt/paperless/consume
PAPERLESS_DATA_DIR=/mnt/paperless/data
PAPERLESS_MEDIA_ROOT=/mnt/paperless/media
PAPERLESS_STATICDIR=/mnt/paperless/static
BUT!
After changing these paths, Paperless-ngx stops working. When I open the web interface, I get a blank page with just the Paperless-ngx logo and this message: "Still loading... Are you still here? Hmm, something must have gone wrong. Here’s a link to the documentation."
If I revert the config to the default paths, everything works fine again.
Any ideas on what could be going wrong? Could it be a permissions issue, missing dependencies, or something I overlooked?
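In case it is permissions (a guess: unprivileged LXCs remap UIDs, so host-owned files can show up as nobody:nogroup inside the container), a quick check from inside the container might look like this; the paperless service user name is an assumption:

# do the mounted paths belong to a usable owner?
ls -ld /mnt/paperless/consume /mnt/paperless/data /mnt/paperless/media /mnt/paperless/static

# can the service user actually write there?
sudo -u paperless touch /mnt/paperless/data/.writetest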
Thanks in advance for any help!
I know there are a million guides out there, but I'm struggling to pass through my Intel Arc A380 GPU to my privileged tdarr LXC. I think I've read every guide, and nothing seems to work.
I’m wondering if it’s because I’ve got an AMD CPU with iGPU, but I don’t think that is the case?
On my host Proxmox, I think I’ve got it all right:
root@pve:~# ls -lah /dev/dri/
total 0
crw-rw---- 1 root video 226, 1 Feb 3 23:08 card1
crw-rw---- 1 root render 226, 128 Feb 3 23:08 renderD128
root@pve:~# lsmod | grep i915
i915 3932160 0
drm_buddy 20480 3 amdgpu,xe,i915
ttm 102400 4 amdgpu,drm_ttm_helper,xe,i915
drm_display_helper 233472 3 amdgpu,xe,i915
cec 90112 3 drm_display_helper,xe,i915
i2c_algo_bit 16384 3 amdgpu,xe,i915
video 73728 4 asus_wmi,amdgpu,xe,i915
My LXC conf:
arch: amd64
cores: 4
dev0: /dev/dri/renderD128,gid=104
features: nesting=1
hostname: tdarr
memory: 8000
mp0: /mnt/media/plex,mp=/mnt/media/plex
mp2: /var/tdarr_cache,mp=/tdarr_cache
nameserver: 192.168.5.1
net0: name=eth0,bridge=vmbr0,gw=192.168.5.1,hwaddr=DC:32:22:87:CB:1D,ip=192.168.5.244/24,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-113-disk-0,size=8G
searchdomain: 192.168.1.1
swap: 1024
tags: arr;community-script
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
On my LXC:
root@tdarr:~# cat /etc/group | grep render
render:x:104:root
root@tdarr:~# intel_gpu_top
No device filter specified and no discrete/integrated i915 devices found
root@tdarr:~# vainfo
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.17.0
libva info: User environment variable requested driver 'iHD'
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_17
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [2]
param: 4, val: 0
libva error: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
libva info: va_openDriver() returns 18
vaInitialize failed with error code 18 (invalid parameter),exit
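One mismatch stands out when comparing the host output above with the LXC conf: the host exposes /dev/dri/card1 (plus renderD128), but the conf binds /dev/dri/card0. A sketch of what the device entries might look like instead (the gid values are assumptions; video is commonly 44 and render 104):

dev0: /dev/dri/renderD128,gid=104
dev1: /dev/dri/card1,gid=44
# and drop the lxc.mount.entry lines for card0/renderD128 to avoid double-binding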
Hey everyone,
I have a bunch of courses that I’ve downloaded and stored locally on my PC. I’m looking for a program that can help me organize them into a library, track progress (e.g., mark lessons as watched), and maybe even resume a lesson from where I left off.
Ideally, I’d like something that:
- Lets me browse and categorize courses easily.
- Tracks watched/unwatched lessons.
- Remembers where I stopped in a lesson.
- Works offline (since the courses are stored locally).
I’ve considered using Jellyfin, Plex or Stremio for the video courses, but I’m wondering if there’s a dedicated app built for this kind of use case.
Does anyone know of a program that fits the bill? Or would a custom solution be my best bet?
Thanks in advance!
First of all, consider me a complete idiot when it comes to networking. I decided to jump into the deep end and self-host a Traccar instance as well as a few other things. I've been able to get the services I need running locally on the server, but I'm stuck on port forwarding to access them from an external network.
No matter what I do, the ports always show as closed when checking with yougetsignal and other tools.
To help with troubleshooting, I started a port listener on my main PC (not the server) for port 3333:
I've forwarded port 3333 to the internal IP of this machine through the router settings
I've created firewall rules in windows advanced firewall to allow all connections on port 3333
I've disabled all VPNs
Still, when I go to test, port 3333 shows as closed.
What am I missing?
I must be doing something wrong or completely misunderstanding the goal
Any help you're able to provide would be amazing.
[EDIT : It was CGNAT, just trying to decide now whether to opt out of CGNAT and use a duckdns subdomain to have a static address or go with a reverse proxy, thanks for all the advice]
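For anyone hitting the same wall, the check that reveals CGNAT (a generic sketch, not from the original post) is comparing the WAN IP your router reports with the public IP the internet sees; if they differ, the ISP is NATing you and port forwarding on your router can't help:

# public IP as seen from outside
curl -4 ifconfig.me

# compare with the WAN/internet IP on your router's status page;
# a WAN address inside 100.64.0.0/10 is a telltale sign of CGNAT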
I recently posted about my adventures at the beginning of my self-hosting journey. I have a TrueNAS server that I'm trying to prepare to expose to the outside world, and I'm working on setting everything up right.
Moved my apps into Dockge for easier shell management and easier migration in the future.
Proxy manager: Zoraxy
DNS: Technitium
I bought a domain on porkbun and am working on setting up my reverse proxy, DNS, and open router ports.
Do I create one A record in porkbun that directs *.domainname.domain to the IP of Zoraxy, and then in Zoraxy set up all the local IP:port = app.domainname.domain pairs? I guess I'm confused because porkbun has DNS and Technitium is also DNS, and I'm unsure why I need Technitium.
In regards to opening ports, I open 443 and 80 for my public IP, right?
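For reference, the layout I understand people usually describe (hypothetical names and IPs, not a verified config):

A     domainname.domain    -> your public IP      (at porkbun)
CNAME *.domainname.domain  -> domainname.domain   (at porkbun)
# router: forward 80/443 to the Zoraxy host (e.g. 192.168.1.10)
# Zoraxy: per-app rules such as app.domainname.domain -> 192.168.1.10:8080
# Technitium (optional): answer *.domainname.domain with the LAN IP internally,
# so local clients skip the router round-trip (split-horizon DNS)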
Thanks!
I am trying to learn Nomad with Traefik, so I set up an Ubuntu VM on my laptop to test them out. I followed the instructions to install Nomad from the HashiCorp website, and then followed this tutorial to stand up Traefik and a test container. Unfortunately, I get a "Gateway timeout" error every time I try to connect to my machine using its local IP. If, however, I disable ufw, I can access any container I stand up using Nomad. If I enable ufw and allow port 80, I still cannot access my test container. Why is this? I thought I could open port 80 and then Traefik would forward internally to the port chosen by Nomad.
More strangely, all the ports I specify as "static" are accessible even if the firewall is enabled and I do not allow any of those static ports (and even if I explicitly ufw deny said ports). How is Nomad bypassing the firewall? Why can't Traefik do the same?
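A likely explanation for the static ports (general Docker behavior, not verified against this setup): Docker publishes ports by writing DNAT rules directly into iptables, and those rules are consulted before ufw's chains, so statically published container ports bypass ufw entirely. Two places to look:

# Docker's NAT rules sit outside ufw's control
sudo iptables -t nat -L DOCKER -n

# ufw's own rules live in chains that DNATed traffic never reaches
sudo iptables -L ufw-user-input -n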
Hey everyone,
I’m using FreshRSS as my RSS reader and would love to find a self-hosted solution to bypass paywalls when fetching feeds. I’m mainly looking for a way to handle soft paywalls, where the full article is still accessible somehow.
I’ve tried Full-Text RSS (FiveFilters) in Docker, but it doesn’t work for all sites. Are there better alternatives? Maybe a combination of RSS-Bridge, Wallabag, Mercury Web Parser, or some scraping solutions?
If anyone has a working self-hosted setup for FreshRSS + Paywall Bypass, I’d love to hear your experience and recommendations!
Thanks in advance 🚀
I'm running Nginx Proxy Manager and a database service as apps on TrueNAS SCALE. I need to make the server accessible via HTTPS so the iPhone app will work. I have Duck DNS pointing to the internal IP of the TrueNAS server. Nginx Proxy Manager is on this same IP with a specified port. When I set up the proxy host for the service, everything seems to work, but when I go to the domain, it takes me to the TrueNAS page. It seems like the domain is going directly to the IP address and not to Nginx Proxy Manager.
Is this a router issue or something else?
I made a Web UI for smbstatus command.
https://github.com/milindpatel63/samba-monitor
It shows your active sessions, the share they are accessing, and the locked files of your samba share. Basically everything the smbstatus command displays in terminal.
It also allows for notifications to Discord, which can be excluded for specific IPs.
I'm not yet past hosting a few things like Pi-hole, Plex, and some other basic services. So many guides just give you a docker compose file to customize for your own environment and instruct you to pull the latest image from wherever. But how do I trust that the software I'm running is not malicious or won't turn malicious? Obviously, big-name stuff like Pi-hole, Plex, and Nginx is pretty easy to trust. But for less popular software, how do I trust that someone isn't going to ship a malicious update? How careful do I need to be? There are so many sources and forks of things, and sometimes it's hard to know whether the source you are using is official or a fork. It's easy to spend lots of time troubleshooting port issues and forget to look at the image source and vet it. It's also easy to imagine someone justifying a fork that is tweaked to fit their needs instead of tinkering with the source they can't get to work for whatever reason.
I think I'm comfortable enough creating a unique user with limited access and using that UID and GID to limit permissions, and I'm careful about only mounting necessary volumes, etc. But even those volumes might have lots of data I care about in some way, shape, or form. I'm just not an expert here and, like many newbies, run software on my NAS, which would be pretty difficult to lose. Yes, yes, backups, blah blah. Maybe beyond, say, an encryption attack, someone might worry about their private data being harvested quietly? No shortage of bad things that can happen...
In theory a rogue image shouldn't have access to much if I'm careful, but I'm curious whether there's anything I should watch for. Most guides barely touch on security. Docker and Linux are both known for contributing to a secure ecosystem; I just worry that this holds for people who know what they are doing, not your average schmo editing a copy-pasted compose script.
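Two habits that help with exactly this worry (standard Docker practice, offered as a sketch, with a sample image name): pin images by digest so an upstream tag can't silently change under you, and scan images before running them.

# resolve the current digest of a tag, then pin that digest in your compose file
docker pull lscr.io/linuxserver/nginx:latest
docker inspect --format='{{index .RepoDigests 0}}' lscr.io/linuxserver/nginx:latest
# compose: image: lscr.io/linuxserver/nginx@sha256:<digest>

# scan for known CVEs before first run (trivy is a separate install)
trivy image lscr.io/linuxserver/nginx:latest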
Hello!
I have spent probably 20 hours reading into Docker, VPN setup, and TrueNAS stuff. I grasp the concepts and am fiddling with setting up PIA as my VPN, since I have a few years of this service paid for and don't really want to purchase another provider that supports WireGuard natively. I think Gluetun would be a solution to my problem.
I found this page (https://github.com/qdm12/gluetun-wiki/blob/main/setup/providers/private-internet-access.md) that has a docker compose file. When I try to run it, I get an error ([EFAULT] Failed 'up' action for 'gluetun' app. Please check /var/log/app_lifecycle.log for more details). I have just ignored the snippet of code above the docker compose, which says this:
docker run -it --rm --cap-add=NET_ADMIN --device /dev/net/tun \
-e VPN_SERVICE_PROVIDER="private internet access" \
-e OPENVPN_USER=abc -e OPENVPN_PASSWORD=abc \
-v /yourpath/gluetun:/gluetun \
-e SERVER_REGIONS=Netherlands qmcgaw/gluetun
Do I need to run these commands in the TrueNAS shell before running the YAML docker compose script? If so, is it just copy and paste into the terminal with my username and password applied?
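(From what I can tell, the docker run snippet is the wiki's alternative to compose, not a prerequisite. Expressed as a compose service, the same flags would look roughly like this sketch, with the credentials and volume path as placeholders to replace:)

services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=private internet access
      - OPENVPN_USER=abc          # placeholder
      - OPENVPN_PASSWORD=abc      # placeholder
      - SERVER_REGIONS=Netherlands
    volumes:
      - /yourpath/gluetun:/gluetun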
I would appreciate any help as I am about to bang my head on the wall. Thanks!!
Hey everyone,
I have a spare Realme GT Neo 3T (SD 870, 8 GB RAM) with a working display, but the touch is completely unresponsive. I want to use it as a dedicated Telegram auto-forwarder that stays connected to my home internet and runs without much maintenance.
Since I can’t use the touchscreen, my only ways to control the device are:
ADB commands via my laptop
USB keyboard through the Type-C port
I’m looking for the best way to set this up so it runs smoothly and reliably. Any recommendations for apps, scripts, or automation methods that would work in this scenario?
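Since ADB is on the table, basic no-touchscreen control might look like this (generic adb commands; the tap coordinates are examples to adapt):

# mirror and control the phone from the laptop (scrcpy runs over adb)
scrcpy

# or drive the UI directly with input events
adb shell input tap 540 1200
adb shell input keyevent KEYCODE_HOME
adb shell input text "hello"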
Would really appreciate any advice! Thanks in advance.
I have Dockge running on my TrueNAS server on my home network. I have my DNS set up correctly, and all of the apps that are running can access the internet without a problem.
I recently tried to add a new app in Dockge, and I received this error:
Error response from daemon: Get "https://lscr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I checked my other apps, most of which also use lscr.io/linuxserver, and tried to update them. The same error came up. I have been able to pull from this registry before, and now, without changing my configuration, I can't seem to reach it at all.
According to status.linuxserver.io, the registry should be accessible. Is anyone else experiencing issues, or is this an isolated event? If so, how do I fix it?
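Some checks to separate a registry outage from a local DNS or routing problem (a generic troubleshooting sketch):

# does the registry endpoint answer from this host at all?
curl -v https://lscr.io/v2/

# is it DNS? compare your local resolver against a public one
nslookup lscr.io
nslookup lscr.io 1.1.1.1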
Has anyone here managed to host an OpenStatus instance? Was it worth it? And how has your experience been?
HP is no longer honoring warranty on their Solid State Drives.
I know consumer grade drives aren't recommended for any server equipment, but for those of us on a budget, we have to make choices. I bought a handful of HP drives on sale, and now have had this experience.
An hour and a half of transfers, only to be told they have outsourced their drive support because "the drives had too many issues."
Calling the phone numbers they provide for Multipoint just rings forever.
I intend to email them just so I can follow up with a BBB complaint if they don't correct the issue. It just burns me that they are flagrantly in breach of their contract.
I know HP isn't like an A-list company to a lot of techies these days, but I'm still surprised at this level of brazen enshittification.
Hello all!
I've been beating my head against this wall for the past three weeks, between NXT Platform and Nextcloud, and have been disappointed. NXT Platform was extremely difficult for me to set up, and Nextcloud has been very difficult to customize.
I am looking for an open-source intranet solution where I can set up SSO login via Google Workspace and have mail brought in via app integration and OAuth. I'm open to alternatives and different ways of doing things, as long as I can get it done. I have a VPS with a Linux environment.
Please, any insights would be great. I'm so, so tired of beating my head against this wall. It's for a small business with a small handful of employees. Needs aren't overwhelming: just an intranet solution with email and the ability to link external things.
So I've built a home NAS, and my setup is Unraid with a chunk of the arr stack (Prowlarr, Radarr, Sonarr, Jellyseerr, qBittorrent, and Plex). I currently use the "Excluded file names" feature in qBittorrent to filter out a lot of files that I don't want, e.g. Sample, *.nfo, *.txt, *.exe, *.png, etc.
That was working perfectly, but now that I'm on a private tracker, I need to download the entirety of the torrent in order to not get flagged for incomplete downloads.
Is there a method that I could instead ->
Any suggestions or other ideas I'm open to of course!
(Or maybe even not self hosted, I guess.)
I want to watch a video with a friend once a month, so this system does not need to be robust. I will host the video. Two absolute requirements: she needs to be able to tune in without making an account of any kind whatsoever, and it needs to be browser-based. No standalone software on her end. Voice chat to go along with the video would be nice, but not a requirement.
I'm searching on my own, also. I did find owncast but I'm not too far into their feature list yet. Anyone have any experience with them?
Edit: I guess something like a self hosted zoom with screen sharing would work great. We'd use zoom but I think they have a time limit on meetings for free accounts.
Don’t get me wrong—I absolutely love self-hosting. If something can be self-hosted and makes sense, I’ll run it on my home server without hesitation.
But when it comes to LLMs, I just don’t get it.
Why would anyone self-host models like Qwen or others (via runners such as Ollama) when OpenAI, Google, and Anthropic offer models that are exponentially more powerful?
I get the usual arguments: privacy, customization, control over your data—all valid points. But let’s be real:
Running a local model requires serious GPU and RAM resources just to get inferior results compared to cloud-based options.
Unless you have major infrastructure, you’re nowhere near the model sizes these big companies can run.
So what’s the use case? When is self-hosting actually better than just using an existing provider?
Am I missing something big here?
I want to be convinced. Change my mind.
Hello All,
I'm posting this guide hoping it will help someone. You might be thinking there's already a guide at The Immich CLI | Immich, but it's kind of shit if you're trying to run this with Docker.
In my case, I'm running immich on TrueNAS Scale, so npm isn't available (and I don't want to mess with the OS too much).
Navigate to the folder you want to import from before deploying the container, such as:
cd /mnt/stuff/my-photos
$(pwd)":/import:ro
will map the current location to /import. Then, here's the tricky part: you need to run the commands (such as upload) along with the docker run command. like so:
sudo docker run -it --rm \
-v "$(pwd)":/import:ro \
-e IMMICH_INSTANCE_URL=http://SERVER_IP:2283/api \
-e IMMICH_API_KEY=YOUR_API_KEY \
ghcr.io/immich-app/immich-cli:latest \
upload --recursive /import
Obviously, switch the API key and server IP (and the port, if you used another one) based on your Immich configuration. Since the URL and API key are already passed in as environment variables, you do not need to run the login command.
If you want to run a test first to see what the command would do:
sudo docker run -it --rm \
-v "$(pwd)":/import:ro \
-e IMMICH_INSTANCE_URL=http://SERVER_IP:2283/api \
-e IMMICH_API_KEY=YOUR_API_KEY \
ghcr.io/immich-app/immich-cli:latest \
upload --dry-run --recursive /import
There you go. I hope this helps someone.
My Authentik docker and worker containers are both trying to contact data centers in what looks like Germany, according to an IP address search. Is this anonymous data collection? If so, how can I disable it?
(Also posted this in the r/Authentik subreddit, but I figured some of you are hosting Authentik and might know how to disable this feature.)
Edit: Thanks to u/germanpickles and u/unacceptableuse: adding the environment variable AUTHENTIK_DISABLE_UPDATE_CHECK and setting AUTHENTIK_ERROR_REPORTING__ENABLED to false has stopped the traffic.
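In compose terms, that would look something like this (a sketch based on the variables named above, applied to both the server and worker services):

environment:
  AUTHENTIK_DISABLE_UPDATE_CHECK: "true"
  AUTHENTIK_ERROR_REPORTING__ENABLED: "false"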