/r/selfhosted
A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web services, and online tools.
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Service: Dropbox - Alternative: Nextcloud
Service: Google Reader - Alternative: Tiny Tiny RSS
Service: Blogger - Alternative: WordPress
We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.
What Is SelfHosted, As it pertains to this subreddit?
Hello everyone, does anyone use MailWish for email? The offer they have for their Unlimited - Lifetime plan is tempting me, but I can't find many comments or references to the service, which makes me suspicious.
https://mailwish.com/lifetime-promo-buy/
$129 for unlimited domains and unlimited email accounts, 50GB total storage
I'm going to give a brief description of my setup. It works great, but I feel like I've got so much in place that if something failed, it would be a nightmare to track down.
I have Pihole, a DNS over HTTPS (DoH) server, and two Unbound servers.
When I make a request in my browser, the DNS query for the website goes to my DNS over HTTPS (DoH) server; from there, the request is sent to Pihole, and if it's not blocked, it is then forwarded to my two Unbound servers acting in recursive mode, which have access to the root hints file.
And then of course Pihole's ports go through Traefik, but that's a whole other story.
This doesn't sound too complicated, but there's another layer: I also run an OpenVPN server. The client config needs to point to the Pihole server, so VPN traffic also hits Pihole, the Unbound servers, etc.
So this is where it starts to get a little confusing.
If I am on a Windows machine, and I have configured the network card to use my Pihole server, but then I also connect on that same machine to my OpenVPN server which also has to route through to Pihole, am I just doubling the work it has to do?
I've been reading through the Pihole logs to see what traffic is coming through, and I don't notice any double hits. When I connect to the OpenVPN server, it seems that most of the traffic does come from the VPN server connection, other than the occasional request from the server itself which is routed through Unbound / Pihole.
But the issue is, I've set this up in a way where I can't even get an image in my head of the flow, because I've piled on layers. Pihole doesn't do DoH, so I needed a DoH server to take care of those requests.
The odd part is that it all just seems to work. I've run test after test after test, both online tests for DNSSEC, DoH, etc, and tracing from my machine, and the hops appear to be correct.
However, I feel like if something broke, it would be an absolute nightmare to diagnose as I have to go through each container to figure out the problem.
This isn't even taking into account that I have two Unbound servers, running on different IPs and machines, in case one goes down. I also need to spin up a second Pihole instance on that other machine as a backup and link it to that second Unbound server in case the first one goes down for some reason. That doesn't happen too often; I see maybe 10 minutes of downtime every month or two, depending on what the server host has going on. But it's a pain when it does.
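For reference, the kind of failover I have in mind on the Pihole side is just listing both Unbound servers as upstreams. A minimal sketch, assuming the official pihole/pihole image and its PIHOLE_DNS_ variable (the IPs and port are placeholders, and the variable name may differ on older image versions):
services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    environment:
      - TZ=America/New_York
      # Both recursive Unbound instances as upstreams; Pihole falls back to the
      # second entry if the first stops answering. Example IPs/port only.
      - PIHOLE_DNS_=192.168.1.10#5335;192.168.1.11#5335
    ports:
      - 53:53/tcp
      - 53:53/udp
    restart: unless-stopped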
Hello everyone! So it's essentially what the title says. I'm looking for a storage solution (like SnapRAID + MergerFS or ZFS) for my media server. I'm running Proxmox on an old PC with one NVMe drive (Proxmox) and three 10TB HDDs (media server). The server itself is hosted on an Ubuntu VM. When I first set it up a few months back, I initially set up MergerFS and SnapRAID on Proxmox itself, and while MergerFS worked fine, SnapRAID refused to sync. I went ahead anyway and set up the VM, which now has a fair amount of media files saved on it, and I am sharing it with other people. Recently I've decided to host some more applications and maybe play around with some more VMs, so I want the server to be running 24/7, which was not my original plan; because of this I'm now more worried about having some sort of parity/redundancy. I'll outline my future plans, wants, and needs to hopefully give a better idea of the picture I have in my head for what this server will be/do. While I've read up a lot and watched several videos on SnapRAID, MergerFS, btrfs, Unraid, etc., as well as posted a few times on other subreddits looking for assistance, I still can't quite wrap my head around all of it, and I would greatly appreciate any help. Thanks!
Plans: I plan to continuously add more storage as needed to the server. I'm doing fine with my ~15TB of useable storage at the moment, but inevitably that will run out. I plan for this to be a "one-stop shop" for myself and my friend's media/streaming needs so the data is being and will continue to be accessed frequently and changed somewhat frequently (at least a few times a week).
Needs: I need a storage solution that will give the server some cushion if one of the drives fails. Essentially, I don't want to lose any of the data if I can help it. Also, with this being a server that other people access, I'd rather a drive failure not cause any immediate downtime, so I only have to take the server down when I replace the affected drive. It also needs to be at least somewhat expandable without having to lose or temporarily migrate all the data, as I will be adding storage as needed. Lastly, the solution needs to be as easy on the drives as possible. Unfortunately I don't have the capital to spend on ~$80 replacement drives super often. So possibly something that only spins up drives when they are being accessed and otherwise spins them down.
Wants: Since I already have a bunch of files downloaded, I'd rather not lose any of them. Problem being, I don't know how to actually get a backup of those files and the VM itself with how MergerFS and SnapRAID are currently configured. If I could make the parity drive visible/usable with Proxmox's backup feature, all the data would fit on that drive.
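For reference, the kind of layout I keep seeing recommended (a rough sketch only, assuming two data disks plus one parity disk and made-up mount points, not my actual config):
# /etc/snapraid.conf (sketch)
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

# /etc/fstab entry pooling only the data disks with mergerfs (parity stays outside the pool)
/mnt/disk1:/mnt/disk2 /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0
As I understand it, SnapRAID requires the parity disk to be at least as large as the largest data disk.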
Hi, I'm trying to figure this nightmare out after about two weeks of crazy attempts to make my system better. Would appreciate any help. Sorry for the long message, I'm just out of luck here.
What I'm looking for is someone who can look at my YAML file and maybe point me in the right direction. Once I get this up and running better, I hope to add more containers to this YAML file to continue my process.
If you can also provide tips on how to automate all of this, my assumption is I will make a scheduled task that triggers on boot to kick this YAML off and also lets me rerun it manually when I need to.
Any other pointers would be really appreciated. I don't know if having everything in one YAML is the best method, but it seems to work nicely so far. Also, by doing this, it seems like it will auto-upgrade all my containers, so I don't think I need a separate auto-upgrade method.
The Details:
Synology NAS DS1019+
500GB NVMe (volume 2)
32TB SATA storage pool (volume 1)
16GB RAM
I own a domain through changeip.com and have DDNS turned on to point to my NAS's dynamic IP address. I do not have an SSL certificate at the moment but have been reading about using Let's Encrypt. I would love for all of my connections to use SSL but haven't figured that out yet.
I have created a Ramdisk for Plex Transcoding, and have moved all of my containers and the actual container manager to run on Volume 2.
My hope was to be able to run dockers safely and with an easy way to access them.
My goal is to have these running nicely with each other:
NGINX-Proxy-Manager [NON VPN NETWORK] (STILL SETTING UP / TESTING) - I still don't know exactly what this is doing, but I'm hoping to be able to log in at https://sonarr.myowndomain.com (notice the SSL) instead of using the different ports. I have set it up using the Let's Encrypt ports but haven't completely tested it, since I don't know what I'm supposed to test (and it's not working for what I want to do, I think; I read that maybe Let's Encrypt doesn't allow subdomains, not sure).
Gluetun [VPN NETWORK] I was able to get this running through OPENVPN and NORDVPN. I read about wireguard but just couldn't get it to work with NORDVPN (which I already bought) so I'm sticking with OPENVPN (Even though I have read it's not as fast). But I'm open to Wireguard (if it's easier to get up and running)
Qbittorrent [VPN NETWORK] This should run on the Gluetun network with a kill switch. I seem to have this ok. BUT my problem is do I need a private indexer? I won't use it often. Only for the stuff that Usenet doesn't have I guess but I need it tight before I try using it.
SABNZBD - [NON VPN NETWORK] Will be using NzbGeek which I have an API (so far great service with them). I was going to run this through Gluetun but upon getting that set up, I suffered horrible downloads (7Mbps). Only when I took it out of my original YAML file so that it ran directly through SSL did it go back to its normal 40 to 50Mbps.
Prowlarr - [VPN NETWORK]. I want prowlarr on the VPN Network since it does the searching. But I need it to be able to talk to my NON VPN NETWORK For my Arrs to communicate with it. I can't figure this out.
Radarr, Sonarr, Overseerr - [NON VPN NETWORK]. I think these don't need to be on the VPN, as they use Prowlarr for indexing, so to make things run faster I just want them to go through the NON VPN network.
SO IN SUMMARY: my main issue is how do I get the VPN and NON VPN containers to work together so they can talk to each other nicely? I am having errors with my current YAML and it appears to be around networking.
HERE IS MY YAML
version: "3.8"
# Define networks
networks:
  vpn_network:
    driver: bridge
  nonvpn_network:
    driver: bridge
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp # HTTP proxy (optional)
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      - 8090:8090/tcp # qbittorrent
      - 9696:9696/tcp # prowlarr
    volumes:
      - /volume2/docker/gluetun:/gluetun
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/New_York
      - VPN_SERVICE_PROVIDER=nordvpn
      - VPN_TYPE=openvpn
      - SERVER_CITIES=Atlanta
      - OPENVPN_USER={{{MY USER HERE}}}
      - OPENVPN_PASSWORD={{{MY PASSWORD HERE}}}
    networks:
      - vpn_network
    restart: unless-stopped
  qbittorrent:
    image: linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/New_York
      - WEBUI_PORT=8090
    volumes:
      - /volume2/docker/qbittorrent:/config
      - /volume1/data/torrents:/data/torrents
    network_mode: service:gluetun # Use Gluetun's network
    depends_on:
      gluetun:
        condition: service_healthy
    restart: unless-stopped
  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd:latest
    container_name: sabnzbd
    ports:
      - 8080:8080
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/New_York
    volumes:
      - /volume2/docker/sabnzbd/config:/config
      - /volume2/docker/sabnzbd/downloads:/downloads
      - /volume2/docker/sabnzbd/incomplete:/incomplete-downloads
      - /volume2/docker/sabnzbd/nzbs:/nzbs
    networks:
      - vpn_network
      - nonvpn_network
    restart: unless-stopped
  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/New_York
      - WEBUI_PORT=9696
    volumes:
      - /volume2/docker/prowlarr/config:/config
    networks:
      - vpn_network
      - nonvpn_network
    depends_on:
      gluetun:
        condition: service_healthy
    restart: unless-stopped
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    ports:
      - 8989:8989
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/New_York
    volumes:
      - /volume2/docker/sonarr/config:/config
      - /volume1/data/media/tv:/tv-anime
      - /volume1/data/media/tv:/tv-korean
      - /volume1/data/media/tv:/tv
      - /volume2/docker/sabnzbd/downloads:/downloads
    networks:
      - vpn_network
      - nonvpn_network
    restart: unless-stopped
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    ports:
      - 7878:7878
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/New_York
    volumes:
      - /volume2/docker/radarr/config:/config
      - /volume1/data/media/movies:/movies-anime
      - /volume1/data/media/movies:/movies-korean
      - /volume1/data/media/movies:/movies
      - /volume2/docker/sabnzbd/downloads:/downloads
    networks:
      - vpn_network
      - nonvpn_network
    restart: unless-stopped
  plex:
    image: plexinc/pms-docker:latest
    container_name: plex
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/New_York
      - PLEX_CLAIM=
      - ADVERTISE_IP=http://192.168.1.8:32400/
    ports:
      - "32400:32400/tcp"
      - "3005:3005/tcp"
      - "8324:8324/tcp"
      - "32469:32469/tcp"
      - "32410:32410/udp"
      - "32412:32412/udp"
      - "32413:32413/udp"
      - "32414:32414/udp"
    volumes:
      - /volume2/docker/plex/config:/config
      - /volume1/data/media:/media
      - /tmp/plexramdisk:/transcode
    networks:
      - nonvpn_network
      - vpn_network
    restart: unless-stopped
  overseerr:
    image: sctx/overseerr
    container_name: overseerr
    environment:
      - LOG_LEVEL=debug
      - TZ=America/New_York
      - PUID=1027
      - PGID=65536
    ports:
      - "5055:5055"
    volumes:
      - /volume2/docker/overseerr:/app/config
    networks:
      - nonvpn_network
      - vpn_network
    restart: unless-stopped
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    ports:
      - "800:80"
      - "4430:443"
      - "810:81"
    volumes:
      - ./data:/data
      - /volume2/docker/nginx-proxy-manager/letsencrypt:/etc/letsencrypt
    networks:
      - nonvpn_network
      - vpn_network
    restart: unless-stopped
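One pattern I've read about for the VPN / non-VPN split (a sketch only, I haven't verified it): attach Gluetun itself to both networks, run Prowlarr with network_mode: service:gluetun so it shares the tunnel, and let the non-VPN containers reach it through the gluetun container name on the already-published port. Roughly:
  gluetun:
    # ...same as above, but joined to both networks so non-VPN containers can reach it
    networks:
      - vpn_network
      - nonvpn_network
  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
    network_mode: service:gluetun   # shares Gluetun's network; no networks: or ports: of its own
    depends_on:
      gluetun:
        condition: service_healthy
Sonarr and Radarr would then stay on nonvpn_network and use http://gluetun:9696 as the Prowlarr address, since port 9696 is already declared on the gluetun service.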
Hi guys,
Since yesterday I have a problem which is driving me nuts. A couple of days ago I set up a home server for hosting different servers (TeamSpeak, Minecraft, etc). Yesterday there was a short power outage, and since then I cannot connect using external IPs, only LAN. I can't even use Remote Desktop into the server with the public IP, only the local address (which was possible before).
Weird thing is, before I set up the server I was hosting them from my PC, and I can't even do that anymore, like the firewall is blocking all the traffic. I tried disabling the firewall on my PC (both private and public profiles), but that didn't work either.
I checked the firewall rules and for IP changes, even reset the router and tried another one. On the ISP side they told me everything was normal and no ports were blocked (I asked them to specifically check if port 9987 UDP and 30033 TCP were blocked).
Any ideas?
I was hosting all my services in docker on one machine, but I now want to reach some form of HA so I can have a node/location failure without long impact. I have deployed two extra nodes on different locations, created a mesh VPN network with nebula and a docker swarm with a GlusterFS distributed storage.
Now, many of my apps need a Postgres DB. I already have one central Postgres DB for all of them, but I would like a single hostname that I can point my apps to, acting as a proxy in front of a Postgres cluster, so that the DB has no downtime, since DB downtime would make nearly all apps unusable in some way.
Does somebody already have experience with creating a Postgres cluster? I would like to keep it simple, so not too many containers (easy to manage). I don't need an extra web UI; some notifications would be nice, but I have already set up a notification on node failure.
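For context, the simplest shape I've come across so far is plain streaming replication. A minimal sketch, assuming the bitnami/postgresql image and its replication environment variables (usernames, passwords and service names here are placeholders); automatic failover would still need something on top such as repmgr or Patroni, and the single hostname is usually HAProxy or PgBouncer pointed at the current primary:
services:
  pg-primary:
    image: bitnami/postgresql:latest
    environment:
      - POSTGRESQL_REPLICATION_MODE=master
      - POSTGRESQL_REPLICATION_USER=repl_user
      - POSTGRESQL_REPLICATION_PASSWORD=repl_pass
      - POSTGRESQL_USERNAME=app
      - POSTGRESQL_PASSWORD=app_pass
      - POSTGRESQL_DATABASE=appdb
    volumes:
      - pg_primary_data:/bitnami/postgresql
  pg-replica:
    image: bitnami/postgresql:latest
    environment:
      - POSTGRESQL_REPLICATION_MODE=slave
      - POSTGRESQL_REPLICATION_USER=repl_user
      - POSTGRESQL_REPLICATION_PASSWORD=repl_pass
      - POSTGRESQL_MASTER_HOST=pg-primary
      - POSTGRESQL_MASTER_PORT_NUMBER=5432
    depends_on:
      - pg-primary
volumes:
  pg_primary_data: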
First of all I want to preface that I’m an absolute novice when it comes to self hosting.
In my spare time I'm working on a password manager application for my own use. It's basically a small web API with a SQL database.
I’m looking for some guidance/recommendation for a home server to initially host the database and the api and eventually other projects.
Budget range would be 200-300 euro.
I'm trying to run a simple docker setup on my raspberry pi
This is my docker-compose.yaml file
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    command: -H unix:///var/run/docker.sock
    expose:
      - 9000
    volumes:
      - portainer_data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
    network_mode: bridge
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: unless-stopped
    ports:
      - 80:80
    volumes:
      - /etc/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./src:/usr/share/nginx/html
    network_mode: bridge
    depends_on:
      portainer:
        condition: service_started
volumes:
  portainer_data:
This is the /etc/nginx/nginx.conf file referenced above (maybe incorrectly):
events {}
http {
    server {
        listen 80;
        server_name portainer.mydomain.net;
        location / {
            proxy_pass http://127.0.0.1:9000;
        }
    }
}
this is the error I'm getting in the logs
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1,
I'm trying to access Portainer via portainer.mydomain.net.
Help please!
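One thing I suspect, and a sketch of what I might try (not verified): inside the nginx container, 127.0.0.1 is nginx itself, and on the default bridge network containers can't resolve each other by name. So the fix may be to drop network_mode: bridge, put both services on a shared compose network, and proxy to the service name instead:
services:
  portainer:
    # ...as above, but without network_mode: bridge
    networks:
      - web
  nginx:
    # ...as above, but without network_mode: bridge
    networks:
      - web
networks:
  web:
    driver: bridge
and in nginx.conf: proxy_pass http://portainer:9000;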
I already have email hosting (IMAP & SMTP), and I'm looking for an email client I can run 24/7 on my server to reply to emails with a pre-defined message based on the subject line of the incoming mail.
Example:
if subject line
equals testing123
-> reply with text: hi, foo bar. regards, John Smith
What is the easiest way to accomplish this?
Hi guys. I'm lurking here for quite some time, recently joined the sub.
I already self-host some of the apps, but I use a VPS to do so. While it's working fine, I don't like that I need to expose some of the applications to the internet to be able to access them myself. Also, for some apps the hardware requirements are higher (more CPU and more RAM, which costs more), so I need to have 2 or 3 of them.
So I started thinking about setting up a small server at home. I'd like to ask you for some advice on whether what I want to do is achievable, and maybe some tips on setting up some stuff.
So, what do I want to host for sure:
I was thinking about getting a GMKTec NucBox G3 mini PC (Intel N100, 16GB/512GB, WiFi 6, BT 5.2) and starting from there.
Requirements for the setup:
I also thought about getting a NAS to store the photos from Immich since the amount is growing (currently about 350GB), but I also have some video recordings from my trips on external drives and I'd like to keep all this data in one place. And I can imagine the media server will require some space too.
So, can you help me out?
I'm currently backing up about 200GB of files using Kopia and storing them in Backblaze B2. It's been about 2 years of incremental backups, and currently, I'm using nearly 2TB of space on Backblaze.
I had thought that with incremental backups the used storage would be much less. I assume this is because full copies of the data are made for each backup type, i.e. daily, weekly, monthly, annually?
If that's the case, I would likely choose to store fewer versions of each, but my question is: is there any way to prune the backups from Backblaze using the Kopia UI? If I lower the snapshot retention for each type, would that go and delete old snapshots from Backblaze?
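For reference, these are the commands I'm looking at (a sketch from memory, so double-check the flag names against kopia's --help output):
# Tighten retention for a source (example numbers)
kopia policy set /path/to/files --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-annual 2
# Apply retention to existing snapshots, then clean up the repository so space in B2 is actually freed
kopia snapshot expire --all
kopia maintenance run --full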
Have been using Umbrel OS on a Raspberry Pi 400 with a 1TB external SSD for the past couple of years to run a 24/7/365 Bitcoin node/Electrum/Ordinals (nothing else). Well, the SSD is now full. Was originally going to simply clone to a larger SSD and carry on but then discovered that Umbrel offers their own hardware now and it's on sale currently for $369 USD. 16GB RAM and 2TB SSD would be a nice little upgrade to what I was using before and possibly encourage me to experiment a bit with other apps in their ecosystem. I was quickly convinced that was the way to go. Then, I did some Redditing and started down the rabbit hole of homelab/self hosted solutions and holy shit, I now have major analysis paralysis. Would love feedback from anyone willing to share their POV.
On one hand, I love the neat, tidy solution of the Umbrel Home and how it's about as energy efficient as my Pi... and I don't really care that it doesn't even have a display out since I'd prefer to operate it remotely via my main PC anyway.
On the other hand, I know that it's not a popular solution with those more experienced with this world. I am absolutely intrigued by the seemingly unlimited potential of a more open ended platform. But, I'm also concerned about going further down that rabbit hole: the technical know how required, the potential added cost/power consumption and, probably most importantly, what would I actually do with it beyond what I'm already doing? I think that's the main thing, actually... I love the idea of starting my own homelab but not quite convinced I have a great need for it. Again, totally open to suggestions. After some basic research, I could see the immediate value in something like Nextcloud to move away from Google apps but then, I could also just run that on the Umbrel Home. Idk, there's so much I don't know.
Fwiw, I do have some basic technical know-how. I'm a web dev and used to use Linux as my primary OS for a few years (Ubuntu, then later Arch), so I do have at least a little basic familiarity there. I am also familiar with basic Docker concepts, though without much first-hand experience. Will shut up now and listen. Many thanks to anyone patient enough to offer insight and/or guidance!
Hi! For a few weeks now, I've been trying to find the best combination of security and easy access. For me, a VPN is a no-go. What I've done so far (I'm on a Synology):
- Use a reverse proxy for each docker
- Add rules in the firewall (like GeoIP, etc)
- Add protections for DDOS
- Each docker container has its own network
- Use docker secrets for sensitive data
I don't want to use Caddy or Traefik because Synology already offers a reverse proxy solution. For all services which provide 2FA, I use it (like Vaultwarden). My next goal is to add 2FA for all exposed services. At the moment, I use:
- Audiobookshelf -> is exposed,
- Jellyfin -> is exposed,
- MakeMKV -> not exposed,
- MKVToolnix -> not exposed,
- portainer -> not exposed,
- Vaultwarden -> is exposed,
- Authentik -> is exposed,
- Homarr -> is exposed,
- Komga -> is exposed,
- Mealie -> is exposed,
- PaperlessNGX -> is exposed
Most of them support OIDC (SSO) authentication, so I use Authentik with a custom flow that forces 2FA with a TOTP code. Everything is working, BUT my main question is how you manage your mobile apps. For example, I use Symfonium as a client for Jellyfin, and it can only log in via username + password. Another example is the Paperless mobile app. A workaround is to create a temporary password, connect the mobile app and then erase the password, but I'm not a big fan of that. Any solutions?
Back to my quest for high-availability servers, either via docker swarm or a reverse proxy
I am a bit stuck on the distributed file system. The goal is to keep the databases of each instance in sync. My two immediate use cases are VaultWarden and TLS certs, although for the TLS certs I can have one read-write instance (the VPS) and the rest be read-only (local reverse proxies). The idea is that a VPS would be the primary and, if it fails, it rolls over to a local (Tailscale-accessible) server.
My candidates are:
Any suggestions or recommendations? What is everyone using?
What is easy and reliable?
How-to guides would be greatly appreciated.
Thanks
I have two Docker containers, one for Caddy and the other for SiYuan.
SiYuan is pointed to a subdomain of mine managed by Cloudflare.
My Caddyfile:
http://siyuan.mydomain.com {
    reverse_proxy siyuan:6806
}
I have tested a lot of things but it does not work :/
I have the necessary A DNS records for the subdomain, the Caddy ports are open, and everything from the port to the name is correct.
No idea what is left to try here.
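One thing I still want to rule out (a sketch of the layout I believe is needed, with placeholder names; the SiYuan image name is assumed): both containers have to share a Docker network for reverse_proxy siyuan:6806 to resolve, roughly like:
services:
  caddy:
    image: caddy:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    networks:
      - proxy
  siyuan:
    image: b3log/siyuan   # image name assumed
    networks:
      - proxy
networks:
  proxy: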
Hi, does anyone have a suggestion? With Home Assistant my GPS location is tracked, but I would like to see a (city) map of where I have been in a certain period.
I have already looked at OwnTracks, but for that you need their app; in my case I want to use Home Assistant as the tracker. I have also seen Dawarich, which looks like what I'm looking for, but it is still very much in development, just like the Home Assistant connection.
I'm looking to either take the time to learn how to build a searchable database or use existing tools for storing historical manuscripts and public domain books on my website. I've explored various WordPress plugins, but they either lack essential features (like item previews and being a genuine database) or have outdated designs. I also looked into Paperless-ngx, but it doesn't offer the connectivity I want between my website and the software. LMS options like Koha and VuFind seem too rigid. I'd appreciate any recommendations for flexible, modern approaches or frameworks that could fit this project!
After trying almost every possible home dashboard solution, I realized none of them really fit my needs. My priorities were simple:
Unfortunately, most solutions I found either lacked features or were too rigid. But hey, one of the perks of being a developer is that when you can't find what you need, you can build it yourself!
So I built my own home dashboard, and here’s what it does:
And there’s more I’m planning to add as I keep iterating.
Here are some screenshots of the dashboard. I’d love to hear your thoughts!
What do you think? If you’ve tried something similar or have ideas for improvements, I’d love to hear them. Building this has been super fun and rewarding—especially since it now perfectly fits my workflow.
Would you want a detailed walkthrough or even a public repo? 👀
Home page with links and integrations for NGINX proxy manager, proxmox and portainer in action
Links arranged in folders. Folders can be placed in the sidebar or inside other folders.
Youtube links with Youtube integration
Ah! Snippet manager allows me to access my snippets on any machine on my network
All devices on network are listed with IP and MAC.
Hey everyone,
I’m searching for a reliable cloud-hosted front-end for LLMs that I can use across multiple devices like my phone, tablet, and laptop. Unfortunately, I cannot host the LLM myself, so I need a solution that’s easy to access and manage. (willing to overlook if the solution is truly brilliant)
For the past 10 days, I’ve been using TypingMind and have been pretty happy with its performance. However, I have a few concerns:
I’m open to paying a small, one-time fee for a lifetime subscription but cannot commit to monthly payments.
If you know of any alternatives I should consider before settling on TypingMind, I’d greatly appreciate your suggestions!
Note: I have already tried LibreChat and Open WebUI and both are good, but TypingMind seems to be more polished.
Thanks in advance!
I am running Caddy and several other services in Docker. I JUST started to use Docker and learn it. I only have a couple of containers, but I have Caddy set up and working.
Currently I am using host as the network for simplicity but I think I should change that.
Caddy has to be on the same network as the services it proxies; does that include the default bridge? Or do I HAVE to create a new network and put all containers on that network? How do custom networks work, the same as bridge? If so, what is the difference? If I use a custom network, will everything be on the same subnet, versus bridge using multiple subnets?
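My current understanding, sketched out (happy to be corrected): on the default bridge network containers can only reach each other by IP, while on a user-defined network they get built-in DNS and can use service names, e.g.:
services:
  caddy:
    image: caddy:latest
    networks:
      - proxy            # user-defined network: caddy can reach "someapp" by name
  someapp:
    image: nginx:latest  # placeholder for any proxied app
    networks:
      - proxy
networks:
  proxy:
    driver: bridge       # same driver as the default bridge, but with name resolution between members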
Hey everyone,
I am currently looking into selfhosted notification services to monitor various events on my home infrastructure, including:
While ntfy and gotify are commonly recommended solutions, I have concerns about message delivery reliability. Specifically, I'm worried about scenarios where notifications might be lost if:
From my understanding, both of these services lack queuing/retry mechanisms, which could mean important notifications get lost if delivery fails? Am I correct about this limitation?
Can someone tell me if my understanding of ntfy/gotify's delivery mechanisms is correct, whether there are potential workarounds to ensure reliable message delivery, or whether there are alternative solutions that prioritize guaranteed message delivery?
My primary concern is ensuring critical notifications aren't lost when network connectivity is temporarily unavailable.
I understand a bit about matrix.org, Element, etc.
And I want to deploy Synapse on my local network.
So I tried to run Synapse using Docker, and I ran into a problem that I couldn't solve even after searching the internet.
I'm hoping to get some help from Reddit.
My docker compose file is:
services:
  synapse:
    image: matrixdotorg/synapse:latest
    container_name: synapse
    environment:
      - SYNAPSE_CONFIG_PATH=/data/homeserver.yaml
    ports:
      - "0.0.0.0:8008:8008"
      - "0.0.0.0:8009:8009"
      - "0.0.0.0:8448:8448"
    volumes:
      - ./data:/data
    restart: always
    networks:
      - synapse_net
networks:
  synapse_net:
    name: cufoon_net_for_any
    external: true
What I used to generate the config:
docker run -it --rm -v ./data:/data -e SYNAPSE_SERVER_NAME=192.168.1.21 -e SYNAPSE_REPORT_STATS=yes matrixdotorg/synapse:latest generate
After running, it told me the container is unhealthy:
And when I tried to register a user, I got:
It returned a 502. I cannot understand why this is happening; this is really weird.
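What I plan to check next to dig into the unhealthy status (standard Docker commands, plus the /health endpoint the container should expose on port 8008 per the compose file above):
# Why is the container marked unhealthy?
docker logs synapse
docker inspect --format '{{json .State.Health}}' synapse
# Should return OK from the Docker host if Synapse is actually listening
curl http://127.0.0.1:8008/health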
Let me start by saying that I am a total novice regarding networking, and I would like to know if I should be worried about exposing services, and maybe how to protect them.
I have a domain name from Namecheap, let's say abcdomain.com. I made a Cloudflare account and the domain uses Cloudflare's nameservers.
There I have proxied subdomains for my services, as my ISP uses CGNAT.
My question really is: how do bad actors find out about my services if I never advertise them anywhere? I expose them for my personal use outside my LAN. Can they scan all IPs until they find services, or domains, and then try popular subdomains like immich.abcdomain.com etc., or do they have some other method?
I'm asking because the Cloudflare dashboard shows web traffic requests from all around the world to my domain. Is that something I should be worried about?
Hiya, I'd be ever so happy if you could help me out here:
I received a Mac Studio from work as an LLM server, which I set up with Docker, CapRover, Ollama & Mistral.rs, Open WebUI, PocketBase and SurrealDB via some domain, and it works pretty well. But I keep stumbling over limitations and having to find solutions: currently n8n, which has an old version on CapRover, and for the life of me I can't set up the reverse proxy if I build it via Docker. Currently, I can't really test things as it would mean downtime, which I'd like to minimise.
And I know too little, and no one at work can help me. I tried to learn more, but quite a bit of the vocabulary used is not really accessible (or I'm just too dumb...). So, please help me :) Another issue I'm facing: it should be accessible for me; I'd rather concede some performance gains for ease of setting up. I'd really like to go down the rabbit hole, but I need to budget my time...
The question is, what services would I need, especially as a "base" (platform as a service / reverse proxy; am I even using the right words?). I'd need to access the above-mentioned services securely via a domain. Plus, I'd like to have a CI/CD pipeline for SvelteKit apps.
If you'd have recommendations, please let me know.
What I was looking at:
Many thanks in advance!
Hey guys,
It's me again with the medication assistant :D
For anyone who has never heard of MedAssist, it is a self-hosted web application that tracks medication usage. Its main feature is sending an e-mail reminder when it's time to reorder medication.
I have received great feedback, and you guys made me even more excited to spend time on this project. Honestly, I can't believe how many people even visited the GitHub page, thank you a lot! Some of you broke the demo page, which helped me find weak spots, so thx for that as well <3. I received some feature requests and bug reports via Reddit, Lemmy and GitHub. I spent some time working on them and now I want to announce an update (still on the develop branch):
The demo is up and running again, feel free to try it or break it. Fingers crossed there are not many bugs left. If it turns out it's stable enough, I'll merge develop into the main branch and create the latest release.
Planning to add a few more features in the next release.
BREAKING CHANGE: Make sure you back up your database file (medication.db) and modify your docker-compose.
Database path was changed (to achieve uniform path no matter what installation method was chosen), so make sure to update docker-compose with:
volumes:
  - /path/to/database/directory:/app/medassist
Change to:
volumes:
  - /path/to/database/directory:/app/database
Also change the version tag to develop or v0.15.0.develop if you are using Docker.
Link directly to develop branch with new update: https://github.com/njic/medassist/tree/develop
All suggestions are welcome and feel free to star the project on github <3
GitHub: https://github.com/makeplane/plane
Vihar here from the Plane team. We've just shipped our most significant update yet with v0.24.0, and I'm excited to share what we've been building. This release focuses on making Plane lightning-fast, more secure, and even more powerful for self-hosters.
Here's what we've packed into this update.👇
Switch layouts, apply filters, and open issues without loading screens (except Gantt view). While initial load may take a moment, subsequent actions are instantaneous thanks to our new no-load implementation. Note: SSL configuration required for optimal security.
Switch between screens without loading screens using Hyper mode
Instance administrators can now manage all workspaces directly from /god-mode. View the total number of workspaces, track member activity, and access other essential metrics.
You can now use the /callout command to create notes, warnings, tips, and more to highlight important information in your documentation.
All assets on Plane—including attachments, cover images, profile pictures, logos, avatars, and images in issues and page descriptions—will now be stored in a private bucket. This upgrade means enhanced data security, so images uploaded to Plane will only be accessible within the platform. Read more here.
You will now find a new Drafts section in the left navbar where all your drafts across all projects will show up, all under a single tab.
See full updates on our release notes here. Docs to self-host Plane are here. If you're facing any issues with the upgrade, let us know in the comments, or on our Discord.
Hey all,
For work, I have a new application where we need to store customer data for at least 25 years; we will be generating about 150GB of PDFs and test images per day. I need the current year's data available at hand, but can push data older than 365 days to archival storage.
150GB per day means roughly 55TB/year, and to maintain that data for 25 years I am looking at managing around 1.4PB of data. This data will be collected from 3 different production sites and transmitted to our HQ over a VPN connection.
To maintain the current year's data, I will set up a small NAS that will automatically collect the test data from my machines. But where I am lost is the long-term archive. I have looked into solutions like Backblaze and S3, but those seem quite expensive when looking at 25 years of collected data. I could also simply swap out high-density HDDs as they fill up and mark them with the data they hold, but disk rot scares me (also the risk of drives being dropped is a real one), or I can invest in LTO tape storage.
Does anyone have any recommended LTO setups? Hardware and software? I would like easy transfers from maybe a TrueNAS system to the LTO, perhaps daily or weekly, and I would like as much of the software as possible to run on Ubuntu or some other Linux distro. I would also like the LTO to be available over the network so certain users can access the tapes over the LAN; this would allow me to dedicate one person on my IT team to handling the tapes.
Is there an InfluxDB/Grafana-style stack that has a way to display an Android widget? It doesn't have to update in real time, just show some running-tally-type info without having to open a browser or app.
I am currently using Kutt. It works fine, but it requires a login to change the shortened URL to a more memorable one. So I'm thinking of a URL shortener that shortens URLs into a common word from a dictionary list by default. I know it'll significantly decrease the number of URLs it can handle, but that's not a problem for me since it's just for my personal use and some of my friends.
Any recommendations? I will consider making one if there isn't any.