/r/truenas
All things related to TrueNAS, the world's #1 most deployed Storage OS!
Forgive me if this has already been asked and answered, maybe my search terms suck but I can’t find anything certain.
Now that Electric Eel allows pools and vdevs to be expanded onto additional disks, can we also change the layout while expanding the vdev to gain additional redundancy? It's not super critical for my use case, since it's really just for a Plex server, but more redundancy is never a bad thing, right?
Hi, all.
Is it normal for ZFS replication to take days? The dataset was about 1 TB on a GigE network.
I have TrueNAS virtualized in Proxmox (E5-2650 v2 CPUs) with 4 vCPUs assigned. They have been pegged for days (5 and counting) replicating to a ZFS pool on a different Proxmox system. I assume they get pegged because of the encryption (I could not select none).
Is this normal?
Is there a significant risk of data loss if I stop the replication (I don't know how), shut down the TrueNAS server, and give it an additional 12 vCPUs?
Thank you.
Hey everyone, first NAS I'm building, and after some consideration I've decided to go with 2 mirrored vdevs, 4 disks in total (initially it was 6 disks, but I don't have the budget right now). I'd be more at ease with a raidz2, but the speed difference would have more impact than I'm willing to accept. Today, all my data takes 4 TB (we're talking about 10 years' worth of data), but with a baby on the way, I feel it could double pretty quickly. I was aiming to buy 4x8TB disks for 16 TB total storage, 12 usable considering the 20% free space (around 1k euro on Amazon), and upgrade in 6-8 years. However, looking at the disks, it seems that 16 TB drives provide the best cost/TB, but would cost me 1.8k euro (also on Amazon) for 32 TB total storage and 26 TB usable. I was fine with 8 TB, but I can't shake the feeling of missing out on something by not going 16 TB. You all who have way more NAS experience than me: what would you choose? I'd love to hear your opinions and suggestions on all that as well.
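For what it's worth, the two options above can be compared on cost per usable terabyte. A rough sketch using the post's own prices, under two stated assumptions: mirrored pairs halve raw capacity, and 20% of the pool is kept free:

```shell
# Rough cost per usable TB for the two options above.
# Assumptions: mirrors halve raw capacity; 20% free-space headroom.
cost_per_usable_tb() {
    price_eur="$1"; raw_tb="$2"
    # usable = raw / 2 (mirrors) * 0.8 (headroom)
    awk -v p="$price_eur" -v r="$raw_tb" 'BEGIN { printf "%.0f\n", p / (r / 2 * 0.8) }'
}
cost_per_usable_tb 1000 32   # 4x8TB  at ~1.0k EUR -> 78
cost_per_usable_tb 1800 64   # 4x16TB at ~1.8k EUR -> 70
```

So by this crude measure, the 16 TB drives come out slightly cheaper per usable TB, on top of leaving more headroom for growth.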
Hi folks,
so, I just set up a Cloudsync task to Backblaze B2 such that I also have an offsite backup of my files.
Unfortunately, it seems like something is going wrong, as the status report of the running task shows too little data being transferred.
As the source, I selected my top-level dataset (which has child datasets, of course). This dataset in total is currently 1.65 TiB:
But when the task runs, it only reports 399 GiB (and way too few files).
Transfer size differs significantly
Is this some misunderstanding or problem on Layer 8 (=me) or is this a bug? If I look into my Backblaze Bucket, it also seems to correctly transfer child datasets, so it does not seem to be an issue to select the root dataset as the backup source. And also when I did the dry run I think it showed the correct size.
Thanks a lot in advance!
I'm having trouble creating a bootable USB for TrueNAS SCALE for my PC. I will admit I'm inexperienced when it comes to creating bootable media/imaging PCs, but I get the general idea: create the bootable media (in my case a USB), plug it into the PC, change the boot order to boot from it first, disable Secure Boot, and start it up to complete the installation.
Problem is, I have only been able to boot from the USB into TrueNAS SCALE once, and when it did, it loaded with an error that I sadly didn't write down. Since then I haven't been able to boot back into the OS, but I have booted to a pure black screen instead of Windows 11, so it must have booted from it a few other times. I've recreated the image on the USB more than 20 times with different settings, and nothing's working.
Here's what I'm working with:
PC: HP Elitedesk 800 G5 Mini
Seeking advice on how I can create a bootable USB that will work on my PC.
UPDATE: After following you guys' suggestions and some realizations along the way, I was able to install the image onto my SSD and it's up and running. Thank you guys!
Is there a reason to keep a cold spare hdd if you have raidz2? If I have an additional drive around, I could go with raidz3 but my gut tells me raidz2 and a cold spare make sense for something like a 7 drive pool. Maybe it makes more sense when there are more pools using the same drive size so a cold spare lets me rebuild more than one pool in case of drive failure?
I'm running one compose with both Gluetun and qBit on TrueNAS SCALE EE Dockge, running flawlessly; zero issues with torrenting and port forwarding. As you know, when Gluetun boots up or fails a health check, it picks another random port to forward, which I then have to change in qBit.
Is there a way to have qBit detect the forwarded port and adjust it appropriately? If possible I'd love to have this code within the compose to keep it simple and easy. I see within the terminal anytime the port gets forwarded by Gluetun, the port gets logged within a file:
INFO [port forwarding] writing port file /tmp/gluetun/forwarded_port
I also would like this change to be constantly updated during uptime to catch whenever Gluetun changes its port during an unhealthy check.
If this isn't possible through the compose, how could I get this to work within TrueNAS scale? All I have is Dockge on it running all my stacks.
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      - 8080:8080 # qbit
      - 6881:6881 # qbit
      - 6881:6881/udp # qbit
    volumes:
      - /mnt/Swimming/Sandboxes/docker/gluetun/config:/gluetun
    environment:
      - TZ=Australia/Sydney
      - PUBLICIP_API=ipinfo
      - PUBLICIP_API_TOKEN=###########
      - VPN_SERVICE_PROVIDER=protonvpn
      - VPN_TYPE=openvpn
      - VPN_PORT_FORWARDING=on
      - OPENVPN_USER=############+pmp
      - OPENVPN_PASSWORD=###########################
      - UPDATER_PERIOD=24h
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      - PUID=3000
      - PGID=3000
      - TZ=Australia/Sydney
      - WEBUI_PORT=8080
      - TORRENTING_PORT=6881
    volumes:
      - /mnt/Swimming/Sandboxes/docker/qbittorrent/config:/config
      - /mnt/Swimming/MediaServer/downloads/torrents:/mediaserver/downloads/torrents
    restart: unless-stopped
    network_mode: service:gluetun
networks: {}
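One approach to the question above: since Gluetun writes the forwarded port to /tmp/gluetun/forwarded_port (as the log line shows), a small loop can poll that file and push changes into qBittorrent via its WebUI API whenever the value changes. A minimal sketch, with assumptions not taken from the post: the WebUI is reachable at QBIT_URL with QBIT_USER/QBIT_PASS, and the Gluetun /tmp/gluetun volume is shared with whatever container or host runs this script:

```shell
#!/bin/sh
# Sketch: keep qBittorrent's listen port in sync with Gluetun's forwarded port.
# Assumptions (not from the post): WebUI credentials in QBIT_USER/QBIT_PASS,
# and the gluetun volume mounted so PORT_FILE is visible here.

PORT_FILE="${PORT_FILE:-/tmp/gluetun/forwarded_port}"
QBIT_URL="${QBIT_URL:-http://localhost:8080}"

# print the current forwarded port (empty if the file doesn't exist yet)
read_forwarded_port() {
    cat "$1" 2>/dev/null
}

# push a new listen port to qBittorrent via its WebUI API
set_qbit_port() {
    port="$1"
    # login returns a session cookie in the Set-Cookie header
    cookie=$(curl -s -i --data "username=${QBIT_USER}&password=${QBIT_PASS}" \
        "${QBIT_URL}/api/v2/auth/login" | grep -o 'SID=[^;]*')
    curl -s -b "$cookie" \
        --data-urlencode "json={\"listen_port\": ${port}}" \
        "${QBIT_URL}/api/v2/app/setPreferences"
}

last=""
poll_once() {
    # only call the API when the port actually changed
    current=$(read_forwarded_port "$PORT_FILE")
    if [ -n "$current" ] && [ "$current" != "$last" ]; then
        set_qbit_port "$current" && last="$current"
    fi
}

# main loop; re-checks during uptime so unhealthy-restart port changes are caught
# while true; do poll_once; sleep 30; done
```

This could run as a third service in the same compose (sharing the gluetun volume), which keeps everything in one stack as you wanted.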
Dell 3630 with i9
16 GB RAM
256 GB M.2 boot drive
2x 8TB HDD
ElectricEel-24.10.0.2
Looking for some insight on how to get Jellyfin or Plex (mainly Jellyfin) set up.
I've watched all the videos on it, but I think there are variances between versions that don't make sense. I'm new at this NAS stuff and want to run my own media server on my TVs and other computers. Is there any video or tutorial on how to do it that is recent, with the new ElectricEel? I'm probably going to sit down and try a few more times, but there are some things that I don't get or know how to set up. I had it all logged in, but I couldn't point my Jellyfin to my HDDs, which was frustrating.
Hey, I have already read a couple of posts about this topic but couldn't find a solution. I have the Minecraft application from the TrueNAS Community installed. Unfortunately, I didn't configure a host path, so it's using ix_volumes. Is there a way to back up my world data anyway?
Thank you in advance!
One use case I have is to provide a VM to household members when they need it (accessing from a Chromebook), but I don't want it holding on to RAM when not actively in use. Is this possible with TrueNAS SCALE, or is there another option that would work better for me?
Thanks!
What kind of SATA cables do you guys use on your DIY servers built on consumer motherboards?
I am using the regular ones, but they are so stiff and very hard to cable manage. Not only do they look bad, I don't want to cause damage by trying to cable manage too hard.
Any opinions on these cables? https://www.amazon.es/CableDeconn-Velocidad-6Gbps-Cable-Servidor/dp/B00V7NOJIS/ref=sr_1_5?__mk_es_ES=%C3%85M%C3%85%C5%BD%C3%95%C3%91&nsdOptOutParam=true&sr=8-5
I must admit, they do look sketchy despite the reviews..
Hi, I have a Dell mini PC where I can pass through either individual drives or the Intel integrated storage controller. I wonder which is best, or whether there is any difference. I have tried drive passthrough and it works fine.
Hi all, it looks like I just lost 5TB of data which really sucks. I recently created some encrypted datasets through the TrueNAS GUI and moved data over to them. I've done that before plenty of times and TrueNAS just saved the key in its DB. I just did a minor update of TrueNAS and I'm now locked out of two of my datasets after the reboot.
Some of the datasets I created (all in the same way through the GUI) are fine, but the keys are missing for two of the datasets. I exported config backups beforehand, and I've gone through old snapshots of the boot pool to see if the keys are there, but they seem to have vanished.
So this doesn't happen again, does anyone know what circumstances TrueNAS won't save your encryption key if you create an encrypted dataset through the GUI with all the defaults? Feels kind of like a bug to me, but maybe I missed some option?
Also, they are encryption roots so I think there is no way to recover any of the data, but does anyone know any last ditch options I can try? Maybe there is a bug where TrueNAS will reuse an existing key instead of generating a new one or something like that?
Edit: Added a screenshot of the settings I used. I just created another dataset and the keys were added to the TrueNAS database so I have no idea what went wrong.
Edit 2: If anyone comes across this in the future, it was because I renamed the datasets via command line. TrueNAS scale lost track of the association between the key and the dataset. I don't think that TrueNAS should have deleted the key if it didn't match up with a dataset, it should have just left it there IMHO. This person had the same issue as me, but they still had the keys available to recover (https://www.truenas.com/community/threads/recovering-zfs-renamed-encrypted-datasets-missing-on-boot.92553/) also (https://www.truenas.com/community/threads/dataset-encryption-key-deleted-on-rename-export.86450/)
Edit 3: I got my data back! Turns out that TrueNAS keeps regular backups of the system database at `/var/db/system/configs-<UUID>`. I used the script at https://milek.blogspot.com/2022/02/truenas-scale-zfs-wrapping-key.html and modified it to run through all the DB backups. I piped the output through grep and eventually found a key that worked.
Final lessons learnt: 1. Be very careful renaming encrypted datasets! 2. TrueNAS keeps regular backups of the config database
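For anyone repeating the recovery in Edit 3, the loop described there can be sketched roughly as below. Here `extract_keys.sh` is a placeholder name for a local copy of the script from the linked blog post, the configs path is exactly as the post gives it (with the `<UUID>` left unfilled), and it's an assumption that the backups are `.db` files under that directory:

```shell
# Sketch of the Edit 3 recovery loop: run the key-extraction script against
# every config-database backup and filter for candidate keys.
# 'extract_keys.sh' is a placeholder for the blog post's script.

# list every config-database file under a backup directory
list_db_backups() {
    find "$1" -type f -name '*.db' 2>/dev/null
}

# On the TrueNAS host, something like:
# list_db_backups "/var/db/system/configs-<UUID>" | while read -r db; do
#     sh extract_keys.sh "$db"
# done | grep -i key
```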
I'm looking for recommendations on the ideal configuration given the following drive configuration:
4x SATA PNY CS900 2TB (540 TBW)
4x SATA WD161KFGX 16TB
4x Intel U.2 NVMe P4510 7.28 TB
2x Samsung M.2 NVMe 990 PRO 2TB
Server has dual Intel Xeon Gold 6148 CPU @ 2.40GHz (40 cores / 80 Threads) and 256GB ECC memory.
Currently using 10gig network direct attached copper
Current uses:
Booting from 2x PNY SSD mirror, Intel P4510 configured as 1 x RAIDZ1 | 4 wide | 7.28 TiB
Everything else is unconfigured.
NFS:
TV/Music/Movie streaming, one or two clients at a time (currently ~7 TB)
ISO image hosting for Proxmox VMs
Planned:
VM storage for ProxMox cluster, mixed 2.5 gigabit and 10 gigabit network
The only mission critical data is my music, which would take days to rip from CD again, plus acquiring from other sources. Movies / TV are largely transient, and ISO images are easily sourced.
I upgraded to EE but I am struggling to get my VMs to start. When I try to start a VM I get the following error, with all of my VMs.
Every single one of my VMs is doing this, which is really weird. I tried to create a new test VM and I was able to start it up. If I roll back to the previous version of TrueNAS, my VMs load fine.
Error: Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/supervisor/supervisor.py", line 189, in start
if self.domain.create() < 0:
^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/libvirt.py", line 1373, in create
raise libvirtError('virDomainCreate() failed')
libvirt.libvirtError: internal error: qemu unexpectedly closed the monitor: 2024-12-02T20:52:47.772466Z qemu-system-x86_64: warning: This family of AMD CPU doesn't support hyperthreading(8)
Please configure -smp options properly or try enabling topoext feature.
2024-12-02T20:52:47.779590Z qemu-system-x86_64: system firmware block device has invalid size 0
2024-12-02T20:52:47.779598Z qemu-system-x86_64: info: its size must be a non-zero multiple of 0x1000
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 208, in call_method
result = await self.middleware.call_with_audit(message['method'], serviceobj, methodobj, params, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1526, in call_with_audit
result = await self._call(method, serviceobj, methodobj, params, app=app,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1457, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
res = await f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_lifecycle.py", line 58, in start
await self.middleware.run_in_thread(self._start, vm['name'])
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_thread
return await self.run_in_executor(io_thread_pool_executor, method, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1361, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/vm_supervisor.py", line 68, in _start
self.vms[vm_name].start(vm_data=self._vm_from_name(vm_name))
File "/usr/lib/python3/dist-packages/middlewared/plugins/vm/supervisor/supervisor.py", line 198, in start
raise CallError('\n'.join(errors))
middlewared.service_exception.CallError: [EFAULT] internal error: qemu unexpectedly closed the monitor: 2024-12-02T20:52:47.772466Z qemu-system-x86_64: warning: This family of AMD CPU doesn't support hyperthreading(8)
Please configure -smp options properly or try enabling topoext feature.
2024-12-02T20:52:47.779590Z qemu-system-x86_64: system firmware block device has invalid size 0
2024-12-02T20:52:47.779598Z qemu-system-x86_64: info: its size must be a non-zero multiple of 0x1000
Hi everyone, I wanna start by saying I'm a complete noob at running a home NAS and wanted to give it a shot with some spare computer parts I had laying around.
I created an SMB share and can successfully load and pull files onto my NAS from windows and macOS.
Last night I backed up all my photos, essentially dumping 10,000+ photos and videos into a single folder. About 200 GB.
My PC has no problem opening the folder and displaying its contents. But on my Mac, when I open the same folder it just says "Loading…".
I can navigate to other directories on the NAS, see their contents, and upload and download files. But with this large folder it just says "Loading…"; I'm not even sure if it's actually doing anything.
Does anyone have any ideas that could point me in the right direction? Is it my network, my Mac, server resources being slow? Any help will be appreciated thanks.
I am trying to use Audiobookshelf and I want to create a folder for my books, but I can't find any of the folders that I would see on my Windows PC. Where would I find the media folder that I gave local access to? What's the path to shared datasets?
I know that anything related to TrueCharts is no longer compatible, but is there a Docker/Docker Compose alternative for additional community catalogs or collections? Or is there really only the official ones and your own custom ones?
Hi. It’s clearly me that’s done something wrong…
But can someone share their plex storage permissions?
I’ve got app data called Plex_media that I’ve also shared via an SMB
And then I’ve set each of the plex storage options in truenas to that folder.
Within that folder I’ve set up TV/Movies/Personal and added the corresponding data. Mapped Plex categories to see each of those folders and index them accordingly.
Things keep disappearing. Moving. I'll watch half of an episode and then it'll completely vanish.
Everything in the folders stays where it is and all the files are in the correct places; Plex itself just has a mad one after re-indexing the folders. Please send help and some form of straitjacket to put me in. Thx
Hey folks. I’m interested in setting up an offsite backup for my NAS. It’d just be hosting copies of files, so processing and networking speed is pretty irrelevant. I’m running a prebuilt UGREEN box for my home NAS, and I have basically no savvy for PC building.
Any suggestions for parts for a super cheap build? Or even where to start?
I grabbed a Define R5 case on sale a while back, so I can accommodate an ATX motherboard. I plan to use 6 Seagate Exos in RAID Z1 (or maybe 8 in Z2).
I'm definitely new to the enterprise server world, and was torn between TrueNAS and Unraid. I've landed on TrueNAS CORE, and I'm trying to install it on my new (to me) PowerEdge R730XD with 12x 4TB SAS drives, and Google hasn't been my friend so far.
I picked up a 500 GB NVMe M.2 drive that connects via PCIe to use as the TrueNAS boot drive, so as not to waste an entire 4 TB storage disk just for the OS (because, as I understand it, it shouldn't run off a USB drive like Unraid does).
I got it installed with UEFI boot; however, the server doesn't seem to recognize the NVMe drive to boot the OS from.
Does anyone know if there's an easy way to get that to work with my current config, or would it be better to pick up a smaller drive to install in the back for the OS, connected to the PERC H730? I believe that with the H730 card I have, I can install either SAS or SATA drives, but I'd have to do more research on how that works if the suggestion is to pick up a cheap SATA drive. I can always just get a small drive to be safe.
Just trying to get this NAS off the ground to back up an old Drobo 5n I have.
Does anyone over at iXsystems have a clue about web design? The new forums are terrible: the layout, navigation, organization, etc. all scream either "we don't have a clue" or "we let the cave coder who doesn't socialize design the new forums."
The old forums made sense, had logical groupings and a logical setup. The new forums seem totally void of that.
Hi, I'm trying to install Castopod on TrueNAS ElectricEel-24.10.0.2. It always fails the "up" action, and the logs say the castopod container is unhealthy. I've shown the host paths, error, and log below. I'd be very grateful for any troubleshooting steps, and happy to share any other info needed!
Hello,
How reliable is virtualisation on TrueNAS please? I saw a YT video from Craft Computing where he says it is simply not stable.
My QNAP TS-453D (Celeron J4125, 4 cores, 4 threads - 20 GB - 4 x TBHDD + 2 x 1 TB NVMe) runs 20 docker containers stable but I’ve enough of the QTS security issues.
I need to reinstall the file system so it makes hardly any difference to go the QTS or the TrueNAS way. I just want a reliable box to store files on and run containers on for 3 people.
Thanks.
Hello,
I upgraded to ElectricEel-24.10.0.2 from ElectricEel-24.10.0.0, and since then my apps do not start, with the message:
"Failed to start docker for Applications: Docker service could not be started"
The reality is different. My apps started (Nextcloud, Homepage, Immich, Portainer and even other docker apps installed with Portainer) but the TrueNAS GUI does not have access to the apps anymore.
Any idea?
EDIT1: downgrade towards ElectricEel-24.10.0.0 "solves" the issue.
As the title says.
I have $200 credit for Dell that I could use to purchase basically anything offered on Dell.com, and I've wanted to purchase a NAS for some time now to have my own off-cloud, self-managed storage.
I was previously interested in purchasing a Synology, as that's what is typically recommended, or at least that's what I've observed. I took a look at Dell.com and they do have decent monitors and other items. I searched NAS out of curiosity and found they carry a NAS brand called Buffalo. So I'm looking to find out whether that is a reliable NAS for this price range, and whether it could be a good opportunity to begin my journey with home/self storage.
Here is a link to the product selection for the Buffalo NAS I'm referring to:
I'm using TrueNAS CORE. I'm currently on version TrueNAS-13.0-U6.3 and I'm trying to set up a Plex server. I've set up the plugin section on my pool. When I try to install the Plex plugin I get the following error: "Error: 13.2-RELEASE was not found!"
I use TrueNAS Core at home. I'm also not super smart when it comes to network management, it's a hobby, and one I'm not great at.
I have literally spent 6 hours trying to get an Ubuntu VM access to a TrueNAS SCALE SMB share. It simply will not allow me to write to or use the share. I have nuked and re-created the ZFS dataset at least 10 times, tried General, SMB (current), SMB+NFS....
I am accessing as user plex. I have changed the password multiple times and am 10,000% sure it's correct.
from truenas scale console:
ls -ld /mnt/titan/docker
drwxrwx--- 2 root root 2 Dec 1 17:02 /mnt/titan/docker
Per the permissions ACL editor (and I have been through this at least 25 times). I've restarted SMB share about 20 times as well. I've rebooted. For the love of all that is good, what in the hell is the problem!!!
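One thing the listing above does show: /mnt/titan/docker is owned by root:root with mode 770 (drwxrwx---), so the connecting 'plex' user has no filesystem access at all, regardless of what the SMB config allows. A minimal sketch of one possible fix, assuming the share is meant to belong to plex (that ownership choice is an assumption, not from the post):

```shell
# Sketch: hand a share path to a given user/group and keep the 770 mode.
# Run on the TrueNAS host as root; owner/group 'plex' is an assumption.
fix_share_owner() {
    share_path="$1"; owner="$2"; group="$3"
    chown -R "$owner:$group" "$share_path"
    chmod -R 770 "$share_path"
}

# On the host: fix_share_owner /mnt/titan/docker plex plex
```

After that, `ls -ld /mnt/titan/docker` should show plex as the owner, and the SMB-level permissions finally get a chance to apply.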
My NAS was giving issues so I took it down to check the cables (all 3 drives connected to one of the LSI ports were offline). When I restarted it, all the drives were offline.
I replaced the cables, but the zpool isn't automatically coming online with the disks. Any idea how to get it to recognize the disks from the raid and place them back in service?
It was a raid-z2 with 6 disks. One disk is broken (the SATA connector broke off while working with it) so it'll be replaced, but the other 5 disks are online and working fine.
Link to what I see in the disks and storage tabs https://imgur.com/a/xpDYB23
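Since the pool didn't come back on its own after re-cabling, the usual path is to see whether ZFS can find it and then re-import it by hand. A hedged sketch to run on the TrueNAS host ('tank' is a placeholder for the actual pool name), guarded so it degrades gracefully where the ZFS tools aren't installed:

```shell
# Sketch: list and re-import a pool that didn't come back after re-cabling.
# 'tank' is a placeholder pool name; run on the TrueNAS host.
if command -v zpool >/dev/null 2>&1; then
    zpool import || true      # lists pools ZFS can see but hasn't imported
    # zpool import tank       # import by name once it shows up in the list
    # zpool status tank       # the raidz2 should come up DEGRADED, 5 of 6 disks
    status="zpool available"
else
    status="zpool not found here; run these commands on the TrueNAS host"
fi
echo "$status"
```

With 5 of the 6 raidz2 disks healthy, the pool should import in a DEGRADED state, after which the broken disk can be replaced through the GUI as usual.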