/r/openzfs
OpenZFS: ZFS on BSD and Linux, the open-source edition of ZFS. This subreddit is focused on OpenZFS for BSD and Linux operating systems. The aim here is to hunker down into using OpenZFS (ZFS on Linux, "ZoL") on GNU/Linux and on the BSD operating systems.
While the terminology isn't perfect here yet, I hope you get the idea. We're not here to steal the limelight from /r/zfs, but instead to focus posts on running OpenZFS/ZoL on things like Debian, FreeNAS, CentOS, Ubuntu, FreeBSD, Arch, etc.
(If we're missing an 'open' OS that you think should be listed here, please message the moderators)
We're really excited about using OpenZFS because of how awesome it is! Who would use another filesystem when you have such a feature set?!
Post Filters | Post Filters
---|---
Guides & Tips | Blog Posts
Video | Meta
Linux ZFS | BSD ZFS
If you'd like to help moderate, please contact the existing mod(s) for this subreddit using the link provided below in this sidebar.
(We're still working on fancy flair)
related subreddits:
/r/datahoarder <-- lots of folks use openZFS there!
Disclaimer
This subreddit claims no official connection to open-zfs.org. It merely uses a similar name to describe the open-source edition of ZFS for the sake of community contribution.
Hello, I'm currently utilizing ZFS at work where we've employed a zvol formatted with NTFS. According to ZFS, the data REF is 11.5TB, yet NTFS indicates only 6.7TB.
We've taken a few snapshots, which collectively consume no more than 100GB. I attempted to reclaim space using fstrim, which freed up about 500GB. However, this is far from the 4TB discrepancy I'm facing. Any insights or suggestions would be greatly appreciated.
Our setup is as follows:
```
pool: pool
state: ONLINE
scan: scrub repaired 0B in 01:52:13 with 0 errors on Thu Apr 4 14:00:43 2024
config:
NAME STATE READ WRITE CKSUM
root ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
vda ONLINE 0 0 0
vdb ONLINE 0 0 0
vdc ONLINE 0 0 0
vdd ONLINE 0 0 0
vde ONLINE 0 0 0
vdf ONLINE 0 0 0
NAME USED AVAIL REFER MOUNTPOINT
root 11.8T 1.97T 153K /root
root/root 11.8T 1.97T 11.5T -
root/root@sn-69667848-172b-40ad-a2ce-acab991f1def 71.3G - 7.06T -
root/root@sn-7c0d9c2e-eb83-4fa0-a20a-10cb3667379f 76.0M - 7.37T -
root/root@sn-f4bccdea-4b5e-4fb5-8b0b-1bf2870df3f3 181M - 7.37T -
root/root@sn-4171c850-9450-495e-b6ed-d5eb4e21f889 306M - 7.37T -
root/root@backup.2024-04-08.08:22:00 4.54G - 10.7T -
root/root@sn-3bdccf93-1e53-4e47-b870-4ce5658c677e 184M - 11.5T -
NAME PROPERTY VALUE SOURCE
root/root type volume -
root/root creation Tue Mar 26 13:21 2024 -
root/root used 11.8T -
root/root available 1.97T -
root/root referenced 11.5T -
root/root compressratio 1.00x -
root/root reservation none default
root/root volsize 11T local
root/root volblocksize 8K default
root/root checksum on default
root/root compression off default
root/root readonly off default
root/root createtxg 198 -
root/root copies 1 default
root/root refreservation none default
root/root guid 9779813421103601914 -
root/root primarycache all default
root/root secondarycache all default
root/root usedbysnapshots 348G -
root/root usedbydataset 11.5T -
root/root usedbychildren 0B -
root/root usedbyrefreservation 0B -
root/root logbias latency default
root/root objsetid 413 -
root/root dedup off default
root/root mlslabel none default
root/root sync standard default
root/root refcompressratio 1.00x -
root/root written 33.6G -
root/root logicalused 7.40T -
root/root logicalreferenced 7.19T -
root/root volmode default default
root/root snapshot_limit none default
root/root snapshot_count none default
root/root snapdev hidden default
root/root context none default
root/root fscontext none default
root/root defcontext none default
root/root rootcontext none default
root/root redundant_metadata all default
root/root encryption off default
root/root keylocation none default
root/root keyformat none default
root/root pbkdf2iters 0 default
/dev/zd0p2 11T 6.7T 4.4T 61% /mnt/test
```
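The referenced 11.5T against a logicalreferenced of 7.19T (a ~1.6x blow-up) is the classic symptom of a small volblocksize on raidz: every 8K volume block carries its own parity and padding sectors. As a hedged back-of-envelope check (a simplified model that ignores compression and gang blocks; it assumes ashift=12, which you should verify with `zpool get ashift`):

```shell
# Back-of-envelope raidz allocation for one zvol block.
# Assumes 4 KiB sectors (ashift=12) -- check with: zpool get ashift <pool>
volblock=8192   # volblocksize of the zvol
ashift=12
ndisks=6        # raidz vdev width
parity=1        # raidz1

sector=$((1 << ashift))
# data sectors, rounded up
data=$(( (volblock + sector - 1) / sector ))
# one parity sector per stripe row of (ndisks - parity) data sectors
par=$(( (data + ndisks - parity - 1) / (ndisks - parity) * parity ))
# raidz pads each allocation up to a multiple of (parity + 1) sectors
total=$(( data + par ))
total=$(( (total + parity) / (parity + 1) * (parity + 1) ))
echo "$((total * sector)) bytes allocated per $volblock-byte volume block"
```

At ashift=12 this works out to 16384 bytes allocated per 8K block, i.e. 2x, which is in the same ballpark as the discrepancy you're seeing; recreating the zvol with a larger volblocksize (e.g. 64K) typically shrinks the overhead dramatically.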
I've had an ext4-on-LVM-on-Linux-RAID NAS for a decade-plus that runs Syncthing and syncs dozens of devices in my homelab. Works great. I'm finally building its replacement based on ZFS RAID (my first experience with ZFS), so lots of learning.
I know that:
My question is this: seeing how ZFS is COW, and syncthing would just constantly be flooding the array with small random writes to existing files, isn't it more efficient to make a dataset out of my syncthing data and enable dedup there only?
Addendum: How does this syncthing setting interact with the ZFS dedup settings? copy_file_range
Would it override the ZFS setting or do they both need to be enabled?
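For what it's worth, dedup is a per-dataset property, so confining it to the Syncthing data is straightforward. A sketch with placeholder pool/dataset names:

```shell
# Dedicated dataset for Syncthing data; dedup applies only here
zfs create -o dedup=on -o compression=lz4 tank/syncthing
# Confirm the rest of the pool is unaffected
zfs get -r dedup tank
```

As I understand it, copy_file_range is independent of dedup: on OpenZFS 2.2+ it can be satisfied by block cloning, which shares blocks at copy time without consulting the dedup table. Neither setting overrides the other; they save space through different mechanisms.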
I'm pretty sure my NVMe pool is underperforming due to hitting the ARC unnecessarily.
I read somewhere that this can be fixed via direct I/O. How?
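Two knobs are commonly suggested for this; a hedged sketch with a placeholder dataset name (the `direct` property only exists on OpenZFS 2.3 or newer, so check `zfs version` first):

```shell
# Stop caching file data in ARC for this dataset (metadata is still cached)
zfs set primarycache=metadata nvmepool/dataset

# On OpenZFS 2.3+, honor O_DIRECT and bypass the ARC for aligned I/O
zfs set direct=always nvmepool/dataset
```

`primarycache=metadata` is the blunt instrument and can hurt re-read workloads; `direct` is finer-grained because only O_DIRECT opens bypass the cache.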
We deploy turnkey data-ingest systems that are typically configured with a 12-drive RAID6 array (our RAID host adapters are Atto, Areca, or LSI depending on the hardware or OS version).
I've experimented with ZFS and RAIDZ2 in the past and could never get past the poor write performance. We're used to write performance in the neighborhood of 1.5 GB/s with our hardware RAID controllers, and RAIDZ2 was much slower.
I recently read about dRAID and it sounds intriguing. If I'm understanding correctly, one benefit is that it overcomes the write-performance limitations of RAIDZ2?
I've read through the docs, but I need a little reinforcement on what I've gleaned.
Rounding easy numbers to keep it simple - Given the following:
How would I configure a dRAID? Would it be this?
zpool create mypool draid2:12d:0s:12c disk1 disk2 ... disk12
And in the end, will this be (relatively) equivalent to the typical hardware RAID6 configurations I'm used to?
The files are large, and the RAIDs are temporary nearline storage as we daily transfer everything to mirrored sets of LTO8, so I'm not terribly concerned about the compression & block size tradeoffs noted in the ZFS docs.
Also, one other consideration: our client applications run on macOS while the RAIDs are deployed in the field, and then our storage is hosted on both macOS and Linux (Rocky 8) systems when it comes back to the office. So my other question is: will a dRAID created with the latest OpenZFS on OS X (v2.2.2) be plug-and-play compatible with the latest OpenZFS on Linux, i.e. export the pool on the Mac, import it on Linux, good to go? Or are there some ZFS options that must be enabled to make the same RAID compatible across both platforms? (This is not a high-priority question though, so please ignore it if you never have to deal with Apple!)
I'm not a storage expert, but I did stay at a Holiday Inn Express last night. Feedback appreciated! Thanks!
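One caveat on the proposed layout: in `draid2:12d:0s:12c` the 12 data sectors plus 2 parity exceed the 12 children, so that spec can't fit. With 12 drives, double parity, and no distributed spares, the RAID6-like arrangement is a single 10-data + 2-parity redundancy group. A hedged sketch (pool and disk names are placeholders, mirroring the question's notation):

```shell
# 12 drives, double parity, one 10d+2p redundancy group, no spares
zpool create mypool draid2:10d:12c:0s disk1 disk2 ... disk12
```

Worth noting, hedged: dRAID's headline benefit is much faster rebuilds via distributed spares, not necessarily higher sequential write throughput than RAIDZ2; for large-file streaming workloads a bigger `recordsize` on the dataset often matters more.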
Hello fellows, here's what I'm facing:
I've got a machine with six drive slots and have already used four of them (4 × 4 TiB) as a ZFS pool; let's call it Pool A.
Now I've bought two more drives to expand my disk space, and there are two ways to do so:
Create a new Pool B from the two new disks as a mirror
Combine the two new disks as a mirror and add it to Pool A, which means striping across the original Pool A and the new mirror
Obviously, the second way is more convenient, since I wouldn't need to change any other settings to adapt to a new path (or pool).
However, I'm not sure what would happen if one of the drives broke, so I'm not sure the second way is safe.
So how should I choose? Can anyone help?
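For reference, the second option is a one-liner; the new pair becomes an additional top-level vdev striped with the existing ones. A sketch with placeholder device names:

```shell
# Add the two new disks to the existing pool as a mirrored top-level vdev
zpool add poolA mirror /dev/disk/by-id/NEW_DISK_1 /dev/disk/by-id/NEW_DISK_2
zpool status poolA
```

On safety, hedged: the mirror tolerates one of its two disks failing, but if any top-level vdev is lost entirely the whole pool is lost. Also note the step is effectively one-way; removing a vdev later is restricted (and not possible at all if Pool A contains raidz vdevs).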
Been using ZFS for 10 years, and this is the first time a disk has actually gone bad. The pool is a mirrored-pair and both disks show as ONLINE state but one has 4 read errors now. System performance is really slow, probably because I'm getting slow read times on the dying disk.
Before the replacement arrives, what would be the recommended way to deal with this? Should I 'zpool detach' the bad disk from the pool? Or would it be better to use 'zpool offline'? Or are neither of these recommended for a mirrored pair?
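A common approach, hedged and with placeholder device names: `zpool offline` takes the device out of service but keeps its slot in the pool configuration, while `detach` removes it from the mirror permanently, so offline is usually preferred when a replacement is already on the way:

```shell
# Take the dying disk out of service but keep its place in the mirror
zpool offline tank ata-DYING_DISK

# When the replacement arrives, resilver onto it in place
zpool replace tank ata-DYING_DISK ata-NEW_DISK
zpool status tank    # watch resilver progress
```

Offlining the slow disk should also restore read performance immediately, since the mirror stops waiting on it.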
So... not so long ago I got a new Linux server. My first home server. I got a whole bunch of HDDs and was looking into different ways I could set up a NAS. Ultimately, I decided to go bare ZFS and NFS/SMB shares.
I tried to study a lot to get it right the first time. But some bits still feel "dirty". Not sure how else to put it.
Anyway, now I want to give my partner an account so that she can use it as a backup or cloud storage. But I don't want to have access to her stuff.
So, what is the best way to do this? Maybe there's no better way, but perhaps what are best practices?
Please note that my goal is not to "just get it done". I'd like to learn to do it well.
My Linux server does not have SELinux yet, but I've been reading that this is an option(?). Anyway, if that's the case I'd need to learn how to use it.
Commands, documentation, books, blogs, etc all welcome!
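One conventional pattern, sketched with placeholder pool/user names: a dataset per user with ordinary POSIX permissions, plus ZFS delegation so she can manage her own snapshots:

```shell
# Per-user dataset with its own mountpoint and quota
zfs create -o mountpoint=/srv/partner -o quota=2T tank/partner
chown partner:partner /srv/partner
chmod 700 /srv/partner

# Optionally delegate snapshot rights to her account
zfs allow partner snapshot,mount tank/partner
```

Caveat worth stating plainly: POSIX permissions stop casual access, but root (you) can still read everything. For the "I can't access her stuff" goal, a separate passphrase-encrypted dataset that only she unlocks (`zfs create -o encryption=on -o keyformat=passphrase ...`) is the closest ZFS-native fit.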
Good day.
zpool status oldhddpool shows:
state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
wwn-0x50014ee6af80418b FAULTED 6 0 0 too many errors
dmesg: WARNING: Pool 'oldhddpool' has encountered an uncorrectable I/O failure and has been suspended.
Well, before clearing the pool I checked for bad blocks:
$ sudo badblocks -nsv -b 512 /dev/sde
Checking for bad blocks in non-destructive read-write mode
From block 0 to 625142447
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: done
Pass completed, 0 bad blocks found. (0/0/0 errors)
------------
After this I ran:
zpool clear oldhddpool ##with no warnings
zpool scrub oldhddpool
But the pool still reports I/O errors, and 'zpool scrub oldhddpool' freezes (only a reboot helps).
I don't understand:
state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
Ubuntu 23.10 / 6.5.0-17-generic / zfs-zed 2.2.0~rc3-0ubuntu4
Thanks.
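Since badblocks came back clean, the fault may be in the link (cable, controller, power) rather than the platters. A hedged diagnostic sketch to narrow it down before the next clear/scrub attempt:

```shell
# What the ZFS event daemon recorded around the failure
zpool events -v | less

# The drive's own error counters and self-test log
smartctl -a /dev/sde

# Kernel-side link resets or ATA errors pointing at cabling
dmesg | grep -iE 'sde|ata|reset'
```

Repeated I/O suspensions on a drive that passes badblocks quite often turn out to be a flaky SATA cable or backplane connection.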
Details about the pool provided below.
I have a raidz2 pool with a cache drive. I would have expected the cache drive to be used only during reads.
From the docs:
Cache devices provide an additional layer of caching between main memory and disk. These devices provide the greatest performance improvement for random-read workloads of mostly static content.
A friend is copying 1.6TB of data from his server into my pool, and the cache drive is being filled. In fact, it has filled the cache drive (with 1GB to spare). Why is this? What am I missing? During the transfer, my network was the bottleneck at 300 Mbps. RAM usage was at ~5 GB.
pool: depool
state: ONLINE
scan: scrub repaired 0B in 00:07:28 with 0 errors on Thu Feb 1 00:07:31 2024
config:
NAME STATE READ WRITE CKSUM
depool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ata-TOSHIBA_HDWG440_12P0A2J1FZ0G ONLINE 0 0 0
ata-TOSHIBA_HDWQ140_80NSK3KUFAYG ONLINE 0 0 0
ata-TOSHIBA_HDWG440_53C0A014FZ0G ONLINE 0 0 0
ata-TOSHIBA_HDWG440_53C0A024FZ0G ONLINE 0 0 0
cache
nvme-KINGSTON_SNV2S1000G_50026B7381EB4E90 ONLINE 0 0 0
and here is its relevant creation history:
2023-06-27.23:35:45 zpool create -f depool raidz2 /dev/disk/by-id/ata-TOSHIBA_HDWG440_12P0A2J1FZ0G /dev/disk/by-id/ata-TOSHIBA_HDWQ140_80NSK3KUFAYG /dev/disk/by-id/ata-TOSHIBA_HDWG440_53C0A014FZ0G /dev/disk/by-id/ata-TOSHIBA_HDWG440_53C0A024FZ0G
2023-06-27.23:36:23 zpool add depool cache /dev/disk/by-id/nvme-KINGSTON_SNV2S1000G_50026B7381EB4E90
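As I understand it, this is expected behavior: the L2ARC is fed asynchronously from blocks near the eviction end of the ARC, and freshly written data passes through the ARC too, so a large inbound copy naturally streams into the cache device even though nothing is being read. The feed rate is governed by module tunables you can inspect:

```shell
# L2ARC feed limits, in bytes per feed interval
cat /sys/module/zfs/parameters/l2arc_write_max
cat /sys/module/zfs/parameters/l2arc_write_boost

# ARC/L2ARC sizes and hit rates, to see whether the cache is earning its keep
arc_summary | less
```

Nothing is wrong per se; the question is only whether those cached blocks will ever be re-read. For a write-mostly pool an L2ARC often contributes little.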
Hello,
I have set up a home NAS using ZFS on the drive. I can cut/paste (i.e. move) files locally in Linux without any problem, but doing a cut/paste through Samba throws an error.
Am I missing anything? I am using a Samba config on ZFS similar to the one I used on ext4, so I am sure I am missing something here.
Any advice ?
I'm on the following Ubuntu/open-zfs:
zfs-2.1.5-1ubuntu6~22.04.2
zfs-kmod-2.1.5-1ubuntu6~22.04.1
I'm looking for an easy way to upgrade to openzfs 2.2.2, is there a guide for this? I was hoping for just an "apt-get upgrade" or something.
Yes just that question. I cannot find what a dnode is in the documentation. Any guidance would be greatly appreciated. I'm obviously searching in the wrong place.
Hello everyone,
I was recently reading more into zfs encryption as part of building my homelab/nas and figured that zfs encryption is what fits best for my usecase.
Now in order to achieve what I want, I'm using zfs encryption with a passphrase but this might also apply to key-based encryption.
So as far as I understand it, the reason I can change my passphrase (or key) without having to re-encrypt all my stuff is that the passphrase (or key) is used to "unlock" the actual encryption key. Now I was thinking that it might be good to back up that key, in case I need to re-import my pools on a different machine if my system dies, but I have not been able to find any information about where to find this key.
How and where is that key stored? I'm using ZFS on Ubuntu, in case that matters.
Thanks :-)
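As I understand the design: the passphrase derives a wrapping key (via PBKDF2), which decrypts the per-dataset master key; the master key is stored, in wrapped form, inside the dataset's own on-disk metadata. So it travels with the pool automatically, and there is nothing separate to back up beyond the passphrase (or keyfile) itself. The relevant properties can be inspected; dataset name is a placeholder:

```shell
# Where the wrapping key comes from and how it is derived
zfs get encryptionroot,keylocation,keyformat,pbkdf2iters tank/secure

# Changing the passphrase only re-wraps the master key; data is untouched
zfs change-key tank/secure
```

This is also why `zfs change-key` is instant: it never touches the encrypted data, only the wrapped copy of the master key.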
Hi all,
Using FreeBSD, is it possible to make a mirror of raidz vdevs?
zpool create a mirror raidz disk1 disk2 disk3 raidz disk4 disk5 disk6 cache disk7 log disk8
I remember doing this on Solaris 10u9, ZFS build/version 22 or 25 (or was it just a dream?).
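As far as I know, ZFS has never allowed nesting vdev types, on Solaris or FreeBSD, so a mirror of raidz vdevs isn't expressible and the command above would be rejected. What is valid (and perhaps what you remember) is a pool striped across multiple raidz vdevs; a sketch keeping the question's placeholder disk names:

```shell
# Valid: a stripe of two raidz vdevs, plus cache and log devices
zpool create tank \
  raidz disk1 disk2 disk3 \
  raidz disk4 disk5 disk6 \
  cache disk7 \
  log disk8
```

Data is striped across the two raidz groups; each group independently tolerates one disk failure, which is likely the redundancy you were after.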
New sub member here. I want to install something like Ubuntu w/ root on ZFS on a thinkpad x1 gen 11, but apparently that option is gone in Ubuntu 23.04. So I'm thinking: install Ubuntu 22.04 w/ ZFS root, upgrade to 23.04, and then look for alternate distros to install on the same zpool so if Ubuntu ever kills ZFS support I've a way forward.
But maybe I need to just use a different distro now? If so, which?
Context: I'm a developer, mainly on Linux, and some Windows, though I would otherwise prefer a BSD or Illumos. If I went with FreeBSD, how easy a time would I have running Linux and Windows in VMs?
Bonus question: is it possible to boot FreeBSD, Illumos, and Linux from the same zpool? It has to be, surely, but it's probably about bootloader support.
Hi folks. While importing the pool, the zpool import command hangs. I then checked the system log; there's a whole bunch of messages like these:
Nov 15 04:31:38 archiso kernel: BUG: KFENCE: out-of-bounds read in zil_claim_log_record+0x47/0xd0 [zfs]
Nov 15 04:31:38 archiso kernel: Out-of-bounds read at 0x000000002def7ca4 (4004B left of kfence-#0):
Nov 15 04:31:38 archiso kernel: zil_claim_log_record+0x47/0xd0 [zfs]
Nov 15 04:31:38 archiso kernel: zil_parse+0x58b/0x9d0 [zfs]
Nov 15 04:31:38 archiso kernel: zil_claim+0x11d/0x2a0 [zfs]
Nov 15 04:31:38 archiso kernel: dmu_objset_find_dp_impl+0x15c/0x3e0 [zfs]
Nov 15 04:31:38 archiso kernel: dmu_objset_find_dp_cb+0x29/0x40 [zfs]
Nov 15 04:31:38 archiso kernel: taskq_thread+0x2c3/0x4e0 [spl]
Nov 15 04:31:38 archiso kernel: kthread+0xe8/0x120
Nov 15 04:31:38 archiso kernel: ret_from_fork+0x34/0x50
Nov 15 04:31:38 archiso kernel: ret_from_fork_asm+0x1b/0x30
Then follows a kernel trace. Does it mean the pool is toast? Is there a chance to save it? I also tried importing it with the -F option, but it doesn't make any difference.
i'm using Arch w/ kernel 6.5.9 & zfs 2.2.0.
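The crash in zil_claim_log_record points at a damaged intent log rather than a wholly lost pool. A read-only import skips ZIL claim/replay entirely, so it is often worth trying before anything destructive; a hedged sketch with a placeholder pool name:

```shell
# Read-only import: no ZIL claim or replay, no writes to the pool
zpool import -o readonly=on poolname

# If it mounts, copy the data off before attempting any repair
```

If the read-only import succeeds, the usual advice is to back up, then recreate the pool and restore, since the ZIL damage itself can't be repaired in place.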
I've moved from openSUSE Leap to Tumbleweed because of a package I needed a newer version of. Whenever there is a Tumbleweed kernel update, it takes a while for OpenZFS to provide a compatible kernel module. Would moving to Tumbleweed Slowroll fix this? Alternatively, is there a way to hold back a kernel update until there is a compatible OpenZFS kernel module?
Hi,
I noticed my Proxmox box's (2+ years with no issues) 10x10TB array's monthly scrub is taking much longer than usual. Does anyone have an idea of where else to check?
I monitor and record all SMART data in influxdb and plot it -- no fail or pre-fail indicators show up, I've also checked smartctl -a on all drives.
dmesg shows no errors, the drives are connected over three 8643 cables to an LSI 9300-16i, system is a 5950X, 128GB RAM, the LSI card is connected to the first PCIe 16x slot and is running at PCIe 3.0 x8.
The OS is always kept up to date; these are my current package versions:
libzfs4linux/stable,now 2.1.12-pve1 amd64 [installed,automatic]
zfs-initramfs/stable,now 2.1.12-pve1 all [installed]
zfs-zed/stable,now 2.1.12-pve1 amd64 [installed]
zfsutils-linux/stable,now 2.1.12-pve1 amd64 [installed]
proxmox-kernel-6.2.16-6-pve/stable,now 6.2.16-7 amd64 [installed,automatic]
As the scrub runs, it slows down and takes hours to move a single percentage point; the time estimate goes up a little every time, but there are no errors. This run started with an estimate of 7 hrs 50 min (which is about normal).
pool: pool0
state: ONLINE
scan: scrub in progress since Wed Aug 16 09:35:40 2023
13.9T scanned at 1.96G/s, 6.43T issued at 929M/s, 35.2T total
0B repaired, 18.25% done, 09:01:31 to go
config:
NAME STATE READ WRITE CKSUM
pool0 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ata-WDC_WD100EFAX-68LHPN0_ ONLINE 0 0 0
ata-WDC_WD100EFAX-68LHPN0_ ONLINE 0 0 0
ata-WDC_WD100EFAX-68LHPN0_ ONLINE 0 0 0
ata-WDC_WD100EFAX-68LHPN0_ ONLINE 0 0 0
ata-WDC_WD100EFAX-68LHPN0_ ONLINE 0 0 0
ata-WDC_WD100EFAX-68LHPN0_ ONLINE 0 0 0
ata-WDC_WD100EFAX-68LHPN0_ ONLINE 0 0 0
ata-WDC_WD100EFAX-68LHPN0_ ONLINE 0 0 0
ata-WDC_WD101EFAX-68LDBN0_ ONLINE 0 0 0
ata-WDC_WD101EFAX-68LDBN0_ ONLINE 0 0 0
errors: No known data errors
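When SMART looks clean but a scrub crawls, per-disk latency during the scrub is usually more telling than the attribute counters, since a drive can slow to a crawl on internal retries without logging an error. A hedged sketch:

```shell
# Per-vdev throughput and latency histogram columns, refreshed every 5 s
zpool iostat -v -l pool0 5
```

A single member with outsized wait times is the usual culprit; WD100EFAX drives in particular are worth double-checking, as some variants are SMR-like under sustained load and degrade in exactly this pattern.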
I am trying to upgrade my current disks to larger capacity. I am running VMware ESXi 7.0 on top of standard desktop hardware with the disks presented as RDM's to the guest VM. OS is Ubuntu 22.04 Server.
I can't even begin to explain my thought process except for the fact that I've got a headache and was over-ambitious to start the process.
I ran this command to offline the disk before I physically replaced it:
sudo zpool offline tank ata-WDC_WD60EZAZ-00SF3B0_WD-WX12DA0D7VNU -f
Then I shut down the server using sudo shutdown, proceeded to shut down the host, swapped the offlined disk with the new disk, powered on the host, removed the RDM disk (matching the serial number of the offlined disk), and added the new disk as an RDM.
I expected to be able to import the pool, except I got this when running sudo zpool import:
pool: tank
id: 10645362624464707011
state: UNAVAIL
status: One or more devices are faulted.
action: The pool cannot be imported due to damaged devices or data.
config:
tank UNAVAIL insufficient replicas
ata-WDC_WD60EZAZ-00SF3B0_WD-WX12DA0D7VNU FAULTED corrupted data
ata-WDC_WD60EZAZ-00SF3B0_WD-WX32D80CEAN5 ONLINE
ata-WDC_WD60EZAZ-00SF3B0_WD-WX32D80CF36N ONLINE
ata-WDC_WD60EZAZ-00SF3B0_WD-WX32D80K4JRS ONLINE
ata-WDC_WD60EZAZ-00SF3B0_WD-WX52D211JULY ONLINE
ata-WDC_WD60EZAZ-00SF3B0_WD-WX52DC03N0EU ONLINE
When I run sudo zpool import tank I get:
cannot import 'tank': one or more devices is currently unavailable
I then powered down the VM, removed the new disk and replaced the old disk in exactly the same physical configuration as before I started. Once my host was back online, I removed the new RDM disk, and recreated the RDM for the original disk, ensuring it had the same controller ID (0:0) in the VM configuration.
Still I cannot seem to import the pool, let alone online the disk.
Please please, any help is greatly appreciated. I have over 33TB of data on these disks, and of course, no backup. My plan was to use these existing disks in another system so that I could use them as a backup location for at least a subset of the data. Some of which is irreplaceable. 100% my fault on that, I know.
Thanks in advance for any help you can provide.
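Since the original disk is physically back in place, a hedged first step is to make the import search the stable by-id device links explicitly, and to try read-only before anything that writes:

```shell
# Search for pool labels under the persistent by-id names
zpool import -d /dev/disk/by-id

# If the pool lists as importable, mount it read-only first to protect the data
zpool import -d /dev/disk/by-id -o readonly=on tank
```

With RDMs, a device node that changed between reboots can make a perfectly healthy disk show as FAULTED/corrupted; pointing the import at /dev/disk/by-id sidesteps that.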
Is it possible to convert a raidz pool to a draid pool? (online)
What does this mean?
zpool status
sda ONLINE 0 0 0 (non-allocating)
What is (non-allocating)?
Thanks
Hello to everyone.
I'm trying to compile ZFS within Ubuntu 22.10, which I have installed on Windows 11 via WSL2. This is the tutorial that I'm following:
https://github.com/alexhaydock/zfs-on-wsl
The commands that I have issued are:
sudo tar -zxvf zfs-2.1.0-for-5.13.9-penguins-rule.tgz -C .
cd /usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule
./configure --includedir=/usr/include/tirpc/ --without-python
(this command is not present in the tutorial, but it is needed)
The full log is here:
https://pastebin.ubuntu.com/p/zHNFR52FVW/
Basically, the compilation ends with this error and I don't know how to fix it:
Making install in module
make[1]: Entering directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module'
make -C /usr/src/linux-5.15.38-penguins-rule M="$PWD" modules_install \
INSTALL_MOD_PATH= \
INSTALL_MOD_DIR=extra \
KERNELRELEASE=5.15.38-penguins-rule
make[2]: Entering directory '/usr/src/linux-5.15.38-penguins-rule'
arch/x86/Makefile:142: CONFIG_X86_X32 enabled but no binutils support
cat: /home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module/modules.order: No such file or directory
DEPMOD /lib/modules/5.15.38-penguins-rule
make[2]: Leaving directory '/usr/src/linux-5.15.38-penguins-rule'
kmoddir=/lib/modules/5.15.38-penguins-rule; \
if [ -n "" ]; then \
find $kmoddir -name 'modules.*' -delete; \
fi
sysmap=/boot/System.map-5.15.38-penguins-rule; \
{ [ -f "$sysmap" ] && [ $(wc -l < "$sysmap") -ge 100 ]; } || \
sysmap=/usr/lib/debug/boot/System.map-5.15.38-penguins-rule; \
if [ -f $sysmap ]; then \
depmod -ae -F $sysmap 5.15.38-penguins-rule; \
fi
make[1]: Leaving directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module'
make[1]: Entering directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule'
make[1]: *** No rule to make target 'module/Module.symvers', needed by 'all-am'. Stop.
make[1]: Leaving directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule'
make: *** [Makefile:920: install-recursive] Error 1
The solution could be here:
https://github.com/openzfs/zfs/issues/9133#issuecomment-520563793
where he says:
Description: Use obj-m instead of subdir-m.
Do not use subdir-m to visit module Makefile.
and so on...
Unfortunately I haven't understood what to do.
I'm running a raidz1-0 (RAID5) setup with four 2 TB data SSDs.
Around midnight, two of my data disks somehow experienced I/O errors (seen in /var/log/messages).
When I investigated in the morning, zpool status showed the following:
pool: zfs51
state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: http://zfsonlinux.org/msg/ZFS-8000-HC
scan: resilvered 1.36T in 0 days 04:23:23 with 0 errors on Thu Apr 20 21:40:48 2023
config:
NAME STATE READ WRITE CKSUM
zfs51 UNAVAIL 0 0 0 insufficient replicas
raidz1-0 UNAVAIL 36 0 0 insufficient replicas
sdc FAULTED 57 0 0 too many errors
sdd ONLINE 0 0 0
sde UNAVAIL 0 0 0
sdf ONLINE 0 0 0
errors: List of errors unavailable: pool I/O is currently suspended
I tried doing zpool clear, but I keep getting the error message: cannot clear errors for zfs51: I/O error
Subsequently, I tried rebooting to see if that would resolve it; however, there was an issue shutting down. As a result, I had to do a hard reset. When the system booted back up, the pool was not imported.
Doing zpool import zfs51 now returns:
cannot import 'zfs51': I/O error
Destroy and re-create the pool from
a backup source.
Even with -f or -F, I get the same error. Strangely, when I do zpool import -F, it shows the pool and all the disks online:
pool: zfs51
id: 12204763083768531851
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
zfs51 ONLINE
raidz1-0 ONLINE
sdc ONLINE
sdd ONLINE
sde ONLINE
sdf ONLINE
Yet when importing by the pool name, the same error shows. I even tried using -fF; it doesn't work.
After trawling through Google and reading up on various ZFS issues, I stumbled upon the -X flag, which has solved similar issues for other users.
I went ahead and ran zpool import -fFX zfs51, and the command seems to be taking long. However, I noticed the 4 data disks having high read activity, which I assume is due to ZFS reading the entire data pool. But after 7 hours, all the read activity on the disks stopped. I also noticed a ZFS kernel panic message:
Message from syslogd@user at Jun 30 19:37:54 ...
kernel:PANIC: zfs: allocating allocated segment(offset=6859281825792 size=49152) of (offset=6859281825792 size=49152)
Currently, the command zpool import -fFX zfs51 seems to be still running (the terminal has not returned the prompt to me). However, there doesn't seem to be any activity on the disks, and running zpool status in another terminal hangs as well.
I'm thinking of trying a read-only import (zpool import -o readonly=on -f POOLNAME) and salvaging the data - can anyone advise on that?
> Pleased to announce that iXsystems is sponsoring the efforts by @don-brady to get this finalized and merged. Thanks to @don-brady and @ahrens for discussing this on the OpenZFS leadership meeting today. Looking forward to an updated PR soon.
https://www.youtube.com/watch?v=2p32m-7FNpM
--Kris Moore
https://github.com/openzfs/zfs/pull/12225#issuecomment-1610169213