/r/openzfs



OpenZFS

OpenZFS: ZFS on BSD and Linux, the open-source edition of ZFS. This subreddit is focused on OpenZFS for FreeBSD and Linux operating systems. The aim here is to hunker down into using OpenZFS or ZoL on GNU/Linux and on the BSD operating systems.

While the terminology isn't perfect here yet, I hope you get the idea. We're not here to steal the limelight from /r/zfs, but instead to focus on posts about running OpenZFS/ZoL on things like Debian, FreeNAS, CentOS, Ubuntu, FreeBSD, Arch, etc.

(If we're missing an 'open' OS that you think should be listed here, please message the moderators)

We're really excited about using OpenZFS because of how awesome it is! Who would use another filesystem when you have such a feature set?!


Post Filters: Guides & Tips | Blog Posts | Video | Meta | Linux ZFS | BSD ZFS

If you'd like to help moderate, please contact the existing mod(s) for this subreddit using the link provided below in this sidebar.

(We're still working on fancy flair)



Disclaimer

This subreddit has no official connection to open-zfs.org. It merely uses a similar name to describe the open-source edition of ZFS for the sake of community contribution.


3

Backup the configuration and restore.

Hello. I am using OpenZFS on my AlmaLinux 9.5 KDE system. It is handling two separate NAS drives in a RAID 1 configuration.

Since I don't know much about its features, I would like to ask whether I can back up the configuration for restoring in case (God forbid) something goes wrong. Also, what is the process for restoring the old configuration if I reinstall the OS or switch to another distribution that supports OpenZFS?

Kindly advise since it is very important for me.

And thank you.
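
For what it's worth, the pool's configuration (vdev layout, properties, datasets) lives in the pool itself, so there is usually nothing separate to back up; after a reinstall you just import the pool again. A minimal sketch, assuming the pool is named tank (substitute your own pool name):

```
# Keep a human-readable record of the layout and properties, just for reference
zpool status tank        > tank-layout.txt
zfs get -r all tank      > tank-properties.txt

# Before wiping the OS: cleanly export the pool so it can be imported elsewhere
sudo zpool export tank

# After reinstalling (or on another distro with OpenZFS installed):
sudo zpool import                          # scan attached disks for importable pools
sudo zpool import -d /dev/disk/by-id tank  # import by name, scanning by-id paths
```

This covers the configuration only, not the data itself; snapshots plus zfs send/receive are the usual answer for the data side.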

1 Comment
2024/11/21
10:44 UTC

9

A ZFS Love Story Gone Wrong: A Linux User's Tale

I've been a Linux user for about 4 years - nothing fancy, just your typical remote desktop connections, ZTNA, and regular office work stuff.

Recently, I dove into Docker and hypervisors, which led me to discover the magical world of OpenZFS. First, I tested it on a laptop running XCP-NG 8.3 with a mirror configuration. Man, it worked so smoothly that I couldn't resist trying it on my Fedora 40 laptop with a couple of SSDs.

Let me tell you, ZFS is mind-blowing! The Copy-on-Write, importing/exporting features are not only powerful but surprisingly user-friendly. The dataset management is fantastic, and don't even get me started on the snapshots - they're basically black magic! 😂

Here's where things got interesting (read: went south). A few days ago, Fedora dropped its 41st version. Being the update-enthusiast I am, I thought "Why not upgrade? What could go wrong?"

Spoiler alert: Everything.

You see, I was still riding that new-ZFS-feature high and completely forgot that version upgrades could break things. The Fedora upgrade itself went smoothly - too smoothly. It wasn't until I tried to import one of my external pools that reality hit me:

zpool: command not found

After some frantic googling, I discovered that the ZFS version compatible with Fedora 41 isn't out yet. So much for my ZFS learning journey... Guess I'll have to wait!

TL;DR: Got excited about ZFS, upgraded Fedora, broke ZFS, now questioning my life choices.

14 Comments
2024/11/01
03:37 UTC

1

ZFS on Root - cannot import pool, but it works

1 Comment
2024/09/18
17:05 UTC

2

Veeam Repository - XFS zvol or pass through ZFS dataset?

0 Comments
2024/09/17
18:42 UTC

2

Am I understanding this correctly? Expandable vdev and a script to gain performance back

I was watching the latest Lawrence Systems video, "TrueNAS Tutorial: Expanding Your ZFS RAIDz VDEV with a Single Drive".

From watching it I understand a few things. First, if you are on raidz1, z2, or z3, you are stuck on that level. Second, you can only add one drive at a time. Third is the question: when you add a drive, you don't end up with the same layout as if you had started with all the drives at once. For example, buying 9 drives and setting up raidz2 up front versus buying 3 drives and adding more as needed to reach a similar raidz2. Tom mentioned a script you can run (the ZFS In-Place Rebalancing Script) that fixes this issue as best it can? You might not get the exact performance back, but you will get the next best thing.

Am I thinking about this correctly?
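
For reference, the rebalancing script Tom mentions works by rewriting every file so its blocks get laid out again across the widened vdev; old data keeps the old data-to-parity ratio until it is rewritten. A stripped-down sketch of that idea (not the actual script, and it ignores snapshots, which keep the old blocks referenced), assuming the data lives under /tank/data:

```
# Rewrite each file in place: copy, verify, then replace the original.
# The real "ZFS In-Place Rebalancing Script" adds more safety checks than this.
find /tank/data -type f -print0 | while IFS= read -r -d '' f; do
    cp -a "$f" "$f.rebalance"              # new copy is written at the new vdev width
    if cmp -s "$f" "$f.rebalance"; then
        mv "$f.rebalance" "$f"             # swap in the rewritten copy
    else
        rm -f "$f.rebalance"
        echo "verification failed, keeping original: $f" >&2
    fi
done
```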

0 Comments
2024/09/15
16:32 UTC

2

My pool disappeared?? Please help

So I have a mirror pool on two 5TB hard disks. I unmounted it a few days ago; yesterday I reconnected the disks and they both say they have no partitions.

What could cause this? What can I do now?

I tried reading the first 20 MB; it is not zeroes but fairly random-looking data, and I see some strings that I recognise as dataset names.

I can't mount it, obviously; it says the pool doesn't exist. The OS claims the disks are fine.

The last thing I remember was letting a scrub finish. It reported no new errors, then I ran sync, unmounted, and exported. On the first try I still had a terminal open on the disk, so it said busy; I tried again and, for the first time ever, it said the root dataset was still busy. I tried once more and it seemed to be unmounted, so I shut the disks off.
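
A few read-only checks that might help narrow this down; the device names here (/dev/sdb and so on) are placeholders for whatever your two disks show up as:

```
# ZFS keeps four copies of its label per device (two at the start, two at the end);
# if the pool was created on whole disks, the labels may live on partition 1
sudo zdb -l /dev/sdb
sudo zdb -l /dev/sdb1

# Ask ZFS to scan specific device paths for importable pools
sudo zpool import -d /dev/disk/by-id

# If the pool is found but won't import normally, a read-only import is the safer next step
sudo zpool import -o readonly=on poolname
```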

4 Comments
2024/09/13
20:45 UTC

1

How to add a new disk as parity to existing individual zpool disks to improve redundancy

1 Comment
2024/09/12
15:52 UTC

1

Preserve creation timestamp when copying

Both ZFS and ext4 support a file creation timestamp. However, if you simply copy a file, it is set to now.

I want to keep the creation timestamp as-is after copying, but I can't find tools that do it. Rsync tells me -N is not supported on Linux, and cp doesn't do it even with the archive flags on. The only difference seems to be that they preserve directory modification dates.

Any solution to copy individual files with the creation timestamp intact, from ext4 to ZFS and vice versa?
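
As far as I know, Linux exposes the creation time (crtime/birth time) read-only through statx, so ordinary tools can read it but not set it on the copy, which is why cp and rsync can't preserve it. A small sketch for at least capturing the values, with the file and directory names purely as placeholders:

```
# Read a file's birth time (GNU stat; prints "-" or 0 if the filesystem doesn't report it)
stat --format='%w  %n' somefile      # human-readable birth time
stat --format='%W %n' somefile       # birth time as seconds since the epoch

# Workaround sketch: record crtimes in a sidecar file so the information survives the copy
find /srcdir -type f -print0 | while IFS= read -r -d '' f; do
    printf '%s\t%s\n' "$(stat --format='%W' "$f")" "$f"
done > crtimes.tsv
```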

2 Comments
2024/09/02
13:12 UTC

0

How to check dedup resource usage changes when excluding datasets?

So I have a 5TB pool. I'm adding 1TB of data that is video and likely will never dedup.

I'm adding it to a new dataset, let's call it mypool/video.

mypool has dedup enabled, because it's used for backup images, so mypool/video inherited it.

I want to run zfs set dedup=off mypool/video after the video data is added and see the impact on resource usage.

Expectations: dedup builds a DDT, and that takes up RAM. I expect that if you turn it off, not much changes right away, since the entries are already in RAM. But after exporting and importing the pool, the difference should be visible, since the DDT is read from disk again and it can now skip that dataset?
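
A couple of commands that should make the before/after visible; they assume the pool really is named mypool:

```
# Summary of dedup table entries and the overall dedup ratio
zpool status -D mypool

# Detailed DDT histogram, including on-disk and in-core sizes per entry
sudo zdb -DD mypool

# Turn dedup off for the video dataset only; note that existing DDT entries stick
# around until the blocks they describe are freed, so compare after export/import too
sudo zfs set dedup=off mypool/video
```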

3 Comments
2024/09/02
13:06 UTC

2

HDD is goint into mega read mode "z_rd_int_0" and more. What is this?

My ZFS pool / HDDs are suddenly reading data like mad. The system is idle. Same after a reboot. See the iotop screenshot linked below, taken after it had already gone through 160GB+.

"zpool status" shows all good.

Never happened before. What is this?
Any ideas? Tips?

Thank you!

PS: Sorry for the title typo. Can't edit that anymore.

https://preview.redd.it/dvruabjzqf5d1.png?width=557&format=png&auto=webp&s=35eb78287b6a916211d7f88756edda2bfbe41ef0

1 Comment
2024/06/08
23:52 UTC

2

Readability after fail

Okay, maybe a dumb question, but if I have two drives in RAID 1, is a single drive readable if I pull it out of the machine? With Windows mirrors, I've had system failures and all the data was still accessible from a member drive. Does OpenZFS allow for that?

2 Comments
2024/06/05
05:13 UTC

0

How would YOU set up openzfs for.. ?

i7 960, 16 GB DDR3, 2x 400 GB Seagate, 2x 400 GB WD, 2x 120 GB SSD, 64 GB SSD.

On FreeBSD.

L2ARC, SLOG, pools, mirror, raid-z? Any other recommended partitions, swap, etc.?

These are the toys I currently have to work with; any ideas?

Thank you.

2 Comments
2024/04/27
01:44 UTC

1

ZFS and the Case of Missing Space

Hello, I'm currently using ZFS at work, where we've employed a zvol formatted with NTFS. According to ZFS, the referenced (REFER) size is 11.5TB, yet NTFS indicates only 6.7TB in use.

We've taken a few snapshots, which collectively consume no more than 100GB. I attempted to reclaim space using fstrim, which freed up about 500GB. However, this is far from the 4TB discrepancy I'm facing. Any insights or suggestions would be greatly appreciated.

Our setup is as follows:

```
  pool: pool
 state: ONLINE
  scan: scrub repaired 0B in 01:52:13 with 0 errors on Thu Apr  4 14:00:43 2024
config:

        NAME        STATE     READ WRITE CKSUM
        root        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            vda     ONLINE       0     0     0
            vdb     ONLINE       0     0     0
            vdc     ONLINE       0     0     0
            vdd     ONLINE       0     0     0
            vde     ONLINE       0     0     0
            vdf     ONLINE       0     0     0

NAME                                                 USED  AVAIL     REFER  MOUNTPOINT
root                                               11.8T  1.97T      153K  /root
root/root                                          11.8T  1.97T     11.5T  -
root/root@sn-69667848-172b-40ad-a2ce-acab991f1def  71.3G      -     7.06T  -
root/root@sn-7c0d9c2e-eb83-4fa0-a20a-10cb3667379f  76.0M      -     7.37T  -
root/root@sn-f4bccdea-4b5e-4fb5-8b0b-1bf2870df3f3   181M      -     7.37T  -
root/root@sn-4171c850-9450-495e-b6ed-d5eb4e21f889   306M      -     7.37T  -
root/root@backup.2024-04-08.08:22:00               4.54G      -     10.7T  -
root/root@sn-3bdccf93-1e53-4e47-b870-4ce5658c677e   184M      -     11.5T  -

NAME        PROPERTY              VALUE                  SOURCE
root/root  type                  volume                 -
root/root  creation              Tue Mar 26 13:21 2024  -
root/root  used                  11.8T                  -
root/root  available             1.97T                  -
root/root  referenced            11.5T                  -
root/root  compressratio         1.00x                  -
root/root  reservation           none                   default
root/root  volsize               11T                    local
root/root  volblocksize          8K                     default
root/root  checksum              on                     default
root/root  compression           off                    default
root/root  readonly              off                    default
root/root  createtxg             198                    -
root/root  copies                1                      default
root/root  refreservation        none                   default
root/root  guid                  9779813421103601914    -
root/root  primarycache          all                    default
root/root  secondarycache        all                    default
root/root  usedbysnapshots       348G                   -
root/root  usedbydataset         11.5T                  -
root/root  usedbychildren        0B                     -
root/root  usedbyrefreservation  0B                     -
root/root  logbias               latency                default
root/root  objsetid              413                    -
root/root  dedup                 off                    default
root/root  mlslabel              none                   default
root/root  sync                  standard               default
root/root  refcompressratio      1.00x                  -
root/root  written               33.6G                  -
root/root  logicalused           7.40T                  -
root/root  logicalreferenced     7.19T                  -
root/root  volmode               default                default
root/root  snapshot_limit        none                   default
root/root  snapshot_count        none                   default
root/root  snapdev               hidden                 default
root/root  context               none                   default
root/root  fscontext             none                   default
root/root  defcontext            none                   default
root/root  rootcontext           none                   default
root/root  redundant_metadata    all                    default
root/root  encryption            off                    default
root/root  keylocation           none                   default
root/root  keyformat             none                   default
root/root  pbkdf2iters           0                      default



/dev/zd0p2       11T  6.7T  4.4T  61% /mnt/test
```
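
A couple of views that break the usage down further might help localise the missing ~4TB (snapshots vs. the zvol itself vs. raidz allocation overhead); this just reuses the names from the output above:

```
# Per-dataset breakdown: usedbysnapshots, usedbydataset, usedbyrefreservation, usedbychildren
zfs list -o space -r root

# Pool-level view per vdev, where raidz parity and padding overhead shows up
zpool list -v

# Logical (pre-raidz, pre-padding) vs. physical usage for the zvol
zfs get volblocksize,used,referenced,logicalused,logicalreferenced root/root
```

If I recall correctly, a small volblocksize (8K here) on a raidz1 vdev can cause significant parity-and-padding overhead, which would show up exactly as referenced being much larger than logicalreferenced.
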
2 Comments
2024/04/08
20:06 UTC

2

Syncthing on ZFS a good case for Deduplication?

I've had an ext4-on-LVM-on-Linux-RAID based NAS for a decade+ that runs syncthing and syncs dozens of devices in my homelab. Works great. I'm finally building its replacement based on ZFS RAID (my first experience with ZFS), so lots of learning.

I know that:

  1. Dedup is a good idea in very few cases (let's assume I wait until fast dedup stabilizes and makes it into my system)
  2. That most of my syncthing activity is small modifications to existing files
  3. That random async writes are harder/slower on a raidz2. Syncthing would be ever-present, but the load on the new NAS would otherwise be light.
  4. Syncthing works by writing new files and then deleting the old ones

My question is this: seeing how ZFS is COW, and syncthing would just constantly be flooding the array with small random writes to existing files, isn't it more efficient to make a dataset out of my syncthing data and enable dedup there only?

Addendum: How does syncthing's copy_file_range setting interact with the ZFS dedup settings?

Would it override the ZFS setting or do they both need to be enabled?
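
Since dedup is a per-dataset property, the split you describe is straightforward; a sketch, assuming the pool is called tank and the syncthing data gets its own dataset (names are placeholders). As a side note, my understanding is that syncthing's copy_file_range maps to block cloning on OpenZFS 2.2+ rather than to dedup, so the two settings are independent of each other.

```
# Dedup only where it might pay off; everything else stays dedup=off (the default)
sudo zfs create tank/syncthing
sudo zfs set dedup=on tank/syncthing
zfs get -r dedup tank                # confirm which datasets inherit what

# Once real data has landed, check whether the dedup ratio justifies the DDT's RAM cost
zpool status -D tank
```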

4 Comments
2024/04/06
18:15 UTC

3

How do I enable directio for my nvme pool?

I'm pretty sure my NVMe pool is underperforming due to hitting the ARC unnecessarily.

I read somewhere that this can be fixed via Direct I/O. How?
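
Two knobs seem relevant here, with the dataset name as a placeholder; note that the second one only exists in newer OpenZFS releases (2.3+), so check your version before relying on it:

```
# Cache only metadata in the ARC for this dataset, so file data is read straight from the NVMe
sudo zfs set primarycache=metadata nvmepool/data

# On OpenZFS 2.3+ there is a per-dataset Direct I/O property; with it, O_DIRECT
# requests from applications can bypass the ARC entirely
sudo zfs set direct=always nvmepool/data     # values: disabled | standard | always

zfs get primarycache,direct nvmepool/data
```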

2 Comments
2024/04/06
14:35 UTC

1

dRAID - RAID6 equivalent

We deploy turnkey data ingest systems that are typically always configured with a 12 drive RAID6 configuration (our RAID host adapters are Atto, Areca, LSI depending on the hardware or OS version).

I've experimented with ZFS and RAIDZ2 in the past and could never get past the poor write performance. We're used to write performance in the neighborhood of 1.5 GB/s with our hardware RAID controllers, and RAIDZ2 was much slower.

I recently read about dRAID and it sounds intriguing. If I'm understanding correctly, one benefit is that it overcomes the write performance limitations of RAIDZ2?

I've read through the docs, but I need a little reinforcement on what I've gleaned.

Rounding easy numbers to keep it simple - Given the following:

  • (12) 10TB drives - equivalent to 100TB usable storage plus 20TB of parity in a typical hardware RAID6
  • 12 bay JBOD
  • 2 COLD spares

How would I configure a dRAID? Would it be this?

zpool create mypool draid2:12d:0s:12c disk1 disk2 ... disk12  
  • draid2 = 2 parity
  • 12d = 12 data disks total (...OR... would it be specified as 10d, i.e. draid2 = 2 parity + 10 data? The 'd' parameter is the one I'm not so clear on: is the data-disk count inclusive of the parity count, or exclusive?)
  • 0s = no hot spares; if a drive dies, a spare will get swapped in
  • 12c = total disks in the vdev, parity + data + hot spares. Again, I'm not crystal clear on this: if I intend to use cold spares, should it be 14c to allocate room for the 2 spares, or is that not necessary?

And in the end, will this be (relatively) equivalent to the typical hardware RAID6 configurations I'm used to?

The files are large, and the RAIDs are temporary nearline storage as we daily transfer everything to mirrored sets of LTO8, so I'm not terribly concerned about the compression & block size tradeoffs noted in the ZFS docs.

Also, one other consideration - our client applications run on macOS while the RAIDs are deployed in the field, and then our storage is hosted on both macOS and linux (Rocky8) systems when it comes back to the office, so my other consideration is: will a dRAID created with the latest version of openzfs for osx v2.2.2 be plug-n-play compatible with the latest version of openzfs on linux, ie export pool on Mac, import on linux, good to go? Or are there some zfs options that must be enabled to make the same RAID compatible across both platforms? (This is not a high priority question though, so please ignore it if you never have to deal with Apple!)

I'm not a storage expert, but I did stay at a Holiday Inn Express last night. Feedback appreciated! Thanks!
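
For what it's worth, my reading of the dRAID syntax for this layout is below: the d value excludes parity, c counts every child actually attached to the vdev, and cold spares sitting on a shelf don't appear anywhere in the spec. Treat it as a sketch rather than gospel:

```
# 12 attached drives, double parity, 10 data disks per redundancy group,
# no distributed (hot) spares; cold spares on the shelf are not part of the vdev
zpool create mypool \
    draid2:10d:12c:0s \
    disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10 disk11 disk12

zpool status mypool    # sanity-check the layout ZFS actually built
```

One caveat: the fast sequential rebuild that makes dRAID attractive comes from its distributed spares, so with 0s and cold spares you give up part of that benefit; whether large sequential writes then reach your 1.5 GB/s target is something I'd benchmark rather than assume.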

0 Comments
2024/03/15
08:33 UTC

1

[Help Request] Stripe over pool or a new pool

Hello fellows, here's what I'm facing:

I've got a machine with 6 drive slots and have already used 4 of them (4x 4TiB) as a ZFS pool; let's call it Pool A.

Now I've bought 2 more drives to expand my disk space, and there are 2 ways to do so:

  1. Create a Pool B with the 2 new disks as a MIRROR

  2. Combine the 2 new disks as a MIRROR and add it into Pool A, which means a stripe over the original Pool A and the new mirror

Obviously, the second way would be more convenient, since I don't need to change any other settings to adapt to a new path (or pool, actually).

However, I'm not sure what would happen if one of the drives broke, so I'm not sure whether the second way is safe.

So how should I choose? Can anyone help?
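
Option 2 is done by adding the new pair as another top-level mirror vdev; a sketch, with poolA and the device paths as placeholder names:

```
# Dry run: -n shows the resulting pool layout without changing anything
sudo zpool add -n poolA mirror /dev/disk/by-id/ata-NEWDISK1 /dev/disk/by-id/ata-NEWDISK2

# If the printed layout looks right, add the mirror vdev for real
sudo zpool add poolA mirror /dev/disk/by-id/ata-NEWDISK1 /dev/disk/by-id/ata-NEWDISK2

zpool status poolA
```

Keep in mind that losing an entire top-level vdev loses the whole pool, so how safe option 2 is depends on the redundancy of the existing 4-drive vdev as much as on the new mirror.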

0 Comments
2024/02/19
09:12 UTC

1

Dealing with a bad disk in mirrored-pair pool

I've been using ZFS for 10 years, and this is the first time a disk has actually gone bad. The pool is a mirrored pair and both disks show as ONLINE, but one now has 4 read errors. System performance is really slow, probably because I'm getting slow read times on the dying disk.

Before the replacement arrives, what would be the recommended way to deal with this? Should I 'zpool detach' the bad disk from the pool? Or would it be better to use 'zpool offline'? Or is neither of these recommended for a mirrored pair?
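
A sketch of the sequence I'd expect here, with tank and the disk names as placeholders:

```
# Stop I/O to the flaky disk but keep it as a pool member (reversible with 'zpool online')
sudo zpool offline tank ata-DYINGDISK

# When the new drive arrives, resilver onto it while the healthy disk keeps serving reads
sudo zpool replace tank ata-DYINGDISK /dev/disk/by-id/ata-NEWDISK
zpool status -v tank        # watch resilver progress

# 'zpool detach' would drop the disk from the mirror permanently instead;
# offline + replace keeps the pool aware of both members until the resilver completes
```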

2 Comments
2024/02/18
03:07 UTC

1

Authentication

So... not so long ago I got a new Linux server. My first home server. I got a whole bunch of HDDs and was looking into different ways I could set up a NAS. Ultimately, I decided to go with bare ZFS and NFS/SMB shares.

I tried to study a lot to get it right the first time. But some bits still feel "dirty". Not sure how else to put it.

Anyway, now I want to give my partner an account so that she can use it as a backup or cloud storage. But I don't want to have access to her stuff.

So, what is the best way to do this? Maybe there's no better way, but perhaps what are best practices?

Please note that my goal is not to "just get it done". I'd like to learn to do it well.

My Linux server does not have SELinux yet, but I've been reading that this is an option(?). Anyway, if that's the case, I'd need to learn how to use it.

Commands, documentation, books, blogs, etc all welcome!

0 Comments
2024/02/16
08:17 UTC

1

Tank errors on USB drives

Good day.

zpool status oldhddpool shows:

state: SUSPENDED

status: One or more devices are faulted in response to IO failures.

action: Make sure the affected devices are connected, then run 'zpool clear'.

wwn-0x50014ee6af80418b FAULTED 6 0 0 too many errors

dmesg: WARNING: Pool 'oldhddpool' has encountered an uncorrectable I/O failure and has been suspended.

Well, before clearing the zpool I checked for bad blocks:

$ sudo badblocks -nsv -b 512 /dev/sde

Checking for bad blocks in non-destructive read-write mode

From block 0 to 625142447

Checking for bad blocks (non-destructive read-write test)

Testing with random pattern: done

Pass completed, 0 bad blocks found. (0/0/0 errors)

------------

After this I ran:

zpool clear oldhddpool   ## with no warnings

zpool scrub oldhddpool

But the array still tells me about IO errors, and the 'zpool scrub oldhddpool' command freezes (only a reboot helps).

I don't understand:

state: SUSPENDED

status: One or more devices are faulted in response to IO failures.

action: Make sure the affected devices are connected, then run 'zpool clear'.

Ubuntu 23.10 / 6.5.0-17-generic / zfs-zed 2.2.0~rc3-0ubuntu4

Thanks.


1 Comment
2024/02/04
12:09 UTC

2

zfs cache drive is used for writes (I expected just reads, not expected behavior?)

Details about the pool provided below.

I have a raidz2 pool with a cache drive. I would have expected the cache drive to be used only during reads.


From the docs:

Cache devices provide an additional layer of caching between main memory and disk. These devices provide the greatest performance improvement for random-read workloads of mostly static content.


A friend is copying 1.6TB of data from his server into my pool, and the cache drive is being filled. In fact, it has filled the cache drive (with 1GB to spare). Why is this? What am I missing? During the transfer, my network was the bottleneck at 300mbps. RAM was at ~5G.


  pool: depool
 state: ONLINE
  scan: scrub repaired 0B in 00:07:28 with 0 errors on Thu Feb  1 00:07:31 2024
config:

        NAME                                           STATE     READ WRITE CKSUM
        depool                                         ONLINE       0     0     0
          raidz2-0                                     ONLINE       0     0     0
            ata-TOSHIBA_HDWG440_12P0A2J1FZ0G           ONLINE       0     0     0
            ata-TOSHIBA_HDWQ140_80NSK3KUFAYG           ONLINE       0     0     0
            ata-TOSHIBA_HDWG440_53C0A014FZ0G           ONLINE       0     0     0
            ata-TOSHIBA_HDWG440_53C0A024FZ0G           ONLINE       0     0     0
        cache
          nvme-KINGSTON_SNV2S1000G_50026B7381EB4E90    ONLINE       0     0     0

and here is its relevant creation history:

2023-06-27.23:35:45 zpool create -f depool raidz2 /dev/disk/by-id/ata-TOSHIBA_HDWG440_12P0A2J1FZ0G /dev/disk/by-id/ata-TOSHIBA_HDWQ140_80NSK3KUFAYG /dev/disk/by-id/ata-TOSHIBA_HDWG440_53C0A014FZ0G /dev/disk/by-id/ata-TOSHIBA_HDWG440_53C0A024FZ0G
2023-06-27.23:36:23 zpool add depool cache /dev/disk/by-id/nvme-KINGSTON_SNV2S1000G_50026B7381EB4E90
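
As I understand it, this is expected: the L2ARC is fed asynchronously from the tail of the ARC, and blocks that enter the ARC because of writes are eligible too, so a big incoming copy will happily fill the cache device. A few places to watch what it is doing on Linux:

```
# L2ARC counters: bytes written to the cache device, hits, misses, feed activity
grep '^l2_' /proc/spl/kstat/zfs/arcstats

# Live ARC/L2ARC activity (arcstat ships with the OpenZFS userland tools)
arcstat 5

# Module parameters that bound how fast the L2ARC is fed
cat /sys/module/zfs/parameters/l2arc_write_max
cat /sys/module/zfs/parameters/l2arc_write_boost
```
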
6 Comments
2024/02/01
17:48 UTC

0

Question about cut paste on zfs over samba

Hello,

I have set up a home NAS using ZFS on the drive. I can cut-paste (aka move) in Linux without any problem. But doing a cut-paste over Samba throws an error.

Am I missing anything? I am using a similar Samba config on ZFS to the one I used on ext4, so I am sure I am missing something here.

Any advice ?

2 Comments
2024/01/21
07:54 UTC

0

What is a dnode?

Yes just that question. I cannot find what a dnode is in the documentation. Any guidance would be greatly appreciated. I'm obviously searching in the wrong place.

3 Comments
2023/12/14
12:09 UTC

2

zfs encryption - where is the key stored?

Hello everyone,


I was recently reading more into ZFS encryption as part of building my homelab/NAS and figured that ZFS encryption is what fits best for my use case.

Now, in order to achieve what I want, I'm using ZFS encryption with a passphrase, but this might also apply to key-based encryption.

So as far as I understand it, the reason I can change my passphrase (or key) without having to re-encrypt all my stuff is that the passphrase (or key) is used to "unlock" the actual encryption key. Now I was thinking that it might be good to back up that key, in case I need to reimport my pools on a different machine if my system dies, but I have not been able to find any information about where to find this key.

How and where is that key stored? I'm using ZFS on Ubuntu, if that matters.

Thanks :-)
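
As far as I understand it, the actual (master) encryption key is generated when the dataset is created and stored inside the pool itself, wrapped (encrypted) by your passphrase-derived key, so there is nothing separate to back up; zfs change-key just rewraps it, and on another machine the same passphrase unlocks it after import. A sketch of the relevant commands, with tank/secure as a placeholder dataset name:

```
# See how the dataset is encrypted and where the wrapping key comes from
zfs get encryption,keyformat,keylocation,encryptionroot tank/secure

# Change the passphrase: only the wrapping of the master key changes, data is not re-encrypted
sudo zfs change-key tank/secure

# On a different machine, after importing the pool, the same passphrase unlocks everything
sudo zpool import tank
sudo zfs load-key tank/secure
sudo zfs mount tank/secure
```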

14 Comments
2023/12/08
10:26 UTC

0

is it possible? zpool create a mirror raidz disk1 disk2 disk3 raidz disk4 disk5 disk6 cache disk7 log disk8

Hi all,

Using FreeBSD, is it possible to make a mirror of raidz's?

zpool create a mirror raidz disk1 disk2 disk3 raidz disk4 disk5 disk6 cache disk7 log disk8

I remember using it on Solaris 10u9, ZFS build/version 22 or 25 (or was it just a dream?).

0 Comments
2023/12/06
12:40 UTC

2

Best Linux w/ zfs root distro?

New sub member here. I want to install something like Ubuntu w/ root on ZFS on a thinkpad x1 gen 11, but apparently that option is gone in Ubuntu 23.04. So I'm thinking: install Ubuntu 22.04 w/ ZFS root, upgrade to 23.04, and then look for alternate distros to install on the same zpool so if Ubuntu ever kills ZFS support I've a way forward.

But maybe I need to just use a different distro now? If so, which?

Context: I'm a developer, mainly on Linux, and some Windows, though I would otherwise prefer a BSD or Illumos. If I went with FreeBSD, how easy a time would I have running Linux and Windows in VMs?

Bonus question: is it possible to boot FreeBSD, Illumos, and Linux from the same zpool? It has to be, surely, but it's probably about bootloader support.

10 Comments
2023/11/21
02:28 UTC

1

zpool import hangs

Hi folks. While importing the pool, the zpool import command hangs. I then checked the system log; there are a whole bunch of messages like these:

Nov 15 04:31:38 archiso kernel: BUG: KFENCE: out-of-bounds read in zil_claim_log_record+0x47/0xd0 [zfs]
Nov 15 04:31:38 archiso kernel: Out-of-bounds read at 0x000000002def7ca4 (4004B left of kfence-#0):
Nov 15 04:31:38 archiso kernel:  zil_claim_log_record+0x47/0xd0 [zfs]
Nov 15 04:31:38 archiso kernel:  zil_parse+0x58b/0x9d0 [zfs]
Nov 15 04:31:38 archiso kernel:  zil_claim+0x11d/0x2a0 [zfs]
Nov 15 04:31:38 archiso kernel:  dmu_objset_find_dp_impl+0x15c/0x3e0 [zfs]
Nov 15 04:31:38 archiso kernel:  dmu_objset_find_dp_cb+0x29/0x40 [zfs]
Nov 15 04:31:38 archiso kernel:  taskq_thread+0x2c3/0x4e0 [spl]
Nov 15 04:31:38 archiso kernel:  kthread+0xe8/0x120
Nov 15 04:31:38 archiso kernel:  ret_from_fork+0x34/0x50
Nov 15 04:31:38 archiso kernel:  ret_from_fork_asm+0x1b/0x30

then a kernel trace follows. Does this mean the pool is toast? Is there a chance to save it? I also tried importing it with the -F option, but it doesn't make any difference.

I'm using Arch with kernel 6.5.9 & zfs 2.2.0.
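
Not an answer to the KFENCE report itself, but since the crash happens in ZIL claim during import, a read-only import (which skips log claim/replay) is sometimes enough to get at the data; a hedged sketch, with pool and dataset names as placeholders:

```
# Read-only import avoids claiming/replaying the intent log, which is where this import dies
zpool import -o readonly=on -f mypool

# If that works, copy the data off: rsync the mounted datasets, or send an
# already-existing snapshot (new snapshots cannot be created on a read-only pool)
zfs send -R mypool/data@lastsnap | zfs receive backuppool/data
```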


1 Comment
2023/11/15
05:05 UTC

1

Opensuse slowroll and openzfs question

I've moved from openSUSE Leap to Tumbleweed because of a problem with a package I needed a newer version of. Whenever there is a Tumbleweed kernel update, it takes a while for OpenZFS to provide a compatible kernel module. Would moving to Tumbleweed Slowroll fix this? Alternatively, is there a way to hold back a kernel update until there is a compatible OpenZFS kernel module?
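
On the "hold back the kernel" part, zypper can lock packages until a matching ZFS kernel module shows up; a sketch, assuming the default kernel flavor (kernel-default):

```
# Hold the kernel so 'zypper dup' skips it for now
sudo zypper addlock 'kernel-default*'

# List current locks, and drop the lock once the OpenZFS module supports the new kernel
zypper locks
sudo zypper removelock 'kernel-default*'
```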

1 Comment
2023/09/16
14:49 UTC

2

zpool scrub slowing down but no errors?

Hi,

I noticed that the monthly scrub of my Proxmox box's 10x10TB array (>2 years with no issues) is taking much longer than usual. Does anyone have an idea of where else to check?

I monitor and record all SMART data in InfluxDB and plot it: no fail or pre-fail indicators show up. I've also checked smartctl -a on all drives.

dmesg shows no errors. The drives are connected over three 8643 cables to an LSI 9300-16i; the system is a 5950X with 128GB RAM, and the LSI card is in the first PCIe 16x slot running at PCIe 3.0 x8.

The OS is always kept up to date; these are my current package versions:

libzfs4linux/stable,now 2.1.12-pve1 amd64 [installed,automatic]
zfs-initramfs/stable,now 2.1.12-pve1 all [installed]
zfs-zed/stable,now 2.1.12-pve1 amd64 [installed]
zfsutils-linux/stable,now 2.1.12-pve1 amd64 [installed]
proxmox-kernel-6.2.16-6-pve/stable,now 6.2.16-7 amd64 [installed,automatic]

As the scrub runs, it slows down and takes hours to move a single percentage point. The time estimate goes up a little every time, but there are no errors. This run started with an estimate of 7hrs 50min (which is about normal):

  pool: pool0
 state: ONLINE
  scan: scrub in progress since Wed Aug 16 09:35:40 2023
        13.9T scanned at 1.96G/s, 6.43T issued at 929M/s, 35.2T total
        0B repaired, 18.25% done, 09:01:31 to go
config:

        NAME                            STATE     READ WRITE CKSUM
        pool0                           ONLINE       0     0     0
          raidz2-0                      ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_  ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_  ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_  ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_  ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_  ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_  ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_  ONLINE       0     0     0
            ata-WDC_WD100EFAX-68LHPN0_  ONLINE       0     0     0
            ata-WDC_WD101EFAX-68LDBN0_  ONLINE       0     0     0
            ata-WDC_WD101EFAX-68LDBN0_  ONLINE       0     0     0

errors: No known data errors
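
Since SMART looks clean, per-disk latency as seen by ZFS itself might show whether a single drive is quietly dragging the scrub down; a few commands worth watching while it runs:

```
# Per-vdev I/O with latency columns; one disk with much higher wait times will stand out
zpool iostat -vl pool0 5

# Latency histograms per vdev (more detail, same idea)
zpool iostat -w pool0

# Track whether the issued rate keeps dropping as the scrub progresses
watch -n 60 'zpool status pool0 | grep -A 2 "scan:"'
```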

2 Comments
2023/08/16
03:42 UTC

Back To Top