/r/storage

A subreddit for enterprise-level IT data storage-related questions, anecdotes, troubleshooting requests/tips, and other related discussions.

Areas of interest for this sub include: SAN, NAS, EMC, HPC, HDS, HP/3PAR, Violin-Memory, Dell/Compellent, NetApp, IBM, Pure Storage, Nimble Storage, Cisco, Sun, Seagate, Symantec, Western Digital news, discussion, and information.


Rules:

  1. Please try to keep submissions on topic and of high quality.
  2. Submissions must relate to enterprise-level IT data storage. For posts about your home NAS, you might be better off posting to /r/homelab or /r/datahoarder .
  3. Don't post links to your personal or corporate storage/IT-related blog. Text posts referencing your blog are okay. See Rule 1.
  4. Do not post sponsored content. This includes blogs written by vendors and/or IT review websites.
  5. Please follow proper reddiquette.
  6. Report any posts/comments that violate the above rules and a mod will investigate. Also, feel free to contact any of the mods if you wish to discuss the rules.

Related Reddits:

/r/EMC2

/r/NetApp

/r/sysadmin

/r/vmware

/r/linux

/r/hpc

/r/datahoarder

/r/zfs

/r/storage

27,540 Subscribers

3

Enterprise HGST SAS SSD Have Encryption Block Stopping Secure Erase

Good Day,

I've been dealing with this issue for a few weeks, scouring dozens of forums. I pulled a series of 500 GB SAS SSDs out of a Tegile 3600 storage array after we decommissioned it. There is no data on there I need. We want to repurpose the disks into another Dell array we were given that has no drives. We no longer have the old controller, and it's off support.

Note: They are formatted to 512-byte blocks, not 520 or another block size.

The issue is every time I try to secure erase or low level format, the drive blocks the attempt. I have tried:

  1. Seagate and WD vendor tools - They report healthy status on the drives, but refuse to present an erase or low-level format option. "Failed to Start" is the common error.

  2. Windows Disk Management - Drive recognized, refuses to even initialize the drive.

  3. Dell H330 PERC controller - Drives show as "Foreign." No options to erase or initialize the drives. Tried importing / clearing the config; nothing changes. I've gone directly through the controller and the Lifecycle Controller with no difference. iDRAC recognizes the drive type and size without a problem, but gives me no options to erase.

  4. Windows / Linux - GParted, dd, and a half dozen other low-level format utilities. All see the drive but cannot format it.

  5. Linux sg3_utils - This gives me the most info, but I still get stuck. It appears each drive has a password assigned by the old controller, and without it I can't access the drives. I don't need to access them, there is nothing on them, but it is stopping me from formatting them.

This is typical output:

sudo sg_sanitize --overwrite --zero /dev/sde
    Generic   External          0157   peripheral_type: disk [0x0]
      << supports protection information>>
      Unit serial number: (deleted)            
      LU name: 5000cca04exxxxxx

A SANITIZE will commence in 15 seconds
    ALL data on /dev/sdh will be DESTROYED
        Press control-C to abort

A SANITIZE will commence in 10 seconds
    ALL data on /dev/sdh will be DESTROYED
        Press control-C to abort

A SANITIZE will commence in 5 seconds
    ALL data on /dev/sdh will be DESTROYED
        Press control-C to abort
Sanitize failed: Illegal request, Invalid opcode
sg_sanitize failed: Illegal request, Invalid opcode

In other commands, I see Bad PASSWD or PASSWD Needed. I just need a way to override that so I can format them cleanly.
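
For reference, the next things I plan to try are also from sg3_utils, on the theory that the drive simply doesn't implement SANITIZE and might accept a FORMAT UNIT or a crypto erase instead (a sketch only; /dev/sde is just my example device, and none of this is verified to get past the lock):

    # List which SCSI commands the drive actually supports (look for SANITIZE and FORMAT UNIT)
    sudo sg_opcodes /dev/sde

    # Check block size and protection information
    sudo sg_readcap -l /dev/sde

    # Try a full FORMAT UNIT instead of SANITIZE
    sudo sg_format --format --size=512 /dev/sde

    # Or, if the drive is self-encrypting, try a cryptographic erase
    sudo sg_sanitize --crypto /dev/sde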

Anyone else had an experience like this? Cheers.

8 Comments
2024/03/12
17:34 UTC

7

Spanned volume and LUN migration

We need to migrate 5 LUNs to different storage; the current array goes EOS soon. The problem is, these five LUNs are presented (RDM) to a VMware Windows Server VM, and within the VM itself the five LUNs have been grouped as a spanned volume. My idea is to replicate (PPRC) these volumes to the new storage, stop the VM, unmap the old LUNs, map the new LUNs to the ESX cluster, pass them to the VM as RDMs, and re-create the spanned volume, without losing the data, which goes without saying. I am not sure this is going to work ... any idea or suggestion will be appreciated.

UPDATE: I just tested the procedure on a couple of lab servers and it worked. I followed the steps from this KB: https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753750(v=ws.11)?redirectedfrom=MSDN and was able to successfully recreate the spanned volume with all its data.
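
For anyone doing the same thing, the in-guest part can also be done from a diskpart prompt instead of the GUI; a rough sketch only, and the disk numbers are examples from my lab, yours will differ:

    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> online disk
    DISKPART> import
    DISKPART> list volume

Selecting any one disk that belongs to the foreign dynamic disk group and running import should bring the whole group, and the spanned volume on it, back.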

13 Comments
2024/03/11
12:48 UTC

0

Box - I can't sign in to ask why it says I'm using 9.5 GB when I only have 2.5 GB of data. The sign-in option doesn't work. What's with this company?

2 Comments
2024/03/09
23:06 UTC

1

Anyone familiar with the PM9A3 family of SSDs?

Can anyone tell me what the difference is between these two part numbers?

MZQL21T9HCJR-00W07

MZQL21T9HCJR-00A07

I'm looking for the 2.5" U.2 model of their 1.92TB capacity drives.

5 Comments
2024/03/08
09:37 UTC

3

XFS - How much of a performance benefit will we get from an external journal device for XFS?

We have 150 storage nodes with 25 disks per node, all HDDs, plus 1 NVMe slot. The HDDs use XFS. How much performance gain would we see with external journaling, i.e. the journal written to the NVMe? These nodes have a write-heavy workload, around 2K write requests per second, and the files are 4 MB each.
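
To make the question concrete, what I mean by an external journal is roughly this (a sketch; device names are placeholders, and the single NVMe would have to be partitioned into 25 small log partitions, one per HDD):

    # create the filesystem with its log on an NVMe partition (XFS caps the log at about 2 GB)
    mkfs.xfs -f -l logdev=/dev/nvme0n1p1,size=1g /dev/sdb

    # the log device must then be named on every mount
    mount -o logdev=/dev/nvme0n1p1,logbsize=256k /dev/sdb /srv/disk01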

6 Comments
2024/03/07
20:30 UTC

1

Primary PSU

Hi!

I have a Fujitsu Eternus DX100 S5 which has 2 PSU - PSU#0 and PSU#1.

I couldn't find any information about which PSU is the primary one. Or does the power get distributed equally when both PSUs are connected to power?

Asking this because I want the Eternus to use power from the UPS to get accurate runtime estimation.

4 Comments
2024/03/07
14:27 UTC

2

PIN Locked operator panel on TS4300

Hello! I've got my hands on a TS4300 (I work for an ITAD company) and it arrived with a PIN-locked operator panel.

Any tips on removing the PIN lock? I can't seem to find good documentation on this.

Thanks

11 Comments
2024/03/06
13:46 UTC

8

PowerProtect DD alternative?

Is there an alternative to PowerProtect Data Domain that has the same features (deduplication, compression, encryption, multi-tenancy, vault, etc.)?

I know it's a very powerful appliance, but I'm wondering if there is an alternative that is as good or just below it.

Thanks!

21 Comments
2024/03/03
08:43 UTC

7

Looking at Dell PowerVault ME5024, what are the pros and cons of SAS vs iSCSI

As the title says, we need more storage capacity for our 2 Hyper-V hosts. We had a quote from Dell for a PV ME5012 with 8 x 12Gb SAS, but I've done some more research and realised that the ME5024 has a bigger disk capacity and that 10GbE iSCSI is an option.

Both Hyper-V servers have 4 port 10GbE cards with 2 unused ports on each. We also have an unused 16 port 10GbE Netgear XS716T switch that we can utilise for this.

Assuming we are going to use SSDs, would there be much of a performance hit with iSCSI compared to SAS? I am assuming that multiple 10GbE ports can be teamed, is that correct?

Any other gotchas I should be aware of? TIA

27 Comments
2024/03/01
14:52 UTC

3

Dell ME4024: Cabling dual controllers to a VLT domain.

UPDATE: Thank you for all of your replies. I believe I have learned enough from you guys

TLDR - Dell docs show cabling the controllers in a way that isolates each subnet to a switch. Why? I thought it would be ideal to have a path through each switch to both controllers via both subnets.

Obligatory "I'm not a storage guy" or "Network guy" for that matter.

 

We're trying to figure out the proper way to cable a dual controller ME4024 storage array to two S4048-ON switches that are configured for VLT (Dell's MLAG). We will be using all four CNC ports on each controller (8 total connections) and two subnets. iSCSI protocol to be used.

 

The following example from the official documentation has left us a little confused. If we're understanding this correctly, the docs are telling us to cable the controller ports to the switches in such a way that one subnet is on Switch-A and the other subnet is on Switch-B. Highlights have been added to show subnets 10 (green) and 11 (blue).

NOTE: The controller ports are 3, 2, 1, 0 (left to right) btw.

https://imgur.com/a/90nmLHj

 

This doesn't make much sense to us. Wouldn't you want a path through each switch for each controller AND subnet? If a switch is lost in the above example, access to an entire iSCSI subnet is also lost, no? Is there a reason for isolating the iSCSI subnets to their own switches? Are there different cabling recommendations if the switches are configured in VLT (Dell's MLAG)?

 

Based on the subnet/address to port mapping in the example above, we thought something like the following would be appropriate:

  • Controller ports 0,1 to one switch. Controller ports 2,3 to the other switch. This makes each subnet available in both paths (Switch A and Switch B).

OR

  • Since each pair of ports (0,1 & 2,3) on each controller share a CNC chip, make ports 0/1 subnet 10 and ports 2/3 subnet 11 . . . cable as shown above.

15 Comments
2024/02/29
21:32 UTC

4

Storage virtualization solutions similar to IBM SVC

Other than Dell EMC VPLEX, is there another similar product among the main players? I am talking HPE, Hitachi, NetApp, Pure, and the like.

Thanks.

13 Comments
2024/02/29
18:57 UTC

0

Cloning a single HDD, into multiple HDD (Storage)

So, I have an external 5 TB HDD (currently only 1.3 TB used), consisting of my unedited YouTube recordings and device backups, which suddenly started throwing READ ERRORs. I want to clone it and get it replaced; however, I only have one spare 1 TB HDD and a 512 GB microSD card.

Folder-to-folder transfer is not working, as it drops to 0 kbps after some time, and YouTube uploads take 8+ hours for just a single video (about 40 GB; I have 25 Mbps Wi-Fi). I was wondering if I can somehow merge storage devices to clone it?
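
One thing that might help with the stalling transfers (just a sketch; the paths are made-up examples) is a resumable copy per top-level folder, re-run after every stall or read error, since it picks up where it left off:

    # copy one folder at a time to the 1 TB drive; re-run the same command after a stall
    rsync -a --partial --progress /mnt/old5tb/Recordings/ /mnt/new1tb/Recordings/

ddrescue is the other common tool for failing drives, but a full image needs a target at least as big as the source, which isn't available here.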

5 Comments
2024/02/29
17:29 UTC

7

Broadcom impact on storage ecosystem?

How does Broadcom<>VMware impact the storage ecosystem?

Nutanix has AHV, but it’s Hyperconverged. Hyper-V is… wait are we still talking about Hyper-V? KVM alternatives exist but lack a lot of the VMware user experience - at least as of this exact moment.

If a customer were to try to leave VMware, where do they go and how does that impact their storage?

20 Comments
2024/02/29
03:54 UTC

13

Interviewing at Pure Storage - can anyone help me understand why they win and why they suck?

I'm in this brutal job market and I really want to show up massively prepared by understanding, from either partners who sell Pure Storage, customers who buy Pure Storage (or don't and went with a competitor), or just data storage industry experts, their opinion on:

  1. What are other companies getting wrong about storage that Pure Storage is getting right (or also getting wrong)?
  2. Where is Pure Storage the leader and where are they behind (i.e., making way for companies like Vast Data to steal some market share)?
  3. Anything else you want to tell me?

Any thought leadership is super helpful. There's only so much you can glean from a website, YouTube, and paid PR press coverage.

Yours truly,
A non-quitter who sees the light at the end of this job-hunt hellhole :D

59 Comments
2024/02/28
18:47 UTC

3

Nimble Storage with anything other than ESXi since Broadcom acquired them

Since VMware is on the way out and our Nimble storage isn't going anywhere, I was wondering if any of you are running Nimble storage on any other hypervisors besides VMware, and how the experience has been.

10 Comments
2024/02/28
16:46 UTC

2

Dell PowerStore 3200T vs IBM FlashSystems 7300

Which would you buy and why? Workload will be VMware primarily. What does everyone think?

15 Comments
2024/02/27
22:26 UTC

1

Will a Synology HAT3300 4TB work with a QNAP TS-233?

2 Comments
2024/02/27
10:43 UTC

3

Dell Compellent being phased out, can't unassign disks

We have a Compellent setup at our DR site, 2x SC9000 / 2x SC4020 w/ 24 SSDs.

I'm trying to get it as "decommissioned" as possible. There's as much a chance that we'll be sending these back to Dell as a partial discount item as there is of us holding onto them to bump up our other Compellent setup at HQ that we use for Engineering scratch space.

We have 3 disks of the 48, 1 in SC4020 #1 and 2 in #2, that won't unassign themselves.

Whenever we do anything, we're given the error message "you must enter a value greater than or equal to 1 for attribute storagealerttheshold"

Sadly, these systems have a lapsed warranty, so Dell's not exactly inclined to help us (we've asked our rep for other things with the spare Compellent we had at HQ)

Anyone know how to deal with that storagealertthreshold error, or otherwise clear those drives so they can be unassigned?

9 Comments
2024/02/26
20:49 UTC

6

Fiber channel direct attached

Hello all,

I'm looking for some feedback on a Fibre Channel solution directly attached to the disk arrays (no FC switch).
Does anyone use this setup? Any issues?

Thx a lot !

21 Comments
2024/02/24
13:11 UTC

3

Centralized Storage System for AI/ML workload

Hi All,

I am quite new to storage, as I have never configured anything beyond RAID on a single server. But I am working with my team on a hardware setup containing a minimum of 3-4 nodes for an AI/ML workload.

The following storage workloads can be seen on the existing 3-4 servers:

  • VMs with storage drives
  • Containers with persistent volumes
  • SQL databases like PostgreSQL
  • NoSQL databases like Redis
  • Metadata stores hosting the artifacts generated from ML training
  • Older training datasets, which are in CSV format
  • Newer training datasets, which are generated from streaming data
  • Code repositories like Git
  • Software repositories, and so on (quite a large list)

I was thinking of setting up network-based storage (NAS vs. SAN) acting as a centralized storage system instead of relying on the local drives.

Ceph came to mind, but it needs:

  • A minimum of 3 nodes to set up
  • Complex management (this is how everyone scared me off)

High availability on day one is not a critical requirement, but I am looking for a centralized storage solution where I can start with a single node with multiple NVMe drives addressing all my different data needs, and then scale out to a multi-node cluster once we reach that scale.
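
For what it's worth, the single-node-first path I've been sketching looks roughly like this with cephadm (hostnames and IPs are placeholders, and I haven't validated any of it yet):

    # bootstrap a one-node cluster with defaults relaxed for a single host
    cephadm bootstrap --mon-ip 10.0.0.10 --single-host-defaults

    # turn every unused NVMe into an OSD
    ceph orch apply osd --all-available-devices

    # later, grow to a real cluster by adding hosts
    ceph orch host add node2 10.0.0.11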

Any suggestions or ideas would be of great help to me.

47 Comments
2024/02/23
07:26 UTC

10

Choosing shared storage for a 10-server research lab

Good day,

I'm looking for advice on how to organize the shared storage in our research lab.

We currently have 10 servers (we will probably expand to roughly 15 in the future, but I doubt we'll go beyond that number) which we need to connect to shared storage for the data. We don't need parallel writes to the same file, but we do need parallel read/write access to the same folders (we want to store datasets and access them from any server, ideally without moving them around). All servers are in the same rack and connected with a 10 Gb network.

File patterns - a mix of big files (100+ GB) and a lot of small files (10-50 KB). Ideally, users would like to run data processing (splitting big files into small ones, streaming data into big CSVs, parallel processing of many small files) directly on this file system without copying data locally, modifying it, and uploading it back, so I'd like reasonable random read/write performance.

Storage size: 50-100 TB, with room to scale later.

A pretty important factor is that we don't have separate administrators for this, so ideally it shouldn't be something that requires constant monitoring, tuning, and troubleshooting.

So far I've been looking at distributed file systems (Ceph and similar), cluster filesystems (GFS2/OCFS2), or just an NFS share. I'm wary of Ceph and similar due to the learning curve and administration. Among the cluster filesystems, OCFS2 supports only 16 TB, and while GFS2 allows up to 16 hosts and 100 TB, I don't know what to expect performance-wise or administration-wise, or how it will behave closer to those limits.

Will NFS be a viable solution here? Should we build it from our own hardware (our server) or use something like NetApp?
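
To make the NFS option concrete, the kind of minimal setup I have in mind is just this (a sketch; the hostname, paths, and subnet are placeholders):

    # /etc/exports on the storage box
    /export/datasets 10.10.0.0/24(rw,async,no_subtree_check)

    # on each compute server
    mount -t nfs -o vers=4.2,nconnect=8 storage01:/export/datasets /mnt/datasets

As I understand it, nconnect opens several TCP connections per mount, which helps fill a 10 Gb link when many clients are hitting lots of small files.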

Am I missing some other obvious solution?

Thank you in advance.

19 Comments
2024/02/23
00:11 UTC

3

Need for hardware RAID with NVMe

Hi All,

I am new to this thread and want to ask your opinion on whether I need hardware RAID in the case of NVMe drives. I am creating a hardware setup containing 3 machines, namely storage-server, mltraining-server, and dev-server.

I am quite new to Ceph-based storage systems, but I want to leverage the storage server for all data that is either created or used by the training and dev servers. Today I have NVMe drives in all my servers, including storage. Do you think I need hardware RAID for this configuration, or is Ceph good enough?
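
From what I've read so far (please correct me if this is wrong), Ceph wants the raw NVMe devices, one OSD per drive, and the redundancy that hardware RAID would normally provide comes from the pool's replica count instead, e.g.:

    # show and set how many copies of each object a pool keeps (pool name is an example)
    ceph osd pool get mypool size
    ceph osd pool set mypool size 3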

14 Comments
2024/02/22
21:27 UTC

3

Add disk to RAID 5

Hi everyone! So I will soon be getting a “Raidsonic IB-RD3640SU3” which I plan to eventually fill up with four 2 TB drives. I only have three of the four drives right now. So my question is: how easy is it to add the last drive later? Will it be plug and play, with the array updating itself with the new drive?

Thanks!

7 Comments
2024/02/22
10:46 UTC

1

New iSCSI volume

I have a Red Hat cluster, and the storage team tells me they have already presented a new 1.5 TB disk... but when I use the lsblk command I don't see anything. Do I need to take some additional action?

    server~]$ lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    rhel-root 253:0 0 70G 0 lvm /
    rhel-swap 253:1 0 180G 0 lvm [SWAP]
    rhel-csgi 253:2 0 60G 0 lvm /csgi
    rhel-home 253:3 0 20G 0 lvm /home/cgi
    rhel-tmp 253:4 0 46.6G 0 lvm /tmp
    sdb 8:16 0 1.5T 0 disk
    mpatha 253:5 0 1.5T 0 mpath
    data01_vg-data01_lv 253:27 0 1.5T 0 lvm /dcs/data01
    sdc 8:32 0 1.5T 0 disk
    mpathb 253:6 0 1.5T 0 mpath
    data02_vg-data02_lv 253:46 0 1.5T 0 lvm /dcs/data02
    sdd 8:48 0 1.5T 0 disk
    mpathm 253:17 0 1.5T 0 mpath
    data13_vg-data13_lv 253:38 0 1.5T 0 lvm /dcs/data13
    sde 8:64 0 1.5T 0 disk
    mpatho 253:19 0 1.5T 0 mpath
    data14_vg-data14_lv 253:39 0 1.5T 0 lvm /dcs/data14
    sdf 8:80 0 1.5T 0 disk
    mpathp 253:20 0 1.5T 0 mpath
    data15_vg-data15_lv 253:40 0 1.5T 0 lvm /dcs/data15
    sdg 8:96 0 1.5T 0 disk
    mpathq 253:21 0 1.5T 0 mpath
    data16_vg-data16_lv 253:41 0 1.5T 0 lvm /dcs/data16
    sdh 8:112 0 1.5T 0 disk
    mpathr 253:22 0 1.5T 0 mpath
    data17_vg-data17_lv 253:42 0 1.5T 0 lvm /dcs/data17
    sdi 8:128 0 1.5T 0 disk
    mpaths 253:23 0 1.5T 0 mpath
    data18_vg-data18_lv 253:43 0 1.5T 0 lvm /dcs/data18
    sdj 8:144 0 1.5T 0 disk
    mpatht 253:24 0 1.5T 0 mpath
    data19_vg-data19_lv 253:44 0 1.5T 0 lvm /dcs/data19
    sdk 8:160 0 1.5T 0 disk
    mpathu 253:25 0 1.5T 0 mpath
    data20_vg-data20_lv 253:45 0 1.5T 0 lvm /dcs/data20
    sdl 8:176 0 1.5T 0 disk
    mpathc 253:7 0 1.5T 0 mpath
    data03_vg-data03_lv 253:28 0 1.5T 0 lvm /dcs/data03
    sdm 8:192 0 1.5T 0 disk
    mpathd 253:8 0 1.5T 0 mpath
    data04_vg-data04_lv 253:29 0 1.5T 0 lvm /dcs/data04
    sdn 8:208 0 1.5T 0 disk
    mpathe 253:9 0 1.5T 0 mpath
    data05_vg-data05_lv 253:30 0 1.5T 0 lvm /dcs/data05
    sdo 8:224 0 1.5T 0 disk
    mpathf 253:10 0 1.5T 0 mpath
    data06_vg-data06_lv 253:31 0 1.5T 0 lvm /dcs/data06
    sdp 8:240 0 1.5T 0 disk
    mpathg 253:11 0 1.5T 0 mpath
    data07_vg-data07_lv 253:32 0 1.5T 0 lvm /dcs/data07
    sdq 65:0 0 1.5T 0 disk
    mpathh 253:12 0 1.5T 0 mpath
    data08_vg-data08_lv 253:33 0 1.5T 0 lvm /dcs/data08
    sdr 65:16 0 1.5T 0 disk
    mpathi 253:13 0 1.5T 0 mpath
    data09_vg-data09_lv 253:34 0 1.5T 0 lvm /dcs/data09
    sds 65:32 0 1.5T 0 disk
    mpathj 253:14 0 1.5T 0 mpath
    data10_vg-data10_lv 253:35 0 1.5T 0 lvm /dcs/data10
    sdt 65:48 0 1.5T 0 disk
    mpathk 253:15 0 1.5T 0 mpath
    data11_vg-data11_lv 253:36 0 1.5T 0 lvm /dcs/data11
    sdu 65:64 0 1.5T 0 disk
    mpathl 253:16 0 1.5T 0 mpath
    data12_vg-data12_lv 253:37 0 1.5T 0 lvm /dcs/data12
    sdv 65:80 0 600G 0 disk
    mpathn 253:18 0 600G 0 mpath
    appl01_vg-appl01_lv 253:26 0 600G 0 lvm /dcs/appl01
    sdw 65:96 0 1.5T 0 disk
    mpatha 253:5 0 1.5T 0 mpath
    data01_vg-data01_lv 253:27 0 1.5T 0 lvm /dcs/data01
    sdx 65:112 0 1.5T 0 disk
    mpathb 253:6 0 1.5T 0 mpath
    data02_vg-data02_lv 253:46 0 1.5T 0 lvm /dcs/data02
    sdy 65:128 0 1.5T 0 disk
    mpathm 253:17 0 1.5T 0 mpath
    data13_vg-data13_lv 253:38 0 1.5T 0 lvm /dcs/data13
    sdz 65:144 0 1.5T 0 disk
    mpatho 253:19 0 1.5T 0 mpath
    data14_vg-data14_lv 253:39 0 1.5T 0 lvm /dcs/data14
    sdaa 65:160 0 1.5T 0 disk
    mpathp 253:20 0 1.5T 0 mpath
    data15_vg-data15_lv 253:40 0 1.5T 0 lvm /dcs/data15
    sdab 65:176 0 1.5T 0 disk
    mpathq 253:21 0 1.5T 0 mpath
    data16_vg-data16_lv 253:41 0 1.5T 0 lvm /dcs/data16
    sdac 65:192 0 1.5T 0 disk
    mpathr 253:22 0 1.5T 0 mpath
    data17_vg-data17_lv 253:42 0 1.5T 0 lvm /dcs/data17
    sdad 65:208 0 1.5T 0 disk
    mpaths 253:23 0 1.5T 0 mpath
    data18_vg-data18_lv 253:43 0 1.5T 0 lvm /dcs/data18
    sdae 65:224 0 1.5T 0 disk
    mpatht 253:24 0 1.5T 0 mpath
    data19_vg-data19_lv 253:44 0 1.5T 0 lvm /dcs/data19
    sdaf 65:240 0 1.5T 0 disk
    mpathu 253:25 0 1.5T 0 mpath
    data20_vg-data20_lv 253:45 0 1.5T 0 lvm /dcs/data20
    sdag 66:0 0 1.5T 0 disk
    mpathc 253:7 0 1.5T 0 mpath
    data03_vg-data03_lv 253:28 0 1.5T 0 lvm /dcs/data03
    sdah 66:16 0 1.5T 0 disk
    mpathd 253:8 0 1.5T 0 mpath
    data04_vg-data04_lv 253:29 0 1.5T 0 lvm /dcs/data04
    sdai 66:32 0 1.5T 0 disk
    mpathe 253:9 0 1.5T 0 mpath
    data05_vg-data05_lv 253:30 0 1.5T 0 lvm /dcs/data05
    sdaj 66:48 0 1.5T 0 disk
    mpathf 253:10 0 1.5T 0 mpath
    data06_vg-data06_lv 253:31 0 1.5T 0 lvm /dcs/data06
    sdak 66:64 0 1.5T 0 disk
    mpathg 253:11 0 1.5T 0 mpath
    data07_vg-data07_lv 253:32 0 1.5T 0 lvm /dcs/data07
    sdal 66:80 0 1.5T 0 disk
    mpathh 253:12 0 1.5T 0 mpath
    data08_vg-data08_lv 253:33 0 1.5T 0 lvm /dcs/data08
    sdam 66:96 0 1.5T 0 disk
    mpathi 253:13 0 1.5T 0 mpath
    data09_vg-data09_lv 253:34 0 1.5T 0 lvm /dcs/data09
    sdan 66:112 0 1.5T 0 disk
    mpathj 253:14 0 1.5T 0 mpath
    data10_vg-data10_lv 253:35 0 1.5T 0 lvm /dcs/data10
    sdao 66:128 0 1.5T 0 disk
    mpathk 253:15 0 1.5T 0 mpath
    data11_vg-data11_lv 253:36 0 1.5T 0 lvm /dcs/data11
    sdap 66:144 0 1.5T 0 disk
    mpathl 253:16 0 1.5T 0 mpath
    data12_vg-data12_lv 253:37 0 1.5T 0 lvm /dcs/data12
    sdaq 66:160 0 600G 0 disk
    mpathn 253:18 0 600G 0 mpath
    appl01_vg-appl01_lv 253:26 0 600G 0 lvm /dcs/appl01
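
Follow-up for anyone searching later: on the host side, a newly mapped LUN usually needs a rescan before it shows up (this assumes open-iscsi and device-mapper-multipath, which this host appears to be using):

    # rescan the existing iSCSI sessions for newly mapped LUNs
    sudo iscsiadm -m session --rescan

    # or rescan all SCSI hosts (script ships with sg3_utils)
    sudo rescan-scsi-bus.sh

    # refresh multipath maps and look for the new 1.5T device
    sudo multipath -r
    sudo multipath -ll
    lsblk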

9 Comments
2024/02/22
03:01 UTC
