QNAP RAID Management and Recovery Guide

The document describes the configuration and status of software RAID arrays on a QNAP NAS. It shows the user logging in over SSH, exiting the console menu to a normal shell, and inspecting disk and RAID status with md_checker and qcli_storage. The user then force-assembles a degraded 4-disk RAID5 array from its three surviving members, re-adds the missing disk, and confirms the array is online with one disk rebuilding.

Uploaded by Mohan Raj

login as: admin

[email protected]'s password:

Access denied

[email protected]'s password:

+-------------------------------------------------------------------------+
| Console Management - Main menu                                          |
|                                                                         |
| 1: Show network settings                                                |
| 2: System event logs                                                    |
| 3: Reset to factory default (password required)                         |
| 4: Activate/ deactivate a license                                       |
| 5: App management                                                       |
| 6: Reboot in Rescue mode (w/o configured disk)                          |
| 7: Reboot in Maintenance Mode                                           |
| Q: Quit (return to normal shell environment)                            |
+-------------------------------------------------------------------------+

>> q (processing)

+-------------------------------------------------------------------------+
| Main > Quit                                                             |
|                                                                         |
| Are you sure you want to exit the console menu and return to normal     |
| shell environment? (Y/N)                                                |
+-------------------------------------------------------------------------+

>> y (processing)

[~] # df

Filesystem Size Used Available Use% Mounted on

none 400.0M 331.7M 68.3M 83% /

devtmpfs 897.6M 4.0K 897.6M 0% /dev

tmpfs 64.0M 1.0M 63.0M 2% /tmp

tmpfs 913.7M 148.0K 913.6M 0% /dev/shm

tmpfs 16.0M 100.0K 15.9M 1% /share

/dev/mmcblk1p5 7.7M 46.0K 7.7M 1% /mnt/boot_config

tmpfs 16.0M 0 16.0M 0% /mnt/snapshot/export

/dev/md9 499.5M 178.0M 321.5M 36% /mnt/HDA_ROOT

cgroup_root 913.7M 0 913.7M 0% /sys/fs/cgroup

/dev/md13 417.0M 367.3M 49.7M 88% /mnt/ext

tmpfs 32.0M 27.2M 4.8M 85% /samba_third_party

/dev/ram2 433.9M 2.3M 431.6M 1% /mnt/update

tmpfs 4.0K 0 4.0K 0% /tmp/default_dav_root

tmpfs 64.0M 3.7M 60.3M 6% /samba

[~] # md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~


Scanning system...

RAID metadata found!

UUID: 372a0ef8:64d782c9:c0236fec:e15a8e4e

Level: raid5

Devices: 4

Name: md1

Chunk Size: 512K

md Version: 1.0

Creation Time: Nov 20 14:41:20 2018

Status: OFFLINE

===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status | Last Update Time     | Events   | Array State
===============================================================================================
 ----------------------------------  0   Missing  --------------------------------------------
 NAS_HOST      2    /dev/sdd3        1   Active   Sep 16 14:41:08 2023   10030095   .AAA
 NAS_HOST      3    /dev/sda3        2   Active   Sep 16 14:41:06 2023   10030094   .AAA
 NAS_HOST      4    /dev/sdb3        3   Active   Sep 16 14:41:06 2023   10030094   .AAA
===============================================================================================
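The table above is what makes a forced assembly reasonable: member 0 is missing, but the three surviving members are all Active with nearly identical event counts (10030095 vs 10030094), and RAID5 tolerates exactly one missing member. A minimal sketch of that sanity check, assuming the table layout shown above (the sample lines are copied from this session; the variable names are illustrative only):

```shell
# Count surviving Active members in an md_checker-style table and decide
# whether a 4-disk RAID5 can be started degraded (it needs at least N-1
# members). The sample lines below are copied from the output above.
table='
NAS_HOST 2 /dev/sdd3 1 Active Sep 16 14:41:08 2023 10030095 .AAA
NAS_HOST 3 /dev/sda3 2 Active Sep 16 14:41:06 2023 10030094 .AAA
NAS_HOST 4 /dev/sdb3 3 Active Sep 16 14:41:06 2023 10030094 .AAA
'
devices=4
active=$(printf '%s\n' "$table" | grep -c ' Active ')
if [ "$active" -ge $((devices - 1)) ]; then
  echo "RAID5 can be force-assembled degraded ($active of $devices members)"
else
  echo "Too many members missing; do not force-assemble"
fi
```

If more than one member were missing, or the event counts diverged widely, a forced assembly would risk corrupting the array, and the data should be imaged or handed to a recovery specialist instead.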

[~] # qcli_storage -d

Enclosure Port Sys_Name Type Size Alias Signatu


NAS_HOST 1 /dev/sdc HDD:free 5.46 TB 3.5" SATA HDD 1 QNAP FL

NAS_HOST 2 /dev/sdd HDD:data 5.46 TB 3.5" SATA HDD 2 QNAP FL

NAS_HOST 3 /dev/sda HDD:data 5.46 TB 3.5" SATA HDD 3 QNAP FL

NAS_HOST 4 /dev/sdb HDD:data 5.46 TB 3.5" SATA HDD 4 QNAP FL

[~] # mdadm -AfR /dev/md1 /dev/sdd3 /dev/sda3 /dev/sdb3

mdadm: Marking array /dev/md1 as 'clean'

mdadm: /dev/md1 has been started with 3 drives (out of 4).

[~] # mdadm --add /dev/md1 /dev/sdc

mdadm: Cannot open /dev/sdc: Device or resource busy

[~] # mdadm --add /dev/md1 /dev/sdc3

mdadm: added /dev/sdc3
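The first `--add` failed because it targeted the whole disk `/dev/sdc`: on QNAP firmware every disk carries several partitions that belong to the small system arrays (md9, md13, md256, md322), so the whole-disk block device is always busy, and only the data partition (here `sdc3`) can be added to the data array. A hedged sketch of checking which arrays already claim a disk's partitions, using sample lines copied from the `/proc/mdstat` output later in this session:

```shell
# List which md arrays already claim partitions of a given disk, which
# explains why adding the whole disk /dev/sdc fails with "busy".
# mdstat_sample is copied from the `cat /proc/mdstat` output in this
# session (taken after sdc3 was re-added, so md1 appears as well).
mdstat_sample='
md1 : active raid5 sdc3[4] sdd3[1] sdb3[3] sda3[2]
md322 : active raid1 sdb5[3](S) sda5[2](S) sdd5[1] sdc5[0]
md256 : active raid1 sdb2[3](S) sda2[2](S) sdd2[1] sdc2[0]
md13 : active raid1 sdc4[32] sdb4[3] sda4[2] sdd4[1]
md9 : active raid1 sdc1[33] sdb1[3] sda1[2] sdd1[1]
'
disk=sdc
arrays=$(printf '%s\n' "$mdstat_sample" | awk -v d="$disk" '$0 ~ d"[0-9]" { print $1 }')
printf '%s\n' "$arrays"
```

Running the same filter against the live `/proc/mdstat` before an `mdadm --add` shows at a glance why the whole disk cannot be handed to mdadm.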

[~] # md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...

RAID metadata found!

UUID: 372a0ef8:64d782c9:c0236fec:e15a8e4e

Level: raid5

Devices: 4

Name: md1

Chunk Size: 512K

md Version: 1.0
Creation Time: Nov 20 14:41:20 2018

Status: ONLINE (md1) [_UUU]

================================================================================

Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events

================================================================================

NAS_HOST 1 /dev/sdc3 0 Rebuild Sep 26 12:11:11 2023 10030097

NAS_HOST 2 /dev/sdd3 1 Active Sep 26 12:11:11 2023 10030097

NAS_HOST 3 /dev/sda3 2 Active Sep 26 12:11:11 2023 10030097

NAS_HOST 4 /dev/sdb3 3 Active Sep 26 12:11:11 2023 10030097

================================================================================

[~] # cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multi

md1 : active raid5 sdc3[4] sdd3[1] sdb3[3] sda3[2]

17551701504 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]

[>....................] recovery = 0.1% (7024664/5850567168) finish=493.

md322 : active raid1 sdb5[3](S) sda5[2](S) sdd5[1] sdc5[0]

6702656 blocks super 1.0 [2/2] [UU]

bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdb2[3](S) sda2[2](S) sdd2[1] sdc2[0]

530112 blocks super 1.0 [2/2] [UU]

bitmap: 0/1 pages [0KB], 65536KB chunk


md13 : active raid1 sdc4[32] sdb4[3] sda4[2] sdd4[1]

458880 blocks super 1.0 [32/4] [UUUU____________________________]

bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdc1[33] sdb1[3] sda1[2] sdd1[1]

530048 blocks super 1.0 [33/4] [UUUU_____________________________]

bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
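In the output above, `[4/3] [_UUU]` means md1 has 4 configured slots with 3 currently up; the leading underscore marks the slot being rebuilt, and the recovery line reports progress. A small sketch for pulling those two facts out of mdstat-style text (the sample lines are copied from above; this helper is not part of any QNAP tool):

```shell
# Extract the slot state marker and recovery percentage from
# /proc/mdstat-style text. Sample lines copied from the session above.
mdstat='md1 : active raid5 sdc3[4] sdd3[1] sdb3[3] sda3[2]
      17551701504 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
      [>....................]  recovery =  0.1% (7024664/5850567168) finish=493.'
state=$(printf '%s\n' "$mdstat" | grep -o '\[[0-9]*/[0-9]*\]')
pct=$(printf '%s\n' "$mdstat" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
echo "slots up: $state, recovery ${pct}% done"
```

Polling this until the marker reads `[4/4] [UUUU]` confirms the rebuild has completed.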

[~] # df

Filesystem Size Used Available Use% Mounted on

none 400.0M 331.9M 68.1M 83% /

devtmpfs 897.6M 4.0K 897.6M 0% /dev

tmpfs 64.0M 1.0M 63.0M 2% /tmp

tmpfs 913.7M 148.0K 913.6M 0% /dev/shm

tmpfs 16.0M 100.0K 15.9M 1% /share

/dev/mmcblk1p5 7.7M 46.0K 7.7M 1% /mnt/boot_config

tmpfs 16.0M 0 16.0M 0% /mnt/snapshot/export

/dev/md9 499.5M 178.0M 321.5M 36% /mnt/HDA_ROOT

cgroup_root 913.7M 0 913.7M 0% /sys/fs/cgroup

/dev/md13 417.0M 367.3M 49.7M 88% /mnt/ext

tmpfs 32.0M 27.2M 4.8M 85% /samba_third_party

/dev/ram2 433.9M 2.3M 431.6M 1% /mnt/update

tmpfs 4.0K 0 4.0K 0% /tmp/default_dav_root

tmpfs 64.0M 3.7M 60.3M 6% /samba


[~] # pvs

[~] # lvs

[~] # cd /etc/config

[/etc/config] # vi uLinux.conf

[/etc/config] # storage_util --sys_startup

Detect disk(8, 48)...

dev_count ++ = 0

Detect disk(8, 16)...

dev_count ++ = 1

Detect disk(8, 32)...

dev_count ++ = 2

Detect disk(8, 0)...

dev_count ++ = 3

Detect disk(8, 48)...

Detect disk(8, 16)...

Detect disk(8, 32)...

Detect disk(8, 0)...
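The `Detect disk(8, 48)` lines print (major, minor) device numbers: major 8 is the kernel's SCSI-disk class, and each whole disk occupies 16 minors, so minor/16 gives the disk index (0 = sda, 1 = sdb, ...). A hypothetical helper (not part of storage_util) that maps those minors back to device names, assuming whole-disk minors only:

```shell
# Map a (major 8) minor number back to its whole-disk device name.
# Each SCSI disk takes 16 minors, so minor/16 is the index into a, b, c, ...
minor_to_dev() {
  awk -v m="$1" 'BEGIN { printf "/dev/sd%c\n", 97 + int(m / 16) }'
}
for m in 48 16 32 0; do
  minor_to_dev "$m"
done
```

So the detect order above, (8,48), (8,16), (8,32), (8,0), corresponds to sdd, sdb, sdc, sda.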

[/etc/config] # storage_util --sys_startup_p2

sys_startup_p2:got called count = -1

[/etc/config] #
