How to hot-add a new LUN in Linux

This article explains in detail how to hot-add a new LUN to a Linux system without rebooting the server, walking step by step from configuring the storage and mapping the LUN to scanning for and configuring the multipath device.
In daily work, many SAs run into this problem: a Linux server handles a critical task, it is running out of disk space, and you are asked to add more space, that is, a new LUN, to that very server without rebooting it.

On a Windows server this would be simple: a few mouse clicks in the Disk Management console accomplish the task, and all of it is an online operation.

I used to reboot the server, without much regard for the service, to pick up a newly added LUN; under the new SLA that is no longer acceptable.

OK, here we will discuss the steps for doing this.

First, note that these steps work for both iSCSI and FC SAN.

When adding a new LUN, just follow these steps:

1、Configure your storage: create a virtual disk or, depending on the vendor's terminology, a LUN.

2、Configure your hosts on the storage server. For iSCSI, add the IQN of your client; for FC, add the WWN of its HBA.
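On the host side, these identifiers can be read directly. A minimal sketch, assuming the standard file locations on RHEL-family systems; either source is simply absent if the corresponding stack (open-iscsi, FC HBA driver) is not installed:

```shell
# iSCSI initiator name (IQN) of this host, if open-iscsi is installed:
iqn=$(sed -n 's/^InitiatorName=//p' /etc/iscsi/initiatorname.iscsi 2>/dev/null)
# WWPNs of any FC HBAs (one sysfs file per adapter):
wwns=$(cat /sys/class/fc_host/host*/port_name 2>/dev/null)
echo "IQN:  ${iqn:-not found}"
echo "WWNs: ${wwns:-not found}"
```

These are the values you paste into the storage server's host configuration.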

PS: one thing that matters: if you are in a fabric environment, i.e. there are SAN switches, make sure the SAN switch can read your HBA's WWN.

 

Yes, this is easy to neglect, but it is a critical point. I once forgot to configure the zone, and no matter what scripts or commands I tried, no LUNs showed up.

3、On the storage server, map or present the LUN to the host; this configuration is usually simple, though the details vary by vendor.

4、Then operate on the host server. First, I recommend installing a package for listing SCSI devices: lsscsi.
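Installing it is a one-liner; the package names below are the usual ones (lsscsi plus sg3_utils on RHEL, sg3-utils on Debian), shown commented out since they need root:

```shell
# yum install -y lsscsi sg3_utils        # RHEL/CentOS
# apt-get install -y lsscsi sg3-utils    # Debian/Ubuntu
status=$(command -v lsscsi || echo "lsscsi not installed yet")
echo "$status"
```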

 

 

5、Then discover the LUN you have just mapped.

Usually, if you have already attached some LUNs from the storage server you just configured, you can do the following:

#lsscsi     ##list SCSI devices (or hosts) and their attributes

[1:0:0:0]    cd/dvd  NECVMWar VMware IDE CDR10 1.00  /dev/sr0
[2:0:0:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sda
[2:0:1:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sdb
[2:0:2:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sdc
[2:0:3:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sdd
So if you add a LUN that comes from the same storage, the H, C, T and L values are easy to determine. If I add a new LUN or disk, it should show up as [2:0:4:0], right?

Then you can use the following commands:

 

 
# echo "scsi add-single-device 2 0 4 0" > /proc/scsi/scsi
# tail /var/log/messages
# fdisk -l                  ##something new would appear.

 

Unfortunately, on my system I had never used FC storage this way, and my lsscsi output looks like this:

 

 

I found that 2:0:0:0 is an iSCSI device and 1:0:X:0 are FC devices, so I figured that if I add a new FC LUN, the host (H) would be 1. Why 1? Here we go: I have two HBA adapters:

 

 
# systool -v -c fc_host

One is host0 and one is host1, so if there is a multipath device, there will probably be both 0:0:X:0 and 1:0:X:0.
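If systool is not available, the same information can be pulled straight from sysfs. A sketch, assuming the standard /sys/class/fc_host layout; the directory only exists on machines with FC HBAs:

```shell
fc_hosts=$(ls -d /sys/class/fc_host/host* 2>/dev/null)
if [ -n "$fc_hosts" ]; then
    for h in $fc_hosts; do
        # Each hostN directory corresponds to one HBA port.
        echo "$(basename "$h"): WWN $(cat "$h/port_name")"
    done
else
    echo "no FC HBAs on this machine"
fi
```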
 

6、Scan the newly added LUN

 
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "- - -" > /sys/class/scsi_host/host1/scan
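Rather than naming host0 and host1 by hand, you can loop over every SCSI host the kernel knows about. A sketch; the scan files are only writable by root, so on an unprivileged shell the loop simply skips them:

```shell
rescanned=0
for scan in /sys/class/scsi_host/host*/scan; do
    # Writing "- - -" (wildcards for channel, target, LUN) triggers a
    # full rescan of that HBA.
    [ -w "$scan" ] || continue
    echo "- - -" > "$scan"
    rescanned=$((rescanned + 1))
done
echo "hosts rescanned: $rescanned"
```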

This is why I mentioned 0:0:X:0 and 1:0:X:0 and host0/host1. My output looks like this:

 

Then we can see the newly added devices:

 
# lsscsi

Of course, we have two adapters, so H is 0 and 1; B (bus) is 0 for both; T (target) runs 0, 1, 2, ...; and the last field, L (LUN), is determined by your storage configuration. Mine is LUN 1, so it is 1 for the host and 0 for the storage. This should make it clearer:

7、OK, now we have the newly added LUN. I have two adapters online and two controllers on the storage server, so there will be multipath devices, and I have to configure them.

I configure them with the device-mapper multipath service:
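A minimal sketch of that setup on RHEL-family systems; the package name (device-mapper-multipath) and the mpathconf helper are assumptions that may differ on your distribution, and the setup commands are commented out since they need root:

```shell
# 1. Install the tools and enable the daemon (run as root):
#    yum install -y device-mapper-multipath
#    mpathconf --enable --with_multipathd y   # writes /etc/multipath.conf, starts multipathd
# 2. Verify: each LUN should appear once, with one path per HBA:
maps=$(multipath -ll 2>/dev/null)
echo "${maps:-no multipath maps visible (multipathd not set up, or not root)}"
```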

Up to this point, everything works.

 

Conclusions:

If you want to hot-add a new LUN on a Linux system, there are two ways:

1、Script: on Red Hat, install the sg3_utils RPM package, which provides the script /usr/bin/rescan-scsi-bus.sh; on SuSE it is /bin/rescan-scsi-bus.sh.
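Typical usage of that script is just to run it as root with no arguments; the extra flags shown below exist in newer sg3_utils versions, so check --help on yours before relying on them:

```shell
# rescan-scsi-bus.sh           # scan all SCSI hosts for new LUNs
# rescan-scsi-bus.sh -s        # also detect resized LUNs (newer versions)
# rescan-scsi-bus.sh -r        # also allow removal of vanished devices
script=$(command -v rescan-scsi-bus.sh || echo /usr/bin/rescan-scsi-bus.sh)
echo "would run as root: $script"
```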

2、(1) # echo "scsi add-single-device X Y Z W" > /proc/scsi/scsi

The above example would add a device on controller X(HOST), bus Y, target Z, lun W. Change the values according to the position and settings of the device that was added.
 

      (2) # echo "c t l" >  /sys/class/scsi_host/hostH/scan

where H is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN.
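For example, to probe only one address instead of wildcard-scanning everything, using the hypothetical [2:0:4:0] address guessed from the lsscsi listing earlier:

```shell
H=2; c=0; t=4; l=0
scan=/sys/class/scsi_host/host$H/scan
if [ -w "$scan" ]; then
    echo "$c $t $l" > "$scan"     # probe just this one channel/target/LUN
else
    echo "would run: echo \"$c $t $l\" > $scan"
fi
```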

In RHEL 6, echo "scsi add-single-device X Y Z W" > /proc/scsi/scsi is deprecated; use the sysfs scan interface instead.

 

 

Quote from Red Hat:

HP Smart Array controllers use the cciss driver. This driver does not present disk or array devices to the Linux SCSI subsystem and users should use the vendor provided management tools to configure and enable new disks or disk arrays.

For tape devices, media changers and other serial access devices the CCISS controller will register devices with the SCSI subsystem. If new tapes or medium changers are added to a running system, they will not be seen by the system until the CCISS scsi bus is re-scanned. This happens automatically on reboot. If a reboot is not possible or desirable, the following commands can be used. First, a rescan is done to inform the cciss driver about the changes to the SCSI bus:

$ echo "rescan" > /proc/driver/cciss/cciss0

Then, add an individual device with:

$ echo scsi add-single-device 3 2 1 0 > /proc/scsi/scsi

The above example would add a device on controller 3, bus 2, target 1, lun 0. Change the values according to the position and settings of the device that was added.

It is important to note that this operation will not affect any disks or RAID arrays presented to the system via the CCISS controller.

I tried it, but it did not work for me!
