Huawei OceanStor 9000 V5 Scale-Out NAS Technical White Paper
Issue 01
Date 2020-06-30
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Contents
1 Introduction
2 Hardware, Software, and Network
2.1 Hardware and Software Architectures
2.2 Network Overview
2.2.1 Ethernet Networking (Ethernet Front-End and Ethernet Back-End)
2.2.2 InfiniBand Networking (InfiniBand Front-End and InfiniBand Back-End)
2.2.3 Ethernet + InfiniBand Networking (Ethernet Front-End and InfiniBand Back-End)
2.3 System Running Environment
1 Introduction
In the era of data explosion, the amount of data available to people has been increasing
exponentially. Traditional standalone file systems can expand capacity only by adding more
disks, and they are no longer capable of meeting modern storage requirements in terms of
capacity scale, capacity growth speed, data backup, and data security. New storage models
have been introduced to address these limitations:
Centralized storage
File metadata (data that describes other data, such as the file location and size) and file
data are stored centrally. Back-end SAN and NAS devices are mounted to front-end NFS.
This model of storage system is difficult to expand and cannot easily scale to petabytes of
capacity.
Asymmetrical distributed storage
It has only one metadata service (MDS) node and stores file metadata and data
separately. Examples include Lustre and MooseFS. A single MDS node is a single point
of failure; this can be mitigated with a heartbeat mechanism, but the performance
bottleneck of single-point metadata access remains.
Fully symmetrical distributed storage
It employs a fully symmetrical, decentralized, and distributed architecture. Files on
storage devices are located using the consistent hash algorithm, an implementation of a
distributed hash table (DHT). Therefore, this model of storage system needs no MDS
node: it has storage nodes only and does not differentiate between metadata and data
blocks. However, the consistent hash algorithm must remain efficient, balanced, and
consistent during node expansion and failure scenarios (a minimal lookup sketch follows this list).
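For reference, the following is a minimal Python sketch of the consistent-hash lookup used by such DHT-based systems. The node names, virtual-node count, and hash choice are illustrative only, and OceanStor DFS itself does not use DHT to locate files, as described below.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent hash ring: maps keys (e.g. file paths) to nodes."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets several virtual points on the ring
        # so that keys stay evenly distributed.
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._points = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def locate(self, key):
        """Return the node responsible for the given key (file path)."""
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-1", "node-2", "node-3"])
print(ring.locate("/share/project/file.dat"))
```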
Huawei OceanStor distributed file system (DFS) storage has a fully symmetrical,
decentralized, and distributed architecture, but it does not use DHT to locate files on storage
nodes. Each node of an OceanStor DFS storage system can provide MDS and data service as
well as client agent for external access. OceanStor DFS has no dedicated MDS nodes,
eliminating single point of failure and performance bottlenecks. It enables smooth
switchovers during node expansion or failures, and the switchover process is transparent to
services. OceanStor DFS provides a unified file system space for application servers, allowing
them to share data with each other. By contrast, a storage device that works in a conventional
dual-controller or multi-controller cluster mode provides services with each node supporting a
specific service load. When capacity is insufficient, disk enclosures are added to expand it. On
such storage devices, services and nodes are bonded, so a service and its associated file system
run on only one node, which easily leads to load imbalance within the system. Furthermore,
this capacity expansion approach is essentially scale-up: it aims to improve the performance of
a single node but fails to improve whole-system performance linearly as the capacity
increases.
As the software basis of OceanStor 9000 V5, OceanStor DFS (originally called Wushan FS)
works in all-active share-nothing mode, where data and metadata (management data) are
distributed evenly on all nodes. This prevents system resource contentions and eliminates
system bottlenecks. Even if a node fails, OceanStor 9000 V5 automatically identifies the
failed node and restores its data, making the failure transparent to services. In this way,
service continuity is ensured. OceanStor 9000 V5 adopts a networking mechanism featuring
full redundancy and full mesh, employs a symmetrical distributed cluster design, and provides
a globally unified namespace, allowing nodes to concurrently access any file stored on
OceanStor 9000 V5. In addition, OceanStor 9000 V5 supports fine-grained global locking
within files and allows multiple nodes to concurrently access different parts of the same file,
implementing high access concurrency at a high performance level.
OceanStor 9000 V5 provides different types of hardware nodes for different application
scenarios, for example, P nodes for performance-intensive applications and C nodes for
large-capacity applications. Different types of nodes can be intermixed to achieve an optimal
effect. In an intermixed deployment, at least three nodes of each type are required. OceanStor
9000 V5 has different node pools for different hardware node types and centralizes all node
types in a single file system. Node pools meet multiple levels of capacity and performance
requirements, and the Dynamic Storage Tiering (DST) feature enables data to flow between
different storage tiers.
Figure 2-2 shows the OceanStor 9000 V5 hardware nodes.
All storage nodes of OceanStor 9000 V5 protect cached data against power failures, so data in
the cache is persistently protected, and use RDMA to reduce the number of in-memory data
copies during network transmission. These technologies improve overall system
responsiveness without compromising system reliability.
An OceanStor 9000 V5 storage system has its hardware platform and software system. The
hardware platform includes network devices and physical storage nodes. The software system
includes OceanStor DFS, management system, and Info-series value-added features.
OceanStor DFS provides the NAS share service. The basic software package of OceanStor DFS
supports NFS, CIFS, FTP, NDMP, and more, as well as client load balancing and performance
acceleration software. The management system includes modules for system resource
management, storage device management, network device management (10GE networking),
statistics reporting, trend analysis, capacity forecasting, performance comparison, and
diagnostic analysis.
Table 2-1 lists the software of OceanStor 9000 V5:
Name | Function
OceanStor DFS | Distributed file system software
DeviceManager | Device management software
NAS storage value-added features:
  InfoEqualizer | Load balancing of client connections
  InfoTurbo | Performance acceleration
  InfoAllocator | Quota management
  InfoTier | Automatic storage tiering
  InfoLocker | WORM
  InfoStamper | Snapshot
  InfoReplicator | Remote replication
  InfoScanner | Antivirus
  InfoRevive | Video image restoration
  InfoMigrator | File migration
  InfoStreamDS | Direct stream storage
  InfoContainer | VM
The front-end service network is used to connect OceanStor 9000 V5 to the customer's
network.
The back-end storage network is used to interconnect all nodes in OceanStor 9000 V5.
For OceanStor 9000 V5, the cluster back-end network can be set up based on 10GE, 25GE, or
InfiniBand and the front-end network can be set up based on GE, 10GE, 25GE, or InfiniBand,
meeting various networking requirements. Network redundancy is implemented for each node
of OceanStor 9000 V5 in all network types, enabling OceanStor 9000 V5 to keep working
properly in case a single network port or switch fails.
The front-end network and back-end network can use different physical network adapters for
network isolation. The Intelligent Platform Management Interface (IPMI) ports provided by
OceanStor 9000 V5 allow users to access the device management interface.
The different OceanStor 9000 V5 nodes support the following network types:
2 x 10GE front-end + 2 x 10GE back-end
2 x GE front-end + 2 x 10GE back-end
2 x 100 Gbit/s IB front-end + 2 x 100 Gbit/s IB back-end
2 x 25GE front-end + 2 x 25GE back-end
2 x 10GE front-end + 2 x 25GE back-end
2 x 10GE front-end + 2 x 100 Gbit/s IB back-end
2 x 25GE front-end + 2 x 100 Gbit/s IB back-end
All nodes of OceanStor 9000 V5 can run the NAS service to provide NAS access
interfaces. Figure 2-4 shows the deployment.
Figure 2-5 Ethernet switches at the front end and back end
Network description:
When OceanStor 9000 V5 uses an Ethernet network, the front-end network connects to
the customer's Ethernet switched network, and the back-end network uses internal
Ethernet switches. Front-end and back-end switches are configured in redundant mode.
GE switches are connected to management and IPMI ports through network cables for
device management only.
Figure 2-6 InfiniBand switches at the front end and back end
Network description:
When OceanStor 9000 V5 uses an InfiniBand network, the front-end network connects
to the customer's InfiniBand switched network, and the back-end network uses internal
InfiniBand switches. Front-end and back-end switches are configured in redundant
mode.
GE switches are connected to management and IPMI ports through network cables for
device management only.
Figure 2-7 Ethernet switches at the front end and InfiniBand switches at the back end
Network description:
When OceanStor 9000 V5 uses an Ethernet + InfiniBand network, the front-end network
connects to the customer's Ethernet switched network, and the back-end network uses
internal InfiniBand switches. Front-end and back-end switches are configured in
redundant mode.
GE switches are connected to management and IPMI ports through network cables for
device management only.
As shown in Figure 3-1, OceanStor 9000 V5 consists of three nodes that are transparent to
users. Users do not know which nodes are providing services for them. When users access
different files, different nodes provide services.
OceanStor DFS supports seamless horizontal expansion, from 3 to 288 nodes, and the
expansion process does not interrupt services. OceanStor 9000 V5 employs a Share Nothing
fully-symmetrical distributed architecture, where metadata and data are evenly distributed to
each node. Such an architecture eliminates performance bottlenecks. As the number of nodes
grows, the storage capacity and computing capability also grow, delivering linearly increased
throughput and concurrent processing capability for end users. OceanStor 9000 V5 supports
thin provisioning, which allocates storage capacity to applications on demand. When an
application's storage capacity becomes insufficient due to data growth, OceanStor 9000 V5
adds capacity to the application from the back-end storage pool. Thin provisioning therefore
makes the best use of available storage capacity.
OceanStor DFS provides CIFS, NFS, FTP access and a unified namespace, allowing users to
easily access the OceanStor 9000 V5 storage system. Additionally, OceanStor DFS offers
inter-node load balancing and cluster node management. Combined with a symmetrical
architecture, these functions enable each node of OceanStor 9000 V5 to provide global service
access, and failover occurs automatically against single points of failure.
Figure 3-3 shows the logical architecture of OceanStor DFS:
OceanStor DFS has three planes: service plane, storage resource pool plane, and management
plane.
Service plane: provides the distributed file system service.
The distributed file system service provides value-added features associated with NAS
access and file systems. It has a unified namespace to provide storage protocol–based
access as well as NDMP and FTP services.
Storage resource pool plane: allocates and manages all physical storage resources of
clustered storage nodes.
Data of NAS storage is stored in the unified storage resource pool. The storage resource
pool employs distributed technology to offer consistent, cross-node, and reliable key
value (KV) storage service for the service plane. The storage resource pool plane also
provides cross-node load balancing and data repair capabilities. With load balancing, the
storage system is able to leverage the CPU processing, memory cache, and disk capacity
capabilities of newly added nodes to make the system throughput and IOPS linearly
grow as new nodes join the cluster.
The storage resource pool plane provides data read and write capabilities for the
distributed file system service. This allows OceanStor 9000 V5 to offer the NAS service
in the same physical cluster, with services sharing the same physical storage space.
Management plane: provides a graphical user interface (GUI) and a command-line
interface (CLI) tool to manage cluster status and configure system data.
The functions provided by the management plane include hardware resource
configuration, performance monitoring, storage system parameter configuration, user
management, hardware node status management, and software upgrade.
with the number of nodes. When +M data protection is enabled, data corruption occurs only
when M+1 or more nodes in a node pool fail or M+1 or more disks fail. Also, the data
corruption possibility is dramatically reduced after the storage cluster is divided into multiple
node pools. Such a protection method enables files to be distributed to the whole cluster,
providing a higher concurrent data access capability and concurrent data reconstruction
capability. When disks or nodes fail, the system identifies which segments of which files are
affected and assigns multiple nodes to the reconstruction, as sketched below. The number of
disks and CPUs that participate in the reconstruction is much larger than that supported by
RAID technology, shortening the fault reconstruction time.
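The parallel repair described above can be pictured with a simplified sketch; the stripe map, node names, and round-robin assignment are illustrative assumptions, not the actual repair scheduler.

```python
from collections import defaultdict

def plan_reconstruction(stripe_map, failed_disk, healthy_nodes):
    """Find stripes that lost a strip on the failed disk and spread the
    rebuild work across many nodes (simple round-robin here)."""
    # stripe_map: {stripe_id: [(node, disk), ...]}  -- where each strip lives
    affected = [sid for sid, strips in stripe_map.items()
                if any(disk == failed_disk for _, disk in strips)]
    plan = defaultdict(list)
    for i, sid in enumerate(affected):
        plan[healthy_nodes[i % len(healthy_nodes)]].append(sid)
    return plan  # each node rebuilds its share of stripes concurrently

plan = plan_reconstruction(
    {"s1": [("n1", "d1"), ("n2", "d3")], "s2": [("n3", "d7"), ("n4", "d9")],
     "s3": [("n2", "d4"), ("n1", "d1")]},
    failed_disk="d1",
    healthy_nodes=["n2", "n3", "n4"],
)
print(dict(plan))  # e.g. {'n2': ['s1'], 'n3': ['s3']}
```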
OceanStor DFS provides different types of hardware nodes, which can be intermixed, for
different applications. It centralizes all node types in a single file system, meeting multiple
levels of capacity and performance requirements, and the DST feature enables data to flow
between different storage tiers.
Metadata is protected in a similar way to data; the difference lies in that metadata protection
uses mirroring. Specifically, each copy is independent and complete. By default, metadata
protection is one level higher than data protection for OceanStor 9000 V5.
OceanStor DFS employs a unified namespace. The directory structure of the file system is a
tree structure, and the cluster consists of equal physical nodes. The file system tree is divided
into multiple sub trees, and the MDS module of each physical node manages a different sub
tree.
The directory tree structure is divided into multiple sub trees. Each sub tree belongs to one
MDS module and one MDS module can have multiple sub trees.
Sub tree splitting is dependent on directory splitting. A directory is split when either of the
following conditions is met:
Condition 1: The weighted access frequency of the directory has exceeded the threshold.
Each time metadata is accessed, the directory weighted access frequency is increased in
the memory based on the access type, and is decreased as time goes by. When the
weighted access frequency has exceeded the threshold, the directory is split.
Condition 2: The directory has an excessive number of files.
A split directory is marked as dir_frag. When the preceding conditions are no longer met, split
directories are merged to avoid too many directory segments.
If a split directory is the root of a sub tree, the directory splitting is actually sub tree splitting.
A split sub tree is still stored on the original metadata server and periodically experiences a
load balancing test. If load imbalance is detected, the split sub tree will be migrated from one
metadata server to another.
To sum up, when a very large directory is accessed frequently, it is split into multiple
dir_frag directories, which correspond to multiple sub trees. These sub trees are
distributed to multiple metadata servers, eliminating metadata access bottlenecks.
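The two split conditions can be summarized in a short sketch; the threshold values, the decay model, and the names are illustrative assumptions rather than the product's internal parameters.

```python
import time

# Illustrative thresholds; the real system's values are not documented here.
HOT_THRESHOLD = 10_000.0        # weighted access frequency that triggers a split
MAX_FILES_PER_DIR = 1_000_000   # file count that triggers a split
DECAY_HALF_LIFE = 300.0         # seconds; accumulated heat decays over time

class DirStats:
    """Tracks the split conditions for one directory."""

    def __init__(self):
        self.heat = 0.0
        self.file_count = 0
        self.last_update = time.time()

    def record_access(self, weight):
        """Decay the old heat, then add the weight of this metadata access."""
        now = time.time()
        self.heat *= 0.5 ** ((now - self.last_update) / DECAY_HALF_LIFE)
        self.heat += weight
        self.last_update = now

    def should_split(self):
        # Condition 1: weighted access frequency exceeds the threshold.
        # Condition 2: the directory holds too many files.
        return self.heat > HOT_THRESHOLD or self.file_count > MAX_FILES_PER_DIR
```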
As shown in Figure 3-5, OceanStor 9000 V5 consists of three nodes on which user data is
evenly distributed. During actual service running, user data distribution is dependent on the
system configuration.
OceanStor 9000 V5 uses erasure codes to store data and provides different data protection
methods for directories and files. Different data protection methods are implemented based on
different data striping mechanisms.
Each piece of data written to OceanStor 9000 V5 is allocated strips (NAS strip size options: 512
KB/256 KB/128 KB/32 KB/16 KB; OBS: 512 KB). The redundancy ratio can be configured
on a per-directory basis. Each file is divided into multiple original data strips. M
redundant data strips are calculated for every N original data strips, and the N+M strips form a
stripe, which is then written to the system. In the event that a system exception causes loss of
some strips, as long as the number of lost strips in a stripe does not exceed M, data can still be
read and written properly. Lost strips can be retrieved from the remaining strips based on a
data reconstruction algorithm. In erasure code mode, the space utilization rate is about
N/(N+M), and data reliability is determined by M, where a larger value of M results in higher
reliability.
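As a concrete illustration of the N+M principle, the following sketch uses simple XOR parity for the M = 1 case only; the product's erasure code supports larger M, so this is just an analogy for how a lost strip is rebuilt and how utilization equals N/(N+M).

```python
def xor_strip(strips):
    """XOR equal-length byte strings (single-parity illustration only)."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, byte in enumerate(strip):
            out[i] ^= byte
    return bytes(out)

# 4+1 example: four data strips (shrunk to 16 bytes each here) plus one
# parity strip form a stripe.
data_strips = [bytes([value]) * 16 for value in (1, 2, 3, 4)]
parity = xor_strip(data_strips)

# Losing any one strip is tolerated: rebuild the third strip from the rest.
rebuilt = xor_strip([data_strips[0], data_strips[1], data_strips[3], parity])
assert rebuilt == data_strips[2]

# Space utilization is N / (N + M): 4 / (4 + 1) = 80% in this example.
print(len(data_strips) / (len(data_strips) + 1))
```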
In the internal storage resource pool of OceanStor 9000 V5, all data is stored in the unit of
object (the object here is not the object concept in OBS), making OceanStor 9000 V5 a
distributed storage system. The object-based storage system of OceanStor 9000 V5 formats all
OceanStor 9000 V5 devices into object-based storage devices and interconnects them to form
a clustered system.
OceanStor DFS continuously monitors the node and disk status in the system.
If a bad sector exists, the system automatically detects it and rectifies the fault in the
background. Then, the system reconstructs the data of the corresponding bad sector in
memory and rewrites the data to the disk.
If a disk or node fails, the clustered object-based storage system automatically
discovers the failure and initiates object-level data reconstruction. In this type of data
reconstruction, only real data is restored, instead of performing full disk reconstruction as
traditional RAID does. Therefore, the reconstruction efficiency is higher. In addition, different
nodes and disks are selected as targets for concurrent reconstruction of damaged objects.
Compared with traditional RAID, which reconstructs data to only one hot spare disk,
object-level data reconstruction is much faster.
Traditional RAID technology stripes data across different disks that belong to the same RAID
group. If a disk fails, RAID reconstruction is implemented to reconstruct the data previously
stored on the failed disk.
RAID levels commonly used by storage systems are RAID 0, 1, 5, and 6. RAID 6, which
provides the highest reliability among all RAID levels, tolerates a concurrent failure of two
disks at most. Besides, storage systems use controllers to execute RAID-based data storage.
To prevent a controller failure, a storage system is typically equipped with dual controllers to
ensure service availability. However, if both controllers fail, service interruption becomes
inevitable. Although such storage systems can further improve system reliability by
implementing inter-node synchronous or asynchronous data replication, the disk utilization
will become lower, causing a higher total cost of ownership (TCO).
The data protection technology employed by OceanStor 9000 V5 is based on distributed and
inter-node redundancy. Data written into OceanStor 9000 V5 is divided into N data strips, and
then M redundant data strips are generated (both N and M are integers). These data strips
are stored on N+M nodes.
Because the strips of a stripe are saved on multiple nodes, OceanStor 9000 V5 ensures data
integrity not only upon disk failures but also upon node failures. As long as the number of
concurrently failed nodes does not exceed M, OceanStor 9000 V5 can continue to provide
services properly.
Through data reconstruction, OceanStor 9000 V5 is able to reconstruct damaged data to
protect data reliability.
Also, OceanStor 9000 V5 provides N+M:B protection, allowing M disks or B nodes to fail
without damaging data integrity. This protection mode is particularly effective for a
small-capacity storage system that has fewer than N+M nodes.
The data protection modes provided by OceanStor 9000 V5, which are based on data
redundancy among multiple nodes, achieve high reliability similar to that of traditional RAID groups.
Furthermore, the data protection modes maintain a high disk utilization rate of up to N/(N +
M). Different from traditional RAID groups that require hot spare disks to be allocated in
advance, OceanStor 9000 V5 allows any available space to serve as hot spare space, further
improving storage system utilization.
OceanStor 9000 V5 provides multiple N+M or N+M:B redundancy ratios. A user can set a
redundancy ratio for any directory. The files in the directory are saved at the redundancy ratio.
It is important to note that users can configure a redundancy ratio for a sub directory different
from that for the parent directory. This means that data redundancy can be flexibly configured
based on actual requirements to obtain the desired reliability level.
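The per-directory behavior can be illustrated with a small sketch that resolves the effective protection of a file from the nearest configured ancestor directory; the data structure, paths, and default shown here are assumptions for illustration, not the product's implementation.

```python
import posixpath

# Explicit per-directory settings; a subdirectory may override its parent.
redundancy = {
    "/": "+2",           # assumed cluster-wide default for this example
    "/archive": "+4:1",  # colder data, higher protection
    "/scratch": "+1",    # temporary data, lower protection
}

def effective_protection(file_path):
    """Return the redundancy setting of the nearest configured ancestor."""
    directory = posixpath.dirname(file_path)
    while directory not in redundancy and directory != "/":
        directory = posixpath.dirname(directory)
    return redundancy.get(directory, redundancy["/"])

print(effective_protection("/archive/2019/video.mov"))   # +4:1
print(effective_protection("/home/user/report.doc"))     # +2
```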
Nodes of an OceanStor 9000 V5 storage system can form multiple node pools. Users can
establish node pools as needed for system deployment and expansion, and a node pool has 3
to 20 nodes.
OceanStor 9000 V5 allows intelligent configuration, where a user only needs to specify the
required data reliability (the maximum number of concurrently failed nodes or disks that a
user can tolerate). Simply speaking, users only need to set +M or +M:B for a directory or file.
OceanStor 9000 V5 automatically adopts the most suitable redundancy ratio based on the
number of nodes used in a node pool. The value range of M allowed by OceanStor 9000 V5 is
1 to 4 (1 to 3 for object-based storage). When +M:B is configured, B can be 1. Table 3-1 lists
N+M or N+M:B that corresponds to different configurations and number of nodes, where the
values in parentheses are storage utilization rates.
Number of Nodes | +1 | +2 | +3 | +4 | +2:1 | +3:1
3 | 2+1 (66.66%) | 4+2:1 (66.66%) | 6+3(:1) (66.66%) | 6+4:1 (60%) | 4+2:1 (66.66%) | 6+3:1 (66.66%)
4 | 3+1 (75%) | 4+2:1 (66.66%) | 6+3(:1) (66.66%) | 6+4:1 (60%) | 6+2:1 (75%) | 8+3:1 (72.72%)
5 | 4+1 (80%) | 4+2:1 (66.66%) | 6+3(:1) (66.66%) | 6+4:1 (60%) | 8+2:1 (80%) | 12+3:1 (80%)
6 | 4+1 (80%) | 4+2 (66.66%) | 6+3(:1) (66.66%) | 6+4:1 (60%) | 10+2:1 (83.33%) | 14+3:1 (82.35%)
7 | 6+1 (85.71%) | 4+2 (66.66%) | 6+3(:1) (66.66%) | 6+4:1 (60%) | 12+2:1 (85.71%) | 16+3:1 (84.21%)
8 | 6+1 (85.71%) | 6+2 (75%) | 6+3(:1) (66.66%) | 6+4:1 (60%) | 14+2:1 (87.50%) | 16+3:1 (84.21%)
9 | 8+1 (88.88%) | 6+2 (75%) | 6+3 (66.66%) | 6+4:1 (60%) | 16+2:1 (88.88%) | 16+3:1 (84.21%)
10 | 8+1 (88.88%) | 8+2 (80%) | 6+3 (66.66%) | 6+4 (60%) | 16+2:1 (88.88%) | 16+3:1 (84.21%)
11 | 10+1 (90.90%) | 8+2 (80%) | 8+3 (72.72%) | 6+4 (60%) | 16+2:1 (88.88%)/18+2:1 (90%) | 16+3:1 (84.21%)
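Assuming the "+2" column above is read as the layout chosen for a +2 setting, the automatic selection can be pictured as a simple lookup keyed by node-pool size. This is only a sketch of the table's contents, not the product's selection algorithm.

```python
# N+M (or N+M:B) layout used for a "+2" setting, keyed by node-pool size,
# copied from the 3- to 11-node rows of Table 3-1.
PLUS_2_LAYOUT = {
    3: "4+2:1", 4: "4+2:1", 5: "4+2:1",
    6: "4+2",   7: "4+2",
    8: "6+2",   9: "6+2",
    10: "8+2",  11: "8+2",
}

def layout_for_plus2(node_count):
    """Return the layout that Table 3-1 lists for '+2' at a given pool size."""
    if node_count not in PLUS_2_LAYOUT:
        raise ValueError("this sketch only covers pools of 3 to 11 nodes")
    return PLUS_2_LAYOUT[node_count]

print(layout_for_plus2(5))   # 4+2:1 -- fewer than N+M nodes, so N+M:B is used
print(layout_for_plus2(10))  # 8+2
```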
Level-2 Cache
Level-2 cache provides data block metadata and data block caching. It consists of SSDs and
only caches hotspot data on all disks of the local node. Level-2 cache accelerates access to
strips and stripes on the local node, mitigates the disk stress caused by frequent hotspot data
access, and accelerates response to data block requests. For example, level-2 cache provides
caching for each disk's super blocks, object and object set descriptors, and descriptors of key
objects.
Cache Releasing
Data reclamation
After cached data is modified by a client, the client CA adds a write lock to the data.
When the nodes caching the data detect the lock, the corresponding cache space is
reclaimed.
Data aging
When cache space reaches its aging threshold, the cached data that has not been accessed
for the longest period of time is released according to least recently used (LRU)
statistics, as sketched below.
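A minimal sketch of LRU-based aging follows, assuming a byte-size threshold; the threshold ratio and data structure are illustrative.

```python
from collections import OrderedDict

class AgingCache:
    """Releases the least recently used entries once the aging threshold is hit."""

    def __init__(self, capacity_bytes, aging_ratio=0.9):
        self.threshold = int(capacity_bytes * aging_ratio)  # start aging here
        self.used = 0
        self._entries = OrderedDict()   # key -> bytes, oldest entry first

    def get(self, key):
        data = self._entries.get(key)
        if data is not None:
            self._entries.move_to_end(key)   # a hit refreshes the LRU position
        return data

    def put(self, key, data):
        if key in self._entries:
            self.used -= len(self._entries.pop(key))
        self._entries[key] = data
        self.used += len(data)
        while self.used > self.threshold:
            _, released = self._entries.popitem(last=False)  # oldest goes first
            self.used -= len(released)
```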
The global cache function of OceanStor DFS consolidates the cache space of each storage
server to logically form a unified global cache resource pool. Only one copy of user data is
stored in the distributed storage system. For a file stripe, only its data strips are cached; parity
strips are not cached. As long as the file stripe data that a client agent attempts to access
resides in the cache of any storage server, the cached data can be hit, regardless of which
storage server the client agent goes through to access the data. In this way, access
priority is given to the data cached in the global cache. If the requested data cannot be found
in the global cache, it is read from disks.
Compared with existing technologies, the global cache function of OceanStor 9000 V5 allows
users to leverage the total cache space across the entire system. OceanStor 9000 V5 prevents
unnecessary disk I/Os and network I/Os related to hotspot data, maximizing access
performance.
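The lookup path can be sketched as follows, assuming a simple per-node cache map; the placement policy on a miss is arbitrary here and is not the product's algorithm.

```python
class GlobalCacheView:
    """Cluster-wide lookup: a data strip cached on any node is a cache hit."""

    def __init__(self, node_caches):
        # node_caches: {node_name: {strip_id: bytes}} -- only data strips,
        # never parity strips, are expected to be placed here.
        self.node_caches = node_caches

    def read_strip(self, strip_id, read_from_disk):
        for cache in self.node_caches.values():
            if strip_id in cache:
                return cache[strip_id]       # hit somewhere in the global cache
        data = read_from_disk(strip_id)      # miss: fall back to disk
        # Cache the strip on one node (placement is arbitrary in this sketch).
        next(iter(self.node_caches.values()))[strip_id] = data
        return data

view = GlobalCacheView({"node-1": {}, "node-2": {"strip-42": b"cached data"}})
print(view.read_strip("strip-42", read_from_disk=lambda sid: b"read from disk"))
```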
As shown in Figure 3-11, the software layer consists of the upper file system service and the
lower storage resource pool. The file system service processes NAS protocol parsing, file
operation semantic parsing, and file system metadata management. The storage resource pool
allocates nodes' disk resources and processes persistent data storage.
When a client connects to a physical node to write a file, this write request is first processed
by the file system service. The file system queries the metadata of the file based on the file
path and file name to obtain the file layout and protection level information.
OceanStor DFS protects file data across nodes and disks. A file is first divided into stripes,
each of which consists of N strips and M redundancy parity strips. Different strips of a stripe
are stored on different disks on different nodes.
As illustrated in Figure 3-12, after the file system service obtains the file layout and protection
level information, it calculates redundancy data strips based on the stripe granularity. Then it
writes strips concurrently to different disks on different nodes over the back-end network,
with only one strip on each disk.
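A condensed sketch of this layout step follows, with the parity computation stubbed out and a simple rotating placement policy standing in for the real allocator.

```python
def make_stripes(data, strip_size, n, m, nodes):
    """Cut data into stripes of N data strips + M parity strips and place
    each strip of a stripe on a different node."""
    assert len(nodes) >= n + m, "need at least N + M nodes for one strip per node"
    layout = []
    stripe_bytes = strip_size * n
    for stripe_no, offset in enumerate(range(0, len(data), stripe_bytes)):
        strips = [data[offset + i * strip_size: offset + (i + 1) * strip_size]
                  for i in range(n)]
        strips += [b"<parity>"] * m      # parity computation omitted in this sketch
        start = stripe_no % len(nodes)   # rotate the starting node for balance
        layout.append([(nodes[(start + i) % len(nodes)], strips[i])
                       for i in range(n + m)])
    return layout

layout = make_stripes(b"x" * 4096, strip_size=512, n=4, m=2,
                      nodes=["n1", "n2", "n3", "n4", "n5", "n6"])
print(len(layout), "stripes;", [node for node, _ in layout[0]])
```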
The load balancing service assigns IP addresses to each node and supports failover and
failback of node IP addresses. A user only needs to configure an IP address pool for
OceanStor 9000 V5, instead of allocating an IP address to each node one by one. This
management method simplifies IP address management and facilitates cluster expansion,
as described below.
Each OceanStor 9000 V5 node has a static IP address and a dynamic IP address. After a
failed node recovers, its static IP address remains the same. However, its original
dynamic IP address is lost, and a new dynamic IP address will be assigned to the node.
During deployment, a deployment tool is used to configure static IP addresses. Dynamic
IP addresses are assigned by the load balancing service in a unified manner based on an
IP address pool. Figure 3-14 shows how IP addresses are assigned to nodes.
When a node is added, the load balancing service obtains an idle IP address from the IP
address pool and assigns it to the newly added node. If no idle IP address is available, the
load balancing service determines whether any existing clustered node has multiple IP
addresses. If yes, the load balancing service deprives the clustered node of one IP
address and assigns it to the newly added node, ensuring that the new node takes part in
load balancing. If no, an alarm is generated, asking the OceanStor 9000 V5 system
administrator to add idle IP addresses to the IP address pool. Figure 3-15 shows how IP
addresses are assigned to newly added nodes.
If some of a node's network adapters fail and their IP addresses become unavailable, the
system implements IP address failover within the node, switching the IP addresses from
the failed network adapters to functional ones. If a node has multiple network adapters,
IP addresses are evenly assigned to them. If an entire node fails, the node with the lowest
load in the cluster is selected to take over, as shown in Figure 3-16.
Once the failed node recovers, the load balancing service obtains an idle IP address from
the IP address pool and assigns it to the recovered node. If no idle IP address is available,
the load balancing service determines whether any existing clustered node has multiple
IP addresses. If multiple addresses exist on a clustered node, the load balancing service
deprives the clustered node of one IP address and assigns it to the recovered node. If no
node has multiple addresses, an alarm is generated, asking the OceanStor 9000 V5
system administrator to add idle IP addresses to the IP address pool. Figure 3-17 shows
an IP address switchover when a node recovers. A minimal sketch of this assignment
logic follows.
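The assignment logic used both for newly added nodes and for recovered nodes can be sketched as follows; the function and variable names and the example addresses are illustrative only.

```python
def assign_dynamic_ip(node, idle_pool, assignments, raise_alarm):
    """Give a dynamic IP to a newly added or recovered node.

    idle_pool: list of unused addresses; assignments: {node: [dynamic IPs]}."""
    if idle_pool:
        ip = idle_pool.pop()                      # prefer an idle address
    else:
        donors = [n for n, ips in assignments.items() if len(ips) > 1]
        if not donors:
            raise_alarm("IP address pool exhausted; add idle addresses")
            return None
        ip = assignments[donors[0]].pop()         # borrow from a multi-IP node
    assignments.setdefault(node, []).append(ip)
    return ip

assignments = {"node-1": ["192.0.2.11", "192.0.2.12"], "node-2": ["192.0.2.13"]}
print(assign_dynamic_ip("node-3", [], assignments, print))  # 192.0.2.12
```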
4 System Advantages
minutes, without the need to modify system configurations, change server or client mount
points, or alter application programs.
OceanStor DFS provides different types of nodes to adapt to various application scenarios.
Those nodes can be configured on demand. They are centrally managed, simplifying system
management and enabling easy resource scheduling to reduce the customer's investment.
For an emerging enterprise, business volume is small at the early stage. Therefore, such an
enterprise typically needs only a small-scale IT infrastructure and does not have a big IT
budget, although it may still require high performance. The initial configuration of
OceanStor 9000 V5 meets an enterprise's capacity and performance requirements at a
relatively low TCO. As the enterprise grows, it has increasing IT requirements. In this
scenario, the original IT investment will not be wasted. Instead, the enterprise can easily meet
increasingly demanding requirements by expanding the capacity of OceanStor 9000 V5 in a
simple way.
OceanStor DFS supports various interfaces, including NFS, CIFS, NDMP, and FTP. A single
system can carry multiple service applications to implement data management throughout the
data lifecycle. OceanStor 9000 V5 is open to connect to OpenStack Manila public cloud