
Module 9, Hyperfile.

We will now switch our attention to the other complementary solution to Hyperstore, which is our Hyperfile solution.
If you remember, we mentioned Hyperfile was a solution that would allow legacy applications using legacy network protocols such as NFS and SMB to store data on Hyperstore, where the application is a good use case for object storage; archives, backup, and media, for example. This provides unified access for both object and file.
Hyperfile provides a single metadata database for all data formats. It can ingest data using file-based protocols such as NFS and SMB, and allows access to this data in object format.
Hyperfile employs a caching architecture. The legacy application connects to a
share exported from Hyperfile and writes to that share.
The file is received by Hyperfile and stored in local cache storage.
Twenty seconds after the file is closed, Hyperfile queues the file for migration to Hyperstore.
The reason for the 20-second delay is to ensure the file is truly closed, as trying to migrate a file that is still open will fail.
Hyperfile will write the file to Hyperstore as an S3 object, ready for retrieval either through Hyperfile or directly from Hyperstore using an S3 client.
If a cache read miss occurs because the file has been evicted from the cache, the file is retrieved from Hyperstore back into the Hyperfile cache.
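The caching flow just described can be sketched in a few lines. This is a toy model, not Hyperfile's actual implementation; all class and method names here are illustrative.

```python
MIGRATION_DELAY = 20  # seconds after close before a file is queued, per the module

class CacheGateway:
    """Toy model of the Hyperfile cache: files land in local cache storage,
    are migrated to the object store once they have been closed for
    MIGRATION_DELAY seconds, and cache read misses are refilled from the
    object store."""

    def __init__(self, object_store):
        self.cache = {}              # path -> bytes (local cache storage)
        self.closed_at = {}          # path -> timestamp of file close
        self.object_store = object_store  # stands in for Hyperstore (S3)

    def write_and_close(self, path, data, now):
        self.cache[path] = data
        self.closed_at[path] = now

    def run_migration(self, now):
        # Only files closed at least MIGRATION_DELAY seconds ago are migrated,
        # so a file that is still open is never picked up.
        for path, closed in list(self.closed_at.items()):
            if now - closed >= MIGRATION_DELAY:
                self.object_store[path] = self.cache[path]  # stored as an S3 object
                del self.closed_at[path]

    def read(self, path):
        if path not in self.cache:                        # cache read miss:
            self.cache[path] = self.object_store[path]    # refill from object store
        return self.cache[path]

# Usage: write, wait out the delay, migrate, then survive a cache eviction.
store = {}
gw = CacheGateway(store)
gw.write_and_close("/share/a.txt", b"hello", now=0)
gw.run_migration(now=10)   # too soon: file stays cache-only
gw.run_migration(now=25)   # past the 20s delay: migrated to the object store
del gw.cache["/share/a.txt"]              # simulate cache eviction
assert gw.read("/share/a.txt") == b"hello"  # refilled from the object store
```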
Hyperfile is capable of being highly available, and an automatic failover will be triggered should the master Hyperfile node become unavailable.
It will not, however, fail over if the master node's network services are still responding.
In this scenario, manual intervention may be required.
When a failover occurs, all services are restarted on the other node, which will
effectively become the cluster master node.
Each node needs access to shared cache storage for this configuration.
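The failover rule described above can be summarised as a small decision function. This is a sketch of the stated behaviour only; the condition names are illustrative, not Hyperfile's real health checks.

```python
def failover_decision(master_heartbeat_ok, master_services_responding):
    """Sketch of the Hyperfile failover rule described in the module:
    automatic failover happens only when the master node is fully
    unreachable; a degraded master whose network services still respond
    needs manual intervention."""
    if master_heartbeat_ok:
        return "no action"             # master is healthy
    if master_services_responding:
        return "manual intervention"   # master degraded but still answering
    return "automatic failover"        # restart all services on the other node
```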
An immensely powerful feature of Hyperfile is its ability to assist with data
migration.
This allows an existing NFS or SMB network share to be virtualized by Hyperfile, so that Hyperfile can bridge between the existing share and Hyperstore.
Hyperfile will, in the background, start ingesting the data on the original network
share and start moving it to Hyperstore.
Clients access Hyperfile as they would the original share. If a client requests a file that has not yet been migrated, Hyperfile will redirect the request to the existing network share and, in the same process, bring the file back into Hyperfile's cache and then migrate it to Hyperstore using the method already described.
This can reduce downtime for the original share from days to minutes as the only
downtime is the time taken to reassign the share names.
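The bridged read path during such a migration can be sketched as a simple lookup chain. Again this is a toy model with illustrative names, assuming the behaviour described above.

```python
def bridged_read(path, cache, object_store, legacy_share):
    """Toy read path while a legacy share is being migrated: serve from the
    Hyperfile cache first, then from the object store (already migrated),
    and finally redirect to the original NFS/SMB share. A file fetched
    from the legacy share is placed in the cache, so the normal
    cache-to-object-store migration can then pick it up."""
    if path in cache:
        return cache[path]
    if path in object_store:          # file has already been migrated
        cache[path] = object_store[path]
        return cache[path]
    data = legacy_share[path]         # not yet migrated: redirect to the old share
    cache[path] = data                # now eligible for migration to the object store
    return data
```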
Please do keep in mind, however, that Hyperfile is not designed to be a NAS filer replacement.
The end use case must be one that object storage is a good fit for; Hyperfile should not be seen as a direct replacement for a NAS filer that is being used for a file use case.
If these requirements are needed alongside an offload archive capability, Cloudian may still be able to help by utilizing one of our many growing technology partners, and you should reach out to your Cloudian account representative.
Hyperfile also supports a snapshot facility.
With Hyperfile, a snapshot is a point-in-time view of the content in the data repository.
It protects data against logical corruption or deletion, but it does this by utilizing Hyperstore's versioning feature, which we have already discussed.
With versioning, any change to an existing object results in an additional full copy, not a delta, as some of you may know from snapshots with block and file storage.
If you are only ever creating new objects, then storage usage won't be affected, as the snapshot just freezes the pointers to the data at that point in time.
However, if an object is deleted or overwritten by mistake, then it is possible to
restore the previous version with minimal overhead.
So, as with Hyperstore versioning, proper consideration must be given to the potential knock-on effect on Hyperstore capacity usage if a high degree of data change is expected.
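The capacity trade-off is easy to see in a minimal model of a versioned bucket, where every overwrite retains a full copy of the prior version. This is an illustration of the versioning behaviour described, not of Hyperstore's internal storage format.

```python
class VersionedBucket:
    """Minimal model of object versioning: every overwrite keeps a full
    copy of the previous version (unlike block/file snapshots, which
    typically share unchanged data between versions)."""

    def __init__(self):
        self.versions = {}   # key -> list of full object copies, oldest first

    def put(self, key, data):
        self.versions.setdefault(key, []).append(data)

    def get(self, key, version=-1):
        return self.versions[key][version]   # default: latest version

    def restore_previous(self, key):
        # Recover from an accidental overwrite by re-promoting the prior version.
        self.versions[key].append(self.versions[key][-2])

    def bytes_used(self):
        # Each overwrite adds a whole copy, so churn inflates capacity usage.
        return sum(len(v) for vs in self.versions.values() for v in vs)
```

For example, overwriting a 1,000-byte object once doubles its footprint to 2,000 bytes, which is why heavy data change needs capacity planning, while restoring the previous version is a cheap metadata-level operation in this model.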
Hyperfile also supports scaling out.
It achieves this in a variety of ways: by combining multiple file systems onto a single Hyperfile instance, with each volume supporting access-based enumeration (ABE), or by using multiple Hyperfile instances with a shared back end, where each controller maps to its own bucket and presents as a unique file system.
Multi-controller, which is similar to multiple Hyperfile instances, has a single shared S3 bucket or single back-end namespace.
Multi-controller does, however, have some prerequisites, and these should be discussed with a Cloudian account representative in relation to the use case to understand whether multi-controller would be an option.
We just mentioned access-based enumeration. But what does that mean?
Simply put, with ABE, if the user has no read permissions on a folder, then the folder will not be displayed in the Windows Explorer window.
Without ABE, even folders the user has no permission to use can still be seen, although their contents are not visible, as expected.
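The difference ABE makes to a directory listing can be shown with a small filter. This is an illustrative sketch of the concept, not of how Hyperfile or Windows implements it.

```python
def visible_entries(entries, user, abe_enabled=True):
    """Sketch of access-based enumeration. `entries` is a list of
    (folder_name, set_of_users_with_read_permission) pairs.
    With ABE on, folders the user cannot read are hidden from the
    listing entirely; with ABE off, they are listed anyway (but their
    contents would still be inaccessible)."""
    if not abe_enabled:
        return [name for name, _ in entries]
    return [name for name, readers in entries if user in readers]

# Usage: alice can read "public" but not "hr-records".
shares = [("public", {"alice", "bob"}), ("hr-records", {"bob"})]
assert visible_entries(shares, "alice") == ["public"]                   # ABE on
assert visible_entries(shares, "alice", abe_enabled=False) == ["public", "hr-records"]
```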
Hyperfile has many excellent use cases, for example, archiving, CCTV, media and
entertainment, and more.
All of these are excellent use cases for object storage, and Hyperfile can help here with legacy applications of this type that are not yet ready to write to an S3 object store.
Some use cases need due diligence to see if Hyperfile is a good fit, and these would be things like backup and limited file-sharing use cases.
And some use cases are currently just not good fits at all, like databases, running VM primary storage, and anything requiring global file locking.
For environments with a file-level requirement, it is possible to use some of the relationships we have with our partners.
Technologies that fit well with Hyperstore and are actively being sold today are solutions like Storage Made Easy and Komprise.
We also have technology partners with validated solutions, some of which are shown here, and we are adding more on a regular basis.
Thank you, and I hope you have enjoyed this module and learned something new about object storage in general, and about how Cloudian Hyperstore has implemented its object storage solution to help our customers overcome some of the challenges they are facing with their unstructured data growth.
