
Caching less for better performance: Balancing cache size and update cost of flash memory cache in hybrid storage systems

Yongseok Oh (1), Jongmoo Choi (2), Donghee Lee (1), and Sam H. Noh (3)

(1) University of Seoul, Seoul, Korea, {ysoh, dhl_express}@uos.ac.kr
(2) Dankook University, Gyeonggi-do, Korea, [email protected]
(3) Hongik University, Seoul, Korea, http://next.hongik.ac.kr

Abstract

Hybrid storage solutions use NAND flash memory based Solid State Drives (SSDs) as a non-volatile cache and traditional Hard Disk Drives (HDDs) as lower level storage. Unlike a typical cache, internally, the flash memory cache is divided into cache space and over-provisioned space, used for garbage collection. We show that balancing the two spaces appropriately helps improve the performance of hybrid storage systems. We show that, contrary to expectations, the cache need not be filled with data to the fullest, but may be better served by reserving space for garbage collection. For this balancing act, we present a dynamic scheme that further divides the cache space into read and write caches and manages the three spaces according to the workload characteristics for optimal performance. Experimental results show that our dynamic scheme improves performance of hybrid storage solutions up to the off-line optimal performance of a fixed partitioning scheme. Furthermore, as our scheme makes efficient use of the flash memory cache, it reduces the number of erase operations, thereby extending the lifetime of SSDs.

[Figure 1: Balancing data in cache and update cost for optimal performance]

1 Introduction

Conventional Hard Disk Drives (HDDs) and state-of-the-art Solid State Drives (SSDs) each have strengths and limitations in terms of latency, cost, and lifetime. To alleviate limitations and combine their advantages, hybrid storage solutions that combine HDDs and SSDs are now available for purchase. For example, a hybrid disk that comprises the conventional magnetic disk with NAND flash memory cache is commercially available [30]. We consider hybrid storage that uses NAND flash memory based SSDs as a non-volatile cache and traditional HDDs as lower level storage. Specifically, we tackle the issue of managing the flash memory cache in hybrid storage.

The ultimate goal of hybrid storage solutions is providing SSD-like performance at HDD-like price, and achieving this goal requires near-optimal management of the flash memory cache. Unlike a typical cache, the flash memory cache is unique in that SSDs require over-provisioned space (OPS) in addition to the space for normal data. To make a clear distinction between OPS and space for normal data, we refer to the space in the flash memory cache used to keep normal data as the caching space. The OPS is used for garbage collection operations performed during data updates. It is well accepted that given a fixed capacity SSD, increasing the OPS size brings about two consequences [11, 15, 26]. First, it reduces the caching space, resulting in a smaller data cache. Less data caching results in decreased overall flash memory cache performance. Note Figure 1 (not to scale), where the x-axis represents the OPS size and the y-axis represents the performance of the flash memory cache. The dotted line with triangle marks shows that as the OPS size increases, caching space decreases and performance degrades.

In contrast, with a larger OPS, the update cost of data in the cache decreases and, consequently, performance of the flash memory cache improves. This is represented as the square marked dotted line in Figure 1. Note that as the two dotted lines cross, there exists a point where performance of the flash memory cache is optimal. The goal of this paper is to find this optimal point and use it in managing the flash memory cache.
To reiterate, the main contribution of this paper is in presenting a dynamic scheme that finds the workload dependent optimal OPS size of a given flash memory cache such that the performance of the hybrid storage system is optimized. Specifically, we propose cost models that are used to determine the optimal caching space and OPS sizes for a given workload. In our solution, the caching space is further divided into read and write caches, and we use cost models to dynamically adjust the sizes of the three spaces, that is, the read cache, write cache, and the OPS, according to the workload for optimal hybrid storage performance. These cost models form the basis of the Optimal Partitioning Flash Cache Layer (OP-FCL) flash memory cache management scheme that we propose.

Experiments performed on a DiskSim-based hybrid storage system using various realistic server workloads show that OP-FCL performs comparatively to the off-line optimal fixed partitioning scheme. The results indicate that caching as much data as possible is not the best solution; rather, caching an appropriate amount to balance the cache hit rate and the garbage collection cost is most appropriate. That is, caching less data in the flash memory cache can bring about better performance, as the gains from reduced overhead for data updates compensate for the losses from keeping less data in cache. Furthermore, our results indicate that as our scheme makes efficient use of the flash memory cache, OP-FCL can significantly reduce the number of erase operations in flash memory. For our experiments, this results in the lifetime of SSDs being extended by as much as three times compared to conventional uses of SSDs.
The rest of the paper is organized as follows. In the next section, we discuss previous studies that are relevant to our work with an emphasis on the design of hybrid storage systems. In Section 3, we start off with a brief review of the HDD cost model. Then, we move on and describe cost models for NAND flash memory storage. Then, in Section 4, we derive cost models for hybrid storage and discuss the existence of an optimal caching space and OPS division. We explain the implementation issues in Section 5 and then present the experimental results in Section 6. Finally, we conclude with a summary and directions for future work.
2 Related Work

Numerous hybrid storage solutions that integrate HDDs and SSDs have been suggested [8, 11, 14, 29]. Kgil et al. propose splitting the flash memory cache into separate read and write regions, taking into consideration the fact that read and write costs are different in flash memory [11]. Chen et al. propose Hystor, which integrates low-cost HDDs and high-speed SSDs [4]. To make better use of SSDs, Hystor identifies critical data, such as metadata, keeping them in SSDs. Also, it uses SSDs as a write-back buffer to achieve better write performance. Pritchett and Thottethodi observe that reference patterns are highly skewed and propose a highly-selective caching scheme for SSD cache [26]. These studies try to reduce expensive data allocation and write operations in flash memory storage, as writes are much more expensive than reads. They are similar to ours in that flash memory storage is being used as a cache in hybrid storage solutions and that some of them split the flash memory cache into separate regions. However, our work is unique in that it takes into account the trade-off between caching benefit and data update cost as determined by the OPS size.

The use of the flash memory cache with other objectives in mind has also been suggested. As SSDs have lower energy consumption than HDDs, Lee et al. propose an SSD-based cache to save energy in RAID systems [18]. In this study, an SSD is used to keep recently referenced data as well as for write buffering. Similarly, to save energy, Chen et al. suggest a flash memory based cache for caching and prefetching data of HDDs [3]. Saxena et al. use flash memory as a paging device for the virtual memory subsystem [28], and Debnath et al. use it as a metadata store for their de-duplication system [5]. Combining SSDs and HDDs in the opposite direction has also been proposed. A serious concern of flash memory storage is its relatively short lifetime and, to extend SSD lifetime, Soundararajan et al. suggest a hybrid storage system called Griffin, which uses HDDs as a write cache [32]. Specifically, they use a log-structured HDD cache, periodically destaging data to SSDs so as to reduce write requests and, consequently, to increase the lifetime of SSDs.

There have been studies that concentrate on finding cost-effective ways to employ SSDs in systems. To satisfy high-performance requirements at a reasonable cost budget, Narayanan et al. look into whether replacing disk based storage with SSDs may be cost effective; they conclude that replacing disks with SSDs is not yet so [22]. Kim et al. suggest a hybrid system called HybridStore that combines both SSDs and HDDs [15]. The goal of this study is in finding the most cost-effective configuration of SSDs and HDDs.

Besides studies on flash memory caches, there are many buffer cache management schemes that use the idea of splitting caching space. Kim et al. present a buffer management scheme called Unified Buffer Management (UBM) that detects sequential and looping references and stores those blocks in separate regions in the buffer cache [13]. Park et al. propose CRAW-C (Clock for Read And Write considering Compressed file system) that allocates three memory areas for read, write, and compressed pages, respectively [24]. Shim et al. suggest an adaptive partitioning scheme for the DRAM buffer in SSDs. This scheme divides the DRAM buffer into the caching and mapping spaces, dynamically adjusting their sizes according to the workload characteristics [31]. This study is different from ours in that the notion of OPS is necessary for flash memory updates, while for DRAM, it is not.
3 Flash Memory Cache Cost Model

In this section, we present the cost models for SSDs and HDDs [35]. HDD reading and writing are characterized by seek time and rotational delay. Assume that C_D_RPOS and C_D_WPOS are the sums of the average seek time and the average rotational delay for HDD reads and writes, respectively. Let us also assume that P is the data size in bytes and B is the bandwidth of the disk. Then, the data read and write costs of a HDD are derived as C_DR = C_D_RPOS + P/B and C_DW = C_D_WPOS + P/B, respectively. (Detailed derivations are referred to Wang [35].)
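To make the disk cost model concrete, the following sketch (ours, not part of the paper's simulator) evaluates C_DR and C_DW with the Config. 1 parameter values listed later in Table 2 (C_D_RPOS = 4.5ms, C_D_WPOS = 4.9ms, B = 72MB/s, P = 4KB):

    # Sketch: HDD read/write cost model (C_DR, C_DW), Table 2 "Config. 1" values.
    C_D_RPOS = 4.5e-3   # average seek + rotational delay for reads (s)
    C_D_WPOS = 4.9e-3   # average seek + rotational delay for writes (s)
    B = 72e6            # disk bandwidth (bytes/s)
    P = 4096            # request size (bytes)

    def c_dr():
        """Cost of reading P bytes from the HDD: positioning plus transfer."""
        return C_D_RPOS + P / B

    def c_dw():
        """Cost of writing P bytes to the HDD: positioning plus transfer."""
        return C_D_WPOS + P / B

    print(c_dr() * 1e3, c_dw() * 1e3)  # ~4.56 ms and ~4.96 ms per 4KB request

At 4KB requests, positioning dominates the transfer time, which is why avoiding HDD accesses through caching pays off so strongly.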
Before moving on to the cost model of flash memory based SSDs, we give a short review of NAND flash memory and the workings of SSDs. NAND flash memory, which is the storage medium of SSDs, consists of a number of blocks, and each block consists of a number of pages. Reads are done in page units and take constant time. Writes are also done in page units, but data can be written to a page only after the block containing the page becomes clean, that is, after it is erased. This is called the erase-before-write property. Due to this property, a data update is usually done by relocating the new data to a clean page of an already erased block, and most flash memory storage devices employ a sophisticated software layer called the Flash Translation Layer (FTL) that relocates modified data to new locations. The FTL also provides the same HDD interface to SSD users. Various FTLs such as page mapping FTL [7, 34], block mapping FTL [12], and many hybrid mapping FTLs [10, 17, 19, 23] have been proposed. Among them, the page mapping FTL is used in many high-end commercial SSDs that are used in hybrid storage solutions. Hence, in this paper, we focus on the page mapping FTL. However, the methodology that follows may be used with block and hybrid mapping FTLs as well. The key difference would be in deriving garbage collection and page write cost models appropriate for these FTLs.

[Figure 2: Garbage collection in flash memory storage — (a) cache space and OPS, (b) victim block for GC and block reserved for GC, (c) valid pages copied to the reserved block; legend: clean, valid, invalid, write pointer]

As previously mentioned, the FTL relocates modified data to a clean page, and pages with old data become invalid. The FTL recycles blocks with invalid pages by performing garbage collection (GC) operations. For data updates and subsequent GCs, the FTL must always preserve some number of empty blocks. As data updates consume empty blocks, the FTL must produce more empty blocks by performing GCs that collect valid pages scattered in used blocks to an empty block, marking the used blocks as new empty blocks. The worst case and average GC costs are determined by the ratio of the initial OPS to the total storage space. It has been shown that the worst case and average GC costs become lower as more over-provisioned blocks are reserved [9].

If we assume that the FTL selects the block with the minimum number of valid pages for a GC operation, then the worst case GC occurs when all valid (or invalid) pages are evenly distributed to all flash memory blocks except for an empty block that is preserved for GC operations. For now, let us assume that u is the worst case utilization determined from the initial number of over-provisioned blocks and data blocks. Then, in Fig. 2(a), where there are 3 data blocks containing cached data and 4 initial over-provisioned blocks, the worst case u is calculated as 3/(3 + 4 − 1). (We subtract 1 because the FTL must preserve one empty block for GC, as marked by the arrow in Fig. 2(b).) From u, the maximum number of valid pages in the block selected for GC can be derived as ⌈u · N_P⌉, where N_P is the number of pages in a block. Then, the worst case GC cost for a given utilization u can be calculated from the following equation, where C_E is the erase cost (time) of a flash memory block, and C_CP is the page copy cost (time). (We assume that the copyback operation is being used. For flash memory chips that do not support copyback, C_CP may be expanded to a sequence of read, C_PR, and write, C_PROG, operations.)
C_GC(u) = ⌈u · N_P⌉ · C_CP + C_E    (1)

That is, as seen in Fig. 2(b) and (c), a GC operation erases an empty block with cost C_E and copies all valid pages from the block selected for GC to the erased empty block with cost ⌈u · N_P⌉ · C_CP. Then, the garbage-collected block becomes an empty block that may be used for the next GC. The remaining clean pages in the previously empty block are used for subsequent write requests. If all those clean pages are consumed, then another GC operation will be performed.

After GC, in the worst case, there are ⌊(1 − u) · N_P⌋ clean pages in what was previously an empty block (for example, the right-most block in Fig. 2(c)), and write requests of that number can be served in the block. Let us assume that C_PROG is the page program time (cost) of flash memory. (Note that "page program" and "page write" are used interchangeably in the paper.) By dividing the GC cost and adding it to each write request, we can derive C_PW(u), the page write cost for worst case utilization u, as follows.

C_PW(u) = C_GC(u) / ⌊(1 − u) · N_P⌋ + C_PROG    (2)

Equation 2 is the worst case page update cost of flash memory storage assuming valid data (or invalid data) are evenly distributed among all the blocks. Typically, however, the number of valid pages in a block will vary. For example, the block marked "Victim for GC" in Fig. 2(b) has a smaller number of valid pages than the other blocks. Therefore, in cases where the FTL selects a block with a small number of valid pages for the GC operation, the utilization of the garbage-collected block, u′, would be lower than the worst case utilization, u. Previous LFS and flash memory studies derived and used the following relation between u′ and u [17, 20, 35].

u = (u′ − 1) / ln u′

Let U(u) be the function that translates u to u′. (In our implementation, we use a table that translates u to u′.) Then the average page update cost can be derived by applying U(u) for u in Equations 1 and 2, leading to Equations 3 and 4.

C_GC(u) = U(u) · N_P · C_CP + C_E    (3)

C_PW(u) = C_GC(u) / ((1 − U(u)) · N_P) + C_PROG    (4)
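The following sketch is a minimal rendering of Equations 3 and 4 under the Config. 1 timing parameters of Table 2; it builds the u-to-u′ translation table numerically, as the text above describes. All function names are ours, not the paper's.

    # Sketch: average GC and page-write cost model (Equations 3 and 4).
    # Timing values follow Table 2, Config. 1 (C_CP = 225us, C_E = 1.5ms,
    # C_PROG = 300us, N_P = 64 pages per block).
    import math

    N_P = 64
    C_CP = 225e-6    # page copy (copyback) time (s)
    C_E = 1.5e-3     # block erase time (s)
    C_PROG = 300e-6  # page program time (s)

    def U(u, steps=10000):
        """Translate worst-case utilization u to victim-block utilization u',
        inverting u = (u' - 1) / ln(u') with a simple table scan, as the
        paper's implementation does with a lookup table."""
        best, best_err = 0.0, float('inf')
        for i in range(1, steps):
            up = i / steps  # candidate u' in (0, 1)
            err = abs((up - 1) / math.log(up) - u)
            if err < best_err:
                best, best_err = up, err
        return best

    def c_gc(u):
        """Equation 3: average GC cost at utilization u."""
        return U(u) * N_P * C_CP + C_E

    def c_pw(u):
        """Equation 4: average page write cost; GC overhead is amortized
        over the clean pages produced by one GC."""
        return c_gc(u) / ((1 - U(u)) * N_P) + C_PROG

    for u in (0.5, 0.7, 0.9):
        print(u, round(U(u), 3), round(c_pw(u) * 1e6, 1), "us")

Running this shows the effect the model predicts: the per-page write cost grows slowly at moderate utilization and climbs steeply as u approaches 1, i.e., as the OPS shrinks.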
4 Hybrid Storage Cost Model

In the previous section, the garbage collection and page update costs of flash memory storage were derived. In this section, we derive the cost models for hybrid storage systems, which consist of a flash memory cache and a HDD. Specifically, the cost models determine the optimal size of the caching space and OPS minimizing the overall data access cost of the hybrid storage system. In our derivation of the cost models, we first derive the read cache cost model and then derive the read/write cache cost model used to determine the read cache size, write cache size, and OPS size. Our models assume that the cache management layer can measure the hit and miss rates of the read/write caches as well as the number of I/O requests. These values can be easily measured in real environments.

4.1 Read cache cost model

On a read request, the storage examines whether the requested data is in the flash memory cache. If it is, the storage reads it and transfers it to the host system. If it is not in the cache, the system reads it from the HDD, stores it in the flash memory cache, and transfers it to the host system. If the flash memory cache is already full with data (as will be the case in steady state), it must invalidate the least valuable data in the cache to make room for the new data. We use the LRU (Least Recently Used) replacement policy to select the least valuable data. In the case of read caching, the selected data need only be invalidated, which can be done essentially for free. (We discuss the issue of accommodating other replacement policies in Section 5.)

Let us assume that H_R(u) is the cache read hit rate for a given cache size, which is determined by the worst case utilization u, as we will see later. With rate H_R(u), the system reads the requested data from the cache with cost C_PR, the page read operation cost (time) of flash memory, and transfers it to the host system. With rate 1 − H_R(u), the system reads data from disk with cost C_DR and, after invalidating the least valuable data selected by the cache replacement policy, stores it in the flash memory cache with cost C_PW(u), which is the cost of writing new data to cache including the possible garbage collection cost. Then, C_HR, the read cost of the hybrid storage system with a read cache, is as follows.

C_HR(u) = H_R(u) · C_PR + (1 − H_R(u)) · (C_DR + C_PW(u))    (5)

Let us now take the flash memory cache size into consideration. For a given flash memory cache size, S_F, the read cache size, S_R, and the OPS size, S_OPS, can be approximated from u such that S_OPS ≈ (1 − u) · S_F and S_R ≈ u · S_F. These sizes are approximated values as they do not take into account the empty block reserved for GC. (Recall the empty block in Fig. 2.) Though calculating the exact size is possible by considering the empty block, we choose to use these approximations as these are simpler, and their influence is negligible relative to the overall performance estimation.

[Figure 3: (a) Read hit rate curve generated using the numpy.random.zipf Python function (Zipfian distribution with α = 1.2 and range = 120%) and (b) the hybrid storage read cost graph for this particular hit rate curve, with optimal point at 92%]

Let us now take an example. Assume that we have a hit rate curve H_R(u) for read requests as shown in Fig. 3(a), where the x-axis is the cache size and the y-axis is the hit rate. Then, with Equation 5, we can redraw the hit rate curve with u on the x-axis, and consequently, the access cost graph of the hybrid storage system becomes Fig. 3(b). The graph shows that the overall access cost becomes lower as u increases until u reaches 92%, where the access cost becomes minimal. Beyond this point, the access cost suddenly increases, because even though the caching benefit is still high, the data update cost soars as the OPS shrinks. Once we find u with minimum cost, the read cache size and OPS size can be found from S_OPS ≈ (1 − u) · S_F and S_R ≈ u · S_F.
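As an illustration of how the optimal point in Fig. 3(b) can be located, the sketch below sweeps u over Equation 5, using the worst-case write cost of Equation 2 for self-containment and a hypothetical saturating curve standing in for a measured H_R(u):

    # Sketch: locating the optimal utilization u for a read-only cache
    # (Equation 5), with the worst-case page write cost of Equation 2.
    # The hit rate curve below is a hypothetical stand-in; a real system
    # measures it with ghost buffers (Section 5).
    import math

    N_P, C_CP, C_E = 64, 225e-6, 1.5e-3
    C_PROG, C_PR = 300e-6, 125e-6
    C_DR = 4.5e-3 + 4096 / 72e6          # HDD read cost from Section 3

    def c_pw(u):
        """Equation 2: worst-case page write cost at utilization u."""
        gc = math.ceil(u * N_P) * C_CP + C_E
        return gc / math.floor((1 - u) * N_P) + C_PROG

    def h_r(u):
        """Hypothetical hit rate curve: rises quickly, then saturates."""
        return 0.9 * min(1.0, 1.2 * u) ** 0.3

    def c_hr(u):
        """Equation 5: expected read cost at utilization u."""
        return h_r(u) * C_PR + (1 - h_r(u)) * (C_DR + c_pw(u))

    # Exhaustive sweep over candidate u values, mirroring Fig. 3(b).
    u_opt = min((i / 100 for i in range(1, 99)), key=c_hr)
    print("optimal u:", u_opt)

The exact optimum depends entirely on the hit rate curve; with a read-heavy, slowly saturating curve like the one in Fig. 3(a), the minimum lands near the high end of u, matching the 92% reported above.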
4.2 Read and write cache cost model

Previous studies have shown that due to their difference in costs, separating read and write requests in flash memory storage has a significant effect on performance [11]. Hence, we now incorporate write cost into the model by dividing the flash caching space into two areas, namely a write cache and a read cache. The read cache, whose cost model was derived in the previous subsection, contains data that has recently been read but never written back, while the write cache keeps data that has recently been written, but not yet destaged. Therefore, data in the write cache are dirty and must be written to the HDD when evicted from the cache. When a write is requested to data in the read cache, we regard it as a write miss. In this case, we invalidate the data in the read cache and write the new data in the write cache. We consider the case of reading data in the write cache later.

In the following cost model derivation, we assume a write-back policy for the write cache. This choice is more efficient than the write-through policy without any loss in consistency, as the flash cache is also non-volatile. If the write-through policy must be used, our model needs to be modified to reflect the additional write to HDD that is incurred for each write to the flash cache. This will result in a far less efficient hybrid storage system.

There can be three types of requests to the flash write cache. The first is a write hit request, which is a write request to existing data in the write cache. In this case, the old data becomes invalidated and the new data is written to the write cache with cost C_PW(u). The second is a write miss request, which is a write request to data that does not exist in the write cache. In this case, the cache replacement policy selects victim data that should be read from the write cache and destaged to the HDD with cost C_PR + C_DW to make room for the newly requested data. (Note we are assuming the system is in steady state.) After evicting the data, the hybrid storage system writes the new data to the write cache with cost C_PW(u). The last type of request is a read hit request, which is a read request to existing (and possibly dirty) data in the write cache. This happens when a read request is to data that is already in the write cache. In this case, the request can be satisfied with cost C_PR, that is, the flash memory page read cost. Note that there is no read miss request to the write cache because read requests to data not in the cache are handled by the read cache.

Now we introduce a parameter r, which is the read cache size ratio within the caching space, where 0 ≤ r ≤ 1. Naturally, 1 − r is the ratio of the write cache size. If r is 1, all caching space is used as a read cache and, if it is 0, all caching space is used as a write cache. Let S_C denote the total size of the caching space. Then, we can calculate the read cache size, S_R, and write cache size, S_W, from S_C such that S_R = S_C · r and S_W = S_C · (1 − r). Note that S_C is calculated from u such that S_C ≈ u · S_F. Then, S_R and S_W are determined by u and r.

Let us assume that the cache management layer can measure the read hit rates of the read cache and draw H_R(u, r), the read cache hit rate curve, which now has two parameters, u and r. (We will show that the hit rate curve can be obtained by using ghost buffers in the next section.) Then, the read cost of the hybrid storage system is now modified as follows.

C_HR(u, r) = H_R(u, r) · C_PR + (1 − H_R(u, r)) · (C_DR + C_PW(u))
Let us also assume that we can measure the write hit, the write miss, and the read hit rates of the write cache and draw the hit rate curves. For the moment, let us regard the read hit in the write cache as being part of the write hit. Assume that H_W(u, r) is the write cache hit rate for a given write cache size, and it has two parameters that determine the cache size. Then, with rate H_W(u, r), a write request finds its data in the write cache, and the cost of this action is H_W(u, r) · C_PW(u). Otherwise, with rate 1 − H_W(u, r), the write request does not find data in the write cache. Servicing this request requires reading and evicting existing data and writing new data to the write cache. Hence, the cost is (1 − H_W(u, r)) · (C_PR + C_DW + C_PW(u)). In summary, the write cost of the hybrid storage system can be given as follows.

C_HW(u, r) = (1 − H_W(u, r)) · (C_PR + C_DW + C_PW(u)) + H_W(u, r) · C_PW(u)

Now let us consider the read hit case within the write cache. Although it is possible to maintain separate read hit and write hit curves for the write cache, this makes the cost model more complex without much benefit, especially in terms of implementation. Therefore, we devise a simple approximation method for incorporating the read hit case in the write cache. Assume that h′ is the read hit rate in the write cache. (Then, naturally, 1 − h′ is the write hit rate in the write cache.) Then, with rate h′, the read hit is satisfied with cost C_PR and, with rate 1 − h′, the write hit is satisfied with cost C_PW(u). Now we can calculate the average cost for both read hit and write hit such that C_WH = (1 − h′) · C_PW(u) + h′ · C_PR. By assuming H_W(u, r) is the hit rate including both read and write hits, the write cost of the hybrid storage system now can be given as follows.

C_HW(u, r) = (1 − H_W(u, r)) · (C_PR + C_DW + C_PW(u)) + H_W(u, r) · C_WH

Now, let IO_R and IO_W, respectively, be the rates served in the read and write caches among all requests. For example, of a total of 100 requests, if 70 requests are served in the read cache and 30 requests are served in the write cache, then IO_R is 0.7 and IO_W is 0.3. Then we can derive C_HY(u, r), the overall access cost of the hybrid storage system that has separate read and write caches and OPS, as follows.

C_HY(u, r) = C_HR(u, r) · IO_R + C_HW(u, r) · IO_W    (6)

[Figure 4: (a) Read and (b) write hit rate curves generated using the numpy.random.zipf Python function ((a) Zipfian distribution with α = 1.2 and range = 120%, (b) Zipfian distribution with α = 1.4 and range = 220%) and (c) the hybrid storage access cost graph for these hit rate curves, with the optimal point marked]

Let us take an example. Assume that, at a certain time, the hybrid storage system finds IO_R, IO_W, and h′ to be 0.2, 0.8, and 0.2, respectively, and the read and write hit rate curves are estimated as shown in Fig. 4(a) and (b). In the graph, both read and write hit rates increase as caches become larger but slowly saturate beyond some point. As the read and write cache sizes are determined by u and r, we can obtain the read and write cache hit rates for given u and r values from the hit rate curves. Then, with the cost model of Equation 6, we can draw the overall access cost graph of the system as in Fig. 4(c). In the graph, the x-axis is u and the y-axis is r. These two parameters determine the read and write cache sizes as well as the OPS size. In Fig. 4(c), darker shades reflect lower access cost, and we pinpoint the lowest access cost with the diamond mark pointed to by the arrow.

Specifically, the minimum overall access cost of the hybrid storage system is when u is 0.64 and r is 0.25 for this particular configuration. For a 4GB flash memory cache, this translates to a read cache size of 0.64GB, a write cache size of 1.92GB, and an OPS size of 1.44GB.
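The size arithmetic of this example is a direct application of the relations S_C ≈ u · S_F, S_R = S_C · r, and S_W = S_C · (1 − r); a few lines suffice to reproduce it:

    # Sketch: deriving the three space sizes from the optimal (u, r) pair,
    # reproducing the 4GB example above (u = 0.64, r = 0.25).
    S_F = 4.0              # total flash cache size (GB)
    u, r = 0.64, 0.25      # optimal utilization and read-cache ratio

    S_C = u * S_F          # caching space:          2.56 GB
    S_OPS = (1 - u) * S_F  # over-provisioned space: 1.44 GB
    S_R = S_C * r          # read cache:             0.64 GB
    S_W = S_C * (1 - r)    # write cache:            1.92 GB
    print(S_R, S_W, S_OPS)  # -> 0.64 1.92 1.44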
1.44GB.
5 Implementation Issues of Flash Cache Layer

In this section, we describe some implementation issues related to our flash memory cache management scheme, which we refer to as OP-FCL (Optimal Partitioning of Flash Cache Layer). Fig. 5(a) shows the overall structure of the hybrid storage system that we envision. The storage system has a HDD serving as main storage and an SSD, which we also refer to as the flash cache layer (FCL), that is used as a non-volatile cache keeping recently read/written data, as previous studies have done [4, 11, 15]. As is common on SSDs, it has DRAM for buffering I/O data and storing data structures used by the SSD. The space at the flash cache layer is divided into three regions: the read cache area, the write cache area, and the over-provisioned space (OPS), as shown in Fig. 5(b). OP-FCL measures the read and write cache hit and miss rates and the I/O rates. Then, it periodically calculates the optimal size of these cache spaces and progressively adjusts their sizes during the next period.
[Figure 5: OP-FCL architecture — (a) main architecture (File System, Sequential I/O Detector, Workload Tracker, Page Replacer, Partition Resizer, Mapping Manager, HDD, SSD) and (b) SSD logical layout (read area, write area, OPS)]

To accurately simulate the operations and measure the costs of the hybrid storage system, we use DiskSim [2] to emulate the HDD and DiskSim's MSR SSD extension [1] to emulate the SSD. Specifically, the simulator mimics the behaviour of Maxtor's Atlas 10K IV disk, whose average read and write latency is 4.4ms and 4.9ms, respectively, with a transfer speed of 72MB/s. Also, the SSD simulator emulates SLC NAND flash memory chip operations, and it takes 25us to read a page, 200us to write a page, 1.5ms to erase a block, and 100us to transfer data to/from a page of flash memory through the bus. The page and block unit sizes are 4KB and 256KB, respectively, and the flash cache layer manages data in 4KB units.

In the simulator, we modified the SSD management modules and implemented additional features as well as the OP-FCL. OP-FCL consists of several components, namely, the Page Replacer, Sequential I/O Detector, Workload Tracker, Partition Resizer, and Mapping Manager.

The Page Replacer has two LRU lists, one each for the read and write caches, and maintains LRU ordering of data in the caches. In steady state, when the cache is full, the LRU data is evicted from the cache to accommodate newly arriving data. For the read cache, cache eviction simply means that the data is invalidated, while for the write cache, it means that data must be destaged, incurring a flash cache layer read and a disk write operation. In the actual implementation, the Page Replacer destages several dirty data altogether to minimize seek distance by applying the elevator disk scheduling algorithm. However, we do not consider group destaging in our cost model as it has only minimal overall impact. This is because the number of data destaged as a group is relatively small compared to the total number of data in the write cache.

Previous studies have taken notice of the impact of sequential references on cache pollution and thus have tried to detect and treat them separately [13]. The Sequential I/O Detector monitors the reference pattern and detects sequential references. In our current implementation, consecutive I/O requests greater than 128KB are regarded as sequential references, and those requests bypass the flash cache layer and are sent directly to disk to avoid cache pollution.
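A minimal sketch of this bypass rule follows, assuming requests arrive as (byte offset, length) pairs; the 128KB threshold is from the text, while the run-tracking details are our own simplification of what "consecutive requests greater than 128KB" could mean:

    # Sketch: sequential I/O detection by accumulated contiguous run length.
    SEQ_THRESHOLD = 128 * 1024  # 128KB, as in the paper's implementation

    class SequentialIODetector:
        """Tracks contiguous request runs; once a run's accumulated size
        exceeds the threshold, requests are treated as sequential and
        bypass the flash cache layer. Bookkeeping is our simplification."""
        def __init__(self):
            self.next_offset = None  # byte offset expected to continue the run
            self.run_bytes = 0

        def is_sequential(self, offset, length):
            if offset == self.next_offset:
                self.run_bytes += length   # request extends the current run
            else:
                self.run_bytes = length    # run broken; start a new one
            self.next_offset = offset + length
            return self.run_bytes > SEQ_THRESHOLD

    det = SequentialIODetector()
    # Back-to-back contiguous 64KB requests: the third exceeds 128KB total.
    for i in range(4):
        print(det.is_sequential(i * 65536, 65536))  # False False True True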
Besides the Page Replacer, which manages the cached data, the Workload Tracker maintains LRU lists of ghost buffers to simultaneously measure hit rates of various cache sizes, following the method proposed by Patterson et al. [25]. Ghost buffers maintain only logical addresses, not the actual data, and thus the memory overhead is minimal, requiring roughly 1% of the total flash memory cache. Part of the ghost buffer represents data in the cache and the rest represents data that have already been evicted from the cache. Keeping information on evicted data in the ghost buffer makes it possible to measure the hit rate of a cache larger than the actual cache size. To simulate various cache sizes simultaneously, we use N-segmented ghost buffers. In other words, we divide the ghost buffer into N segments corresponding to N cache sizes, and thus hit rates of N cache sizes can be obtained by combining the hit rates of the segments. From the hit rates of N cache sizes, we obtain the read/write hit rate curves by interpolating the missing cache sizes.
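A minimal sketch of such an N-segmented ghost buffer is given below; the class layout and the O(n) stack-depth lookup are our simplifications of this style of bookkeeping, kept short for clarity:

    # Sketch: an N-segmented ghost buffer that measures hit rates for N
    # nested LRU cache sizes at once. Class and field names are ours.
    from collections import OrderedDict

    class SegmentedGhostBuffer:
        def __init__(self, segment_size, n_segments):
            self.seg = segment_size            # pages per segment
            self.n = n_segments
            self.lru = OrderedDict()           # logical address -> None, MRU last
            self.seg_hits = [0] * n_segments   # hits landing in each segment
            self.accesses = 0

        def access(self, addr):
            self.accesses += 1
            if addr in self.lru:
                # LRU stack depth (0 = MRU) identifies the smallest
                # simulated cache size that would have hit this request.
                depth = len(self.lru) - 1 - list(self.lru).index(addr)
                self.seg_hits[depth // self.seg] += 1
                del self.lru[addr]
            self.lru[addr] = None                      # move/insert as MRU
            while len(self.lru) > self.seg * self.n:   # bound ghost metadata
                self.lru.popitem(last=False)

        def hit_rate_curve(self):
            """Hit rate of a cache of i segments = hits in segments 0..i-1."""
            total, curve = 0, []
            for h in self.seg_hits:
                total += h
                curve.append(total / max(self.accesses, 1))
            return curve

    g = SegmentedGhostBuffer(segment_size=2, n_segments=4)
    for a in [1, 2, 3, 1, 4, 2, 5, 1]:
        g.access(a)
    print(g.hit_rate_curve())

Because only addresses are stored, tracking sizes well beyond the currently configured cache costs little memory, which is what lets the Workload Tracker evaluate candidate partitionings it has not actually deployed.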
Note that though we use the LRU cache replacement policy for this study, our model can accommodate any replacement policy so long as it can be implemented in the flash cache and the ghost buffer management layers. Different replacement policies will generate different read/write hit rate curves and, in the end, affect the results. However, a replacement policy only affects the read/write hit rate curves, and thus our overall cost model is not affected.

These hit rate curves are obtained per period. In the current implementation, a period is the logical time to process 65536 (2^16) read and write requests. When the period ends, new hit rate curves are generated, while a new period starts. Then, with the hit rate curves generated by the Workload Tracker in the previous period, the Partition Resizer gradually adjusts the sizes of the three spaces, that is, the read and write cache space and the OPS, for the next period. To make the adjustment, the Partition Resizer determines the optimal u and r as described in Section 4, and those optimal values in turn decide the optimal size of the three spaces.
To obtain the optimal u and r, we devise an iterative algorithm presented in Algorithm 1. Starting from u = step, the outer loop iterates the inner loop, increasing u in 'step' increments while u is less than 1.0. The two extreme configurations that we do not consider are where OPS is 0% and 100%. These are unrealistic configurations as OPS must be greater than 0% to perform garbage collection, while OPS being 100% would mean that there is no space to cache data. The inner loop, starting from r = 0, iterates, calculating the access cost of the hybrid storage system as derived in Equation 6, while increasing r in 'step' increments until r becomes greater than 1.0. The 'step' value is calculated as the segment size divided by the total cache size, as shown in the second line of Algorithm 1. The nested loop iterates N × M times to calculate the costs, where N is the outer loop count, 1/step − 1, and M is the inner loop count, 1/step + 1. A single cost calculation consists of 10 ADD, 4 SUB, 11 MUL, and 4 DIV operations. Finer 'step' values may be used, resulting in finer u and r values, but with increased cost calculation overhead. However, the computational overhead for executing this algorithm is quite small because it runs once every period and the calculations are just simple arithmetic operations.

Algorithm 1 Optimal Partitioning Algorithm
 1: procedure OPTIMALPARTITIONING
 2:   step ← segment_size / total_cache_size
 3:   INIT_PARMS(op_cost, op_u, op_r)
 4:   for u ← step; u < 1.0; u ← u + step do
 5:     for r ← 0.0; r ≤ 1.0; r ← r + step do
 6:       cur_cost ← C_HY(u, r)        ▷ Call Eq. 6
 7:       if cur_cost < op_cost then
 8:         op_cost ← cur_cost
 9:         op_u ← u, op_r ← r
10:       end if
11:     end for
12:   end for
13:   ADJUST_CACHE_SIZE(op_u, op_r)
14: end procedure
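Algorithm 1 translates directly into a grid search; the sketch below assumes a callable chy(u, r) implementing Equation 6, here replaced by a toy cost surface for demonstration:

    # Sketch: Algorithm 1 as a plain grid search over (u, r).
    def optimal_partitioning(chy, segment_size, total_cache_size):
        """Return (op_u, op_r) minimizing the Equation 6 cost chy(u, r).
        Mirrors Algorithm 1: u sweeps (step, 1.0), r sweeps [0.0, 1.0]."""
        step = segment_size / total_cache_size
        op_cost, op_u, op_r = float('inf'), None, None
        u = step
        while u < 1.0:
            r = 0.0
            while r <= 1.0:
                cur_cost = chy(u, r)          # call Eq. 6
                if cur_cost < op_cost:
                    op_cost, op_u, op_r = cur_cost, u, r
                r += step
            u += step
        return op_u, op_r

    # Toy cost surface with a minimum near u=0.64, r=0.25 (illustration only);
    # 256MB segments over a 4GB cache give step = 1/16, as in Table 2.
    toy = lambda u, r: (u - 0.64) ** 2 + (r - 0.25) ** 2
    print(optimal_partitioning(toy, segment_size=256, total_cache_size=4096))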

Once the optimal u and r and, in turn, the optimal sizes are determined, the Partition Resizer starts to progressively adjust the sizes of the three spaces. To increase the OPS size, it gradually evicts data in the read or write caches. To increase cache space, that is, decrease OPS, GC is performed to produce empty blocks. These empty blocks are then used by the read and/or write caches.

The key role of our Mapping Manager is translating the logical address to a physical location in the flash cache layer. For this purpose, it maintains a mapping table that keeps the translation information. In our implementation, we keep the mapping information at the last page of each block. As we consider flash memory blocks with 64 pages, the overhead is roughly 1.6%. Moreover, we implement a crash recovery mechanism similar to that of LFS [27]. If a power failure occurs, it searches for the most up-to-date checkpoint and goes through a recovery procedure to return to the checkpoint state.

6 Performance Evaluation

In this section, we evaluate OP-FCL. For comparison, we also implement two other schemes. The first is the Fixed Partition-Flash Cache Layer (FP-FCL) scheme. This is the simplest scheme, where the read and write cache is not distinguished, but unified as a single cache. The OPS is available with a fixed size. This scheme is used to mimic a typical SSD of today that may serve as a cache in a hybrid storage system. Normally, the SSD would not distinguish read and write spaces and it would have some OPS, whose size would be unknown. We evaluate this scheme as we vary the percentage of the caching space set aside for the (unified) cache. The best of these results will represent the most optimistic situation in real life deployment.

The other scheme is the Read and Write-Flash Cache Layer (RW-FCL) scheme. This scheme is in line with the observation made by Kgil et al. [11] in that read and write caches are distinguished. This scheme, however, goes a step further in that while the sum of the two cache sizes remains constant, the split between the two is dynamically adjusted for best performance according to the cost models described in Section 4. For this scheme, the OPS size would also be fixed as the total read and write cache size is fixed. We evaluate this scheme as we vary the percentage of the caching space set aside for the combined read and write cache. Initially, all three schemes start with an empty data cache. For OP-FCL, the initial OPS size is set to 5% of the total flash memory size.

The experiments are conducted using two sets of traces. We categorize them based on the size of requests. The first set, 'Small Scale', contains workloads that request less than 100GB of data in total. The other set, 'Large Scale', contains workloads with over 100GB of data requests. Details of the characteristics of these workloads are in Table 1.

The first two subsections discuss the performance aspects of the two classes of workloads. Then, in the next subsection, we present the effect of OP-FCL on the lifetime of SSDs. In the final subsection, we present a sensitivity analysis of two parameters that need to be determined for our model.
Type         Workload             Working Set Size (GB)    Avg. Req. Size (KB)   Request Amount (GB)   Read Ratio
                                  Total    Read    Write   Read     Write        Read      Write
Small Scale  Financial [33]       3.8      1.2     3.6     5.7      7.2          6.9       28.8          0.19
             Home [6]             17.2     13.5    5.0     22.2     3.9          15.3      66.8          0.18
             Search Engine [33]   5.4      5.4     0.1     15.1     8.6          15.6      0.001         0.99
Large Scale  Exchange [22]        79.35    74.12   23.29   9.89     12.4         114.36    131.69        0.46
             MSN [22]             37.98    30.93   23.03   11.48    11.12        107.23    74.01         0.59

Table 1: Characteristics of I/O workload traces


[Figure 6: Mean response time of hybrid storage — (a) Financial, (b) Home, (c) Search Engine; mean response time (ms) vs. caching space (%) in SSD for FP-FCL, RW-FCL, and OP-FCL]

Type     Description           Config. 1             Config. 2
OP-FCL   N_P                   64
         C_PROG                300us
         C_PR                  125us
         C_CP                  225us
         C_E                   1.5ms
         C_D_RPOS              4.5ms
         C_D_WPOS              4.9ms
         B                     72MB/s
         P                     4KB
         segment size          256MB
SSD      Total Capacity        4GB                   16GB
         No. of Packages       1                     4
         Blocks Per Package    16384
         Planes Per Package    1
         Cleaning Policy       Greedy
         GC Threshold          1%
         Copyback              On
HDD      Model                 Maxtor Atlas 10K IV
         No. of Disks          1                     3

Table 2: Configuration of the hybrid storage system

6.1 Small scale workloads

The experimental setting is as given in Fig. 5 and described in Section 5. The specific configuration of the HDD and SSD used in these experiments is shown in Table 2, denoted as 'Config. 1'. All other parameters not explicitly mentioned are set to default values. We assume a single SSD is employed as the flash memory cache and a single HDD as the main storage. This configuration is similar to that of a real hybrid drive [30].

For small scale workloads we use three traces, namely, Financial, Home, and Search Engine, that have been used in numerous previous studies [7, 11, 15, 16, 17]. The Financial trace is a random write intensive I/O workload obtained from an OLTP application running at a financial institution [33]. The Home trace is a random write intensive I/O workload obtained from an NFS server that keeps home directories of researchers whose activities are development, testing, and plotting [6]. The Search Engine trace is a random read intensive I/O workload obtained from a web search engine [33]. The Search Engine trace is unique in that 99% of the requests are reads while only 1% are writes.

Fig. 6 shows the results of the cache partitioning schemes, where the measure is the response time of the hybrid storage system. The x-axis here denotes the ratio of caching space (unified or read and write combined) for the FP-FCL and RW-FCL schemes. For example, 60 on the x-axis means that 60% of the flash memory capacity is used as caching space and 40% is used as OPS. The y-axis denotes the average response time of the read and write requests. In the figure, the response times of the FP-FCL and RW-FCL schemes vary according to the ratio of the caching space. In contrast, the response time of OP-FCL is drawn as a horizontal line because it reports only one response time regardless of the ratio of caching space, as it dynamically adjusts the three spaces according to the workload.
[Figure 7: Cumulative garbage collection time — (a) Financial, (b) Home, (c) Search Engine; GC time (sec) vs. caching space (%) in SSD for FP-FCL, RW-FCL, and OP-FCL]
[Figure 8: Hit rate — (a) Financial, (b) Home, (c) Search Engine; hit rate vs. caching space (%) in SSD for FP-FCL, RW-FCL, and OP-FCL]

Let us first compare FP-FCL and RW-FCL in Fig. 6. In the cases of the Financial and Home traces, we see that RW-FCL provides lower response time than FP-FCL. This is because RW-FCL takes into account the different read and write costs in the flash memory cache layer. This result is in accord with previous studies that considered different read and write costs of flash memory [11]. However, in the case of the Search Engine trace, discriminating read and write requests has no effect because 99% of the requests are reads. Naturally, FP-FCL and RW-FCL show almost identical response times.

Now let us turn our focus to the relationship between the size of caching space (or OPS size) and the response time. In Fig. 6(a) and (b), we see that the response time decreases as the caching space increases (or OPS decreases) until it reaches the minimal point, and then increases beyond this point. Specifically, for FP-FCL and RW-FCL, the minimal point is at 60% for the Financial trace and at 50% for the Home trace for both schemes. In contrast, for the Search Engine trace, response time decreases continuously as the cache size increases, and the optimal point is at 95%. The reason behind this is that the trace is dominated by read requests with rare modifications to the data. Thus, the optimal configuration for this trace is to keep as large a read cache as possible with only a small amount of OPS and write cache.

For the FP-FCL and RW-FCL schemes, the response time at the optimal point can be regarded as the off-line optimal value because it is obtained after exploring all possible configurations of the scheme. Let us now compare the response time of OP-FCL and the off-line optimal results of RW-FCL. In all traces, OP-FCL has almost the same response time as the off-line optimal value of RW-FCL. This shows that the cost model based dynamic adaptation technique of OP-FCL is efficient in determining the optimal OPS and the read and write cache sizes.

We now discuss the trade-off between garbage collection (GC) cost and the hit rate at the flash cache layer. Fig. 7 and 8 depict these results. In Fig. 7, we see that for all traces, GC cost increases, that is, performance degrades, continuously as caching space increases. The hit rate, on the other hand, increases, thus improving performance, as caching space increases for all the traces, as we can see in Fig. 8. For clear comparisons, we report the sum of the read and write hit rates for RW-FCL and OP-FCL. Note that both schemes measure read and write hit rates separately.

These results show the existence of two contradicting effects as caching space is increased, that is, increasing cache hit rate, which is a positive effect, and increasing GC cost, which is a negative effect. The interaction of these two contradicting effects leads to an optimal point where the overall access cost of the hybrid storage system becomes minimal.
[Figure 9: Dynamic size adjustment of read and write caches and OPS — (a) Financial, (b) Home, (c) Search Engine; caching space size and read cache size (GB) over logical time]

To investigate how well OP-FCL adjusts the caching space and OPS sizes, we continuously monitor their sizes as the experiments are conducted. Fig. 9 shows these results. In the figure, the x-axis denotes logical time that elapses upon each request and the y-axis denotes the total (read + write) caching space size and the read cache size. For the Financial and Home traces, we see that the caching space size increases and decreases repeatedly according to the reference pattern of each period as the cost models maneuver the caching space and OPS sizes. Notice that out of the 4GB of flash memory cache space, only 2 to 2.5GB are being used for the Financial trace and less than half is used for the Home trace. Even though cache space is available, using less of it helps performance, as keeping space to reduce garbage collection time is more beneficial. Note, though, that for the Search Engine trace, most of the 4GB is being allotted to the caching space, in particular, to the read cache. This is a natural consequence as reads are dominant and garbage collection rarely happens. Also note that it takes some time for the system to stabilize to the optimal allocation setting.
6.2 Large scale workloads

Our experimental setting for large scale workloads is as shown in Fig. 5 with the configuration summarized as 'Config. 2' in Table 2. In this configuration, the SSD is 16GB, employing four packages of flash memory, and the HDD consists of three 10K RPM drives.

To test our scheme with large scale enterprise workloads, we use the Exchange and MSN traces that have been used in previous studies [15, 21, 22]. The Exchange trace is a random I/O workload obtained from the Microsoft employee e-mail server [22]. This trace is composed of 9 volumes, of which we select and use the traces of volumes 2, 4, and 8, with each volume assigned to one HDD. The MSN trace is extracted from 4 RAID-10 volumes on an MSN storage back-end file store [22]. We choose and use the traces of volumes 0, 1, and 4, each assigned to one HDD. The characteristics of the two traces are summarized in Table 1.

[Figure 10: Response time of hybrid storage — (a) Exchange, (b) MSN; mean response time (ms) vs. caching space (%) in SSD for FP-FCL, RW-FCL, and OP-FCL]

[Figure 11: Cumulative garbage collection time — (a) Exchange, (b) MSN; GC time (hours) vs. caching space (%) in SSD]

[Figure 12: Hit rate — (a) Exchange, (b) MSN; hit rate vs. caching space (%) in SSD]

[Figure 13: Dynamic size adjustment of read and write caches and OPS — (a) Exchange, (b) MSN; caching space size and read cache size (GB) over logical time]
[Figure 14: Average erase count of flash memory blocks — (a) Financial, (b) Home, (c) Search Engine, (d) Exchange, (e) MSN; average erase count vs. caching space (%) in SSD for FP-FCL, RW-FCL, and OP-FCL]

Fig. 10, which depicts the response time for the two large scale workloads, shows similar trends to those we observed with the small scale workloads, in that, as caching space increases, response time decreases to a minimal point, and then increases again. The response time of OP-FCL, which is shown as a horizontal line in the figure, is close to the smallest response times of FP-FCL and RW-FCL. From these results, we confirm again that a trade-off between GC cost and hit rate exists at the flash cache layer.

Specifically, for the Exchange trace shown in Fig. 10(a), the minimal point for FP-FCL is at 70%, while it is at 80% for RW-FCL. The reason behind this difference can be found in Fig. 11 and Fig. 12. Fig. 12(a) shows that RW-FCL has a higher hit rate than FP-FCL at cache size 80%. On the other hand, Fig. 11(a) shows that for cache sizes of 70% to 80% the GC cost increase is steeper for FP-FCL than for RW-FCL. These results imply that, for RW-FCL, the positive effect of caching more data is greater than the negative effect of increased GC cost at 80% cache size, and vice versa for FP-FCL. These differences in positive and negative effect relations for FP-FCL and RW-FCL result in different minimal points.

From the results of the MSN trace shown in Fig. 10(b), we observe that FP-FCL and RW-FCL have almost identical response times. This is because they have almost the same hit rate curves, which means that discriminating read and write requests has no performance benefit for the MSN trace. The minimal points for FP-FCL and RW-FCL are at cache size 80% for this trace.

As with the small scale workloads, Fig. 13 shows how OP-FCL adjusts the cache and OPS sizes according to the reference pattern for the large scale workloads. Initially, the cache size starts to increase as we start with an empty cache. Then, we see that the scheme stabilizes, with OP-FCL dynamically adjusting the caching space and OPS sizes to their optimal values.

6.3 Effect on lifetime of SSDs

Now let us turn our attention to the effect of OP-FCL on the lifetime of SSDs. Generally, the block erase count, which is affected by the wear-levelling technique used by the SSD, directly corresponds to SSD lifetime. Therefore, we measure the average erase counts of flash memory blocks for all the workloads, and the results are shown in Fig. 14. With the exception of the Search Engine trace, we see that, for FP-FCL and RW-FCL, the average erase count is low when caching space is small. As caching space becomes larger, the average erase count increases only slightly until the caching space reaches around 70%. Beyond that point, the erase count increases sharply as the OPS size becomes small and GC cost rises. In contrast, OP-FCL has a low average erase count, drawn as a horizontal line in Fig. 14.

In contrast to the other traces, the average erase count for the Search Engine trace is rather unique. First, the overall average erase count is noticeably lower than that of the other traces. Also, instead of a sharp increase observed for the other traces, we first see a noticeable drop as the cache size approaches 80%, before a sharp increase. The reason behind this is that 99% of the Search Engine trace are read requests and the footprint is so huge that the cache hit rate continuously increases almost linearly with larger caches, as shown in Fig. 8(c). This continuous increase in hit rate continuously reduces new writes, resulting in reduced garbage collection, and eventually in reduced block erases. Beyond the 80% point, block erases increase because GC cost increases sharply as the OPS becomes smaller.
Figure 15: Sensitivity analysis of sequential unit size and period length on OP-FCL performance. (a) Effect of sequential unit size: normalized time versus the size of the sequential unit, from 4 KB to 512 KB. (b) Effect of period length: normalized time versus the length of the period, from 2^12 to 2^20 requests. Both panels show the Financial, Home, Search Engine, Exchange, and MSN traces.

6.4 Sensitivity analysis

In this subsection, we present the effect of the choice of the sequential unit size and of the length of the period on the performance of OP-FCL. The results for all the workloads are reported relative to the parameter settings used for the results presented in the previous subsections: a sequential unit size of 128 KB and a period length of 2^16 requests. Recall that the sequential unit size determines the consecutive request size that the Sequential I/O Detector regards as being sequential, and that such requests are sent directly to the HDD. Fig. 15(a) shows the effect of the sequential unit size. When the sequential unit size is 4 KB, OP-FCL performs very poorly. This is because too many requests are considered to be sequential and are sent directly to the HDD. However, when the sequential unit size is between 16 KB and 512 KB, OP-FCL shows similar performance, indicating that performance is relatively insensitive to this parameter choice.
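As a concrete illustration of the role of the sequential unit size, the following is a minimal sketch of a sequential I/O detector. The run-tracking details are our own assumptions; only the policy itself, that runs of consecutive requests reaching the sequential unit size bypass the flash cache and go to the HDD, follows the description above.

    # Minimal sketch of a sequential I/O detector in the spirit described above.
    # The bookkeeping details are assumptions; only the role of the sequential
    # unit size (runs at least this long bypass the flash cache) is from the text.
    SEQ_UNIT = 128 * 1024   # bytes; the default sequential unit size (128 KB)

    class SequentialDetector:
        def __init__(self, seq_unit=SEQ_UNIT):
            self.seq_unit = seq_unit
            self.next_addr = None   # byte address expected if the run continues
            self.run_bytes = 0      # accumulated length of the current run

        def to_hdd(self, addr, length):
            """Return True if this request should bypass the flash cache."""
            if addr == self.next_addr:      # continues the current run
                self.run_bytes += length
            else:                           # run broken; start a new one
                self.run_bytes = length
            self.next_addr = addr + length
            return self.run_bytes >= self.seq_unit

    # A streaming read is routed to the HDD once its run reaches 128 KB,
    # while small random requests keep going to the flash cache.
    d = SequentialDetector()
    print([d.to_hdd(a, 64 * 1024) for a in range(0, 256 * 1024, 64 * 1024)])

With the unit set to 4 KB, almost any short run qualifies, which is why too many requests end up being sent directly to the HDD; at 128 KB only genuinely long streams bypass the cache.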
Fig. 15(b) shows the performance of OP-FCL as the length of the period is varied from 2^12 to 2^20 requests. Overall, the performance is stable. The Home trace performance deteriorates somewhat for periods of 2^14 and below, with worse performance as the period shortens. The reason behind this is that the workload changes frequently, as observed in Fig. 9. As a result, by the time OP-FCL adapts to the results of the previous period, the new adjustment has already become stale, resulting in reduced performance. We also see that performance is relatively consistent, and best, for periods between 2^14 and 2^16. For periods beyond 2^18, OP-FCL performance deteriorates slightly. As the period increases to 2^20, the performance of the Exchange and MSN traces starts to degrade. This is because the change in the workload spans a relatively large range compared to those of the small scale workloads, as shown in Fig. 13. For this reason, OP-FCL with longer periods is not dynamic enough to reflect these workload changes effectively. Overall though, we find that performance is consistent over a relatively broad range of periods.
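The period length governs how often OP-FCL re-balances the caching space and OPS. A minimal sketch of this periodic adjustment loop is shown below; the per-request record and the resize rule are placeholders, as the actual adjustment is driven by the cost model, which is not reproduced here.

    # Sketch of the periodic adjustment implied above (default period 2**16
    # requests).  The observations and the resize rule are placeholders;
    # OP-FCL's real adjustment is driven by its cost model.
    PERIOD = 2 ** 16    # requests per adjustment period

    def adjust_periodically(requests, initial_split, choose_split):
        """Yield the caching-space fraction in force for each request."""
        split, window = initial_split, []
        for i, req in enumerate(requests, 1):
            yield split
            window.append(req)                # observations for this period
            if i % PERIOD == 0:               # period boundary: re-balance
                split = choose_split(window)  # e.g., minimize the modeled cost
                window = []                   # older history is discarded

    # Example: shift space from OPS to the cache when writes become rare.
    reqs = [{"write": i % 10 == 0} for i in range(3 * PERIOD)]
    rule = lambda w: 0.9 if sum(r["write"] for r in w) / len(w) < 0.2 else 0.7
    print(set(adjust_periodically(reqs, 0.7, rule)))   # {0.7, 0.9}

A short period gives the rule little, and quickly stale, evidence to act on, while a very long period reacts too slowly to workload changes; these are the two failure modes visible at the extremes of Fig. 15(b).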
7 Conclusions

NAND flash memory based SSDs are being used as non-volatile caches in hybrid storage solutions. In flash based storage systems, there exists a trade-off between increasing the benefit of caching data and increasing the benefit of reducing the update cost, which involves garbage collection. To increase the former, the caching space, that is, the cache space that holds normal data, must be increased, while to increase the latter, the over-provisioned space (OPS) must be increased. In this paper, we showed that balancing the caching space and OPS sizes has a significant impact on the performance of hybrid storage solutions. For this balancing act, we derived cost models to determine the optimal caching space and OPS sizes, and proposed a scheme that dynamically adjusts the sizes of these spaces. Through experiments we showed that our dynamic scheme performs comparably to the off-line optimal fixed partitioning scheme. We also showed that the lifetime of SSDs may be extended considerably as the erase count at the SSD may be reduced.

Many studies on non-volatile caches have focused on cache replacement and destaging policies. As a miss at the flash memory cache leads to an HDD access, it is critical that misses be reduced. When misses do occur at the write cache, intelligent destaging should help ameliorate their effects. Hence, we are currently focusing our efforts on developing better cache replacement and destaging policies, and on combining these policies with our cache partitioning scheme. Another direction of research that we are pursuing is managing the flash memory cache layer to tune SSDs to trade off between performance and lifetime.

8 Acknowledgments

We would like to thank our shepherd Margo Seltzer and the anonymous reviewers for their insight and suggestions for improvement. This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. R0A-2007-000-20071-0), by the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korea government (MEST) (No. 2009-0085883), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0025282).
References

[1] Agrawal, N., Prabhakaran, V., Wobber, T., Davis, J. D., Manasse, M., and Panigrahy, R. Design Tradeoffs for SSD Performance. In Proc. of USENIX ATC (2008), pp. 57–70.
[2] Bucy, J. S., Schindler, J., Schlosser, S. W., and Ganger, G. R. DiskSim 4.0. http://www.pdl.cmu.edu/DiskSim/.
[3] Chen, F., Jiang, S., and Zhang, X. SmartSaver: Turning Flash Drive into a Disk Energy Saver for Mobile Computers. In Proc. of ISLPED (2006), pp. 412–417.
[4] Chen, F., Koufaty, D. A., and Zhang, X. Hystor: Making the Best Use of Solid State Drives in High Performance Storage Systems. In Proc. of ICS (2011), pp. 22–32.
[5] Debnath, B., Sengupta, S., and Li, J. ChunkStash: Speeding Up Inline Storage Deduplication using Flash Memory. In Proc. of ATC (2010).
[6] FIU Trace Repository. http://sylab.cs.fiu.edu/projects/iodedup.
[7] Gupta, A., Kim, Y., and Urgaonkar, B. DFTL: A Flash Translation Layer Employing Demand-Based Selective Caching of Page-Level Address Mappings. In Proc. of ASPLOS (2009), pp. 229–240.
[8] Hong, S., and Shin, D. NAND Flash-Based Disk Cache Using SLC/MLC Combined Flash Memory. In Proc. of SNAPI (2010), pp. 21–30.
[9] Hu, X.-Y., Eleftheriou, E., Haas, R., Iliadis, I., and Pletka, R. Write Amplification Analysis in Flash-based Solid State Drives. In Proc. of SYSTOR (2009).
[10] Kang, J.-U., Jo, H., Kim, J.-S., and Lee, J. A Superblock-based Flash Translation Layer for NAND Flash Memory. In Proc. of EMSOFT (2006), pp. 161–170.
[11] Kgil, T., Roberts, D., and Mudge, T. Improving NAND Flash Based Disk Caches. In Proc. of ISCA (2008), pp. 327–338.
[12] Kim, J., Kim, J. M., Noh, S. H., Min, S. L., and Cho, Y. A Space-Efficient Flash Translation Layer for CompactFlash Systems. IEEE Trans. on Consumer Electronics 48, 2 (2002), 366–375.
[13] Kim, J. M., Choi, J., Kim, J., Noh, S. H., Min, S. L., Cho, Y., and Kim, C. S. A Low-Overhead High-Performance Unified Buffer Management Scheme that Exploits Sequential and Looping References. In Proc. of OSDI (2000).
[14] Kim, S.-H., Jung, D., Kim, J.-S., and Maeng, S. HeteroDrive: Reshaping the Storage Access Pattern of OLTP Workload Using SSD. In Proc. of IWSSPS (2009), pp. 13–17.
[15] Kim, Y., Gupta, A., Urgaonkar, B., Berman, P., and Sivasubramaniam, A. HybridStore: A Cost-Efficient, High-Performance Storage System Combining SSDs and HDDs. In Proc. of MASCOTS (2011), pp. 227–236.
[16] Koller, R., and Rangaswami, R. I/O Deduplication: Utilizing Content Similarity to Improve I/O Performance. In Proc. of FAST (2010).
[17] Kwon, H., Kim, E., Choi, J., Lee, D., and Noh, S. H. Janus-FTL: Finding the Optimal Point on the Spectrum Between Page and Block Mapping Schemes. In Proc. of EMSOFT (2010), pp. 169–178.
[18] Lee, H. J., Lee, K. H., and Noh, S. H. Augmenting RAID with an SSD for Energy Relief. In Proc. of HotPower (2008).
[19] Lee, S.-W., Park, D.-J., Chung, T.-S., Lee, D.-H., Park, S., and Song, H.-J. A Log Buffer-Based Flash Translation Layer Using Fully-Associative Sector Translation. ACM Trans. on Embedded Computer Systems 6, 3 (2007).
[20] Menon, J. A Performance Comparison of RAID-5 and Log-Structured Arrays. In Proc. of HPDC (1995).
[21] Narayanan, D., Donnelly, A., Thereska, E., Elnikety, S., and Rowstron, A. Everest: Scaling Down Peak Loads Through I/O Off-Loading. In Proc. of OSDI (2008), pp. 15–28.
[22] Narayanan, D., Thereska, E., Donnelly, A., Elnikety, S., and Rowstron, A. Migrating Server Storage to SSDs: Analysis of Tradeoffs. In Proc. of EuroSys (2009), pp. 145–158.
[23] Park, C., Cheon, W., Kang, J., Roh, K., Cho, W., and Kim, J.-S. A Reconfigurable FTL (Flash Translation Layer) Architecture for NAND Flash-Based Applications. ACM Trans. on Embedded Computer Systems 7, 4 (2008).
[24] Park, J., Lee, H., Hyun, S., Koh, K., and Bahn, H. A Cost-aware Page Replacement Algorithm for NAND Flash Based Mobile Embedded Systems. In Proc. of EMSOFT (2009), pp. 315–324.
[25] Patterson, R. H., Gibson, G. A., Ginting, E., Stodolsky, D., and Zelenka, J. Informed Prefetching and Caching. In Proc. of SOSP (1995), pp. 79–95.
[26] Pritchett, T., and Thottethodi, M. SieveStore: A Highly-Selective, Ensemble-level Disk Cache for Cost-Performance. In Proc. of ISCA (2010), pp. 163–174.
[27] Rosenblum, M., and Ousterhout, J. K. The Design and Implementation of a Log-Structured File System. ACM Trans. on Computer Systems 10, 1 (1992), 26–52.
[28] Saxena, M., and Swift, M. M. FlashVM: Virtual Memory Management on Flash. In Proc. of ATC (2010).
[29] Schindler, J., Shete, S., and Smith, K. A. Improving Throughput for Small Disk Requests with Proximal I/O. In Proc. of FAST (2011).
[30] Seagate Momentus XT. http://www.seagate.com/www/en-us/products/laptops/laptop-hdd.
[31] Shim, H., Seo, B.-K., Kim, J.-S., and Maeng, S. An Adaptive Partitioning Scheme for DRAM-based Cache in Solid State Drives. In Proc. of MSST (2010).
[32] Soundararajan, G., Prabhakaran, V., Balakrishnan, M., and Wobber, T. Extending SSD Lifetimes with Disk-Based Write Caches. In Proc. of FAST (2010).
[33] UMass Trace Repository. http://traces.cs.umass.edu.
[34] Understanding the Flash Translation Layer (FTL) Specification. Intel Corporation, 1998.
[35] Wang, W., Zhao, Y., and Bunt, R. HyLog: A High Performance Approach to Managing Disk Layout. In Proc. of FAST (2004), pp. 145–158.