23 24 M3T4b - AlmacenamientoEvolucionCintaMagnetica

This document discusses solid-state drives and how they work. SSDs use flash memory instead of spinning disks, providing advantages such as higher speed, durability, and lower power usage. However, flash memory has limitations such as write endurance and fragmentation that require techniques like wear leveling and garbage collection to optimize performance.

Uploaded by Pol Sanchz

Solid-State Storage

Overview

Peripherals and Interfaces – SSD Storage


Beyond Spinning Disks
• Hard drives have been around since 1956
– The cheapest way to store large amounts of data
– Sizes are still increasing rapidly

• However, hard drives are typically the slowest component in most computers
– CPUs and RAM operate at GHz rates
– PCI-X and Ethernet move data at GB/s rates

• Hard drives are not well suited for mobile devices
– Fragile mechanical components can break
– The disk motor is extremely power hungry

Solid State Drives
• NAND flash memory-based drives
– High voltages change the amount of charge stored on a floating-gate transistor
– The state of the transistor is interpreted as binary data

[Diagram: data is striped across all flash memory chips]
Advantages of SSDs
• More resilient against physical damage
– No sensitive read head or moving parts
– Less sensitive to temperature changes

• Greatly reduced power consumption
– No mechanical moving parts

• Much faster than hard drives
– >500 MB/s vs. ~200 MB/s for hard drives
– No penalty for random access
• Each flash cell can be addressed directly
• No need to rotate or seek
– Extremely high throughput
• Although each individual flash chip is slow, many chips are accessed in parallel (RAID-like striping)
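The parallelism argument above can be made concrete with a toy striping model. This is an illustrative sketch, not any particular controller's layout: with N chips, consecutive logical pages rotate across chips, so neighboring pages can be read in parallel.

```python
# Toy model of striping logical pages across flash chips
# (illustrative only; real controllers use more elaborate layouts).

def stripe(logical_page: int, num_chips: int) -> tuple[int, int]:
    """Map a logical page number to (chip index, page offset within that chip)."""
    return logical_page % num_chips, logical_page // num_chips

# With 4 chips, pages 0-3 land on different chips and can be read in parallel.
print([stripe(p, 4) for p in range(6)])
# → [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1)]
```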
Challenges with Flash
• Flash memory is written in pages but erased in blocks
– Blocks are much bigger than pages: pages are 4–16 KB, blocks are 128–256 KB
– Thus, flash memory can become fragmented
– This leads to the write amplification problem

• Each flash cell can only be written a fixed number of times
– Typically 3,000–5,000 program/erase cycles for MLC
– SSDs use wear leveling to distribute writes evenly across all flash cells
Write Amplification

[Diagram: stale pages in Block X cannot be overwritten or erased individually; the garbage collector moves the valid pages (e.g., G) to Block Y, after which the cleaned Block X can be erased]

• Once all pages have been written once, valid pages must be consolidated to free up space

• Write amplification: a single host write can trigger garbage collection/compaction
– One or more blocks must be read, erased, and rewritten before the write can proceed
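The cost described above is usually quantified as a write amplification factor, WA = bytes physically written to flash ÷ bytes written by the host. The worked numbers below are illustrative, using the page and block sizes quoted earlier:

```python
def write_amplification(host_bytes: int, flash_bytes: int) -> float:
    """WA = total bytes physically written to flash / bytes the host asked to write."""
    return flash_bytes / host_bytes

# Worst case for the sizes above: updating one 4 KB page forces a whole
# 256 KB block to be read, erased, and rewritten.
page_size, block_size = 4 * 1024, 256 * 1024
print(write_amplification(page_size, block_size))  # → 64.0
```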
Garbage Collection (GC)
• Garbage collection (GC) is vital for the performance of SSDs

• Older SSDs had fast writes only until all pages had been written once
– Even if the drive has lots of “free space,” each rewrite is amplified, reducing performance

• Many SSDs over-provision flash to help the GC
– E.g., a 240 GB SSD may actually contain 256 GB of flash
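The spare-area figure in the example can be expressed as an over-provisioning ratio, conventionally (raw − user) ÷ user capacity:

```python
def over_provisioning(raw_gb: float, user_gb: float) -> float:
    """Spare fraction available to the GC: (raw - user) / user."""
    return (raw_gb - user_gb) / user_gb

# The example above: 256 GB of flash sold as a 240 GB drive.
print(f"{over_provisioning(256, 240):.1%}")  # → 6.7%
```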

• Modern SSDs implement background GC
– However, this doesn’t always work correctly
The Ambiguity of Delete
• Goal: the SSD wants to perform background GC to improve performance
– But this assumes the SSD controller “knows” which pages are invalid

• Problem: most file systems don’t actually delete data
– On Linux, the “delete” operation is unlink()
– It removes the file’s metadata, but not the file’s data
Delete Example

[Diagram: Block X holds 2 metadata pages and 7 file pages; on delete, the metadata is overwritten, but the file data remains]

1. A file is written to the SSD
2. The file is deleted
3. The GC executes
– 9 pages look valid to the SSD
– The OS knows only 2 pages are valid

• The lack of an explicit delete means the GC wastes effort copying useless pages
• Hard drives are not garbage collected, so this was never a problem before
TRIM
• A newer SATA command, TRIM (the SCSI equivalent is UNMAP)
– Allows the OS to tell the SSD that specific LBAs are invalid and may be garbage collected

[Diagram: after a TRIM, the deleted file’s pages in Block X are marked invalid]

• OS support for TRIM
– Windows 7, OS X Snow Leopard, Linux 2.6.33, Android 4.3
• TRIM must also be supported by the SSD firmware
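A toy simulation (invented page states) shows what TRIM buys the garbage collector: without it, every page that merely looks valid must be copied; with it, only the pages the OS still cares about are moved.

```python
# Toy model of GC copy work for one block, with and without TRIM.
# Page lists are invented for illustration.

def gc_copies(valid_to_ssd: list[int], valid_to_os: list[int], trimmed: bool) -> int:
    """Pages the GC must copy when cleaning the block."""
    return len(valid_to_os) if trimmed else len(valid_to_ssd)

# Numbers from the delete example: 9 pages look valid to the SSD,
# but the OS knows only 2 still hold live data.
looks_valid = list(range(9))
live = [0, 1]
print(gc_copies(looks_valid, live, trimmed=False))  # → 9
print(gc_copies(looks_valid, live, trimmed=True))   # → 2
```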
Solid-State Storage

Limitation on the Number of Erase Cycles


Block Usage
• Need to maintain a supply of empty blocks to add to the write allocation pool

• Cleaning involves moving valid pages from one block to another block

(Source: SSD USENIX '08)


Wear Leveling
• Recall: each flash cell wears out after a few thousand writes

• SSDs use wear leveling to spread writes across all cells
– Typical consumer SSDs should last ~5 years
Wear-Leveling
• The write/erase cycle count of NAND is limited: ~100K for SLC, ~10K for MLC

• Reducing wear:
– Distribute writes evenly over the entire storage
– Count the number of write/erase cycles of each NAND block
– Based on the write/erase count, the NAND controller remaps logical addresses to different physical addresses

• Wear-leveling is done by the NAND controller (the FTL, Flash Translation Layer), not by the host system

(Source - Ken Takeuchi INRET, 08)
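The remapping step can be sketched in a few lines. This is a deliberately minimal model with invented data structures; a real FTL also handles page-level maps, caching, and power-loss recovery. The controller tracks per-block erase counts and steers each rewrite to the least-worn free block:

```python
# Minimal sketch of erase-count-based wear leveling in an FTL.
# Data structures are invented for illustration only.

class TinyFTL:
    def __init__(self, num_blocks: int):
        self.erase_count = [0] * num_blocks
        self.mapping = {}  # logical block -> physical block

    def write(self, logical_block: int) -> int:
        # Pick the least-worn physical block not currently mapped.
        used = set(self.mapping.values())
        free = [b for b in range(len(self.erase_count)) if b not in used]
        target = min(free, key=lambda b: self.erase_count[b])
        if logical_block in self.mapping:
            # Rewriting: the old physical block must be erased for reuse.
            self.erase_count[self.mapping[logical_block]] += 1
        self.mapping[logical_block] = target
        return target

ftl = TinyFTL(num_blocks=4)
for _ in range(6):       # rewrite the same logical block repeatedly
    ftl.write(0)
print(ftl.erase_count)   # erases spread over several blocks, not just one
```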


Static vs. Dynamic Wear-Leveling

• Static data: data that rarely changes, such as system data (OS, application software)

• Dynamic data: data that is rewritten often, such as user data

• Dynamic wear-leveling: wear-levels only over empty blocks and dynamic data

• Static wear-leveling: wear-levels over all data, including static data

(Source: Ken Takeuchi, INRET '08)


Wear Leveling Examples

[Diagram, dynamic wear leveling: the controller waits as long as possible before garbage collecting Block X, but if the GC runs now, the long-lived page G must also be copied to Block Y]

[Diagram, static wear leveling: blocks holding long-lived data would otherwise receive less wear, so the SSD controller periodically swaps long-lived data to different blocks]
Dynamic Wear-Leveling
• Blocks with static data are NOT used for wear-leveling

• Writes and erases concentrate on the dynamic data blocks

(Sources: N. Balan, MEMCON 2007; SiliconSystems, SSWP02; Ken Takeuchi, INRET '08)
Static Wear-Leveling
• Wear-levels more effectively than dynamic wear-leveling

• The controller searches for the least-used physical block and writes the data there; if that block is empty, the write proceeds normally

• If that block contains static data, the static data is first moved to a more heavily used block, and then the new data is written

(Sources: N. Balan, MEMCON 2007; SiliconSystems, SSWP02; Ken Takeuchi, INRET '08)


Summary
SSD advantages:
• Lower power consumption
• High mechanical reliability: no spinning parts, no noise
• Fast read performance
• Failures tend to occur on write (another cell can be used for the write), not on read as with HDDs, so data loss is less likely

SSD disadvantages:
• Higher cost compared to HDD
• Lower capacity compared to HDD
• Slow random writes (due to slow block erases) and complex management
• Limited write/erase cycles


SSD Controllers
• SSDs are extremely complicated internally
• All operations are handled by the SSD controller
– Maps LBAs to physical pages
– Keeps track of free pages and controls the GC
– May implement background GC
– Performs wear leveling via data rotation

• Controller performance is crucial for overall SSD performance
SSD Controller
• HIL: supports the host interconnect (USB/PCI/SATA/PCIe)

• Buffer manager: holds pending and satisfied requests along the primary data path

• Flash demux/mux: emits commands and handles transport of data along the serial connections to flash

• Processing engine: manages request flow and the mapping from logical block addresses to physical flash locations

(Source: SSD USENIX '08)


Flash Technology
• Fujio Masuoka invented flash memory in 1984 while working for Toshiba
– Capable of being erased and reprogrammed many times, flash memory quickly gained a loyal following in the computer memory industry
– Toshiba failed to reward his work, and Masuoka quit to become a professor at Tohoku University
– Bucking Japan’s culture of company loyalty, he sued his former employer demanding compensation, settling in 2006 for a one-time payment of ¥87M (about $758,000)

http://www.computerhistory.org/timeline/1984/#169ebbe2ad45559efbc6eb357202d1e7
Flash Technology Overview

• Two major forms: NAND flash and NOR flash

• NOR flash has typically been used for code storage and direct execution in portable electronic devices, such as cellular phones and PDAs

• NAND flash, designed with a very small cell size to enable a low cost per bit of stored data, has been used primarily as a high-density data storage medium for consumer devices such as digital still cameras and USB solid-state disk drives

• Toshiba was a principal innovator of both NOR-type and NAND-type flash technology in the 1980s

(Source: Toshiba)
NAND vs. NOR Flash Memory

(Source: Toshiba)
When should one choose NAND over NOR?
• For a system that needs to boot out of flash, execute code from flash, or where read latency is an issue, NOR flash may be the answer

• For storage applications, NAND flash’s higher density and high programming and erase speeds make it the best choice

• Power is another important concern: for write-intensive applications, NAND flash consumes significantly less power

• What if a system, such as a camera phone, requires both code execution and high-capacity data storage?

(Source: Toshiba)
Flavors of NAND Flash Memory

Multi-Level Cell (MLC):
• Multiple bits per flash cell (for two bits: 00, 01, 10, 11); 2-, 3-, and 4-bit MLC is available
• Higher capacity and cheaper than SLC flash
• Lower throughput due to the need for error correction
• 3,000–5,000 write cycles
• Used in consumer-grade drives

Single-Level Cell (SLC):
• One bit per flash cell (0 or 1)
• Lower capacity and more expensive than MLC flash
• Higher throughput than MLC
• 10,000–100,000 write cycles
• Consumes more power
• Used in expensive, enterprise drives
NAND SLC vs. MLC Technology

(Source: Toshiba)
HDD vs. SSD

[Figures: random access and sequential access performance comparison]

(Source: Ken Takeuchi, INRET '08)


NAND Flash Alternatives
• SLC (Single-Level Cell)
– Each cell stores 1 bit of information

• MLC (Multi-Level Cell)
– 2 bits per cell
– 4 possible states: 00, 01, 10, 11
– Slower, because more states must be distinguished

• TLC (Triple-Level Cell)
– 3 bits per cell
– 8 states: 000, 001, 010, 011, 100, 101, 110, 111

• QLC (Quad-Level Cell)
– 4 bits per cell
– Capacities of up to 128 TB

• 3D NAND (vertically stacked)
– Vertically stacked cells, with up to 32 layers
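The state counts listed above follow directly from the bits per cell: n bits require 2^n distinguishable charge levels, which is why each extra bit slows sensing and hurts endurance.

```python
# Voltage states per cell for each NAND flavor: 2 ** bits_per_cell.
flavors = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}
for name, bits in flavors.items():
    print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} states")
```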


Price Evolution: HDD vs. SSD
Introduction of New Technologies


SSD Market Trends

[Figures: SSD market trend charts]

(Sources: Toshiba '08; Ken Takeuchi, INRET '08)

NAND Flash Internals

https://www.usenix.org/legacy/events/usenix08/tech/full_papers/agrawal/agrawal.pdf
https://c59951.ssl.cf2.rackcdn.com/usenix08/tech/full_papers/agrawal/agrawal.mp3

(Source: SSD USENIX '08)
NAND Flash Internals – Key Points
• A 4 GB package consists of two 2 GB dies that share an 8-bit serial I/O bus and common control signals

• The two dies have separate chip-enable and ready/busy signals, so one can accept commands while the other is carrying out another operation

• Two-plane commands can be executed on either plane pair 0 & 1 or 2 & 3

(Source: SSD USENIX '08)


NAND Flash Internals – Key Points
• Each page includes a 128-byte region to store metadata (identification and error-detection information)

• Data is read and written at the granularity of flash pages, through a 4 KB data register

• Erases happen at the block level

• Each block can be erased only a finite number of times: ~100K for SLC

(Source: SSD USENIX '08)
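That finite erase budget bounds how much a drive can ever write. A back-of-the-envelope sketch with illustrative numbers, using the standard capacity × cycles ÷ write amplification estimate:

```python
def lifetime_host_writes_tb(capacity_gb: float, pe_cycles: int, wa: float) -> float:
    """Rough lifetime host writes in TB: capacity * P/E cycles / write amplification."""
    return capacity_gb * pe_cycles / wa / 1000

# Illustrative: a 256 GB SLC drive rated at 100K erase cycles,
# assuming a write amplification of 4 (made-up WA figure).
print(lifetime_host_writes_tb(256, 100_000, 4))  # → 6400.0 (TB)
```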


This is as far as we covered in class in 2023/24.
The slides after this one have been kept for anyone who may be interested in this topic.

Additional Information


Limited Serial Bandwidth
• Exploiting parallelism: interleaving

• Inherent parallelism: multiple packages, dies, and planes

• Striping across and within packages

(Source: SSD USENIX '08)
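An idealized model of why interleaving helps (invented numbers, ignoring command overhead): per-chip throughput scales with the interleave factor until the shared serial bus saturates.

```python
def effective_mb_s(n_way: int, chip_mb_s: float, bus_mb_s: float) -> float:
    """Idealized n-way interleaving: scale per-chip throughput until the bus saturates."""
    return min(n_way * chip_mb_s, bus_mb_s)

# Illustrative numbers: 40 MB/s per chip behind a 160 MB/s shared bus.
print([effective_mb_s(n, 40, 160) for n in (1, 2, 4, 8)])  # → [40, 80, 160, 160]
```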


Interleaving Within Package

(Source – SSD USENIX, 08)


Copy-Back
• Copy-back: copying pages within a flash package; used for cleaning and wear-leveling

(Source: SSD USENIX '08)


Concluding Remark

"There have been few times in the history of computing when a new technology becomes pivotal to completely changing the PC platform and user experience. Solid State Drives have this capability."
— Gordon Moore
Magnetic Tape Storage

Overview


Review: History of Magnetic Storage
• Magnetic systems
– The history of magnetic storage goes back to 1949, when a group of IBM engineers and scientists began developing a new storage device that would revolutionize the industry
– In 1952, IBM announced its first magnetic storage device, the IBM 726, the first magnetic tape drive, together with the IBM 701, the first computer for scientific applications
Review: History of Magnetic Storage
[Photos: the IBM 726 tape drive and the IBM 701 computer]
Review: Using Magnetic Fields to Store Data
• If we plot the value of the magnetic field as a function of the applied current, we obtain the so-called hysteresis loop
MODULE 4
Basic Principles
• Important characteristics:
– Capacity
– Cost
– Information density
– Transfer rate
– Access time
– Others: reliability, durability…

• Magnetic medium

• Read and write head


Density, Capacity, and Cost
• Better means:
– Higher capacity (the maximum amount of information we can store)
– Lower cost (static, i.e. manufacturing, plus dynamic, i.e. operating)
– A higher capacity/cost ratio

• Information density:
– Amount of information per unit of volume (and consequently per unit of area or length)

• Higher information density usually implies:
– Higher capacity
– Lower cost
– A better capacity/cost ratio

• → The goal is always to increase information density (while maintaining the other specifications for reliability and durability of the stored information)
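The density → capacity relationship above is a simple product; a sketch with hypothetical numbers:

```python
def capacity_gb(density_gb_per_in2: float, area_in2: float) -> float:
    """Capacity = areal density * recordable area (hypothetical numbers)."""
    return density_gb_per_in2 * area_in2

# Doubling density doubles capacity for the same area and roughly the
# same media cost, so the capacity/cost ratio improves accordingly.
print(capacity_gb(10.0, 50.0))  # → 500.0
print(capacity_gb(20.0, 50.0))  # → 1000.0
```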


Density, Transfer Rate, and Access Time
• Better means:
– A higher transfer rate (the amount of information per unit of time that we can read and/or write)
– A lower access time (the time it takes to reach the position of the desired element and be ready to read)

• Higher information density implies the bits are spatially closer together, and therefore…
– A higher potential transfer rate
– A lower access time
– A lower energy cost

• → The goal is always to increase information density (while maintaining the other specifications for reliability and durability of the stored information)
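For a tape, both metrics can be sketched with a simple linear model (illustrative numbers; uniform tape speed assumed): the sustained transfer rate is linear bit density × tape speed, and the worst-case access time on a sequential medium is winding past the entire tape.

```python
def transfer_rate_kb_s(bits_per_mm: float, speed_mm_s: float) -> float:
    """Sustained rate: linear bit density times tape speed, in KB/s."""
    return bits_per_mm * speed_mm_s / 8 / 1000

def worst_case_seek_s(tape_length_m: float, wind_speed_m_s: float) -> float:
    """Sequential medium: in the worst case the whole tape must be wound past."""
    return tape_length_m / wind_speed_m_s

# Illustrative figures: 63 bits/mm read at 5 m/s, on a 730 m tape rewound at 10 m/s.
print(transfer_rate_kb_s(63, 5000))  # → 39.375
print(worst_case_seek_s(730, 10))    # → 73.0
```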


Information Density in the Magnetic Medium
• Depends critically on the technology of the magnetic medium and of the read and write heads (the two being intimately intertwined)

• Information is stored encoded in a “magnetic domain,” whose fundamental characteristics are:
– Its spatial dimension (the surface or volume it occupies)
– Its magnetic intensity
– Its spatial orientation

• The magnetic domain:
– Is “created” on the magnetic medium by the “write head”
– Is “read” by the “read head”
– In some cases is “destroyed” or “erased” by the “erase head,” especially common in magnetic tape storage (where weight is not a limiting factor and it can avoid the need to “pre-format” the medium)

• The technology of the write and read heads decisively determines the characteristics of the magnetic domain, and therefore the storage density
Magnetic Storage Media
• Two types have historically dominated purely magnetic storage:
– Magnetic tape
• Longitudinal recording (very robust)
– Derived from analog audio recording systems
– Single head (one track), uni- and/or bi-directional
– Multiple heads (e.g., 9 tracks), uni- and/or bi-directional
• Helical recording (higher density but more fragile)
– Derived from analog video (VCR) and digital audio (DAT) recording systems
– Single or multiple rotating main heads
– Magnetic disks
Magnetic Storage Media
• Two types have historically dominated purely magnetic storage:
– Magnetic tape
• Large storage capacity
• Minimal cost
• Typically used for archiving, backup, and (historically) software distribution
• Very slow in relative terms, due to sequential access to the information
– Magnetic disks
• Good storage capacity and access speed
• Traditionally expensive, but the cost has fallen rapidly and steadily


A View of History… the UNIVAC I


Magnetic Tape
• Writing

• Reading

• Erasing


Types of Magnetic Tape


Open-Reel Magnetic Tape


Magnetic Tape


Magnetic Tape: Write / Read / Erase Head
• Writing

• Reading

• Erasing


Multi-Track Magnetic Tape Format


Example Open-Reel (Reel-to-Reel) Specifications


Bidirectional (Serpentine) Magnetic Tape Format


Bidirectional (Serpentine) Magnetic Tape Format


Example Specifications


Magnetic Tape: Helical-Scan Write/Read Head


Magnetic Tape (DAT): Detail of the Helical-Scan Write/Read Head


Magnetic Tape: Helical-Scan Write/Read Head


Magnetic Tape (DAT): Detail of the Helical Format


Magnetic Tape: Helical-Scan Write/Read Head


Magnetic Tape: Helical-Scan Write/Read Head


Example Specifications


Magnetic Tape, Current Implementations: Transverse-Domain Write/Read Heads

• Up to 35 TB per tape unit, thanks to the use of the latest technologies developed for magnetic disks, in both the heads and the magnetic recording medium

http://www.zurich.ibm.com/news/10/storage.html
http://www.flickr.com/photos/ibm_research_zurich/sets/72157623247462714/

IBM 35 TB demo:
http://www.flickr.com/photos/ibm_research_zurich/4268767654/in/set-72157623247462714/lightbox/
