var.harshit@gmail.com
Concept 03 (SAP HANA ABAP Concept)
…………………………………………………………..
SAP HANA In-Memory DB concept …
As the name suggests, an in-memory database stores all the data collated from
the source system in RAM rather than on comparatively slow hard disks. SAP
HANA uses this in-memory technology to hold very large databases in such a way
that, whenever the data is required for analysis or information processing, the
CPUs can access it within nanoseconds.
Key characteristics of SAP HANA as an in-memory database:
• In-Memory Data Storage:
Data resides primarily in RAM, enabling extremely fast data access and
processing speeds compared to disk-based systems.
• Column-Oriented Storage:
SAP HANA employs a column-oriented storage design, which is highly efficient
for analytical queries and data compression, further enhancing performance.
• Combined OLAP and OLTP:
It integrates Online Analytical Processing (OLAP) and Online Transactional
Processing (OLTP) capabilities into a single system, allowing for real-time
analytics on live transactional data.
• Advanced Analytics and Application Platform:
Beyond being a database, SAP HANA is a comprehensive platform offering
advanced analytics, machine learning capabilities, application development tools,
and data integration features.
• Scalability and Flexibility:
SAP HANA supports various deployment options including on-premise, cloud,
and hybrid models, providing scalability and adaptability for evolving data
needs.
• Data Persistence:
To mitigate the volatility of RAM, SAP HANA incorporates mechanisms like
transactional logging and periodic saving of database images to disk, ensuring
data persistence and recovery in case of system failures.
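The column-oriented storage mentioned above can be illustrated with a small sketch. This is not HANA's actual implementation; the table and values are made up, and the point is only that a column store keeps each attribute in its own contiguous array, so an aggregate touches just one array:

```python
# Illustrative sketch: the same table stored row-wise and column-wise.
# A column store keeps each attribute in its own contiguous array, so an
# aggregate such as SUM(amount) scans only the "amount" array.

rows = [  # row store: one record per entry
    {"id": 1, "region": "EMEA", "amount": 100},
    {"id": 2, "region": "APAC", "amount": 250},
    {"id": 3, "region": "EMEA", "amount": 175},
]

columns = {  # column store: one array per attribute
    "id": [1, 2, 3],
    "region": ["EMEA", "APAC", "EMEA"],
    "amount": [100, 250, 175],
}

# Row store: every record must be visited, even for a single attribute.
total_rows = sum(r["amount"] for r in rows)

# Column store: only the "amount" array is scanned.
total_cols = sum(columns["amount"])

assert total_rows == total_cols == 525
```

Keeping each column contiguous is also what makes the compression described above so effective, since repeated values sit next to each other.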
Benefits of in-memory databases
Fast reading and writing of data is the primary characteristic of an in-memory
database, enabling faster processing and improved response times in business
applications. Application developers have been quick to realize that this faster
response and increased capability also make it possible to redesign other tools
and programs to deliver more value. When an application is architected and built
from the ground up on an in-memory database, numerous improvements can be
made in the design of internal data models and processes.
Data model: A number of different database structures have been developed for
legacy technologies to optimize data access for different tasks:
• Data stored in rows (traditional schema)
• Column-oriented architecture, which provides fast, high-volume access to
a limited subset of the data
• Special databases for unstructured data, and
• Others that may speed up access in limited use cases or accommodate
special requirements.
A modern in-memory database allows all types of data to be stored in a single
system, including structured transactions and unstructured data such as voice,
video, free-form documents, and emails—all with the same fast access capability.
Faster processing: In-memory databases are faster than legacy databases
because they require fewer CPU instructions to retrieve data. Developers can
exploit this benefit by adding more functionality without the accompanying drag
on system response. In addition, parallel processing, in which multiple subsets
(columns) are processed simultaneously, adds even more speed and capacity.
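The partition-parallel pattern described above can be sketched as follows. Real databases parallelize scans across CPU cores; this illustrative sketch uses Python threads only to show the split/aggregate/combine shape:

```python
# Illustrative sketch of partition-parallel aggregation: a column is split
# into chunks, each chunk is aggregated concurrently, and the partial
# results are combined at the end.
from concurrent.futures import ThreadPoolExecutor

amounts = list(range(1, 1001))  # one column of 1,000 values

def partial_sum(chunk):
    return sum(chunk)

# Split the column into four partitions of equal size.
n = len(amounts) // 4
partitions = [amounts[i:i + n] for i in range(0, len(amounts), n)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(partial_sum, partitions))

total = sum(partials)
assert total == sum(amounts)  # 500500
```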
Combined tools: Traditional systems store transaction data in a legacy database
that is accessed by online transactional processing (OLTP). Then, to get a view
for analytics, the data is often moved to a separate database (data warehouse)
where online analytical processing (OLAP) tools can be used to analyze large
data sets (or Big Data). Modern, in-memory databases can support both OLAP
and OLTP, eliminating the need for redundant storage and the delays between
data transfers, which in turn eliminates any concerns about completeness or
timeliness of the warehouse data.
Smaller digital footprint: Traditional databases store a large amount of
redundant data. For example, the system keeps a copy of each row that is
updated, and it adds tables of combined data sets that increase space and
maintenance requirements. In addition to avoiding the OLAP/OLTP redundancy
mentioned above, column-oriented databases record only the changes as they are
applied, rather than copying entire rows.
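The idea of recording changes instead of copying rows can be sketched loosely. In SAP HANA's column store, writes go to a write-optimized delta area that is periodically merged into the compressed main store; the code below shows only the general principle, with made-up names and a much simpler mechanism than the real delta merge:

```python
# Loose sketch of insert-only writes: instead of copying an entire row on
# every update, new values are appended to a small "delta" area and
# periodically merged into the read-optimized "main" store.

main = {"price": [10, 20, 30]}   # compressed, read-optimized store
delta = []                        # append-only (row_index, column, value)

def update(row, column, value):
    delta.append((row, column, value))   # no row copy is made

def read(row, column):
    # The newest delta entry wins; otherwise fall back to the main store.
    for r, c, v in reversed(delta):
        if r == row and c == column:
            return v
    return main[column][row]

def merge():
    # Fold the delta into the main store and clear it.
    for r, c, v in delta:
        main[c][r] = v
    delta.clear()

update(1, "price", 25)
assert read(1, "price") == 25 and main["price"][1] == 20
merge()
assert main["price"][1] == 25 and delta == []
```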
Immediate insight: A modern, in-memory database provides embedded
analytics to deliver business insight for real-time alerts and operational reporting
on live transactional data.
How does a modern, in-memory database work?
It would be inefficient and unnecessary to hold all of a company’s data in
memory; some information is held in memory (called hot storage) while other
data is stored on disk (cold storage). The hot and cold designations derive from
information-handling paradigms developed by the cloud computing industry.
Hot data is deemed mission-critical and is accessed frequently, so it is held in
memory for fast retrieval and modification.
(Figure: example of hot vs. cold storage for an ERP system.)
Data that is more static—in other words, data that is requested infrequently and
is not normally required for active use—can be stored in a less expensive (and
far more expandable) way on disk drives or solid-state drives (SSDs). Cold-
storage data does not benefit from the fast access of an in-memory database, but
it is still readily available when needed for less time-critical applications. Cold
storage is best for historical data, closed activities, old projects, and the like.
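A tiering rule of the kind described above could be sketched as follows. The thresholds, statuses, and field names here are hypothetical choices for illustration, not anything prescribed by SAP HANA:

```python
# Hypothetical sketch of a hot/cold tiering rule: records still open or
# accessed within a retention window stay in memory ("hot"); older,
# closed records are candidates for disk ("cold").
from datetime import date, timedelta

def tier(record, today, hot_days=90):
    """Classify a record as 'hot' or 'cold' by status and last-access age."""
    age = today - record["last_accessed"]
    if record["status"] == "open" or age <= timedelta(days=hot_days):
        return "hot"
    return "cold"

today = date(2024, 6, 1)
orders = [
    {"id": 1, "status": "open",   "last_accessed": date(2024, 5, 30)},
    {"id": 2, "status": "closed", "last_accessed": date(2023, 1, 15)},
]
assert tier(orders[0], today) == "hot"
assert tier(orders[1], today) == "cold"
```

In a real migration, the implementation team would tune such criteria per data category, which is exactly the sorting decision described in the next paragraph.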
In planning the migration to an in-memory database, the implementation team
decides how to sort existing data into cold storage for past requirements and hot
storage for ongoing activities. Archiving criteria for keeping the active systems
and data in top condition must also be determined.
In-memory database systems are designed with “persistence”: all transactions
and changes are logged to provide standard data backup and system restore.
Persistence in modern systems allows them to run at full speed while preserving
data in the event of a power failure.
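The logging-plus-snapshot principle behind persistence can be sketched in a few lines. This shows the general write-ahead-log idea (log first, apply in memory, periodically save a full image, replay on recovery), not SAP HANA's actual log or savepoint format:

```python
# Minimal sketch of persistence: every change is logged before being
# applied in memory, and a periodic "savepoint" snapshots the full state.
# After a crash, state is rebuilt from the last savepoint plus the log.

log = []         # stand-in for a durable transaction log on disk
savepoint = {}   # stand-in for the last on-disk database image
state = {}       # the in-memory database

def write(key, value):
    log.append((key, value))   # log first (durable) ...
    state[key] = value         # ... then apply in memory

def take_savepoint():
    global savepoint, log
    savepoint = dict(state)    # persist a full image
    log = []                   # entries before the savepoint are no longer needed

def recover():
    restored = dict(savepoint)  # start from the last image ...
    for key, value in log:      # ... and replay the log entries
        restored[key] = value
    return restored

write("a", 1)
take_savepoint()
write("b", 2)
state = {}            # simulate losing the in-memory contents
state = recover()
assert state == {"a": 1, "b": 2}
```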
Traditional Databases vs. SAP HANA
Before HANA, most databases were disk-based.
• Data was stored on hard drives (HDD/SSD).
• To access it, data was read from disk → loaded into RAM → processed
→ written back to disk.
• Disk I/O (Input/Output) is slow compared to RAM access.
This caused delays for real-time analytics, large reports, or processing billions
of rows.
SAP HANA’s In-Memory Concept
HANA = High-Performance Analytic Appliance.
Key principle:
• Store all data directly in RAM instead of disks.
• Processing happens where the data resides (in memory), not by moving
data back and forth.
• Disk is used mainly for persistence (logs and savepoints), backup, and
recovery.
Result:
• Orders-of-magnitude faster access compared to traditional disk-based
databases.
• Enables real-time reporting + transaction processing on the same
system (OLTP + OLAP).
How HANA Achieves Speed
1. In-Memory Storage – Data stored in RAM.
o RAM access ≈ nanoseconds
o Disk access ≈ milliseconds
o (1 millisecond = 1,000,000 nanoseconds).
2. Columnar Storage – Instead of storing data row-by-row (like a
traditional RDBMS), HANA stores data column-by-column.
o Better compression (saves memory).
o Faster aggregation (e.g., SUM, AVG) since it scans only relevant
columns.
3. Data Compression – Repeated values are stored efficiently.
o Example: a Gender column (M/F) for millions of rows is stored as a
dictionary (e.g., M=1, F=2) plus a compact code per row.
4. Parallel Processing – Multiple CPUs and cores work simultaneously on
different partitions of data.
Real-Life Analogy:
Let’s say you are preparing for an exam:
Traditional Database (Disk-based):
• All books are kept in a cupboard in another room (disk).
• Every time you want to check a page, you walk to the cupboard, pick the
book, read it, then keep it back.
• Very time-consuming if you need to check many references.
SAP HANA (In-Memory):
• You keep all books open on your study table (RAM).
• You just look directly at the book and get the answer instantly.
• Much faster for continuous study and analysis.
Real-Life Business Example:
Scenario 1: Retail Company
• A supermarket chain wants to analyze today’s sales data from all stores
to adjust promotions in real-time.
• In traditional DB: Report might take hours to generate → decision
delayed.
• In SAP HANA: All transactional sales data is stored in-memory. Report
runs in seconds → managers can change discounts instantly.
Scenario 2: Airline Industry
• Airlines want to optimize ticket pricing dynamically (based on demand).
• Traditional DB: Calculation of available seats + pricing rules = slow.
• HANA: Instantly calculates and updates prices as bookings happen.
Scenario 3: Healthcare
• A hospital needs to analyze patient medical records (blood tests, MRI
scans, prescriptions) in real-time.
• With HANA, doctors can quickly see trends and anomalies instantly,
leading to faster diagnoses.
Why It Matters
• Real-time analytics → Faster decision-making.
• No need for separate OLAP systems (reporting + transactions on same
system).
• Supports Big Data, IoT, AI, and Predictive analytics.
In short:
SAP HANA’s in-memory database removes the bottleneck of slow disk access
by keeping everything in RAM, combined with columnar storage +
compression + parallelism, making it possible to analyze billions of rows in
seconds.