Hello, Ehcache: Locality of Reference
Introduction
Ehcache is a cache library introduced in October 2003 with the key goal of improving performance by reducing the load on underlying resources. Ehcache is not just for general-purpose caching, however, but also for caching Hibernate (second-level cache), data access objects, security credentials, and web pages. It can also be used for SOAP and RESTful server caching, application persistence, and distributed caching.
Definitions
cache: Wiktionary defines a cache as "a store of things that will be required in future, and can be retrieved rapidly." That is the nub of it. In computer science terms, a cache is a collection of temporary data which either duplicates data located elsewhere or is the result of a computation. Once in the cache, the data can be repeatedly accessed inexpensively.

cache-hit: When a data element is requested of the cache and the element exists for the given key, it is referred to as a cache hit (or simply 'hit').
cache-miss: When a data element is requested of the cache and the element does not exist for the given key, it is referred to as a cache miss (or simply 'miss').

system-of-record: The core premise of caching assumes that there is a source of truth for the data, often referred to as a system-of-record (SOR). The cache acts as a local copy of data retrieved from or stored to the system-of-record. The SOR is often a traditional database, although it may be a specialized file system or some other reliable long-term storage. For the purposes of using Ehcache, the SOR is assumed to be a database.

SOR: See system-of-record.
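To make the hit, miss, and system-of-record definitions concrete, here is a minimal read-through cache in plain Java. This is a sketch only, not the Ehcache API; the class and key names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only (not the Ehcache API): a read-through cache in front of
// a map standing in for the system-of-record (SOR).
public class ReadThroughCache {
    final Map<String, String> sor = new HashMap<>();   // system-of-record
    final Map<String, String> cache = new HashMap<>(); // local copy of SOR data
    int hits, misses;

    ReadThroughCache() {
        sor.put("currency:AU", "AUD"); // data living in the SOR
    }

    String get(String key) {
        String value = cache.get(key);
        if (value != null) {
            hits++;                    // cache hit: served without touching the SOR
            return value;
        }
        misses++;                      // cache miss: fetch from the SOR, keep a copy
        value = sor.get(key);
        if (value != null) cache.put(key, value);
        return value;
    }

    public static void main(String[] args) {
        ReadThroughCache c = new ReadThroughCache();
        c.get("currency:AU");          // first access: miss, loads from the SOR
        c.get("currency:AU");          // second access: hit, repeat access is cheap
        System.out.println("hits=" + c.hits + ", misses=" + c.misses); // hits=1, misses=1
    }
}
```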
The Long Tail

Chris Anderson, of Wired Magazine, coined the term "The Long Tail" to refer to e-commerce systems: a small number of items make up the bulk of sales, a small number of blogs get the most hits, and so on. While there is a short list of popular items, there is a long tail of less popular ones.

The Long Tail is itself a vernacular term for a Power Law probability distribution. Power Law distributions do not appear only in e-commerce; they occur throughout nature. One form of a Power Law distribution is the Pareto distribution, commonly known as the 80:20 rule. This phenomenon is useful for caching: if 20% of objects are used 80% of the time, and a way can be found to reduce the cost of obtaining that 20%, then system performance will improve.
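A rough simulation shows why the 80:20 rule favors caching. This is a sketch with illustrative numbers, not Ehcache code: 1,000 distinct keys, a hot 20% receiving 80% of requests, and a cache just large enough to hold that hot set.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: a Pareto (80:20) access pattern against a cache that holds
// only the hot 20% of keys. All numbers are illustrative.
public class ParetoCache {
    static double hitRatio(int totalKeys, int hotKeys, int requests) {
        Set<Integer> cache = new HashSet<>();
        for (int k = 0; k < hotKeys; k++) cache.add(k); // warm cache with the hot set
        int hits = 0;
        for (int i = 0; i < requests; i++) {
            // 8 of every 10 requests target a hot key, 2 target a cold key
            int key = (i % 10 < 8) ? i % hotKeys
                                   : hotKeys + i % (totalKeys - hotKeys);
            if (cache.contains(key)) hits++;
        }
        return (double) hits / requests;
    }

    public static void main(String[] args) {
        // Caching 20% of the keys captures 80% of the requests.
        System.out.println("hit ratio = " + hitRatio(1000, 200, 10_000)); // 0.8
    }
}
```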
In this case, caching may be able to reduce the workload required. If caching can turn 90 of every 100 requests into cache hits that never even get to the database, then the database can scale 10 times higher than otherwise.
Amdahl's law gives the overall system speedup when a proportion of the system is sped up:

1 / ((1 - Proportion Sped Up) + Proportion Sped Up / Speed up)

The following examples show how to apply it to common situations. In the interests of simplicity, we assume:

a single server

a system with a single thing in it which, when cached, gets 100% cache hits and lives forever
Let's say we have a 1000-fold improvement on a page fragment that takes 40% of the page render time. The expected system speedup is thus:

1 / ((1 - .4) + .4 / 1000)
= 1 / (.6 + .0004)
= 1.67 times system speedup
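Amdahl's law applied to this example can be sketched as a small helper method. The class and method names are ours, not part of Ehcache.

```java
// Sketch: Amdahl's law as a helper method, applied to the page-fragment
// example above (a 1000-fold speedup on 40% of page render time).
public class Amdahl {
    // Overall system speedup when proportion p of the work is sped up s times.
    static double speedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    public static void main(String[] args) {
        System.out.printf("system speedup = %.2f%n", speedup(0.4, 1000)); // 1.67
    }
}
```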
Cache Efficiency
In real life, cache entries do not live forever. Some examples that come close are "static" web pages or fragments of them, like page footers, and, in the database realm, reference data, such as the currencies of the world.

Factors which affect the efficiency of a cache are:

liveness: how live the data needs to be. The less live the data, the more it can be cached.

proportion of data cached: what proportion of the data can fit into the resource limits of the machine. For 32-bit Java systems, there was a hard limit of 2GB of address space. While that limit has now been relaxed, garbage collection issues make it hard to go much larger. Various eviction algorithms are used to evict excess entries.

shape of the usage distribution: if only 300 out of 3,000 entries can be cached, but the Pareto distribution applies, it may be that 80% of the time those 300 will be the ones requested. This drives up the average request lifespan.

read/write ratio: the proportion of times data is read compared with how often it is written. Things such as the number of rooms left in a hotel will be written to quite a lot. However, the details of a room sold are immutable once created, so they have a maximum of one write with a potentially large number of reads.

Ehcache keeps these statistics for each Cache and each Element, so they can be measured directly rather than estimated.
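The eviction point above can be illustrated with a minimal LRU cache. This is a sketch built on java.util.LinkedHashMap, not Ehcache's own eviction implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of one common eviction algorithm (LRU): when the cache exceeds
// its capacity, the least-recently-used entry is evicted.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // access-order mode, so gets refresh recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict excess entries
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // touch "a" so "b" is now least recently used
        cache.put("c", "3");     // over capacity: evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```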
Cluster Efficiency
Also in real life, we generally do not find a single server. Assume a round-robin load balancer where each hit goes to the next server. The cache has one entry with a variable lifespan measured in requests, say caused by a time-to-live setting. The following table shows how that lifespan can affect hits and misses.
Server 1 M H H H H ...
Server 2 M H H H H ...
Server 3 M H H H H ...
Server 4 M H H H H ...
The cache hit ratios for the system as a whole are as follows:
Entry Lifespan in Hits   Hit Ratio 1 Server   Hit Ratio 2 Servers   Hit Ratio 3 Servers
 2                       1/2                  0/2                   0/2
 4                       3/4                  2/4                   1/4
10                       9/10                 8/10                  7/10
20                       19/20                18/20                 17/20
50                       49/50                48/50                 47/50
Where the lifespan is large relative to the number of standalone caches, cache efficiency is not much affected. However, when the lifespan is short, cache efficiency is dramatically affected. (To solve this problem, Ehcache supports distributed caching, where an entry put into a local cache is also propagated to other servers in the cluster.)
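The table above can be reproduced with a short simulation, under the same assumptions: a round-robin load balancer, a single entry, and standalone caches that must each miss once to load their own copy. The class and method names are ours.

```java
// Sketch: hit ratio for one entry with a lifespan of L requests, spread
// round-robin over N standalone (non-distributed) caches. Each server
// must miss once to load its own copy of the entry.
public class ClusterHitRatio {
    static int hits(int lifespan, int servers) {
        boolean[] cached = new boolean[servers];
        int hits = 0;
        for (int request = 0; request < lifespan; request++) {
            int server = request % servers;    // round-robin load balancer
            if (cached[server]) hits++;        // this server already holds the entry
            else cached[server] = true;        // miss: load from the SOR
        }
        return hits;
    }

    public static void main(String[] args) {
        for (int lifespan : new int[]{2, 4, 10, 20, 50}) {
            System.out.printf("lifespan %2d: 1 server %2d/%d, 2 servers %2d/%d, 3 servers %2d/%d%n",
                    lifespan, hits(lifespan, 1), lifespan,
                    hits(lifespan, 2), lifespan,
                    hits(lifespan, 3), lifespan);
        }
    }
}
```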
A Cache Version of Amdahl's Law

Applying the cache efficiency and cluster efficiency above to Amdahl's law gives:

1 / ((1 - Proportion Sped Up * effective cache efficiency) + (Proportion Sped Up * effective cache efficiency) / Speed up)

where:

effective cache efficiency = (cache efficiency) * (cluster efficiency)
cache efficiency = .35
cluster efficiency = (10 - 1) / 10 = .9
effective cache efficiency = .35 * .9 = .315

1 / ((1 - 1 * .315) + 1 * .315 / 1000)
= 1 / (.685 + .000315)
= 1.46 times system speedup
What if, instead, the cache efficiency is 70% (a doubling of efficiency)? We keep to two servers.
cache efficiency = .70
cluster efficiency = (10 - 1) / 10 = .9
effective cache efficiency = .70 * .9 = .63

1 / ((1 - 1 * .63) + 1 * .63 / 1000)
= 1 / (.37 + .00063)
= 2.70 times system speedup
What if, instead, the cache efficiency is 90%? We keep to two servers.
cache efficiency = .90
cluster efficiency = (10 - 1) / 10 = .9
effective cache efficiency = .9 * .9 = .81

1 / ((1 - 1 * .81) + 1 * .81 / 1000)
= 1 / (.19 + .00081)
= 5.24 times system speedup
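The worked examples above vary only in cache efficiency. A small helper (a sketch using the formula above, with proportion sped up = 1 and a 1000-fold speedup of the cached path; the class and method names are ours) shows how the overall speedup responds:

```java
// Sketch: the cache version of Amdahl's law, as used in the worked
// examples above (proportion sped up = 1, cached path sped up 1000x).
public class CacheAmdahl {
    static double systemSpeedup(double cacheEff, double clusterEff,
                                double proportion, double speedup) {
        double eff = cacheEff * clusterEff;  // effective cache efficiency
        return 1.0 / ((1.0 - proportion * eff) + proportion * eff / speedup);
    }

    public static void main(String[] args) {
        for (double cacheEff : new double[]{0.35, 0.70, 0.90}) {
            System.out.printf("cache efficiency %.2f -> %.2f times system speedup%n",
                    cacheEff, systemSpeedup(cacheEff, 0.9, 1.0, 1000));
        }
    }
}
```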
Why is the overall speedup so far below the 1000-fold improvement applied to the cached path? Because Amdahl's law is most sensitive to the proportion of the system that is sped up.
Terracotta, Inc., a wholly-owned subsidiary of Software AG USA, Inc. All rights reserved.