Advanced Spark Training
This Talk
Formalize RDD concept
Life of a Spark Application
Performance Debugging
Mechanical sympathy, per Jackie Stewart: a driver does not need to know how to build an engine, but they do need to know the fundamentals of how one works to get the best out of it.
Assumes you can write word count and know what a transformation/action is.
Reynold Xin
Apache Spark committer (worked on almost every
module: core, sql, mllib, graph)
Product & open-source eng @ Databricks
On leave from PhD @ UC Berkeley AMPLab
Example Application
val sc = new SparkContext(...)
[Figure callouts: resilient distributed datasets (RDDs), action, lineage, optimized execution]
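A minimal sketch of what such an application might look like (my reconstruction, not the original slide: the path and the "ERROR" string are placeholders chosen to match the HadoopRDD/FilteredRDD partition view below):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("log-mining"))

val file = sc.textFile("hdfs://nn:8020/logs/*")   // backed by a HadoopRDD
val errors = file.filter(_.contains("ERROR"))     // narrow transformation (the filtered RDD below)
errors.cache()                                    // shouldCache = true in the partition view
errors.count()                                    // action: evaluates the lineage and returns a count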
Example: HadoopRDD
partitions = one per HDFS block
dependencies = none
compute(part) = read corresponding block
preferredLocations(part) = HDFS block location
partitioner = none
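As a rough illustration (not on the original slide), the same properties are visible on a live RDD through the public RDD API; the path below is hypothetical:

val file = sc.textFile("hdfs://nn:8020/logs/*")        // textFile wraps a HadoopRDD
println(file.partitions.length)                        // partitions: roughly one per HDFS block
println(file.dependencies)                             // dependencies on parent RDDs
println(file.partitioner)                              // None: not hash/range partitioned
println(file.preferredLocations(file.partitions(0)))   // HDFS block locations for partition 0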
Partition-level view:
file: HadoopRDD
  path = hdfs://...
errors: FilteredRDD
  func = _.contains()
  shouldCache = true
Example: JoinedRDD
partitions = one per reduce task
dependencies = shuffle on each parent
compute(partition) = read and join shuffled data
preferredLocations(part) = none
partitioner = HashPartitioner(numTasks)
Spark will now know this data is hashed!
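A small assumed example (not from the slides) of what "Spark will now know this data is hashed" means: the joined RDD records its partitioner, so later key-based operations can skip a shuffle.

val a = sc.parallelize(Seq((1, "x"), (2, "y")))
val b = sc.parallelize(Seq((1, 10), (2, 20)))

val joined = a.join(b, 8)          // shuffles both parents into 8 hash-partitioned buckets
println(joined.partitioner)        // the joined RDD records a HashPartitioner over 8 partitions

// Because that partitioner is recorded, a later key-based operation on `joined`
// that uses the same partitioning needs no additional shuffle.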
Dependency Types
Narrow (pipeline-able): map, filter
Wide (shuffle): groupByKey on non-partitioned data
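A hedged sketch of the two dependency types; the input path is a placeholder, and toDebugString is used to show where the shuffle boundary falls:

val words = sc.textFile("hdfs://nn:8020/corpus/*")   // placeholder path
  .flatMap(_.split(" "))
  .map(w => (w, 1))            // narrow: pipelined with the reads in the same stage

val grouped = words.groupByKey()   // wide: data is not pre-partitioned, so this shuffles

println(grouped.toDebugString)     // lineage printout: the ShuffledRDD marks the stage boundary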
Recap
Each RDD consists of 5 properties:
1. partitions
2. dependencies
3. compute
4. (optional) partitioner
5. (optional) preferred locations
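To make the recap concrete, here is a rough, hypothetical custom RDD (all names invented for illustration) showing where each of the five properties lives in the RDD API:

import org.apache.spark.{Partition, Partitioner, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical ToyRangeRDD: produces the numbers 0 until n, split across numSlices
// partitions, purely to show where each of the five properties is defined.
class ToyRangePartition(override val index: Int, val start: Long, val end: Long)
  extends Partition

class ToyRangeRDD(sc: SparkContext, n: Long, numSlices: Int)
  extends RDD[Long](sc, Nil) {                       // 2. dependencies: none (like HadoopRDD)

  // 1. partitions
  override protected def getPartitions: Array[Partition] =
    (0 until numSlices).map { i =>
      new ToyRangePartition(i, i * n / numSlices, (i + 1) * n / numSlices)
    }.toArray

  // 3. compute: how to produce the records of one partition
  override def compute(split: Partition, context: TaskContext): Iterator[Long] = {
    val p = split.asInstanceOf[ToyRangePartition]
    (p.start until p.end).iterator
  }

  // 4. (optional) partitioner: this data is not keyed, so none
  override val partitioner: Option[Partitioner] = None

  // 5. (optional) preferred locations: generated data has no locality preference
  override protected def getPreferredLocations(split: Partition): Seq[String] = Nil
}

With this in place, new ToyRangeRDD(sc, 1000000L, 4).count() would run four tasks, one per partition.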
Spark Application
[Architecture diagram:
  Your program (JVM / Python): sc = new SparkContext; f = sc.textFile(); f.filter().count()
  Spark driver (app master): RDD graph, scheduler, block tracker, shuffle tracker
  Cluster manager
  Spark executors (multiple of them): task threads, block manager
  HDFS, HBase, ...]
[Diagram: RDD objects (rdd1.join(rdd2).groupBy().filter().count()) form a DAG; stages of tasks are submitted as each stage is ready; executors run the tasks in threads and store blocks in the block manager]
DAG Scheduler
Input: RDD and partitions to compute
Output: output from actions on those partitions
Roles:
> Build stages of tasks
> Submit them to the lower-level scheduler (e.g. YARN, Mesos, Standalone) as each stage is ready
> The lower-level scheduler places tasks according to data locality
> Resubmit failed stages if outputs are lost
Scheduler Optimizations
Pipelines operations within a stage
Picks join algorithms based on partitioning to minimize shuffles (see the sketch after this slide)
Reuses previously cached data
[Example DAG figure: RDDs A–G grouped into Stage 1 (groupBy), Stage 2 (map, union), and Stage 3 (join); each stage is a set of tasks]
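A sketch of two of these optimizations under assumed data (not from the slides): pre-partitioning both join inputs with the same partitioner lets the scheduler plan a shuffle-free join, and caching keeps that partitioned layout around for reuse.

import org.apache.spark.HashPartitioner

val users = sc.parallelize(Seq((1, "alice"), (2, "bob")))
  .partitionBy(new HashPartitioner(8))
  .cache()                                   // cached, already-partitioned data can be reused

val events = sc.parallelize(Seq((1, "click"), (2, "view")))
  .partitionBy(new HashPartitioner(8))

// Both inputs share HashPartitioner(8), so this join is planned without another shuffle.
val joined = users.join(events)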
Task
Unit of work executed in an executor thread
Unlike MapReduce, there is no map task vs. reduce task distinction
Each task either partitions its output for a shuffle, or sends the output back to the driver
Shuffle
Redistributes data among partitions
Partition keys into buckets (user-defined partitioner)
Optimizations:
> Avoided when possible, if the data is already properly partitioned
> Partial aggregation reduces data movement (see the sketch below)
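A hedged word-count style sketch (assumed data and path) of partial aggregation: reduceByKey combines values map-side before the shuffle, while groupByKey ships every record.

val pairs = sc.textFile("hdfs://nn:8020/corpus/*")   // placeholder path
  .flatMap(_.split(" "))
  .map(w => (w, 1))

// groupByKey ships every (word, 1) record across the network, then sums on the reduce side.
val slowCounts = pairs.groupByKey().mapValues(_.sum)

// reduceByKey sums within each map-side partition first, so only one record per key
// per partition is shuffled.
val fastCounts = pairs.reduceByKey(_ + _)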
Shuffle
Write intermediate files to disk
Fetched by the next stage of tasks (reduce in MR)
Performance Debugging
Distributed performance: the program is slow due to scheduling, coordination, or data distribution
Local performance: the program is slow because whatever I'm running is just slow on a single node
Two useful tools:
> Application web UI (default port 4040)
> Executor logs (spark/work)
Stragglers?
Some tasks are just slower than others.
Easy to identify from the summary metrics in the web UI.
Speculation is not going to help here because the problem is inherent in the algorithm/data.
Pick a different algorithm or restructure the data.
Demo Time
Garbage Collection
Look at the GC Time column in the web UI
Reduce GC impact
class DummyObject(var i: Int) {
  def toInt = i
}

sc.parallelize(1 to 100 * 1000 * 1000, 1).map { i =>
  val obj = new DummyObject(i) // new object allocated for every record
  obj.toInt
}

sc.parallelize(1 to 100 * 1000 * 1000, 1).mapPartitions { iter =>
  val obj = new DummyObject(0) // reuse the same object across the whole partition
  iter.map { i =>
    obj.i = i
    obj.toInt
  }
}
Local Performance
Each Spark executor runs a JVM/Python process
Insert your favorite JVM/Python profiling tool
> jstack
> YourKit
> VisualVM
> println
> (sorry, I don't know a whole lot about Python)
Demo Time
jstack
Debugging Tip
Local Debugging
Run in local mode (i.e. Spark master "local") and debug with your favorite debugger
> IntelliJ
> Eclipse
> println
With a sample dataset (see the sketch below)
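A minimal sketch of the local-mode setup (the file name and app name are placeholders):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("local[*]").setAppName("debug-sample")
val sc = new SparkContext(conf)

val sample = sc.textFile("data/sample-logs.txt")   // small local sample file (placeholder name)
val errors = sample.filter(_.contains("ERROR"))
println(errors.count())                            // easy to step through with a breakpoint in IntelliJ/Eclipse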
Execution process (from RDD to tasks)
Performance & debugging
Thank You!