Computing Pi with spark-submit

This post walks through two common ways to compute Pi with Apache Spark: submitting the bundled SparkPi example with spark-submit, and running it through the run-example script that ships with Spark. Both approaches use Spark's parallel computation, estimating Pi by sampling a large number of random points. The exact steps and output of each method are shown below, along with how the command-line argument affects the accuracy of the estimate.


I. Ways to submit a Spark job

 

1. Method one: spark-submit, using the Spark demo bundled with the distribution

Estimating Pi with the Monte Carlo method
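The Monte Carlo idea behind SparkPi can be sketched in plain, single-process Python (a minimal illustration, not the actual SparkPi source, which spreads the sampling across Spark partitions): sample random points in the unit square and count the fraction that land inside the quarter circle.

```python
import random

def estimate_pi(num_samples, seed=42):
    """Estimate Pi by uniform sampling in the unit square: the fraction
    of points inside the quarter circle of radius 1 approaches pi/4."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(1_000_000))
```

SparkPi does essentially the same thing, except the points are split across partitions and the per-partition hit counts are combined with a reduce (the "reduce at SparkPi.scala:38" visible in the logs below).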

[root@bigdata111 spark-2.1.0-bin-hadoop2.7]# ./bin/spark-submit --master spark://bigdata111:7077 --class org.apache.spark.examples.SparkPi /opt/module/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar 1000

Result (the trailing argument 1000 is the number of slices, i.e. partitions; SparkPi launches one task per slice, hence the 1,000 tasks in the log):

20/01/21 11:16:26 INFO TaskSetManager: Starting task 997.0 in stage 0.0 (TID 997, 192.168.1.122, executor 1, partition 997, PROCESS_LOCAL, 6029 bytes)
20/01/21 11:16:26 INFO TaskSetManager: Finished task 995.0 in stage 0.0 (TID 995) in 11 ms on 192.168.1.122 (executor 1) (996/1000)
20/01/21 11:16:26 INFO TaskSetManager: Starting task 998.0 in stage 0.0 (TID 998, 192.168.1.123, executor 0, partition 998, PROCESS_LOCAL, 6029 bytes)
20/01/21 11:16:26 INFO TaskSetManager: Starting task 999.0 in stage 0.0 (TID 999, 192.168.1.122, executor 1, partition 999, PROCESS_LOCAL, 6029 bytes)
20/01/21 11:16:26 INFO TaskSetManager: Finished task 996.0 in stage 0.0 (TID 996) in 21 ms on 192.168.1.123 (executor 0) (997/1000)
20/01/21 11:16:26 INFO TaskSetManager: Finished task 997.0 in stage 0.0 (TID 997) in 16 ms on 192.168.1.122 (executor 1) (998/1000)
20/01/21 11:16:26 INFO TaskSetManager: Finished task 999.0 in stage 0.0 (TID 999) in 14 ms on 192.168.1.122 (executor 1) (999/1000)
20/01/21 11:16:26 INFO TaskSetManager: Finished task 998.0 in stage 0.0 (TID 998) in 20 ms on 192.168.1.123 (executor 0) (1000/1000)
20/01/21 11:16:26 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 15.844 s
20/01/21 11:16:26 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
20/01/21 11:16:26 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 16.392315 s
Pi is roughly 3.14141099141411
20/01/21 11:16:26 INFO SparkUI: Stopped Spark web UI at http://192.168.1.121:4040
20/01/21 11:16:26 INFO StandaloneSchedulerBackend: Shutting down all executors
20/01/21 11:16:26 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
20/01/21 11:16:26 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/01/21 11:16:26 INFO MemoryStore: MemoryStore cleared
20/01/21 11:16:26 INFO BlockManager: BlockManager stopped
20/01/21 11:16:26 INFO BlockManagerMaster: BlockManagerMaster stopped
20/01/21 11:16:26 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/01/21 11:16:26 INFO SparkContext: Successfully stopped SparkContext
20/01/21 11:16:26 INFO ShutdownHookManager: Shutdown hook called
20/01/21 11:16:26 INFO ShutdownHookManager: Deleting directory /tmp/spark-5d93ad3b-3ffd-416d-acfa-fe3a933e5b13

 

2. Method two: the bundled run-example script

run-example is a convenience wrapper around spark-submit that fills in the example class for you; with no master configured it runs in local mode (note that the tasks below run on localhost with "executor driver"):

[root@bigdata111 spark-2.1.0-bin-hadoop2.7]# ./bin/run-example SparkPi 10

Result:

20/01/21 11:19:05 INFO Executor: Running task 8.0 in stage 0.0 (TID 8)
20/01/21 11:19:05 INFO Executor: Finished task 9.0 in stage 0.0 (TID 9). 1128 bytes result sent to driver
20/01/21 11:19:06 INFO Executor: Finished task 8.0 in stage 0.0 (TID 8). 1114 bytes result sent to driver
20/01/21 11:19:06 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 184 ms on localhost (executor driver) (9/10)
20/01/21 11:19:06 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 192 ms on localhost (executor driver) (10/10)
20/01/21 11:19:06 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 1.025 s
20/01/21 11:19:06 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
20/01/21 11:19:06 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 1.577132 s
Pi is roughly 3.1381671381671383
20/01/21 11:19:06 INFO SparkUI: Stopped Spark web UI at http://192.168.1.121:4040
20/01/21 11:19:06 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/01/21 11:19:06 INFO MemoryStore: MemoryStore cleared
20/01/21 11:19:06 INFO BlockManager: BlockManager stopped
20/01/21 11:19:06 INFO BlockManagerMaster: BlockManagerMaster stopped
20/01/21 11:19:06 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/01/21 11:19:06 INFO SparkContext: Successfully stopped SparkContext
20/01/21 11:19:06 INFO ShutdownHookManager: Shutdown hook called
20/01/21 11:19:06 INFO ShutdownHookManager: Deleting directory /tmp/spark-8b32ae91-8c93-4560-acf5-cd29991425b1

 

II. Comparison

Monte Carlo estimation is probabilistic: each run may yield a slightly different result, but every result stays within a narrow range around Pi.

The larger the argument (the number of slices, and hence the number of sampled points), the more accurate the estimate tends to be.
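The error scaling can be made concrete: each sampled point is a Bernoulli trial with success probability pi/4, so the standard error of the estimate shrinks like 1/sqrt(n). A small illustrative calculation (the figure of 100,000 points per slice matches SparkPi in Spark 2.1.0, but check your version's source before relying on it):

```python
import math

def pi_standard_error(num_samples):
    """Standard error of the Monte Carlo estimate 4 * mean(hits),
    where each hit is a Bernoulli trial with p = pi/4."""
    p = math.pi / 4
    return 4.0 * math.sqrt(p * (1.0 - p) / num_samples)

# SparkPi draws roughly 100,000 points per slice, so the command-line
# argument controls the total sample size:
for slices in (10, 1000):
    n = 100_000 * slices
    print(f"{slices:>4} slices -> {n:>11} points, std. error ~ {pi_standard_error(n):.6f}")
```

With 1,000 slices the standard error is about ten times smaller than with 10 slices, which is consistent with the runs above: 3.14141 (1000 slices) lands closer to Pi than 3.13817 (10 slices).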

 

 

                                                                                                                                   —— Stay hungry, keep learning

                                                                                                                                                    Jackson_MVP

 

 

 

 
