Environment
Hadoop 2.2.0 + Scala 2.10.3 + Spark 0.9 + IntelliJ IDEA 13
Single-machine pseudo-distributed YARN
Using the IDEA SBT plugin: create an SBT project, then in Settings enable SBT auto-import and automatic creation of the directory structure; once that is done, refresh the project.
build.sbt
name := "WordCount"

version := "1.0"

scalaVersion := "2.10.3"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"

libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.0-incubating"

libraryDependencies += "org.apache.spark" %% "spark-bagel" % "0.9.0-incubating"

libraryDependencies += "org.apache.spark" %% "spark-mllib" % "0.9.0-incubating"

libraryDependencies += "org.apache.spark" %% "spark-graphx" % "0.9.0-incubating"

libraryDependencies += "org.apache.spark" %% "spark-streaming" % "0.9.0-incubating"
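The individual `libraryDependencies` lines can equivalently be combined into a single `Seq`, which is the more common sbt idiom (a sketch, assuming the same versions as above):

```scala
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"      % "0.9.0-incubating",
  "org.apache.spark" %% "spark-bagel"     % "0.9.0-incubating",
  "org.apache.spark" %% "spark-mllib"     % "0.9.0-incubating",
  "org.apache.spark" %% "spark-graphx"    % "0.9.0-incubating",
  "org.apache.spark" %% "spark-streaming" % "0.9.0-incubating"
)
```

The `%%` operator appends the project's Scala binary version to the artifact name, so `"spark-core" %% ...` resolves to `spark-core_2.10` here, the same artifact as writing the suffix by hand.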
Code

package myclass

import org.apache.spark._
import org.apache.spark.SparkContext._
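The WordCount pipeline this project builds (flatMap, map, reduceByKey on an RDD) behaves like ordinary Scala collection operations. A minimal sketch of the same counting logic on plain collections, with no Spark dependency (the object and method names here are illustrative, not from the original source):

```scala
// Word counting with the same shape as the Spark RDD pipeline:
// flatMap -> map to (word, 1) pairs -> reduce counts per key.
object WordCountLogic {
  def countWords(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))     // split each line into words
      .filter(_.nonEmpty)           // drop empty tokens
      .map(word => (word, 1))       // pair each word with a count of 1
      .groupBy(_._1)                // group pairs by word (reduceByKey analogue)
      .mapValues(_.map(_._2).sum)   // sum the counts for each word
      .toMap

  def main(args: Array[String]): Unit = {
    val counts = countWords(Seq("hello spark", "hello yarn"))
    println(counts) // hello -> 2, spark -> 1, yarn -> 1
  }
}
```

In the Spark version, the same chain runs on `sc.textFile(...)` instead of a `Seq`, with `reduceByKey(_ + _)` replacing the `groupBy`/`mapValues` pair; `reduceByKey` becomes available on the pair RDD through the implicits imported from `SparkContext._`.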