Setting Up a Spark Debugging Environment on Windows
This tutorial walks through the steps for setting up a Spark debugging environment on Windows.
Main reference articles:
http://wenku.baidu.com/link?url=ZVIXNbwGZE4Z41zvG6UBO911urnYDRzNJgc6LfcMyh-u896L92lAV1qitmeTsdMREb2hJAcfGjOd3ZI67X9CjkDS7CjchyhGXMuxtmhe2yC
http://www.jdon.com/bigdata/sparkinstall.html
I. Components
First, the components needed for this environment:
· JDK: install JDK 6 or JDK 7 (required)
· IDEA: available in two editions, Ultimate Edition & Community Edition; the latter is free and fully sufficient for a learner's needs
· Scala: Spark is written in Scala, so this is needed to compile and run it locally (2.10.4)
· SBT: the build tool for Scala projects (0.13.8)
· Git: a tool IDEA may need when it automatically downloads the SBT plugin (1.8.4)
II. Installation Steps
1. Install the JDK
First install the JDK and configure the environment variables; there are plenty of tutorials online to refer to. I used JDK 1.6.0.
2. Install Scala (I used version 2.10.4)
Windows users are advised to download scala-2.10.4.msi and install it directly.
When the installation finishes, type scala at the Windows command prompt to check that the command is recognized.
If it is not, check whether the Path environment variable contains ...\scala\bin (right-click My Computer, Properties -> Advanced system settings -> Environment Variables); if it is missing, manually add the path of the bin directory under the Scala folder to Path.
Scala 2.10.4 has been verified to work.
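As a quick sanity check (my own addition, not part of the original steps), the installed version can be confirmed from the Scala REPL:
scala> util.Properties.versionString   // reports the Scala version the REPL is running on
res0: String = version 2.10.4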
3. Install SBT
Run the SBT installer. When it finishes, open a new Windows command prompt and type sbt to check that the command is recognized. If it is not, manually configure the environment variable by adding ...\sbt\bin to Path.
Running the installer does not by itself complete the SBT setup: the first time you type sbt at the Windows command prompt, SBT downloads the packages it needs, so please wait patiently until everything has been downloaded.
At this point you may hit a "PKIX path building failed" error.
For a concrete solution, see http://lyh7609.iteye.com/blog/509064: run a small Java program with the main site's address written into it (to import its certificate).
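If you want to confirm that sbt itself works before touching Spark, a minimal throwaway project is enough. This is my own sketch and all the names are arbitrary: put the two files below in an empty directory, then run sbt run there; it should compile the project and print the message.
// build.sbt -- minimal build definition, only used to verify the sbt installation
name := "SbtCheck"

version := "0.1"

scalaVersion := "2.10.4"

// src/main/scala/Hello.scala
object Hello extends App {
  println("sbt works")   // if this prints, sbt can compile and run Scala code
}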
4. Install Git
Run the Git installer. When it finishes, open a new Windows command prompt and check that the git command is recognized. If it is not, configure the environment variable manually.
For detailed steps, see the reference articles listed at the top.
III. Setting Up the Spark Development and Debugging Environment
Once the steps above are complete, refer to
http://www.jdon.com/bigdata/sparkinstall.html
First download Spark; I used spark-0.8.1-incubating.
Follow the steps on that page: unpack the archive and build it. When the build finishes, the output looks like this:
D:\study\Spark\spark-0.8.1-incubating>sbt\sbt.cmd assembly
(output omitted)
[info] Done packaging.
[info] Packaging D:\study\Spark\spark-0.8.1-incubating\examples\target\scala-2.9.3\spark-examples-assembly-0.8.0-incubating.jar ...
[info] Done packaging.
[success] Total time: 1265 s, completed 2013/11/04 21:36:04
The step above takes quite a long time, so please be patient.
Running the Spark Shell
Start the Spark shell as follows:
D:\study\Spark\spark-0.8.1-incubating>spark-shell.cmd
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 0.8.0
      /_/
(output omitted)
13/11/04 21:45:18 INFO ui.SparkUI: Started Spark Web UI at http://haumea:4040
Spark context available as sc.
Type in expressions to have them evaluated.
Type :help for more information.
scala>
Your output may look slightly different; I got a few log4j-related warnings that I have not found a fix for.
Commands to try in the shell:
scala> val textFile = sc.textFile("README.md")
textFile: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12
scala> textFile.count()
res1: Long = 111
scala> textFile.first()
res2: String = # Apache Spark
scala> textFile.foreach(println(_))
Next, run the following commands:
scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark: org.apache.spark.rdd.RDD[String] = FilteredRDD[2] at filter at <console>:14
scala> linesWithSpark.foreach(println(_))
# Apache Spark
You can find the latest Spark documentation, including a programming
Spark requires Scala 2.9.3 (Scala 2.10 is not yet supported). The project is
Spark and its example programs, run:
Once you've built Spark, the easiest way to start using it is the shell:
Spark also comes with several sample programs in the `examples` directory.
./run-example org.apache.spark.examples.SparkLR local[2]
All of the Spark samples take a `<master>` parameter that is the cluster URL
Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported
Hadoop, you must build Spark against the same version that your cluster runs.
when building Spark.
When developing a Spark application, specify the Hadoop version by adding the
in the online documentation for an overview on how to configure Spark.
Apache Spark is an effort undergoing incubation at The Apache Software
## Contributing to Spark
scala> exit
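As a slightly richer exercise (my own addition, not from the original article), the classic word count can also be tried in the same shell before exiting, reusing the textFile RDD defined above:
scala> val counts = textFile.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)   // split into words, pair each with 1, sum per word
scala> counts.take(5).foreach(println)   // print a few (word, count) pairs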
Next, create a TextCount directory and the subdirectories under it. Create the following tree directly under D:\study\Spark\spark-0.8.1-incubating:
TextCount/src/main/scala/TextCountApp.scala
TextCount/count.sbt
The content of TextCount/src/main/scala/TextCountApp.scala is as follows:
/*** TextCountApp.scala ***/
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object TextCountApp {
  def main(args: Array[String]) {
    // The file to analyse: the README shipped with the Spark distribution
    val logFile = "D:/study/Spark/spark-0.8.1-incubating/README.md"
    // Arguments: master URL, application name, Spark home, and the jars the job depends on
    val sc = new SparkContext("local", "TextCountApp", "D:/study/Spark/spark-0.8.1-incubating",
      List("target/scala-2.9.3/count-project_2.9.3-1.0.jar"))
    val logData = sc.textFile(logFile, 2).cache()
    // Count the lines containing "a", "b" and "Spark" respectively
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    val numSparks = logData.filter(line => line.contains("Spark")).count()
    println("Lines with a: %s, Lines with b: %s, Lines with Spark: %s".format(numAs, numBs, numSparks))
  }
}
The four parameters to SparkContext mean the following:
First parameter: the master URL ("local" means run on the local machine only)
Second parameter: the application name
Third parameter: the Spark installation directory
Fourth parameter: the libraries (jars) the application depends on
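Spelled out with named values (my own illustration, reusing the paths from this tutorial), the call above is equivalent to:
// Each constructor argument written out explicitly
val master    = "local"                                                  // master URL: run locally on a single machine
val appName   = "TextCountApp"                                           // application name shown in logs and the web UI
val sparkHome = "D:/study/Spark/spark-0.8.1-incubating"                  // Spark installation directory
val jars      = List("target/scala-2.9.3/count-project_2.9.3-1.0.jar")   // jar(s) containing the application code
val sc = new SparkContext(master, appName, sparkHome, jars)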
Then create TextCount/count.sbt with the following content:
name := "Count Project"
version := "1.0"
scalaVersion := "2.9.3"
libraryDependencies += "org.apache.spark" %% "spark-core" % "0.8.1-incubating"
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
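One detail worth knowing (my own note, not from the original article): the %% operator makes sbt append the Scala binary version to the artifact name, so with scalaVersion 2.9.3 the dependency above resolves to the artifact spark-core_2.9.3. Written with a plain % it would read:
libraryDependencies += "org.apache.spark" % "spark-core_2.9.3" % "0.8.1-incubating"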
Run the following command to compile the project and produce the JAR file:
D:\study\Spark\spark-0.8.1-incubating\TextCount>..\sbt\sbt.cmd package
(output omitted)
[info] Packaging D:\study\Spark\spark-0.8.1-incubating\TextCount\target\scala-2.9.3\count-project_2.9.3-1.0.jar ...
[info] Done packaging.
[success] Total time: 7 s, completed 2013/11/04 22:29:24
The package step may fail here with errors that the org.apache.spark package cannot be found: nothing named org.apache.spark exists under D:\study\Spark\spark-0.8.0-incubating, and the two import lines
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
are flagged as errors. What is missing is spark-core_2.9.3, i.e. the dependency declared by libraryDependencies += "org.apache.spark" %% "spark-core" % "0.8.1-incubating" cannot be resolved. Search for spark-core_2.9.3, put the jar into the required directory, and then run ..\sbt\sbt.cmd package again.
Once the package step succeeds, try running the application; the result looks like this:
D:\study\Spark\spark-0.8.1-incubating\TextCount>..\sbt\sbt.cmd run
[info] Set current project to Count Project (in build file:/D:/study/Spark/spark-0.8.1-incubating/TextCount/)
[info] Running TextCountApp
13/11/04 22:33:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/11/04 22:33:27 WARN snappy.LoadSnappy: Snappy native library not loaded
13/11/04 22:33:27 INFO mapred.FileInputFormat: Total input paths to process : 1
Lines with a: 66, Lines with b: 35, Lines with Spark: 15    // execution result
[success] Total time: 6 s, completed 2013/11/04 22:33:28
And with that, we can now run Spark locally on a single machine.