First, extract Scala; this walkthrough uses scala-2.11.1:
[hadoop@centos software]$ tar -xzvf scala-2.11.1.tgz
[hadoop@centos software]$ su -
[root@centos ~]# vi /etc/profile
Add the following lines:
SCALA_HOME=/home/hadoop/software/scala-2.11.1
PATH=$PATH:$SCALA_HOME/bin
export SCALA_HOME PATH
[root@centos ~]# source /etc/profile
[root@centos ~]# scala -version
Scala code runner version 2.11.1 -- Copyright 2002-2013, LAMP/EPFL
Next, extract Spark; this walkthrough uses spark-1.0.0-bin-hadoop1.tgz, which is built against Hadoop 1.0.4:
[hadoop@centos software]$ tar -xzvf spark-1.0.0-bin-hadoop1.tgz
Go into Spark's conf directory:
[hadoop@centos conf]$ cp spark-env.sh.template spark-env.sh
[hadoop@centos conf]$ vi spark-env.sh
Add the following lines:
export SCALA_HOME=/home/hadoop/software/scala-2.11.1
export SPARK_MASTER_IP=centos.host1
export SPARK_WORKER_MEMORY=5G
export JAVA_HOME=/usr/software/jdk
Start the cluster:
[hadoop@centos spark-1.0.0-bin-hadoop1]$ sbin/start-master.sh
The master web UI is then available at http://centos.host1:8080/.
[hadoop@centos spark-1.0.0-bin-hadoop1]$ sbin/start-slaves.sh spark://centos.host1:7077
The worker web UI is available at http://centos.host1:8081/.
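start-slaves.sh launches a worker on every host listed in conf/slaves, one hostname per line. A minimal sketch for this single-node setup, assuming the worker runs on the same host as the master:
[hadoop@centos conf]$ cat slaves
centos.host1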
Now run a first example on Spark: a WordCount that reads its input from Hadoop.
First upload word.txt to HDFS; here the path is hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/word.txt (see the upload sketch below).
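A minimal upload sketch, assuming word.txt sits in the current directory and the target directory does not exist yet:
[hadoop@centos ~]$ hadoop fs -mkdir /user/hadoop/data/wordcount/001
[hadoop@centos ~]$ hadoop fs -put word.txt /user/hadoop/data/wordcount/001/word.txt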
Enter the interactive shell:
[hadoop@centos spark-1.0.0-bin-hadoop1]$ MASTER=spark://centos.host1:7077 ./bin/spark-shell
scala> val file = sc.textFile("hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/word.txt")
scala> val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
scala> count.collect()
The console then shows:
res0: Array[(String, Int)] = Array((hive,2), (zookeeper,1), (pig,1), (spark,1), (hadoop,4), (hbase,2))
The result can also be saved to HDFS:
scala> count.saveAsTextFile("hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/result.txt")
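Note that saveAsTextFile treats result.txt as a directory name and writes one part-NNNNN file per partition inside it. To inspect the most frequent words first, the pairs can be swapped and sorted by count; a minimal sketch:
scala> count.map(_.swap).sortByKey(false).collect()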
Next, look at how to run the Java version of WordCount.
One jar is needed on the compile classpath: spark-assembly-1.0.0-hadoop1.0.4.jar, shipped in the lib/ directory of the Spark distribution.
The WordCount code is as follows:
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

import scala.Tuple2;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

public class WordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    @SuppressWarnings("serial")
    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }

        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        JavaRDD<String> lines = ctx.textFile(args[0], 1);

        // Split each line into words.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterable<String> call(String s) {
                return Arrays.asList(SPACE.split(s));
            }
        });

        // Map each word to a (word, 1) pair.
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each word.
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });

        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + " : " + tuple._2());
        }
        ctx.stop();
    }
}
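The class can be compiled and packaged from the command line as well as from an IDE. A hypothetical sketch, assuming the assembly jar lives under the Spark lib/ directory and the source file declares the package org.project.modules.spark.java used below:
[hadoop@centos project]$ javac -classpath /home/hadoop/software/spark-1.0.0-bin-hadoop1/lib/spark-assembly-1.0.0-hadoop1.0.4.jar -d classes WordCount.java
[hadoop@centos project]$ jar cf mining.jar -C classes .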
Package the class files into a jar, here mining.jar. Then run the command below: --class specifies the main class, --master the Spark master address, followed by the jar to run and its arguments.
[hadoop@centos spark-1.0.0-bin-hadoop1]$ bin/spark-submit --class org.project.modules.spark.java.WordCount --master spark://centos.host1:7077 /home/hadoop/project/mining.jar hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/word.txt
The console then shows:
spark : 1
hive : 2
hadoop : 4
zookeeper : 1
pig : 1
hbase : 2
Finally, look at how to run the Python version of WordCount.
The WordCount code is as follows:
import sys
from operator import add

from pyspark import SparkContext

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print >> sys.stderr, "Usage: wordcount <file>"
        exit(-1)

    # Create the context; the master URL is supplied by spark-submit.
    sc = SparkContext(appName="PythonWordCount")
    lines = sc.textFile(sys.argv[1], 1)

    # Split lines into words, map to (word, 1) pairs, and sum per word.
    counts = lines.flatMap(lambda x: x.split(' ')) \
                  .map(lambda x: (x, 1)) \
                  .reduceByKey(add)

    output = counts.collect()
    for (word, count) in output:
        print "%s: %i" % (word, count)
The input path can be a local file or a file on HDFS (a local path must be readable at the same location on every worker node). The commands are as follows:
[hadoop@centos spark-1.0.0-bin-hadoop1]$ bin/spark-submit --master spark://centos.host1:7077 /home/hadoop/project/WordCount.py /home/hadoop/temp/word.txt
[hadoop@centos spark-1.0.0-bin-hadoop1]$ bin/spark-submit --master spark://centos.host1:7077 /home/hadoop/project/WordCount.py hdfs://centos.host1:9000/user/hadoop/data/wordcount/001/word.txt
The console then shows:
spark: 1
hbase: 2
hive: 2
zookeeper: 1
hadoop: 4
pig: 1
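For a quick test without the standalone cluster, the same script can also run in local mode; a minimal sketch:
[hadoop@centos spark-1.0.0-bin-hadoop1]$ bin/spark-submit --master local[2] /home/hadoop/project/WordCount.py /home/hadoop/temp/word.txt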