
Writing Spark WordCount in Scala

Create a Maven project

Add the dependencies and plugins to pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.doit</groupId>
    <artifactId>spark</artifactId>
    <version>1.0-SNAPSHOT</version>
    <!-- Define some constants -->
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <scala.version>2.12.12</scala.version>
        <spark.version>3.0.1</spark.version>
        <hbase.version>2.2.5</hbase.version>
        <hadoop.version>3.1.1</hadoop.version>
        <encoding>UTF-8</encoding>
    </properties>

    <dependencies>
        <!-- Scala dependency -->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
            <!-- Included at compile time, but not in the packaged jar -->
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>${spark.version}</version>
            <!-- Included at compile time, but not in the packaged jar -->
            <scope>provided</scope>

            <!-- If jar conflicts occur, exclusions can exclude one or more dependencies to avoid the conflict -->
            <!--
            <exclusions>
                <exclusion>
                    <groupId>org.apache.hadoop</groupId>
                    <artifactId>hadoop-client</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.google.guava</groupId>
                    <artifactId>guava</artifactId>
                </exclusion>
            </exclusions>
            -->


        </dependency>
        
        <!-- Hadoop dependency -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

    </dependencies>

    <build>
        <pluginManagement>
            <plugins>
                <!-- Plugin for compiling Scala -->
                <plugin>
                    <groupId>net.alchim31.maven</groupId>
                    <artifactId>scala-maven-plugin</artifactId>
                    <version>3.2.2</version>
                </plugin>
                <!-- Plugin for compiling Java -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.5.1</version>
                </plugin>
            </plugins>
        </pluginManagement>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>add-source</goal>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>scala-test-compile</id>
                        <phase>process-test-resources</phase>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <!-- Plugin for building the jar -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>


</project>

Write the Spark program

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
  * 1. Create a SparkContext
  * 2. Create an RDD
  * 3. Call the RDD's Transformation(s)
  * 4. Call an Action
  * 5. Release the resources
  */
object WordCount {

  def main(args: Array[String]): Unit = {

    val conf: SparkConf = new SparkConf().setAppName("WordCount")
    // Create the SparkContext; it is used to create RDDs
    val sc: SparkContext = new SparkContext(conf)
    // Writing a Spark program means programming against the abstract, distributed collection (the RDD) by calling its highly encapsulated API
    // Use the SparkContext to create an RDD
    val lines: RDD[String] = sc.textFile(args(0))

    // Transformations begin //
    // Split and flatten
    val words: RDD[String] = lines.flatMap(_.split(" "))
    // Put each word and the number 1 together in a tuple
    val wordAndOne: RDD[(String, Int)] = words.map((_, 1))
    // Group and aggregate; reduceByKey aggregates locally first, then globally
    val reduced: RDD[(String, Int)] = wordAndOne.reduceByKey(_+_)
    // Sort by count in descending order
    val sorted: RDD[(String, Int)] = reduced.sortBy(_._2, false)
    // Transformations end //

    // Call an Action to save the result to HDFS
    sorted.saveAsTextFile(args(1))
    // Release the resources
    sc.stop()
  }
}
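
As the comments above note, reduceByKey pre-aggregates locally (per partition) before the global shuffle. A quick, hypothetical spark-shell check of its behavior (the data is illustrative only):

// Build a small pair RDD and combine the values per key
val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1)))
pairs.reduceByKey(_ + _).collect() // e.g. Array((a,2), (b,1)); element order is not guaranteed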

Package the project with Maven (e.g. run mvn package)



Submit the job

Upload the jar to the server, then submit the job with the spark-submit command:

/bin/spark-submit --master spark://linux01:7077 --executor-memory 1g --total-executor-cores 4 \
--class cn._51doit.spark.day01.WordCount /root/spark15-1.0-SNAPSHOT.jar hdfs://node-1.51doit.com:8020/words.txt hdfs://linux01:8020/out

Parameter notes:
--master specifies the master's address and port; the protocol is spark:// and the port is the RPC communication port
--executor-memory specifies how much memory each executor uses
--total-executor-cores specifies the total number of cores the whole application uses
--class specifies the fully qualified name of the class containing the program's main method
These are followed by the jar path and the program arguments args(0) and args(1)
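
The two trailing HDFS paths become the program arguments; in the WordCount code above they are consumed as follows (a restatement of lines already shown, annotated with the example paths from the command):

val lines: RDD[String] = sc.textFile(args(0)) // hdfs://node-1.51doit.com:8020/words.txt
sorted.saveAsTextFile(args(1))                // hdfs://linux01:8020/out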

Writing Spark WordCount in Java

Using anonymous inner classes

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.Arrays;
import java.util.Iterator;

public class JavaWordCount {

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
        // Create the JavaSparkContext
        JavaSparkContext jsc = new JavaSparkContext(sparkConf);
        // Use the JavaSparkContext to create an RDD
        JavaRDD<String> lines = jsc.textFile(args[0]);
        // Call Transformation(s)
        // Split and flatten
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String line) throws Exception {
                return Arrays.asList(line.split(" ")).iterator();
            }
        });
        // Pair each word with the number 1
        JavaPairRDD<String, Integer> wordAndOne = words.mapToPair(
                new PairFunction<String, String, Integer>() {
                    @Override
                    public Tuple2<String, Integer> call(String word) throws Exception {
                        return Tuple2.apply(word, 1);
                    }
        });
        // Group and aggregate
        JavaPairRDD<String, Integer> reduced = wordAndOne.reduceByKey(
                new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer v1, Integer v2) throws Exception {
                return v1 + v2;
            }
        });
        // To sort, first swap the KV order to VK
        JavaPairRDD<Integer, String> swapped = reduced.mapToPair(
                new PairFunction<Tuple2<String, Integer>, Integer, String>() {
            @Override
            public Tuple2<Integer, String> call(Tuple2<String, Integer> tp) throws Exception {
                return tp.swap();
            }
        });
        // Then sort by key
        JavaPairRDD<Integer, String> sorted = swapped.sortByKey(false);
        // Swap the order back
        JavaPairRDD<String, Integer> result = sorted.mapToPair(
                new PairFunction<Tuple2<Integer, String>, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(Tuple2<Integer, String> tp) throws Exception {
                return tp.swap();
            }
        });
        // Trigger an Action to save the data to HDFS
        result.saveAsTextFile(args[1]);
        // Release the resources
        jsc.stop();
    }
}

Using lambda expressions

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class JavaLambdaWordCount {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("JavaLambdaWordCount");
        // Create the JavaSparkContext
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // Create an RDD
        JavaRDD<String> lines = jsc.textFile(args[0]);
        // Split and flatten
        JavaRDD<String> words = lines.flatMap(line ->
                Arrays.asList(line.split(" ")).iterator());
        // Pair each word with the number 1
        JavaPairRDD<String, Integer> wordAndOne = words.mapToPair(
                word -> Tuple2.apply(word, 1));
        // Group and aggregate
        JavaPairRDD<String, Integer> reduced = wordAndOne.reduceByKey((a, b) -> a + b);
        // Swap the KV order to VK
        JavaPairRDD<Integer, String> swapped = reduced.mapToPair(tp -> tp.swap());
        // Sort by key in descending order
        JavaPairRDD<Integer, String> sorted = swapped.sortByKey(false);
        // Swap the order back
        JavaPairRDD<String, Integer> result = sorted.mapToPair(tp -> tp.swap());
        // Save the data to HDFS
        result.saveAsTextFile(args[1]);
        // Release the resources
        jsc.stop();
    }
}

Running Spark locally and debugging

Packaging the Spark program and submitting it to the cluster after every change is cumbersome and makes debugging inconvenient. Spark can also run in local mode, which is convenient for testing and debugging.

Run locally

// Run the Spark program in local mode; local[*] means run locally with multiple threads
val conf: SparkConf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
// Sets execution to local mode
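
Putting it together, a minimal sketch of a local-mode WordCount (the object name and input path are illustrative placeholders):

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object LocalWordCount {
  def main(args: Array[String]): Unit = {
    // local[*] runs the job in-process, so no cluster or packaged jar is needed
    val conf: SparkConf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc: SparkContext = new SparkContext(conf)
    // Hypothetical local input path; replace it with your own test file
    val lines: RDD[String] = sc.textFile("data/words.txt")
    val counts: RDD[(String, Int)] = lines
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, false)
    // Print to the console instead of writing to HDFS, which is handy while debugging
    counts.collect().foreach(println)
    sc.stop()
  }
}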

To run in local mode you must also comment out the two <scope>provided</scope> lines in pom.xml (for the scala-library and spark-core dependencies); otherwise the jar dependencies will not be found on the local classpath.


Run locally and read data from HDFS
Writing data into HDFS can run into permission problems, so set the user in the code to the owner of the HDFS directory:

// To write data into HDFS, set the program's user to the same user that owns the HDFS directory
System.setProperty("HADOOP_USER_NAME", "root")
val conf: SparkConf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
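
The rest of the program is unchanged. A hypothetical continuation that reads from and writes back to HDFS (the host, port, and paths follow the earlier examples and must be adapted to your cluster):

val sc = new SparkContext(conf)
// Illustrative HDFS paths; adjust them to your cluster
val lines = sc.textFile("hdfs://linux01:8020/words.txt")
lines.flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)
  .saveAsTextFile("hdfs://linux01:8020/out")
sc.stop()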

Enter the run arguments (the input and output paths) in the IDE's run configuration.


Using PySpark

Configure the Python environment
① Install python3 on all nodes; the version must be Python 3.6 or above

yum install -y python3


② Modify the environment variables on all nodes

export JAVA_HOME=/usr/local/jdk1.8.0_251
export PYSPARK_PYTHON=python3
export HADOOP_HOME=/bigdata/hadoop-3.2.1
export HADOOP_CONF_DIR=/bigdata/hadoop-3.2.1/etc/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

Use the pyspark shell

/bigdata/spark-3.0.0-bin-hadoop3.2/bin/pyspark --master spark://node-1.51doit.com:7077 --executor-memory 1g --total-executor-cores 10

WordCount in the pyspark shell client

sc.textFile("hdfs://linux01:8020/wc.txt") \
  .flatMap(lambda x: x.split(" ")) \
  .map(lambda w: (w, 1)) \
  .reduceByKey(lambda a, b: a + b) \
  .sortBy(lambda x: x[1], False) \
  .collect()


Configure the PyCharm development environment
① Configure the Python environment


② Configure the pyspark dependency


③ Add the environment variables



Writing a starter program:

[screenshot of the starter program, not reproduced]

