1. Download the installation package
From the Apache archive directory listing ("Index of /dist/flink"): https://archive.apache.org/dist/flink/
2. Upload flink-1.12.0-bin-scala_2.12.tgz to the target directory on node1
3. Extract it: tar -zxvf flink-1.12.0-bin-scala_2.12.tgz
4. Rename the directory: mv flink-1.12.0-bin-scala_2.12 flink
5. Add the system environment variables and source them to take effect
export FLINK_HOME=/usr/apps/flink
export PATH=$PATH:$FLINK_HOME/bin
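A minimal sketch of applying this step (assuming /etc/profile is the chosen file and Flink lives at /usr/apps/flink as above):
echo 'export FLINK_HOME=/usr/apps/flink' >> /etc/profile
echo 'export PATH=$PATH:$FLINK_HOME/bin' >> /etc/profile
source /etc/profile
which flink   # should print /usr/apps/flink/bin/flink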
6. Start Flink
/usr/apps/flink/bin/start-cluster.sh
7. jps should now show the following two processes:
TaskManagerRunner
StandaloneSessionClusterEntrypoint
8. Access the Flink Web UI
http://node1:8081/#/overview
A slot in Flink can be thought of as a resource group: Flink executes a program in parallel by splitting it into subtasks and assigning those subtasks to slots.
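For example (a sketch, using the WordCount example jar that ships with Flink): each parallel subtask occupies one slot, so a job submitted with parallelism 8 needs 8 free slots across the cluster:
/usr/apps/flink/bin/flink run -p 8 /usr/apps/flink/examples/batch/WordCount.jar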
9. Stop Flink
/usr/apps/flink/bin/stop-cluster.sh
10. Cluster layout
Server: node1 (Master + Slave): JobManager + TaskManager
Server: node2 (Slave): TaskManager
Server: node3 (Slave): TaskManager
11. Edit flink-conf.yaml
vim /usr/apps/flink/conf/flink-conf.yaml
jobmanager.rpc.address: node1
taskmanager.numberOfTaskSlots: 16   # set according to the machine's CPU core count
web.submit.enable: true
# History server
jobmanager.archive.fs.dir: hdfs://node1:8020/flink/completed-jobs/
historyserver.web.address: node1
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs://node1:8020/flink/completed-jobs/
# Use the filesystem state backend so state snapshots can be stored on HDFS
state.backend: filesystem
# Directory where checkpoint snapshots are stored on HDFS
state.checkpoints.dir: hdfs://node1:8020/flink-checkpoints
# Use ZooKeeper for high availability
high-availability: zookeeper
# Store JobManager metadata in HDFS
high-availability.storageDir: hdfs://node1:8020/flink/ha/
# ZooKeeper ensemble addresses
high-availability.zookeeper.quorum: node1:2181,node2:2181,node3:2181
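The HDFS directories referenced above can be created in advance (a sketch; Flink can usually create them itself as long as the user it runs as has write access):
hdfs dfs -mkdir -p /flink/completed-jobs /flink-checkpoints /flink/ha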
12. Edit masters, slaves, and zoo.cfg
vim /usr/apps/flink/conf/masters
node1:8081
node2:8081
vim /usr/apps/flink/conf/slaves   # if there is no slaves file, use workers instead
node1
node2
node3
vim /usr/apps/flink/conf/zoo.cfg
# hosts must match the high-availability.zookeeper.quorum set in step 11
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
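If you use the ZooKeeper scripts bundled with Flink rather than a separate ZooKeeper installation, the quorum defined in conf/zoo.cfg can be started and stopped with (a sketch):
/usr/apps/flink/bin/start-zookeeper-quorum.sh
/usr/apps/flink/bin/stop-zookeeper-quorum.sh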
13. Change Flink's PID file directory: vim /usr/apps/flink/bin/config.sh
Set: export DEFAULT_ENV_PID_DIR="/usr/apps/flink/pid"
If left unchanged, Flink writes its PID files to /tmp by default; /tmp is a temporary directory that is cleaned periodically (typically every seven days),
after which stopping the cluster fails with: no StandaloneSessionClusterEntrypoint to stop
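Create the new PID directory before restarting (a sketch; same path as configured above):
mkdir -p /usr/apps/flink/pid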
14. Add the HADOOP_CONF_DIR environment variable
vim /etc/profile
export HADOOP_CONF_DIR=/usr/apps/hadoop/etc/hadoop
15. Distribute to node2 and node3
scp -r /usr/apps/flink node2:/usr/apps/flink
scp -r /usr/apps/flink node3:/usr/apps/flink
scp /etc/profile node2:/etc/profile
scp /etc/profile node3:/etc/profile
16. Reload the environment on each node
source /etc/profile
17. Edit flink-conf.yaml on node2
vim /usr/apps/flink/conf/flink-conf.yaml
jobmanager.rpc.address: node2
18. Start the cluster for a test; on node1 run the following commands
# 1. Start Hadoop
/usr/apps/hadoop/sbin/start-dfs.sh
# 2. Start ZooKeeper (run this on each ZooKeeper node)
/usr/apps/zookeeper/bin/zkServer.sh start
# Check whether ZooKeeper is up: /usr/apps/zookeeper/bin/zkServer.sh status
# 3. Start Flink
/usr/apps/flink/bin/start-cluster.sh
# Check whether Flink is up: jps
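If everything came up, the jps output per node should roughly match the layout from step 10 (a sketch; Hadoop/ZooKeeper processes such as NameNode, DataNode, and QuorumPeerMain will also appear):
# node1: StandaloneSessionClusterEntrypoint + TaskManagerRunner
# node2: StandaloneSessionClusterEntrypoint (standby JobManager) + TaskManagerRunner
# node3: TaskManagerRunner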
19. If the processes did not start and the log shows this error:
Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/filesystems/overview/.
the cause is that since Flink 1.8 the official Flink distribution no longer bundles the jars that integrate with HDFS.
20. Download the jar, put it into Flink's lib directory, and distribute it so that Flink can operate on Hadoop
Download page: Apache Flink: Downloads (https://flink.apache.org/downloads.html)
Put it into the lib directory: cd /usr/apps/flink/lib
Distribute:
scp /usr/apps/flink/lib/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar node2:/usr/apps/flink/lib
scp /usr/apps/flink/lib/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar node3:/usr/apps/flink/lib
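Alternatively, on Flink 1.11+ the shaded jar can be skipped by exposing the Hadoop classpath to Flink, for example in /etc/profile (a sketch; requires the hadoop command on the PATH):
export HADOOP_CLASSPATH=$(hadoop classpath)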
21. Disable YARN's memory checks
vim /usr/apps/hadoop/etc/hadoop/yarn-site.xml
<!-- Disable YARN memory checks -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
Distribute:
scp -r /usr/apps/hadoop/etc/hadoop/yarn-site.xml node2:/usr/apps/hadoop/etc/hadoop/yarn-site.xml
scp -r /usr/apps/hadoop/etc/hadoop/yarn-site.xml node3:/usr/apps/hadoop/etc/hadoop/yarn-site.xml
Restart YARN:
/usr/apps/hadoop/sbin/stop-all.sh
/usr/apps/hadoop/sbin/start-all.sh
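After the restart, the NodeManagers can be checked from node1 (a sketch):
yarn node -list   # all three nodes should report RUNNING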
22. Session mode on YARN
Start a Flink cluster/session on YARN; run the following command on node1
/usr/apps/flink/bin/yarn-session.sh -n 2 -tm 800 -s 1 -d
Notes:
This requests 2 containers with 800 MB each, i.e. 1600 MB of memory in total.
# -n  how many containers to request, i.e. the number of TaskManagers (deprecated in recent Flink versions, which allocate resources dynamically)
# -tm memory per TaskManager, in MB
# -s  number of slots per TaskManager
# -d  run detached, in the background
Note: the following warning can be ignored:
WARN org.apache.hadoop.hdfs.DFSClient - Caught exception
java.lang.InterruptedException
Access the Web UI (yarn-session.sh prints the JobManager's address when the session starts; the session is also visible in the YARN UI at http://node1:8088/cluster)
Submit a job with flink run:
/usr/apps/flink/bin/flink run /usr/apps/flink/examples/batch/WordCount.jar
Done.
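If you did not note the application ID when the session started, it can be looked up with (a sketch):
yarn application -list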
Shut down the yarn-session:
yarn application -kill application_1650268924251_0001
23. Per-job mode on YARN (used more often in production)
For each Flink job, a dedicated Flink cluster is started on YARN to run it; when the job finishes, the cluster shuts down automatically and releases its resources. Well suited to large jobs.
Submit the job directly:
/usr/apps/flink/bin/flink run -m yarn-cluster -yjm 1024 -ytm 1024 /usr/apps/flink/examples/batch/WordCount.jar
Parameter notes:
# -m    address of the JobManager; yarn-cluster means start a new cluster on YARN (per-job mode)
# -yjm 1024  memory for the JobManager, in MB
# -ytm 1024  memory for each TaskManager, in MB
View the YARN UI:
http://node1:8088/cluster
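A per-job submission can also run detached, so the client returns immediately after submitting (a sketch, reusing the WordCount example above):
/usr/apps/flink/bin/flink run -m yarn-cluster -yjm 1024 -ytm 1024 -d /usr/apps/flink/examples/batch/WordCount.jar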