
Trying Out Nebula Exchange's SST Import, Single-Machine and Dependency-Free


鏈枃灏濊瘯鍒嗕韩涓嬩互鏈€灏忔柟寮忥紙鍗曟満銆佸鍣ㄥ寲 Spark銆丠adoop銆丯ebula Graph锛夛紝蹇€熻稛涓€涓?Nebula Exchange 涓?SST 鍐欏叆鏂瑰紡鐨勬楠ゃ€傛湰鏂囬€傜敤浜?v2.5 浠ヤ笂鐗堟湰鐨?Nebula- Exchange銆?/p>

Original links:

  • International access: https://siwei.io/nebula-exchange-sst-2.x/
  • Access from mainland China: https://cn.siwei.io/nebula-exchange-sst-2.x/

What is Nebula Exchange?

As I introduced earlier in Nebula Data Import Options, Nebula Exchange is a Spark application open-sourced by the Nebula Graph community. It is dedicated to importing data into Nebula Graph Database in batch or streaming mode.

Nebula Exchange supports a wide variety of data sources (from Apache Parquet, ORC, JSON, CSV, HBase, Hive, MaxCompute to Neo4j, MySQL, ClickHouse, plus Kafka and Pulsar, with more being added all the time).

[Figure: Nebula Exchange architecture (Readers, Processor, Writers)]

As the figure above shows, inside Exchange, besides the different Readers that read from different data sources, once the data has passed through the Processor it is written (sunk) into the Nebula Graph database by a Writer. In addition to the regular ServerBaseWriter write path, Exchange can bypass the whole write path and use Spark's computing power to generate the underlying RocksDB SST files in parallel, achieving extremely high-performance data import. This SST import scenario is what this post walks you through.

For more details, see the Nebula Graph manual: What is Nebula Exchange.

The official Nebula Graph blog also has more hands-on articles about Nebula Exchange.

姝ラ姒傝

  • The lab environment
  • Configure Exchange
  • Generate the SST files
  • Write the SST files into Nebula Graph

Preparing the Lab Environment

To exercise the SST feature of Nebula Exchange in the most minimal way, we need to (a quick prerequisite check sketch follows this list):

  • 鎼缓涓€涓?Nebula Graph 闆嗙兢锛屽垱寤哄鍏ユ暟鎹殑 Schema锛屾垜浠€夋嫨浣跨敤 Docker-Compose 鏂瑰紡銆佸埄鐢?Nebula-Up 蹇€熼儴缃诧紝骞剁畝鍗曚慨鏀瑰叾缃戠粶锛屼互鏂逛究鍚屾牱瀹瑰櫒鍖栫殑 Exchange 绋嬪簭瀵瑰叾璁块棶銆?/li>
  • 鎼缓瀹瑰櫒鍖栫殑 Spark 杩愯鐜
  • 鎼缓瀹瑰櫒鍖栫殑 HDFS

1. 鎼缓 Nebula Graph 闆嗙兢

With Nebula-Up we can deploy a Nebula Graph cluster on Linux with a single command:

curl -fsSL nebula-up.siwei.io/install.sh | bash

Once the deployment succeeds, we need to make a few changes to the environment. My changes really boil down to two points:

  1. 鍙繚鐣欎竴涓?metaD 鏈嶅姟
  2. 璧风敤 Docker 鐨勫閮ㄧ綉缁?/li>

See Appendix 1 for the detailed changes.

Apply the docker-compose changes:

cd ~/.nebula-up/nebula-docker-compose
vim docker-compose.yaml # see Appendix 1
docker network create nebula-net # the external network needs to be created first
docker-compose up -d --remove-orphans
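
Optionally, before moving on, we can confirm that the slimmed-down cluster is up and attached to nebula-net. A small check, run from the same compose project directory:

docker-compose ps
# list the containers attached to the external network
docker network inspect nebula-net --format '{{range .Containers}}{{.Name}} {{end}}'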

After that, we create the graph space to test with and the graph schema. For this we can use nebula-console; again, Nebula-Up already ships a containerized nebula-console.

  • Enter the container where Nebula-Console lives
~/.nebula-up/console.sh
/ #
  • From inside the console container, connect to the graph database. Here 192.168.x.y is the address of the first NIC of the Linux VM I am on; replace it with yours
/ # nebula-console -addr 192.168.x.y -port 9669 -user root -p password
[INFO] connection pool is initialized successfully

Welcome to Nebula Graph!
  • Create the graph space (we name it sst) and the schema
create space sst(partition_num=5,replica_factor=1,vid_type=fixed_string(32));
:sleep 20
use sst
create tag player(name string, age int);

Example output

(root@nebula) [(none)]> create space sst(partition_num=5,replica_factor=1,vid_type=fixed_string(32));
Execution succeeded (time spent 1468/1918 us)

(root@nebula) [(none)]> :sleep 20

(root@nebula) [(none)]> use sst
Execution succeeded (time spent 1253/1566 us)

Wed, 18 Aug 2021 08:18:13 UTC

(root@nebula) [sst]> create tag player(name string, age int);
Execution succeeded (time spent 1312/1735 us)

Wed, 18 Aug 2021 08:18:23 UTC
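
As an optional sanity check, the same containerized console can run a one-off statement to confirm the schema landed; this sketch assumes the -e flag of nebula-console is available in the bundled version:

~/.nebula-up/console.sh
/ # nebula-console -addr 192.168.x.y -port 9669 -user root -p password \
      -e 'USE sst; SHOW TAGS;'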

2. 鎼缓瀹瑰櫒鍖栫殑 Spark 鐜

Thanks to the work done by big-data-europe, this step is very easy.

A couple of things worth noting:

  • The current Nebula Exchange has requirements on the Spark version; as of now (August 2021) I used the spark-2.4.5-hadoop-2.7 image.
  • For convenience, I run Spark on the same machine as Nebula Graph, attached to the same Docker network:
docker run --name spark-master --network nebula-net \
    -h spark-master -e ENABLE_INIT_DAEMON=false -d \
    bde2020/spark-master:2.4.5-hadoop2.7

Then we can get into that environment:

docker exec -it spark-master bash

Once inside the Spark container, Maven can be installed like this:

export MAVEN_VERSION=3.5.4
export MAVEN_HOME=/usr/lib/mvn
export PATH=$MAVEN_HOME/bin:$PATH

wget http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz && \
  tar -zxvf apache-maven-$MAVEN_VERSION-bin.tar.gz && \
  rm apache-maven-$MAVEN_VERSION-bin.tar.gz && \
  mv apache-maven-$MAVEN_VERSION /usr/lib/mvn

The nebula-exchange jar package can also be downloaded inside the container like this:

cd ~
wget https://repo1.maven.org/maven2/com/vesoft/nebula-exchange/2.1.0/nebula-exchange-2.1.0.jar
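
Optionally, a quick check inside the container that the Spark version matches what Exchange expects and that the jar landed where we will submit it from (Maven is only needed if you plan to build Exchange from source):

/spark/bin/spark-submit --version
ls -lh ~/nebula-exchange-2.1.0.jar
mvn -version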

3. 鎼缓瀹瑰櫒鍖栫殑 HDFS

Again relying on the work of big-data-europe, this is very simple, except that we need a small modification so that its docker-compose.yml uses nebula-net, the Docker network created earlier.

See Appendix 2 for the detailed changes.

git clone https://github.com/big-data-europe/docker-hadoop.git
cd docker-hadoop
vim docker-compose.yml
docker-compose up -d
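
Optionally, we can verify that HDFS came up healthy and that the Spark container can reach the namenode over nebula-net; a small check assuming the default service names of the big-data-europe compose file (ping here is just the busybox applet in the Spark image, any name-resolution check works):

docker exec -it namenode hdfs dfsadmin -report | grep 'Live datanodes'
# name resolution works because spark-master and namenode share nebula-net
docker exec -it spark-master ping -c 1 namenode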

Configure Exchange

The information to fill into this configuration is mainly the Nebula Graph cluster itself and the Space Name the data will be written into, plus the data-source-related settings (here we use CSV as the example); finally, the output (sink) is configured as sst:

  • Nebula Graph
    • GraphD address
    • MetaD address
    • credentials
    • Space Name
  • Data source
    • source: csv
      • path
      • fields etc.
    • sink: sst

See Appendix 3 for the full configuration file.

Note that the metaD address can be obtained as shown below; 0.0.0.0:49377->9559 tells us that 49377 is the externally exposed port.

$ docker ps | grep meta
887740c15750   vesoft/nebula-metad:v2.0.0                               "./bin/nebula-metad …"   6 hours ago    Up 6 hours (healthy)    9560/tcp, 0.0.0.0:49377->9559/tcp, :::49377->9559/tcp, 0.0.0.0:49376->19559/tcp, :::49376->19559/tcp, 0.0.0.0:49375->19560/tcp, :::49375->19560/tcp                  nebula-docker-compose_metad0_1
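
If you would rather not read the docker ps output by hand, docker port prints the same mapping directly; the container name below is the one Nebula-Up creates, and the host port will of course differ per deployment:

$ docker port nebula-docker-compose_metad0_1 9559
0.0.0.0:49377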

Generate the SST Files

1. Prepare the Source File and the Configuration File
docker cp exchange-sst.conf spark-master:/root/
docker cp player.csv spark-master:/root/

An example of player.csv:

1100,Tim Duncan,42
1101,Tony Parker,36
1102,LaMarcus Aldridge,33
1103,Rudy Gay,32
1104,Marco Belinelli,32
1105,Danny Green,31
1106,Kyle Anderson,25
1107,Aron Baynes,32
1108,Boris Diaw,36
1109,Tiago Splitter,34
1110,Cory Joseph,27
1111,David West,38

2. Run the Exchange Job

Enter the spark-master container and submit the Exchange application.

docker exec -it spark-master bash
cd /root/
/spark/bin/spark-submit --master local \
    --class com.vesoft.nebula.exchange.Exchange nebula-exchange-2.1.0.jar \
    -c exchange-sst.conf

Check the result of the run:

spark-submit output:

21/08/17 03:37:43 INFO TaskSetManager: Finished task 31.0 in stage 2.0 (TID 33) in 1093 ms on localhost (executor driver) (32/32)
21/08/17 03:37:43 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
21/08/17 03:37:43 INFO DAGScheduler: ResultStage 2 (foreachPartition at VerticesProcessor.scala:179) finished in 22.336 s
21/08/17 03:37:43 INFO DAGScheduler: Job 1 finished: foreachPartition at VerticesProcessor.scala:179, took 22.500639 s
21/08/17 03:37:43 INFO Exchange$: SST-Import: failure.player: 0
21/08/17 03:37:43 WARN Exchange$: Edge is not defined
21/08/17 03:37:43 INFO SparkUI: Stopped Spark web UI at http://spark-master:4040
21/08/17 03:37:43 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!

Verify the SST files generated on HDFS:

docker exec -it namenode /bin/bash

root@2db58903fb53:/# hdfs dfs -ls /sst
Found 10 items
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/1
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/10
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/2
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/3
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/4
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/5
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/6
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/7
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/8
drwxr-xr-x   - root supergroup          0 2021-08-17 03:37 /sst/9
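
Optionally, still inside the namenode container, we can count how many SST files were produced across the partition directories; the exact number depends on the data volume and the Spark partitioning:

root@2db58903fb53:/# hdfs dfs -ls -R /sst | grep '\.sst$' | wc -l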

Write the SST Files into Nebula Graph

The operations here essentially follow the documentation on SST import. They consist of two steps executed from the console:

  • Download
  • Ingest

Download actually triggers Nebula Graph to start an HDFS client download on the server side: it fetches the SST files from HDFS and places them under a local path that storageD can access, which requires the HDFS dependencies to be deployed on the server side. Since this is a minimal exercise, I was lazy and did the Download step by hand.

1. Manual Download

To download manually we have to know the path the Nebula Graph server side downloads to, which is /data/storage/nebula/<space_id>/download/. The Space ID here needs to be looked up by hand.

In this example, our Space Name is sst and the Space ID turns out to be 49:

(root@nebula) [sst]> DESC space sst
+----+-------+------------------+----------------+---------+------------+--------------------+-------------+-----------+
| ID | Name  | Partition Number | Replica Factor | Charset | Collate    | Vid Type           | Atomic Edge | Group     |
+----+-------+------------------+----------------+---------+------------+--------------------+-------------+-----------+
| 49 | "sst" | 10               | 1              | "utf8"  | "utf8_bin" | "FIXED_STRING(32)" | "false"     | "default" |
+----+-------+------------------+----------------+---------+------------+--------------------+-------------+-----------+

So the steps below simply get the SST files out of HDFS manually and then copy them into the storageD containers.

docker exec -it namenode /bin/bash

$ hdfs dfs -get /sst /sst
exit
docker cp namenode:/sst .
docker exec -it nebula-docker-compose_storaged0_1 mkdir -p /data/storage/nebula/49/download/
docker exec -it nebula-docker-compose_storaged1_1 mkdir -p /data/storage/nebula/49/download/
docker exec -it nebula-docker-compose_storaged2_1 mkdir -p /data/storage/nebula/49/download/
docker cp sst nebula-docker-compose_storaged0_1:/data/storage/nebula/49/download/
docker cp sst nebula-docker-compose_storaged1_1:/data/storage/nebula/49/download/
docker cp sst nebula-docker-compose_storaged2_1:/data/storage/nebula/49/download/
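
The repeated mkdir/docker cp pairs above can also be written as a small loop; a sketch assuming the default Nebula-Up container names and Space ID 49:

for sd in nebula-docker-compose_storaged0_1 \
          nebula-docker-compose_storaged1_1 \
          nebula-docker-compose_storaged2_1; do
  # create the download path inside each storageD and copy the SST directory in
  docker exec "$sd" mkdir -p /data/storage/nebula/49/download/
  docker cp sst "$sd":/data/storage/nebula/49/download/
done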

2. Ingest the SST Files

  • Enter the container where Nebula-Console lives
~/.nebula-up/console.sh
/ #
  • From inside the console container, connect to the graph database. Here 192.168.x.y is the address of the first NIC of the Linux VM I am on; replace it with yours
/ # nebula-console -addr 192.168.x.y -port 9669 -user root -p password
[INFO] connection pool is initialized successfully

Welcome to Nebula Graph!
  • Run INGEST to have StorageD start reading the SST files
(root@nebula) [(none)]> use sst
(root@nebula) [sst]> INGEST;

We can watch the Nebula Graph server-side logs in real time like this:

tail -f ~/.nebula-up/nebula-docker-compose/logs/*/*

Logs of a successful INGEST:

I0817 08:03:28.611877   169 EventListner.h:96] Ingest external SST file: column family default, the external file path /data/storage/nebula/49/download/8/8-6.sst, the internal file path /data/storage/nebula/49/data/000023.sst, the properties of the table: # data blocks=1; # entries=1; # deletions=0; # merge operands=0; # range deletions=0; raw key size=48; raw average key size=48.000000; raw value size=40; raw average value size=40.000000; data block size=75; index block size (user-key0, delta-value0)=66; filter block size=0; (estimated) table size=141; filter policy name=N/A; prefix extractor name=nullptr; column family ID=N/A; column family name=N/A; comparator name=leveldb.BytewiseComparator; merge operator name=nullptr; property collectors names=[]; SST file compression algo=Snappy; SST file compression options=window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ; creation time=0; time stamp of earliest key=0; file creation time=0;
E0817 08:03:28.611912   169 StorageHttpIngestHandler.cpp:63] SSTFile ingest successfully
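
To double-check that the ingested vertices are visible, one option is to refresh and then read the space statistics from the console (SUBMIT JOB STATS / SHOW STATS as in Nebula Graph 2.x; the -e flag of nebula-console is assumed to be available):

/ # nebula-console -addr 192.168.x.y -port 9669 -user root -p password \
      -e 'USE sst; SUBMIT JOB STATS;'
# give the stats job a few seconds to finish, then:
/ # nebula-console -addr 192.168.x.y -port 9669 -user root -p password \
      -e 'USE sst; SHOW STATS;'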

Appendix

Appendix 1

docker-compose.yaml

diff --git a/docker-compose.yaml b/docker-compose.yaml
index 48854de..cfeaedb 100644
--- a/docker-compose.yaml
+++ b/docker-compose.yaml
@@ -6,11 +6,13 @@ services:
       USER: root
       TZ:   "${TZ}"
     command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
+      - --meta_server_addrs=metad0:9559
       - --local_ip=metad0
       - --ws_ip=metad0
       - --port=9559
       - --ws_http_port=19559
+      - --ws_storage_http_port=19779
       - --data_path=/data/meta
       - --log_dir=/logs
       - --v=0
@@ -34,81 +36,14 @@ services:
     cap_add:
       - SYS_PTRACE

-  metad1:
-    image: vesoft/nebula-metad:v2.0.0
-    environment:
-      USER: root
-      TZ:   "${TZ}"
-    command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
-      - --local_ip=metad1
-      - --ws_ip=metad1
-      - --port=9559
-      - --ws_http_port=19559
-      - --data_path=/data/meta
-      - --log_dir=/logs
-      - --v=0
-      - --minloglevel=0
-    healthcheck:
-      test: ["CMD", "curl", "-sf", "http://metad1:19559/status"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 20s
-    ports:
-      - 9559
-      - 19559
-      - 19560
-    volumes:
-      - ./data/meta1:/data/meta
-      - ./logs/meta1:/logs
-    networks:
-      - nebula-net
-    restart: on-failure
-    cap_add:
-      - SYS_PTRACE
-
-  metad2:
-    image: vesoft/nebula-metad:v2.0.0
-    environment:
-      USER: root
-      TZ:   "${TZ}"
-    command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
-      - --local_ip=metad2
-      - --ws_ip=metad2
-      - --port=9559
-      - --ws_http_port=19559
-      - --data_path=/data/meta
-      - --log_dir=/logs
-      - --v=0
-      - --minloglevel=0
-    healthcheck:
-      test: ["CMD", "curl", "-sf", "http://metad2:19559/status"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 20s
-    ports:
-      - 9559
-      - 19559
-      - 19560
-    volumes:
-      - ./data/meta2:/data/meta
-      - ./logs/meta2:/logs
-    networks:
-      - nebula-net
-    restart: on-failure
-    cap_add:
-      - SYS_PTRACE
-
   storaged0:
     image: vesoft/nebula-storaged:v2.0.0
     environment:
       USER: root
       TZ:   "${TZ}"
     command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
+      - --meta_server_addrs=metad0:9559
       - --local_ip=storaged0
       - --ws_ip=storaged0
       - --port=9779
@@ -119,8 +54,8 @@ services:
       - --minloglevel=0
     depends_on:
       - metad0
-      - metad1
-      - metad2
     healthcheck:
       test: ["CMD", "curl", "-sf", "http://storaged0:19779/status"]
       interval: 30s
@@ -146,7 +81,7 @@ services:
       USER: root
       TZ:   "${TZ}"
     command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
+      - --meta_server_addrs=metad0:9559
       - --local_ip=storaged1
       - --ws_ip=storaged1
       - --port=9779
@@ -157,8 +92,8 @@ services:
       - --minloglevel=0
     depends_on:
       - metad0
-      - metad1
-      - metad2
     healthcheck:
       test: ["CMD", "curl", "-sf", "http://storaged1:19779/status"]
       interval: 30s
@@ -184,7 +119,7 @@ services:
       USER: root
       TZ:   "${TZ}"
     command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
+      - --meta_server_addrs=metad0:9559
       - --local_ip=storaged2
       - --ws_ip=storaged2
       - --port=9779
@@ -195,8 +130,8 @@ services:
       - --minloglevel=0
     depends_on:
       - metad0
-      - metad1
-      - metad2
     healthcheck:
       test: ["CMD", "curl", "-sf", "http://storaged2:19779/status"]
       interval: 30s
@@ -222,17 +157,19 @@ services:
       USER: root
       TZ:   "${TZ}"
     command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
+      - --meta_server_addrs=metad0:9559
       - --port=9669
       - --ws_ip=graphd
       - --ws_http_port=19669
+      - --ws_meta_http_port=19559
       - --log_dir=/logs
       - --v=0
       - --minloglevel=0
     depends_on:
       - metad0
-      - metad1
-      - metad2
     healthcheck:
       test: ["CMD", "curl", "-sf", "http://graphd:19669/status"]
       interval: 30s
@@ -257,17 +194,19 @@ services:
       USER: root
       TZ:   "${TZ}"
     command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
+      - --meta_server_addrs=metad0:9559
       - --port=9669
       - --ws_ip=graphd1
       - --ws_http_port=19669
+      - --ws_meta_http_port=19559
       - --log_dir=/logs
       - --v=0
       - --minloglevel=0
     depends_on:
       - metad0
-      - metad1
-      - metad2
     healthcheck:
       test: ["CMD", "curl", "-sf", "http://graphd1:19669/status"]
       interval: 30s
@@ -292,17 +231,21 @@ services:
       USER: root
       TZ:   "${TZ}"
     command:
-      - --meta_server_addrs=metad0:9559,metad1:9559,metad2:9559
+      - --meta_server_addrs=metad0:9559
       - --port=9669
       - --ws_ip=graphd2
       - --ws_http_port=19669
+      - --ws_meta_http_port=19559
       - --log_dir=/logs
       - --v=0
       - --minloglevel=0
+      - --storage_client_timeout_ms=60000
+      - --local_config=true
     depends_on:
       - metad0
-      - metad1
-      - metad2
     healthcheck:
       test: ["CMD", "curl", "-sf", "http://graphd2:19669/status"]
       interval: 30s
@@ -323,3 +266,4 @@ services:

 networks:
   nebula-net:
+    external: true

Appendix 2

The docker-compose.yml of https://github.com/big-data-europe/docker-hadoop

diff --git a/docker-compose.yml b/docker-compose.yml
index ed40dc6..66ff1f4 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -14,6 +14,8 @@ services:
       - CLUSTER_NAME=test
     env_file:
       - ./hadoop.env
+    networks:
+      - nebula-net

   datanode:
     image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
@@ -25,6 +27,8 @@ services:
       SERVICE_PRECONDITION: "namenode:9870"
     env_file:
       - ./hadoop.env
+    networks:
+      - nebula-net

   resourcemanager:
     image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8
@@ -34,6 +38,8 @@ services:
       SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864"
     env_file:
       - ./hadoop.env
+    networks:
+      - nebula-net

   nodemanager1:
     image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
@@ -43,6 +49,8 @@ services:
       SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode:9864 resourcemanager:8088"
     env_file:
       - ./hadoop.env
+    networks:
+      - nebula-net

   historyserver:
     image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8
@@ -54,8 +62,14 @@ services:
       - hadoop_historyserver:/hadoop/yarn/timeline
     env_file:
       - ./hadoop.env
+    networks:
+      - nebula-net

 volumes:
   hadoop_namenode:
   hadoop_datanode:
   hadoop_historyserver:
+
+networks:
+  nebula-net:
+    external: true

Appendix 3

nebula-exchange-sst.conf

{
  # Spark relation config
  spark: {
    app: {
      name: Nebula Exchange 2.1
    }

    master:local

    driver: {
      cores: 1
      maxResultSize: 1G
    }

    executor: {
        memory:1G
    }

    cores:{
      max: 16
    }
  }

  # Nebula Graph relation config
  nebula: {
    address:{
      graph:["192.168.8.128:9669"]
      meta:["192.168.8.128:49377"]
    }
    user: root
    pswd: nebula
    space: sst

    # parameters for SST import, not required
    path:{
        local:"/tmp"
        remote:"/sst"
        hdfs.namenode: "hdfs://192.168.8.128:9000"
    }

    # nebula client connection parameters
    connection {
      # socket connect & execute timeout, unit: millisecond
      timeout: 30000
    }

    error: {
      # max number of failures, if the number of failures is bigger than max, then exit the application.
      max: 32
      # failed import job will be recorded in output path
      output: /tmp/errors
    }

    # use google's RateLimiter to limit the requests send to NebulaGraph
    rate: {
      # the stable throughput of RateLimiter
      limit: 1024
      # Acquires a permit from RateLimiter, unit: MILLISECONDS
      # if it can't be obtained within the specified timeout, then give up the request.
      timeout: 1000
    }
  }

  # Processing tags
  # There are tag config examples for different dataSources.
  tags: [

    # HDFS csv
    # Import mode is sst, just change type.sink to client if you want to use client import mode.
    {
      name: player
      type: {
        source: csv
        sink: sst
      }
      path: "file:///root/player.csv"
      # if your csv file has no header, then use _c0,_c1,_c2,.. to indicate fields
      fields: [_c1, _c2]
      nebula.fields: [name, age]
      vertex: {
        field:_c0
      }
      separator: ","
      header: false
      batch: 256
      partition: 32
    }

  ]
}

鏈枃涓鏈変换浣曢敊璇垨鐤忔紡锛屾杩庡幓 GitHub锛歨ttps://github.com/vesoft-inc/nebula issue 鍖哄悜鎴戜滑鎻?issue 鎴栬€呭墠寰€瀹樻柟璁哄潧锛歨ttps://discuss.nebula-graph.com.cn/ 鐨?寤鸿鍙嶉 鍒嗙被涓嬫彁寤鸿 馃憦锛涗氦娴佸浘鏁版嵁搴撴妧鏈紵鍔犲叆 Nebula 浜ゆ祦缇よ鍏堝~鍐欎笅浣犵殑 Nebula 鍚嶇墖锛孨ebula 灏忓姪鎵嬩細鎷変綘杩涚兢~~

