My cluster consists of 7 machines: 5 of them have 8 GB of RAM each, and the other two are virtual machines.
After writing the program I submitted it via spark-submit and it completed successfully. But today, when rerunning it, I ran into a problem. It looks like this:
17/03/26 10:10:32 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 196.168.168.104:59612 (size: 119.0 B, free: 3.0 GB)
17/03/26 10:10:32 INFO BlockManagerInfo: Added broadcast_1730_piece0 in memory on 196.168.168.104:59612 (size: 119.0 B, free: 3.0 GB)
17/03/26 10:10:32 INFO BlockManagerInfo: Added broadcast_1732_piece0 in memory on 196.168.168.104:59612 (size: 119.0 B, free: 3.0 GB)
17/03/26 10:10:32 INFO BlockManagerInfo: Added broadcast_1733_piece0 in memory on 196.168.168.104:59612 (size: 119.0 B, free: 3.0 GB)
It hangs at this point indefinitely. I've tried many things without success and I don't know what's causing it, so I'm hoping someone can take a look and offer some advice. Thanks!
Solutions »
17/03/25 22:52:32 INFO ExternalSorter: Thread 82 spilling in-memory map of 473.6 MB to disk (25 times so far)
17/03/25 22:52:37 INFO ExternalSorter: Thread 71 spilling in-memory map of 392.0 MB to disk (26 times so far)
17/03/25 22:52:52 INFO ExternalSorter: Thread 80 spilling in-memory map of 392.0 MB to disk (22 times so far)
17/03/25 22:53:07 INFO ExternalSorter: Thread 70 spilling in-memory map of 392.0 MB to disk (24 times so far)
17/03/25 22:53:38 INFO ExternalSorter: Thread 79 spilling in-memory map of 401.9 MB to disk (27 times so far)
17/03/25 22:53:49 INFO ExternalSorter: Thread 83 spilling in-memory map of 416.0 MB to disk (24 times so far)
17/03/25 22:53:53 INFO ExternalSorter: Thread 82 spilling in-memory map of 396.8 MB to disk (26 times so far)
Any idea how to solve this?
First off: what is your Spark version, what does the application code do, which task does it hang on, and what is your memory configuration?
It looks like memory pressure causing frequent spills to disk.
Here is the spark-submit command:
/root/spark-2.1.0-bin-hadoop2.6/bin/spark-submit \
--class com.sirc.zwz.CSRJava.ChangeDataStruction.SCSR \
--num-executors 100 \
--driver-memory 6g \
--executor-memory 6g \
--executor-cores 8 \
/root/jars/SparkCSR_JAVA-0.0.1-SNAPSHOT.jar
The cluster has 7 machines: 1 master and 6 slaves. Four of the slaves have 8 GB of RAM each, of which up to 6 GB per machine is available to Spark; the other two are virtual machines with 2 GB each, of which 1 GB each is available for computation. Below is part of the log:
primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-017dbc57-6553-43ea-8a2d-3555fccd663d:NORMAL:196.168.168.103:50010|RBW], ReplicaUnderConstruction[[DISK]DS-6eb004b2-b3dc-42df-b212-ffa2fd6b5572:NORMAL:196.168.168.27:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5785ace1-a611-479b-b360-79562081feb1:NORMAL:196.168.168.104:50010|RBW]]}
2017-03-28 11:21:23,382 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /SRResult/N25E118/_temporary/0/_temporary/attempt_20170327232029_0002_m_000017_21/part-00017. BP-2089499914-196.168.168.100-1490492430641 blk_1073742807_1983{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5785ace1-a611-479b-b360-79562081feb1:NORMAL:196.168.168.104:50010|RBW], ReplicaUnderConstruction[[DISK]DS-411c0e4c-86c5-4203-94d8-d6d7a95df7da:NORMAL:196.168.168.102:50010|RBW], ReplicaUnderConstruction[[DISK]DS-6eb004b2-b3dc-42df-b212-ffa2fd6b5572:NORMAL:196.168.168.27:50010|RBW]]}
2017-03-28 11:21:23,459 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /SRResult/N25E118/_temporary/0/_temporary/attempt_20170327232029_0002_m_000029_33/part-00029. BP-2089499914-196.168.168.100-1490492430641 blk_1073742808_1984{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5785ace1-a611-479b-b360-79562081feb1:NORMAL:196.168.168.104:50010|RBW], ReplicaUnderConstruction[[DISK]DS-411c0e4c-86c5-4203-94d8-d6d7a95df7da:NORMAL:196.168.168.102:50010|RBW], ReplicaUnderConstruction[[DISK]DS-017dbc57-6553-43ea-8a2d-3555fccd663d:NORMAL:196.168.168.103:50010|RBW]]}
2017-03-28 11:21:23,509 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /SRResult/N25E118/_temporary/0/_temporary/attempt_20170327232029_0002_m_000026_30/part-00026. BP-2089499914-196.168.168.100-1490492430641 blk_1073742809_1985{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-97e7de7f-fbcd-44bb-821d-4d245f1ce82c:NORMAL:196.168.168.101:50010|RBW], ReplicaUnderConstruction[[DISK]DS-6eb004b2-b3dc-42df-b212-ffa2fd6b5572:NORMAL:196.168.168.27:50010|RBW], ReplicaUnderConstruction[[DISK]DS-411c0e4c-86c5-4203-94d8-d6d7a95df7da:NORMAL:196.168.168.102:50010|RBW]]}
2017-03-28 11:21:23,513 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /SRResult/N25E118/_temporary/0/_temporary/attempt_20170327232029_0002_m_000009_13/part-00009. BP-2089499914-196.168.168.100-1490492430641 blk_1073742810_1986{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5785ace1-a611-479b-b360-79562081feb1:NORMAL:196.168.168.104:50010|RBW], ReplicaUnderConstruction[[DISK]DS-017dbc57-6553-43ea-8a2d-3555fccd663d:NORMAL:196.168.168.103:50010|RBW], ReplicaUnderConstruction[[DISK]DS-99ba79bc-da18-4d0d-9a2c-b7b367cbea66:NORMAL:196.168.168.29:50010|RBW]]}
2017-03-28 11:21:23,521 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /SRResult/N25E118/_temporary/0/_temporary/attempt_20170327232029_0002_m_000013_17/part-00013. BP-2089499914-196.168.168.100-1490492430641 blk_1073742811_1987{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5785ace1-a611-479b-b360-79562081feb1:NORMAL:196.168.168.104:50010|RBW], ReplicaUnderConstruction[[DISK]DS-411c0e4c-86c5-4203-94d8-d6d7a95df7da:NORMAL:196.168.168.102:50010|RBW], ReplicaUnderConstruction[[DISK]DS-6eb004b2-b3dc-42df-b212-ffa2fd6b5572:NORMAL:196.168.168.27:50010|RBW]]}
2017-03-28 11:21:24,090 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /SRResult/N25E118/_temporary/0/_temporary/attempt_20170328112029_0002_m_000019_23/part-00019. BP-2089499914-196.168.168.100-1490492430641 blk_1073742812_1988{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-017dbc57-6553-43ea-8a2d-3555fccd663d:NORMAL:196.168.168.103:50010|RBW], ReplicaUnderConstruction[[DISK]DS-5785ace1-a611-479b-b360-79562081feb1:NORMAL:196.168.168.104:50010|RBW], ReplicaUnderConstruction[[DISK]DS-411c0e4c-86c5-4203-94d8-d6d7a95df7da:NORMAL:196.168.168.102:50010|RBW]]}
2017-03-28 11:27:42,734 INFO BlockStateChange: BLOCK* processReport: from storage DS-411c0e4c-86c5-4203-94d8-d6d7a95df7da node DatanodeRegistration(196.168.168.102, datanodeUuid=5407fb12-70a4-48d2-ac27-813a7833434c, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-18972982-c034-4dd4-b10b-d6563325e4cb;nsid=220744474;c=0), blocks: 20, hasStaleStorages: false, processing time: 1 msecs
2017-03-28 12:15:58,137 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 196.168.168.100
2017-03-28 12:15:58,137 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-03-28 12:15:58,137 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 7487
2017-03-28 12:15:58,137 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 360 Total time for transactions(ms): 33 Number of transactions batched in Syncs: 17 Number of syncs: 147 SyncTimes(ms): 1310 712
2017-03-28 12:15:58,160 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 360 Total time for transactions(ms): 33 Number of transactions batched in Syncs: 17 Number of syncs: 148 SyncTimes(ms): 1328 716
2017-03-28 12:15:58,161 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /root/hadoop/hadoop-2.6.5/name1/current/edits_inprogress_0000000000000007487 -> /root/hadoop/hadoop-2.6.5/name1/current/edits_0000000000000007487-0000000000000007846
2017-03-28 12:15:58,161 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /root/hadoop/hadoop-2.6.5/name2/current/edits_inprogress_0000000000000007487 -> /root/hadoop/hadoop-2.6.5/name2/current/edits_0000000000000007487-0000000000000007846
2017-03-28 12:15:58,161 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 7847
2017-03-28 12:15:58,551 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.08s at 139.24 KB/s
2017-03-28 12:15:58,551 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000007846 size 12281 bytes.
2017-03-28 12:15:58,618 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 7486
2017-03-28 12:15:58,618 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/root/hadoop/hadoop-2.6.5/name1/current/fsimage_0000000000000007447, cpktTxId=0000000000000007447)
2017-03-28 12:15:58,619 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/root/hadoop/hadoop-2.6.5/name2/current/fsimage_0000000000000007447, cpktTxId=0000000000000007447)
2017-03-28 12:48:56,541 INFO BlockStateChange: BLOCK* processReport: from storage DS-99ba79bc-da18-4d0d-9a2c-b7b367cbea66 node DatanodeRegistration(196.168.168.29, datanodeUuid=9efd8c9e-162c-4c45-af71-bf33f49ad408, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-18972982-c034-4dd4-b10b-d6563325e4cb;nsid=220744474;c=0), blocks: 13, hasStaleStorages: false, processing time: 1 msecs
2017-03-28 13:15:58,890 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 196.168.168.100
2017-03-28 13:15:58,890 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-03-28 13:15:58,890 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 7847
2017-03-28 13:15:58,891 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 70 46
2017-03-28 13:15:58,948 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 105 68
2017-03-28 13:15:58,949 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /root/hadoop/hadoop-2.6.5/name1/current/edits_inprogress_0000000000000007847 -> /root/hadoop/hadoop-2.6.5/name1/current/edits_0000000000000007847-0000000000000007848
2017-03-28 13:15:58,950 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /root/hadoop/hadoop-2.6.5/name2/current/edits_inprogress_0000000000000007847 -> /root/hadoop/hadoop-2.6.5/name2/current/edits_0000000000000007847-0000000000000007848
2017-03-28 13:15:58,951 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 7849
2017-03-28 13:15:59,856 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.20s at 55.28 KB/s
2017-03-28 13:15:59,856 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000007848 size 12281 bytes.
2017-03-28 13:16:00,041 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 7846
2017-03-28 13:16:00,041 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/root/hadoop/hadoop-2.6.5/name1/current/fsimage_0000000000000007486, cpktTxId=0000000000000007486)
2017-03-28 13:16:00,041 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/root/hadoop/hadoop-2.6.5/name2/current/fsimage_0000000000000007486, cpktTxId=0000000000000007486)
2017-03-28 13:43:11,290 INFO logs: Aliases are enabled
1. On the web UI at port 4040 (possibly 404x) you can see exactly which task and which executor it is stuck on.
2. Try increasing this parameter: spark.shuffle.memoryFraction
* Parameter description: this parameter sets the fraction of executor memory a task can use for aggregation after pulling the output of the previous stage's tasks during a shuffle. The default is 0.2, i.e. by default only 20% of executor memory is available for this work. If a shuffle's aggregation exceeds that 20% limit, the excess data is spilled to disk files, which severely degrades performance.
* Tuning advice: if your Spark job does little RDD persistence but a lot of shuffling, lower the memory fraction for persistence and raise the fraction for shuffles, so that large shuffles don't run out of memory and get forced to spill to disk. Conversely, if the job runs slowly due to frequent GC, it means tasks don't have enough memory for user code, in which case you should lower this parameter's value instead.
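For reference, these fractions can be passed straight on the spark-submit command line. A sketch with assumed values, not a tested configuration — and note that on Spark 2.x (which the poster is running), spark.shuffle.memoryFraction is a legacy setting that only takes effect together with spark.memory.useLegacyMode=true; the unified knob in 2.x is spark.memory.fraction:

```shell
# Legacy (Spark 1.x) memory fractions; on Spark 2.x they require
# spark.memory.useLegacyMode=true, otherwise tune spark.memory.fraction instead.
--conf spark.memory.useLegacyMode=true \
--conf spark.shuffle.memoryFraction=0.4 \
--conf spark.storage.memoryFraction=0.3
```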
.set("spark.default.parallelism", "20")
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.set("spark.shuffle.consolidateFiles", "true")
.set("spark.reducer.maxSizeInFlight", "100m")
.set("spark.shuffle.file.buffer", "100k")
.set("spark.shuffle.io.maxRetries", "10")
.set("spark.shuffle.io.retryWait", "10s");
I set these parameters, and after adding memory the job does finish, but it is painfully, unbearably slow. Please advise further.
num-executors 100 -- decrease this
spark.shuffle.memoryFraction -- increase this
I don't know where exactly it's slow, so I can't give you concrete tuning advice. Are you using hash shuffle? consolidateFiles is a hash-shuffle-only parameter; try switching to sort shuffle instead. In general, the slow part is the shuffle.
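If you are on Spark 1.x, the shuffle manager can be forced explicitly as sketched below. Note, though, that sort-based shuffle has been the default since Spark 1.2, and in Spark 2.x (the poster's 2.1.0) the hash shuffle manager was removed entirely, so consolidateFiles is ignored there:

```shell
# Spark 1.x only: force sort-based shuffle (already the default since 1.2;
# in Spark 2.x it is the only implementation and this flag is unnecessary).
--conf spark.shuffle.manager=sort
```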
https://www.zhihu.com/question/57772280?guide=1
I looked at the UI screenshot, and it doesn't seem shuffle-related; there's no data skew. Could it simply be that the data volume is large and resources are insufficient?
You need to work out which stage it's stuck on first; only then can you analyze which part of the code is inefficient.
--driver-memory 6g \
--executor-memory 6g \
--executor-cores 8 \
100 executors, each with 6 GB of memory and 8 CPU cores -- how much memory and how many CPUs do you think that adds up to?
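For a cluster like the one described (6 workers: four with ~6 GB usable for Spark, two VMs with ~1 GB), a submit sized to the actual hardware might look like the sketch below. The numbers are assumptions for illustration, not a tested configuration:

```shell
# At most one executor per worker on this cluster; keep executor memory
# below the 6 GB per-node cap and request only cores the nodes can offer.
/root/spark-2.1.0-bin-hadoop2.6/bin/spark-submit \
  --class com.sirc.zwz.CSRJava.ChangeDataStruction.SCSR \
  --num-executors 6 \
  --driver-memory 2g \
  --executor-memory 4g \
  --executor-cores 2 \
  /root/jars/SparkCSR_JAVA-0.0.1-SNAPSHOT.jar
```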