Error when running a MapReduce job
MapReduce Error:
Error: java.io.IOException: Failing write. Tried pipeline recovery 5 times without success.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1113)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:622)
Failed attempt (from the YARN web UI):
Attempt:  attempt_1501490104699_0091_r_000014_0
Progress: 100.00
State:    FAILED
Status:   reduce > reduce
Node:     hadoopserver16:8042 (logs)
Started:  Wed Aug 2 12:15:24 +0800 2017
Finished: Wed Aug 2 17:32:55 +0800 2017
Elapsed:  5hrs, 17mins, 31sec
Note:     Error: java.io.IOException: Failing write. Tried pipeline recovery 5 times without success.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1113)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:622)
java.io.IOException: Bad response ERROR for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762901 from datanode DatanodeInfoWithStorage[192.168.22.164:50010,DS-e4a0570e-6077-447e-8299-de701eb33a1b,DISK]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1002)
2017-08-03 04:47:23,860 WARN [DataStreamer for file /user/hive/warehouse/transinfo_db.db/gps_srcdata_orc_middle_timeid/_SCRATCH0.6625733093777507/supplier=huoyun/logdate=20160830/timeid=001_288/_temporary/1/_temporary/attempt_1501681898109_0001_r_000007_0/part-r-00007 block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762901] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762901 in pipeline DatanodeInfoWithStorage[192.168.22.163:50010,DS-47fcf3cc-f3db-4475-b98e-d08178cb97a8,DISK], DatanodeInfoWithStorage[192.168.22.164:50010,DS-e4a0570e-6077-447e-8299-de701eb33a1b,DISK], DatanodeInfoWithStorage[192.168.22.151:50010,DS-735335cb-ae15-4fdf-a056-f3944db05ffc,DISK]: bad datanode DatanodeInfoWithStorage[192.168.22.164:50010,DS-e4a0570e-6077-447e-8299-de701eb33a1b,DISK]
2017-08-03 05:06:07,338 WARN [ResponseProcessor for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762904] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762904
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2241)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:235)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:971)
2017-08-03 05:06:07,340 WARN [DataStreamer for file /user/hive/warehouse/transinfo_db.db/gps_srcdata_orc_middle_timeid/_SCRATCH0.6625733093777507/supplier=huoyun/logdate=20160830/timeid=001_288/_temporary/1/_temporary/attempt_1501681898109_0001_r_000007_0/part-r-00007 block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762904] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762904 in pipeline DatanodeInfoWithStorage[192.168.22.163:50010,DS-47fcf3cc-f3db-4475-b98e-d08178cb97a8,DISK], DatanodeInfoWithStorage[192.168.22.151:50010,DS-735335cb-ae15-4fdf-a056-f3944db05ffc,DISK]: bad datanode DatanodeInfoWithStorage[192.168.22.163:50010,DS-47fcf3cc-f3db-4475-b98e-d08178cb97a8,DISK]
2017-08-03 05:25:01,890 WARN [ResponseProcessor for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762933] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762933
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2241)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:235)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:971)
2017-08-03 05:25:01,891 WARN [DataStreamer for file /user/hive/warehouse/transinfo_db.db/gps_srcdata_orc_middle_timeid/_SCRATCH0.6625733093777507/supplier=huoyun/logdate=20160830/timeid=001_288/_temporary/1/_temporary/attempt_1501681898109_0001_r_000007_0/part-r-00007 block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762933] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762933 in pipeline DatanodeInfoWithStorage[192.168.22.151:50010,DS-735335cb-ae15-4fdf-a056-f3944db05ffc,DISK], DatanodeInfoWithStorage[192.168.22.152:50010,DS-7ec6ea23-21fd-4447-a066-af8a1b2fb967,DISK]: bad datanode DatanodeInfoWithStorage[192.168.22.151:50010,DS-735335cb-ae15-4fdf-a056-f3944db05ffc,DISK]
2017-08-03 05:43:40,302 WARN [ResponseProcessor for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762955] org.apache.hadoop.hdfs.DFSClient: Slow ReadProcessor read fields took 66624ms (threshold=30000ms); ack: seqno: -2 reply: 0 reply: 1 downstreamAckTimeNanos: 0, targets: [DatanodeInfoWithStorage[192.168.22.152:50010,DS-7ec6ea23-21fd-4447-a066-af8a1b2fb967,DISK], DatanodeInfoWithStorage[192.168.22.153:50010,DS-9e523c00-0b33-4cee-8ce9-0e3a5be949bb,DISK]]
2017-08-03 05:43:40,303 WARN [ResponseProcessor for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762955] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762955
java.io.IOException: Bad response ERROR for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762955 from datanode DatanodeInfoWithStorage[192.168.22.153:50010,DS-9e523c00-0b33-4cee-8ce9-0e3a5be949bb,DISK]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1002)
2017-08-03 05:43:40,304 WARN [DataStreamer for file /user/hive/warehouse/transinfo_db.db/gps_srcdata_orc_middle_timeid/_SCRATCH0.6625733093777507/supplier=huoyun/logdate=20160830/timeid=001_288/_temporary/1/_temporary/attempt_1501681898109_0001_r_000007_0/part-r-00007 block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762955] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762955 in pipeline DatanodeInfoWithStorage[192.168.22.152:50010,DS-7ec6ea23-21fd-4447-a066-af8a1b2fb967,DISK], DatanodeInfoWithStorage[192.168.22.153:50010,DS-9e523c00-0b33-4cee-8ce9-0e3a5be949bb,DISK]: bad datanode DatanodeInfoWithStorage[192.168.22.153:50010,DS-9e523c00-0b33-4cee-8ce9-0e3a5be949bb,DISK]
2017-08-03 06:26:37,039 WARN [ResponseProcessor for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762977] org.apache.hadoop.hdfs.DFSClient: Slow ReadProcessor read fields took 65551ms (threshold=30000ms); ack: seqno: -2 reply: 0 reply: 1 downstreamAckTimeNanos: 0, targets: [DatanodeInfoWithStorage[192.168.22.152:50010,DS-7ec6ea23-21fd-4447-a066-af8a1b2fb967,DISK], DatanodeInfoWithStorage[192.168.22.173:50010,DS-b3903c92-4732-42b4-a83f-546d260d137a,DISK]]
2017-08-03 06:26:37,040 WARN [ResponseProcessor for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762977] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762977
java.io.IOException: Bad response ERROR for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762977 from datanode DatanodeInfoWithStorage[192.168.22.173:50010,DS-b3903c92-4732-42b4-a83f-546d260d137a,DISK]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1002)
2017-08-03 06:26:37,045 WARN [DataStreamer for file /user/hive/warehouse/transinfo_db.db/gps_srcdata_orc_middle_timeid/_SCRATCH0.6625733093777507/supplier=huoyun/logdate=20160830/timeid=001_288/_temporary/1/_temporary/attempt_1501681898109_0001_r_000007_0/part-r-00007 block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762977] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3762977 in pipeline DatanodeInfoWithStorage[192.168.22.152:50010,DS-7ec6ea23-21fd-4447-a066-af8a1b2fb967,DISK], DatanodeInfoWithStorage[192.168.22.173:50010,DS-b3903c92-4732-42b4-a83f-546d260d137a,DISK]: bad datanode DatanodeInfoWithStorage[192.168.22.173:50010,DS-b3903c92-4732-42b4-a83f-546d260d137a,DISK]
2017-08-03 06:30:13,838 WARN [ResponseProcessor for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3763028] org.apache.hadoop.hdfs.DFSClient: Slow ReadProcessor read fields took 60247ms (threshold=30000ms); ack: seqno: -2 reply: 0 reply: 1 downstreamAckTimeNanos: 0, targets: [DatanodeInfoWithStorage[192.168.22.152:50010,DS-7ec6ea23-21fd-4447-a066-af8a1b2fb967,DISK], DatanodeInfoWithStorage[192.168.22.161:50010,DS-aca2eb5f-e9b9-45a3-af21-bb66c55b0f3a,DISK]]
2017-08-03 06:30:13,838 WARN [ResponseProcessor for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3763028] org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3763028
java.io.IOException: Bad response ERROR for block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3763028 from datanode DatanodeInfoWithStorage[192.168.22.161:50010,DS-aca2eb5f-e9b9-45a3-af21-bb66c55b0f3a,DISK]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:1002)
2017-08-03 06:30:13,839 WARN [DataStreamer for file /user/hive/warehouse/transinfo_db.db/gps_srcdata_orc_middle_timeid/_SCRATCH0.6625733093777507/supplier=huoyun/logdate=20160830/timeid=001_288/_temporary/1/_temporary/attempt_1501681898109_0001_r_000007_0/part-r-00007 block BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3763028] org.apache.hadoop.hdfs.DFSClient: Error recovering pipeline for writing BP-15450043-192.168.22.158-1464844718994:blk_1077470611_3763028. Already retried 5 times for the same packet.
2017-08-03 08:09:07,644 WARN [communication thread] org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: Error reading the stream java.io.IOException: No such process
It looks like you are hitting the limit on the maximum number of open file handles set in your system.
Is your application opening a lot of files at the same time? If it is expected to keep that many files open at once, you can increase the max-open-files limit on the Hadoop nodes by changing a few settings (run "ulimit -n <limit>" to update the open file
handles limit).
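A minimal sketch of how one might check and raise the limit on a worker node, assuming a Linux host and that the HDFS/YARN daemons run as a user named hdfs (the user name, the value 65536, and <pid> are illustrative placeholders, not taken from this cluster):

    # show the current per-process open-file limit for this shell
    ulimit -n
    # raise it for the current session (the soft limit can only be raised up to the hard limit)
    ulimit -n 65536
    # make it persistent by adding entries to /etc/security/limits.conf, e.g.
    #   hdfs  soft  nofile  65536
    #   hdfs  hard  nofile  65536
    # then log in again (or restart the DataNode/NodeManager processes) so the new limit takes effect
    # count how many file descriptors a running process actually holds (replace <pid>)
    ls /proc/<pid>/fd | wc -l

If the DataNodes themselves are the bottleneck, the hdfs-site.xml property dfs.datanode.max.transfer.threads, which caps the number of concurrent block-transfer threads on each DataNode, is commonly raised together with the OS file-handle limit.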