Fixing "running beyond virtual memory limits. Current usage: 35.5 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used"
1. On a cluster I had just set up at the office, I ran the wordcount example to check that everything worked, and it failed with the error below (on my own machine, running the same version, the same job completed without any error):
[root@S1PA124 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
14/08/20 09:51:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/08/20 09:51:35 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/08/20 09:51:36 INFO input.FileInputFormat: Total input paths to process : 1
14/08/20 09:51:36 INFO mapreduce.JobSubmitter: number of splits:1
14/08/20 09:51:36 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/08/20 09:51:36 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/08/20 09:51:36 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/08/20 09:51:37 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/08/20 09:51:37 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/08/20 09:51:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1408499127545_0001
14/08/20 09:51:37 INFO impl.YarnClientImpl: Submitted application application_1408499127545_0001 to ResourceManager at /0.0.0.0:8032
14/08/20 09:51:37 INFO mapreduce.Job: The url to track the job: http://S1PA124:8088/proxy/application_1408499127545_0001/
14/08/20 09:51:37 INFO mapreduce.Job: Running job: job_1408499127545_0001
14/08/20 09:51:44 INFO mapreduce.Job: Job job_1408499127545_0001 running in uber mode : false
14/08/20 09:51:44 INFO mapreduce.Job: map 0% reduce 0%
14/08/20 09:51:49 INFO mapreduce.Job: map 100% reduce 0%
14/08/20 09:51:54 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_0, Status : FAILED
Container [pid=26042,containerID=container_1408499127545_0001_01_000003] is running beyond virtual memory limits. Current usage: 35.5 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 26047 26042 26042 26042 (java) 36 3 17963216896 8801 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_0 3
|- 26042 25026 26042 26042 (bash) 0 0 65409024 276 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000003/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_0 3 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000003/stderr
Container killed on request. Exit code is 143
14/08/20 09:52:00 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_1, Status : FAILED
Container [pid=26111,containerID=container_1408499127545_0001_01_000004] is running beyond virtual memory limits. Current usage: 100.3 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000004 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 26116 26111 26111 26111 (java) 275 8 18016677888 25393 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_1 4
|- 26111 25026 26111 26111 (bash) 0 0 65409024 275 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_1 4 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000004/stderr
Container killed on request. Exit code is 143
14/08/20 09:52:06 INFO mapreduce.Job: Task Id : attempt_1408499127545_0001_r_000000_2, Status : FAILED
Container [pid=26185,containerID=container_1408499127545_0001_01_000005] is running beyond virtual memory limits. Current usage: 100.4 MB of 1 GB physical memory used; 16.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1408499127545_0001_01_000005 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 26190 26185 26185 26185 (java) 271 7 18025807872 25414 /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_2 5
|- 26185 25026 26185 26185 (bash) 0 0 65409024 276 /bin/bash -c /opt/lxx/jdk1.7.0_51/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.awt.headless=true -Djava.io.tmpdir=/root/install/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1408499127545_0001/container_1408499127545_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.58.22.221 10301 attempt_1408499127545_0001_r_000000_2 5 1>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005/stdout 2>/root/install/hadoop-2.2.0/logs/userlogs/application_1408499127545_0001/container_1408499127545_0001_01_000005/stderr
Container killed on request. Exit code is 143
14/08/20 09:52:13 INFO mapreduce.Job: map 100% reduce 100%
14/08/20 09:52:13 INFO mapreduce.Job: Job job_1408499127545_0001 failed with state FAILED due to: Task failed task_1408499127545_0001_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1
14/08/20 09:52:13 INFO mapreduce.Job: Counters: 32
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=80425
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=895
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=3
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=0
	Job Counters
		Failed reduce tasks=4
		Launched map tasks=1
		Launched reduce tasks=4
		Rack-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3082
		Total time spent by all reduces in occupied slots (ms)=11065
	Map-Reduce Framework
		Map input records=56
		Map output records=56
		Map output bytes=1023
		Map output materialized bytes=1141
		Input split bytes=96
		Combine input records=56
		Combine output records=56
		Spilled Records=56
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=25
		CPU time spent (ms)=680
		Physical memory (bytes) snapshot=253157376
		Virtual memory (bytes) snapshot=18103181312
		Total committed heap usage (bytes)=1011875840
	File Input Format Counters
		Bytes Read=799
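The part of the log that matters is the container diagnostic: each failed reduce attempt uses only 35-100 MB of its 1 GB physical-memory allowance, but the java child process has reserved about 16.8 GB of virtual address space, and the NodeManager caps a 1 GB container at 2.1 GB of virtual memory (physical allocation times the default yarn.nodemanager.vmem-pmem-ratio of 2.1), so it kills the container; exit code 143 is simply the SIGTERM it sends. For reference, the check is governed by two yarn-site.xml properties; the values below are the defaults and are my annotation, not part of the original cluster's configuration:

<!-- yarn-site.xml: properties behind the "virtual memory limits" check (defaults shown for reference) -->
<property>
  <!-- the NodeManager enforces the virtual-memory limit when this is true (the default) -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>true</value>
</property>
<property>
  <!-- allowed virtual memory per unit of physical memory; 1 GB x 2.1 = the "2.1 GB" in the error -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>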
2. The mapred-site.xml configuration file is as follows:

<configuration>
  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value>/root/install/hadoop/mapred/local</value>
  </property>
  <property>
    <name>mapreduce.cluster.system.dir</name>
    <value>/root/install/hadoop/mapred/system</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>S1PA124:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>S1PA124:19888</value>
  </property>
  <!--
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Djava.awt.headless=true</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Djava.awt.headless=true -Xmx1024m</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.admin-command-opts</name>
    <value>-Djava.awt.headless=true</value>
  </property>
  -->
</configuration>
3. Fix

I commented out the few lines in mapred-site.xml that set the JVM runtime memory options (the block shown commented out above), then restarted the cluster, and the problem went away. I haven't had time to investigate the exact root cause yet; roughly, it has to do with how the JVM on that machine allocates memory (see the notes below).
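A plausible reading of the failure (my interpretation, not something the original post verified): overriding mapred.child.java.opts with only -Djava.awt.headless=true replaces Hadoop's default task JVM options (which include -Xmx200m), so the task JVM falls back to its own default maximum heap. On a machine with a lot of RAM, a 64-bit JVM then reserves many gigabytes of virtual address space up front, which is what blows through the 2.1 GB virtual-memory ceiling even though only ~100 MB of physical memory is actually in use. If you need custom JVM options rather than removing them, the usual alternative is to keep an explicit -Xmx that fits inside the task container, along these lines (the property names are standard Hadoop 2.x ones; the sizes are illustrative assumptions, not values from the original cluster):

<!-- mapred-site.xml: keep an explicit heap that fits inside the task container -->
<property>
  <!-- container size for map tasks, in MB (illustrative) -->
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <!-- container size for reduce tasks, in MB (illustrative) -->
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <!-- keep the -Xmx so the JVM heap stays well below the container size -->
  <name>mapred.child.java.opts</name>
  <value>-Xmx800m -Djava.awt.headless=true</value>
</property>

The other workaround commonly suggested for this exact message is to raise yarn.nodemanager.vmem-pmem-ratio or set yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml (the two properties shown for reference after the log above) and restart the NodeManagers; that silences the check rather than shrinking the JVM's footprint.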