ÔØÈëÖС£¡£¡£ 'S bLog
 
ÔØÈëÖС£¡£¡£
 
ÔØÈëÖС£¡£¡£
ÔØÈëÖС£¡£¡£
ÔØÈëÖС£¡£¡£
ÔØÈëÖС£¡£¡£
ÔØÈëÖС£¡£¡£
 
ÌîдÄúµÄÓʼþµØÖ·£¬¶©ÔÄÎÒÃǵľ«²ÊÄÚÈÝ£º


 
Common Hadoop Problems and Solutions
[ 2011/4/4 18:23:00 | By: 梦翔儿 ]
 

1: Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out
Answer:
The job needs to open many files for processing. The system default limit is usually 1024 (check with ulimit -a), which is enough for normal use but far too low for such a program.
Fix:
Modify two files.
        /etc/security/limits.conf
vi /etc/security/limits.conf
Add:
* soft nofile 102400
* hard nofile 409600

    $cd /etc/pam.d/
    $sudo vi login
        Add the line:        session    required     /lib/security/pam_limits.so

A correction to the answer for problem 1:
This error is caused by the number of failed fetches of completed map output during the shuffle (reduce pre-processing) phase exceeding the limit, which defaults to 5. Many things can trigger it: flaky network connections, connection timeouts, poor bandwidth, blocked ports, and so on. When the cluster network is healthy, this error normally does not appear.

2: Too many fetch-failures
Answer:
This problem is mainly caused by incomplete connectivity between nodes.
1) Check /etc/hosts
   The local IP must map to the server's hostname.
   It must contain the IP + hostname of every server in the cluster.
2) Check .ssh/authorized_keys
   It must contain the public keys of all servers, including the node itself.

3: Processing is extremely slow: maps finish quickly, but reduce is very slow and repeatedly falls back to reduce=0%
Answer:
Apply the checks from problem 2, and then
modify conf/hadoop-env.sh to set export HADOOP_HEAPSIZE=4000.

4: The datanode starts, but cannot be accessed and cannot be shut down cleanly
When reformatting a new distributed filesystem, you must delete the local path configured as dfs.name.dir on the NameNode (where the NameNode persists the namespace and transaction log), and also delete the directories configured as dfs.data.dir on every DataNode (where block data is stored). In this configuration that means deleting /home/hadoop/NameData on the NameNode and /home/hadoop/DataNode1 and /home/hadoop/DataNode2 on the DataNodes. The reason is that when Hadoop formats a new distributed filesystem, each stored namespace carries the version of the time it was created (see the VERSION file under /home/hadoop/NameData/current, which records the version information). So before reformatting, it is best to delete the NameData directory first, and you must also delete each DataNode's dfs.data.dir, so that the version information recorded by the namenode and the datanodes matches.
Warning: deleting data is dangerous. Do not delete anything you are not sure about, and back up everything before you delete it!

5£ºjava.io.IOException: Could not obtain block: blk_194219614024901469_1100 file=/user/hive/warehouse/src_20090724_log/src_20090724_log
This mostly means a node has gone down or lost its connection.

6: java.lang.OutOfMemoryError: Java heap space
This exception clearly means the JVM does not have enough memory; increase the JVM heap size on all of the datanodes, e.g.:
java -Xms1024m -Xmx4096m
As a rule of thumb the JVM's maximum heap should be about half of total memory. Our machines have 8 GB, so we set 4096m; this may still not be the optimal value.

How to add a node to Hadoop
My actual procedure for adding a node:
1. Set up the environment on the new slave first: ssh, jdk, and copies of the relevant conf, lib, bin directories;
2. Add the new datanode's hostname to the namenode and the other datanodes in the cluster;
3. Add the new datanode's IP to conf/slaves on the master;
4. Restart the cluster and confirm the new datanode shows up;
5. Run bin/start-balancer.sh; this can take a long time.
Notes:
1. If you do not balance, the cluster will put all new data on the new node, which hurts MapReduce efficiency;
2. bin/start-balancer.sh can also be run with the parameter -threshold 5.
   threshold is the balancing threshold; the default is 10%. A lower value balances the nodes more evenly but takes longer.
3. The balancer can also run on a cluster with MapReduce jobs; the default dfs.balance.bandwidthPerSec is quite low, 1 MB/s. When no MapReduce jobs are running, you can raise that setting to speed up balancing (see the example below).
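A hypothetical hadoop-site.xml entry raising the balancer bandwidth from the 1 MB/s default to about 10 MB/s (the value is in bytes per second; the exact figure is just an example):
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>10485760</value>
</property>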

Other notes:
1. Make sure the firewall on the slave is turned off;
2. Make sure the new slave's IP has been added to /etc/hosts on the master and the other slaves, and conversely add the master's and the other slaves' IPs to the new slave's /etc/hosts.

Number of mappers and reducers
URL: http://wiki.apache.org/hadoop/HowManyMapsAndReduces
HowManyMapsAndReduces
Partitioning your job into maps and reduces
Picking the appropriate size for the tasks for your job can radically change the performance of Hadoop. Increasing the number of tasks increases the framework overhead, but increases load balancing and lowers the cost of failures. At one extreme is the 1 map/1 reduce case where nothing is distributed. The other extreme is to have 1,000,000 maps/ 1,000,000 reduces where the framework runs out of resources for the overhead.
Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files. Although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.
Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
The number of map tasks can also be increased manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
Number of Reduces
The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.reduce.tasks.maximum). At 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75 the faster nodes will finish their first round of reduces and launch a second round of reduces doing a much better job of load balancing.
Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.
The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.
The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).
My understanding:
The number of mappers depends on the input files and on the file splits: the upper bound of a split is dfs.block.size, the lower bound can be set with mapred.min.split.size, and ultimately the InputFormat decides (a sketch follows).
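A minimal sketch of these knobs with the old mapred API; the driver class name, input path and 256 MB minimum split size below are illustrative only, and the snippet belongs inside a job driver:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

JobConf conf = new JobConf(MyJob.class);                   // MyJob is a placeholder driver class
FileInputFormat.setInputPaths(conf, new Path("/data/input"));
conf.setNumMapTasks(100);                                  // only a hint to the InputFormat
conf.setLong("mapred.min.split.size", 256L * 1024 * 1024); // lower bound on split size, in bytes
// The actual number of maps is still decided by the InputFormat when it computes the splits.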

A good rule of thumb:
The right number of reduces seems to be 0.95 or 1.75 multiplied by (<no. of nodes> * mapred.tasktracker.reduce.tasks.maximum). Increasing the number of reduces increases the framework overhead, but improves load balancing and lowers the cost of failures.
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
  <description>The maximum number of reduce tasks that will be run
  simultaneously by a task tracker.
  </description>
</property>

Adding a new disk to a single node
1. Modify dfs.data.dir on the node that gets the new disk, separating the new and old data directories with a comma (see the sketch below);
2. Restart dfs.
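A hypothetical hadoop-site.xml entry for step 1; the directory paths are made up for illustration:
<property>
  <name>dfs.data.dir</name>
  <value>/data/disk1/hdfs/data,/data/disk2/hdfs/data</value>
  <description>Existing data directory plus the directory on the new disk, comma-separated.</description>
</property>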

ͬ²½hadoop ´úÂë
hadoop-env.sh
# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

Merging small HDFS files with a command
hadoop fs -getmerge <src> <dest>

Restarting reduce jobs
Introduced recovery of jobs when JobTracker restarts. This facility is off by default.
Introduced config parameters "mapred.jobtracker.restart.recover", "mapred.jobtracker.job.history.block.size", and "mapred.jobtracker.job.history.buffer.size".
Not yet verified.

Problems with I/O write operations
0-1246359584298, infoPort=50075, ipcPort=50020):Got exception while serving blk_-5911099437886836280_1292 to /172.16.100.165:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/
172.16.100.165:50010 remote=/172.16.100.165:50930]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:185)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:293)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:387)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:179)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:94)
        at java.lang.Thread.run(Thread.java:619)

It seems there are many reasons that it can timeout, the example given in
HADOOP-3831 is a slow reading client.

Solution: try setting dfs.datanode.socket.write.timeout=0 in hadoop-site.xml;
My understanding is that this issue should be fixed in Hadoop 0.19.1 so that
we should leave the standard timeout. However until then this can help
resolve issues like the one you're seeing.
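A hypothetical hadoop-site.xml entry for this workaround (0 disables the datanode write timeout entirely):
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>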

Decommissioning HDFS nodes
The dfsadmin help text in the current version does not explain this clearly (a bug has been filed). The correct procedure is:
1. Point dfs.hosts at the current slaves file, using the full path. Note that the hostnames in the list must be the full names, i.e. what uname -n returns.
2. Put the full names of the nodes to be decommissioned into another file, e.g. slaves.ex, and point the dfs.hosts.exclude parameter at the full path of that file.
3. Run bin/hadoop dfsadmin -refreshNodes
4. The web UI, or bin/hadoop dfsadmin -report, will show the node's state as "Decommission in progress" until all the data that needs re-replicating has been copied.
5. When it finishes, remove the decommissioned node from the slaves file (that is, the file dfs.hosts points to).
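As an illustration of steps 1 and 2, the two parameters might be set like this in hadoop-site.xml (the file paths are hypothetical):
<property>
  <name>dfs.hosts</name>
  <value>/home/hadoop/conf/slaves</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/hadoop/conf/slaves.ex</value>
</property>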

Incidentally, three other uses of the -refreshNodes command:
1. Adding an allowed node to the list (add its hostname to dfs.hosts);
2. Removing a node directly, without re-replicating its data first (remove its hostname from dfs.hosts);
3. Reversing a decommission: for a node that is listed in both the exclude file and dfs.hosts and is currently being decommissioned, this stops the decommission, i.e. turns a "Decommission in progress" node back to Normal (shown as "in service" on the web UI).

hadoop ѧϰ½è¼ø
1. ½â¾öhadoop OutOfMemoryErrorÎÊÌ⣺
<property>
   <name>mapred.child.java.opts</name>
   <value>-Xmx800M -server</value>
</property>
With the right JVM size in your hadoop-site.xml , you will have to copy this
to all mapred nodes and restart the cluster.
»òÕߣºhadoop jar jarfile [main class] -D mapred.child.java.opts=-Xmx800M

2. Hadoop java.io.IOException: Job failed! at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232) while indexing.
When using nutch 1.0, I get this error:
Hadoop java.io.IOException: Job failed! at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232) while indexing.
This one is also easy to fix:
Delete conf/log4j.properties, and you will then see a detailed error report.
In my case it was an out-of-memory error.
The fix was to run the main class org.apache.nutch.crawl.Crawl with the parameters -Xms64m -Xmx512m.
Your problem may be different, but once you can see the detailed error report, it becomes much easier to solve.

Using the distributed cache
It behaves like a global variable, but because the data is large it cannot go into the config file, so the distributed cache is used instead.
Usage (see "The Definitive Guide", p. 240):
1. On the command line: pass -files to ship the files you need to look up (local files, or HDFS files via an hdfs:// URI), or -archives for JAR, ZIP, tar, etc.
% hadoop jar job.jar MaxTemperatureByStationNameUsingDistributedCacheFile \
  -files input/ncdc/metadata/stations-fixed-width.txt input/ncdc/all output
2. In the program:
   public void configure(JobConf conf) {
      metadata = new NcdcStationMetadata();
      try {
        metadata.initialize(new File("stations-fixed-width.txt"));
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
   }
Another, indirect way to use it (apparently not present in hadoop-0.19.0):
call addCacheFile() or addCacheArchive() to add files,
and getLocalCacheFiles() or getLocalCacheArchives() to retrieve them.

Hadoop job web UI
There are web-based interfaces to both the JobTracker (MapReduce master) and NameNode (HDFS master) which display status pages about the state of the entire system. By default, these are located at http://job.tracker.addr:50030/ and http://name.node.addr:50070/.

Hadoop monitoring
Use nagios for alerting and ganglia for monitoring graphs.

status of 255 error
Error:
java.io.IOException: Task process exit with nonzero status of 255.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:424)

Cause:
Set mapred.jobtracker.retirejob.interval and mapred.userlog.retain.hours to a higher value. By default, their values are 24 hours. These might be the reason for the failure, though I'm not sure.

split size
FileInputFormat input splits (see "The Definitive Guide", p. 190):
mapred.min.split.size: default = 1, the smallest valid size in bytes for a file split.
mapred.max.split.size: default = Long.MAX_VALUE, the largest valid size.
dfs.block.size: default = 64 MB, set to 128 MB on our system.
If the minimum split size is set larger than the block size, splits become larger than a block (my guess is that data then has to be fetched from other nodes and several blocks merged into one split).
If the maximum split size is set smaller than the block size, blocks are split up further.

split size = max(minimumSize, min(maximumSize, blockSize));
where minimumSize < blockSize < maximumSize.
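For example, with the defaults above (minimumSize = 1, maximumSize = Long.MAX_VALUE) and a 128 MB block size:
split size = max(1, min(Long.MAX_VALUE, 128 MB)) = 128 MB, i.e. one split per block.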

sort by value
Hadoop does not provide a direct sort-by-value mechanism, because it would hurt MapReduce performance.
It can be achieved with a composite approach; for the full implementation see "The Definitive Guide", p. 250.
The basic idea (a sketch of the three classes follows this list):
1. Combine the key and value into a new composite key;
2. Override the partitioner so partitioning uses only the old key;
conf.setPartitionerClass(FirstPartitioner.class);
3. Define a custom key comparator that sorts by the old key first, then by the old value;
conf.setOutputKeyComparatorClass(KeyComparator.class);
4. Override the grouping comparator so grouping also uses only the old key;
conf.setOutputValueGroupingComparator(GroupComparator.class);
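A minimal sketch of those three classes for the old mapred API, assuming the composite key is simply a Text of the form "oldKey\toldValue" (the class names match the setters above, but the code itself is illustrative, not the book's):
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Partition by the old key only, so every value of one old key reaches the same reducer.
public static class FirstPartitioner implements Partitioner<Text, Text> {
  public void configure(JobConf job) {}
  public int getPartition(Text key, Text value, int numPartitions) {
    String oldKey = key.toString().split("\t", 2)[0];
    return (oldKey.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}

// Sort by the old key first, then by the old value, so values reach the reducer in order.
public static class KeyComparator extends WritableComparator {
  protected KeyComparator() { super(Text.class, true); }
  public int compare(WritableComparable a, WritableComparable b) {
    String[] ka = a.toString().split("\t", 2);
    String[] kb = b.toString().split("\t", 2);
    int cmp = ka[0].compareTo(kb[0]);
    return cmp != 0 ? cmp : ka[1].compareTo(kb[1]);
  }
}

// Group reduce() calls by the old key only, ignoring the value part of the composite key.
public static class GroupComparator extends WritableComparator {
  protected GroupComparator() { super(Text.class, true); }
  public int compare(WritableComparable a, WritableComparable b) {
    return a.toString().split("\t", 2)[0].compareTo(b.toString().split("\t", 2)[0]);
  }
}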

Handling small input files
Feeding a large number of small files to Hadoop as input lowers its efficiency.
There are three ways to handle small files by merging them:
1. Merge the small files into one SequenceFile to speed up MapReduce.
   See WholeFileInputFormat and SmallFilesToSequenceFileConverter, "The Definitive Guide", p. 194.
2. Use CombineFileInputFormat, which extends FileInputFormat; I have not tried this myself.
3. Use Hadoop archives (similar to packing files together) to reduce the metadata memory the small files consume on the namenode. (This method may not work, so it is not recommended.)
   Usage:
   Archive the /my/files directory and its subdirectories into files.har, placed under /my:
   bin/hadoop archive -archiveName files.har /my/files /my
   
   List the files in the archive:
   bin/hadoop fs -lsr har://my/files.har

skip bad records
JobConf conf = new JobConf(ProductMR.class);
conf.setJobName("ProductMR");
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(Product.class);
conf.setMapperClass(Map.class);
conf.setReducerClass(Reduce.class);
conf.setMapOutputCompressorClass(DefaultCodec.class);
conf.setInputFormat(SequenceFileInputFormat.class);
conf.setOutputFormat(SequenceFileOutputFormat.class);
String objpath = "abc1";
SequenceFileInputFormat.addInputPath(conf, new Path(objpath));
SkipBadRecords.setMapperMaxSkipRecords(conf, Long.MAX_VALUE); // allow an arbitrarily wide range of records to be skipped around a bad one
SkipBadRecords.setAttemptsToStartSkipping(conf, 0);           // enter skip mode starting with the first task attempt
SkipBadRecords.setSkipOutputPath(conf, new Path("data/product/skip/")); // where skipped records are written
String output = "abc";
SequenceFileOutputFormat.setOutputPath(conf, new Path(output));
JobClient.runJob(conf);

For skipping failed tasks, try: mapred.max.map.failures.percent
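As a sketch, the same knob can also be set on the job configuration; the 10 percent threshold below is just an example:
conf.set("mapred.max.map.failures.percent", "10"); // allow up to 10% of map tasks to fail without failing the job
// equivalently, via the old JobConf API:
conf.setMaxMapTaskFailuresPercent(10);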

Restarting a single datanode
If a datanode has a problem and, once fixed, needs to rejoin the cluster without restarting the whole cluster, run on that node:
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start tasktracker

reduce exceed 100%
"Reduce Task Progress shows > 100% when the total size of map outputs (for a
single reducer) is high "
Cause:
During the reduce-side merge, the progress check has an error margin that lets status exceed 100%, and the statistics code then fails with: java.lang.ArrayIndexOutOfBoundsException: 3
        at org.apache.hadoop.mapred.StatusHttpServer$TaskGraphServlet.getReduceAvarageProgresses(StatusHttpServer.java:228)
        at org.apache.hadoop.mapred.StatusHttpServer$TaskGraphServlet.doGet(StatusHttpServer.java:159)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:427)
        at org.mortbay.jetty.servlet.WebApplicationHandler.dispatch(WebApplicationHandler.java:475)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:567)
        at org.mortbay.http.HttpContext.handle(HttpContext.java:1565)
        at org.mortbay.jetty.servlet.WebApplicationContext.handle(WebApplicationContext.java:635)
        at org.mortbay.http.HttpContext.handle(HttpContext.java:1517)
        at org.mortbay.http.HttpServer.service(HttpServer.java:954)

JIRA:

counters
Three kinds of counters:
1. built-in counters: Map input bytes, Map output records...
2. enum counters
   Usage:
  enum Temperature {
    MISSING,
    MALFORMED
  }

reporter.incrCounter(Temperature.MISSING, 1)
   Output:
09/04/20 06:33:36 INFO mapred.JobClient:   Air Temperature Recor
09/04/20 06:33:36 INFO mapred.JobClient:     Malformed=3
09/04/20 06:33:36 INFO mapred.JobClient:     Missing=66136856
3. dynamic counters:
   Usage:
   reporter.incrCounter("TemperatureQuality", parser.getQuality(),1);
   
   Output:
09/04/20 06:33:36 INFO mapred.JobClient:   TemperatureQuality
09/04/20 06:33:36 INFO mapred.JobClient:     2=1246032
09/04/20 06:33:36 INFO mapred.JobClient:     1=973422173
09/04/20 06:33:36 INFO mapred.JobClient:     0=1

7: Namenode in safe mode
Solution:
bin/hadoop dfsadmin -safemode leave

8: java.net.NoRouteToHostException: No route to host
Solution:
sudo /etc/init.d/iptables stop

9: After changing the namenode, select queries in Hive still point to the old namenode address
This is because when you create a table, Hive stores the table's location (e.g. hdfs://ip:port/user/root/...) in the SDS and DBS tables of the metastore. So when a new cluster is brought up and the master gets a new IP, Hive's metastore still points to locations inside the old cluster. You could update the metastore with the new IP every time you bring up a cluster, but the easier and simpler solution is to use an elastic IP for the master.
So replace every occurrence of the old namenode address in the metastore with the current namenode address.


10: Your DataNode is started and you can create directories with bin/hadoop dfs -mkdir, but you get an error message when you try to put files into HDFS (e.g., when you run a command like bin/hadoop dfs -put).
Solution:
Go to the HDFS info web page (open your web browser and go to http://namenode:dfs_info_port, where namenode is the hostname of your NameNode and dfs_info_port is the port you chose for dfs.info.port; if you followed the QuickStart on your personal computer this URL will be http://localhost:50070). On that page, click the number showing how many DataNodes you have to see the list of DataNodes in your cluster.
If it says you have used 100% of your space, then you need to free up room on local disk(s) of the DataNode(s).
If you are on Windows then this number will not be accurate (there is some kind of bug either in Cygwin's df.exe or in Windows). Just free up some more space and you should be okay. On one Windows machine we tried the disk had 1GB free but Hadoop reported that it was 100% full. Then we freed up another 1GB and then it said that the disk was 99.15% full and started writing data into the HDFS again. We encountered this bug on Windows XP SP2.
11: Your DataNodes won't start, and you see something like this in logs/*datanode*:
Incompatible namespaceIDs in /tmp/hadoop-ross/dfs/data
Cause:
Your Hadoop namespaceID became corrupted. Unfortunately the easiest thing to do is to reformat the HDFS.
Solution:
You need to do something like this:
bin/stop-all.sh
rm -Rf /tmp/hadoop-your-username/*
bin/hadoop namenode -format
12: You can run Hadoop jobs written in Java (like the grep example), but your HadoopStreaming jobs (such as the Python example that fetches web page titles) won't work.
Cause:
You might have given only a relative path to the mapper and reducer programs. The tutorial originally specified relative paths, but absolute paths are required if you are running in a real cluster.
Solution:
Use absolute paths like this from the tutorial:
bin/hadoop jar contrib/hadoop-0.15.2-streaming.jar \
  -mapper  $HOME/proj/hadoop/multifetch.py         \
  -reducer $HOME/proj/hadoop/reducer.py            \
  -input   urls/*                                  \
  -output  titles
13: 2009-01-08 10:02:40,709 ERROR metadata.Hive (Hive.java:getPartitions(499)) - javax.jdo.JDODataStoreException: Required table missing : ""PARTITIONS"" in Catalog "" Schema "". JPOX requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "org.jpox.autoCreateTables"
Cause: org.jpox.fixedDatastore was set to true in hive-default.xml.
starting namenode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop.out
localhost: starting secondarynamenode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop.out
localhost: Exception in thread "main" java.lang.NullPointerException
localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
localhost:      at org.apache.hadoop.dfs.NameNode.getAddress(NameNode.java:116)
localhost:      at org.apache.hadoop.dfs.NameNode.getAddress(NameNode.java:120)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.initialize(SecondaryNameNode.java:124)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.<init>(SecondaryNameNode.java:108)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.main(SecondaryNameNode.java:460)
14: 09/08/31 18:25:45 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.11:50010
> 09/08/31 18:25:45 INFO hdfs.DFSClient: Abandoning block blk_-8575812198227241296_1001
> 09/08/31 18:25:51 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Bad connect ack with firstBadLink 192.168.1.16:50010
> 09/08/31 18:25:51 INFO hdfs.DFSClient: Abandoning block blk_-2932256218448902464_1001
> 09/08/31 18:25:57 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Bad connect ack with firstBadLink 192.168.1.11:50010
> 09/08/31 18:25:57 INFO hdfs.DFSClient: Abandoning block blk_-1014449966480421244_1001
> 09/08/31 18:26:03 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Bad connect ack with firstBadLink 192.168.1.16:50010
> 09/08/31 18:26:03 INFO hdfs.DFSClient: Abandoning block blk_7193173823538206978_1001
> 09/08/31 18:26:09 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable
to create new block.
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2731)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)
>
> 09/08/31 18:26:09 WARN hdfs.DFSClient: Error Recovery for block blk_7193173823538206978_1001
bad datanode[2] nodes == null
> 09/08/31 18:26:09 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/umer/8GB_input"
- Aborting...
> put: Bad connect ack with firstBadLink 192.168.1.16:50010


Solution:
I have resolved the issue:
What I did:

1) '/etc/init.d/iptables stop'  --> stopped the firewall
2) SELINUX=disabled in '/etc/selinux/config'  --> disabled SELinux
It worked for me after these two changes.
Fixing jline.ConsoleReader.readLine not working on Windows
In the main() function of CliDriver.java there is a call to reader.readLine, used to read standard input, but on Windows it always returns null. The reader is a jline.ConsoleReader instance, which makes debugging in Eclipse on Windows inconvenient.
We can replace it with java.util.Scanner, changing the original
while ((line=reader.readLine(curPrompt+"> ")) != null)
to:
Scanner sc = new Scanner(System.in);
while ((line=sc.nextLine()) != null)
Recompile and redeploy, and SQL statements can then be read from standard input normally.

Possible causes of the "does not have a scheme" error when debugging Hive in Eclipse on Windows
1. The "hive.metastore.local" setting in the Hive configuration file is false; change it to true, since this is a standalone setup.
2. The HIVE_HOME environment variable is not set, or is set incorrectly.
3. "does not have a scheme" most likely means hive-default.xml cannot be found. For the fix when Eclipse debugging of Hive cannot find hive-default.xml, see: http://bbs.hadoopor.com/thread-292-1-1.html
1. Chinese character encoding
    Chinese text is parsed correctly from URLs, yet what Hadoop prints is still garbled? We used to think Hadoop simply did not support Chinese, but after reading the source we found that Hadoop merely does not support outputting Chinese in GBK.
    Here is the relevant code in TextOutputFormat.class. Hadoop's default outputs all inherit from FileOutputFormat; its two subclasses are a binary-stream output and the text output, TextOutputFormat.
    public class TextOutputFormat<K, V> extends FileOutputFormat<K, V> {
  protected static class LineRecordWriter<K, V>
    implements RecordWriter<K, V> {
    private static final String utf8 = "UTF-8"; // the encoding is hard-coded to utf-8 here
    private static final byte[] newline;
    static {
      try {
        newline = "\n".getBytes(utf8);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + utf8 + " encoding");
      }
    }
...
    public LineRecordWriter(DataOutputStream out, String keyValueSeparator) {
      this.out = out;
      try {
        this.keyValueSeparator = keyValueSeparator.getBytes(utf8);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + utf8 + " encoding");
      }
    }
...
    private void writeObject(Object o) throws IOException {
      if (o instanceof Text) {
        Text to = (Text) o;
        out.write(to.getBytes(), 0, to.getLength()); // this also needs to change for gbk output
      } else {
        out.write(o.toString().getBytes(utf8));
      }
    }
...
}
    As you can see, Hadoop's default output is hard-coded to utf-8. So if the Chinese is decoded correctly, setting the Linux client's character set to utf-8 will display it properly, because Hadoop writes the Chinese out as utf-8.
    Since most databases define their fields in GBK, what if you want Hadoop to output Chinese in GBK for database compatibility?
    We can define a new class:
    public class GbkOutputFormat<K, V> extends FileOutputFormat<K, V> {
  protected static class LineRecordWriter<K, V>
    implements RecordWriter<K, V> {
    // simply change the encoding to gbk
    private static final String gbk = "gbk";
    private static final byte[] newline;
    static {
      try {
        newline = "\n".getBytes(gbk);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + gbk + " encoding");
      }
    }
...
    public LineRecordWriter(DataOutputStream out, String keyValueSeparator) {
      this.out = out;
      try {
        this.keyValueSeparator = keyValueSeparator.getBytes(gbk);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + gbk + " encoding");
      }
    }
...
    private void writeObject(Object o) throws IOException {
      // re-encode everything (Text stores UTF-8 bytes internally) via toString() as gbk
      out.write(o.toString().getBytes(gbk));
    }
...
}
    Then add conf1.setOutputFormat(GbkOutputFormat.class) to the MapReduce driver code,
    and Chinese will be output in GBK.

2. One time, while running a normally working MapReduce job, this error was thrown:

java.io.IOException: All datanodes xxx.xxx.xxx.xxx:xxx are bad. Aborting...
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2158)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
java.io.IOException: Could not get block locations. Aborting...
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
After investigation, the cause was that the Linux machine had too many open files. ulimit -n shows that the default open-file limit on Linux is 1024. Modify /etc/security/limits.conf and add a line such as "hadoop soft nofile 65535".

Then rerun the program (ideally after making the change on all datanodes), and the problem is solved.

3. After running for a while, Hadoop cannot be stopped with stop-all.sh; it reports
no tasktracker to stop, no datanode to stop
The cause is that when Hadoop stops, it looks up the mapred and dfs daemons by the process IDs recorded in pid files. By default those pid files live under /tmp, and Linux periodically (typically every month or about every 7 days) cleans out files in that directory. Once pid files such as hadoop-hadoop-jobtracker.pid and hadoop-hadoop-namenode.pid are gone, the stop scripts can no longer find the corresponding processes.
Setting export HADOOP_PID_DIR in the configuration (hadoop-env.sh) solves this problem.


Problem:
Incompatible namespaceIDs in /usr/local/hadoop/dfs/data: namenode namespaceID = 405233244966; datanode namespaceID = 33333244
Cause:
Every time hadoop namenode -format is run, a new namespaceID is generated for the NameNode, but the DataNode data under hadoop.tmp.dir still carries the previous namespaceID. Because the namespaceIDs no longer match, the DataNode cannot start. So simply delete the hadoop.tmp.dir directory before each hadoop namenode -format and startup will succeed. Note that this means deleting the local directory that hadoop.tmp.dir points to, not an HDFS directory.
Problem: Storage directory not exist
2010-02-09 21:37:53,203 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory D:\hadoop\run\dfs_name_dir does not exist.
2010-02-09 21:37:53,203 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory D:\hadoop\run\dfs_name_dir is in an inconsistent state: storage directory does not exist or is not accessible.
solution: The storage directory D:\hadoop\run\dfs_name_dir does not exist; just create it manually.
Problem: NameNode is not formatted
solution: HDFS has not been formatted yet; just run hadoop namenode -format and then start the daemons.

bin/hadoop jps throws the following exception:
Exception in thread "main" java.lang.NullPointerException
        at sun.jvmstat.perfdata.monitor.protocol.local.LocalVmManager.activeVms(LocalVmManager.java:127)
        at sun.jvmstat.perfdata.monitor.protocol.local.MonitoredHostProvider.activeVms(MonitoredHostProvider.java:133)
        at sun.tools.jps.Jps.main(Jps.java:45)
Cause:
The /tmp folder under the system root was deleted. Recreating /tmp fixes it.
The "unable to create log directory /tmp/..." error from bin/hive may have the same cause.

http://blog.csdn.net/zyj8170/archive/2010/11/26/6037934.aspx

 
 