How to Stop Hadoop Data Nodes
[ 2011/5/30 19:51:00 | By: 梦翔儿 ]
 
Before I can install MogileFS onto the cluster, I first need to prepare some compute nodes for it. Our cluster has more than 70 nodes, and I will use 6 of them to set up MogileFS, so the first step is to decommission those 6 nodes from Hadoop.

According to a thread on ServerFault, we first need to add the following property to conf/hdfs-site.xml:
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf.dist/dfs.hosts.exclude</value>
  <final>true</final>
</property>
The value points to a file listing the 6 nodes to be excluded. After that, run hadoop dfsadmin -refreshNodes to tell HDFS to decommission them: the NameNode will re-replicate the data held on those 6 nodes to the remaining nodes and then remove the 6 nodes from the cluster. We can monitor the progress on the web page http://hadoop_master_data_node/dfsnodelist.jsp?whatNodes=LIVE
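To make the steps concrete, here is a rough sketch of the whole sequence from the command line; the hostnames in the exclude file below are made-up placeholders, so substitute the real names of the 6 nodes and the path used on your cluster.

# Contents of /etc/hadoop/conf.dist/dfs.hosts.exclude -- one node per line
# (these hostnames are hypothetical examples):
node65.example.com
node66.example.com
node67.example.com
node68.example.com
node69.example.com
node70.example.com

# Ask the NameNode to re-read the exclude file and begin decommissioning:
hadoop dfsadmin -refreshNodes

# Besides the dfsnodelist.jsp page, progress can also be checked from the
# command line; nodes being drained should report a decommissioning status:
hadoop dfsadmin -report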

http://xinxiwang.wordpress.com/tag/mogilefs/

 
 
  • Tags: hadoop
