Analytics/Cluster/Hadoop/Administration

From Wikitech

NameNodes

New NameNode Installation

Our Hadoop NameNodes are R420 boxes. It was difficult (if not impossible) to create a suitable partman recipe for the partition layout we wanted, so the NameNode partitions were created manually during installation.

These nodes have 4 disks. Since we are mostly concerned with the reliability of these nodes, the 4 disks were assembled into a single software RAID 1 array:

$ mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jan  7 21:10:18 2015
     Raid Level : raid1
     Array Size : 2930132800 (2794.39 GiB 3000.46 GB)
  Used Dev Size : 2930132800 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Jan 14 21:42:20 2015
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           Name : analytics1001:0  (local to host analytics1001)
           UUID : d1e971cb:3e615ace:8089f89e:280cdbb3
         Events : 100

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2

An LVM volume group was then created on md0. Two logical volumes were added: one for the root filesystem (/) and one for the Hadoop NameNode data partition (/var/lib/hadoop/name).

$ cat /etc/fstab
/dev/mapper/analytics--vg-root /               ext4    errors=remount-ro 0       1
/dev/mapper/analytics--vg-namenode /var/lib/hadoop/name ext3    noatime         0       2

High Availability

We don't (yet) use automatic failover between active and standby NameNodes. This means that upon start, all NameNode processes will put themselves into standby mode. If you ever need to restart a NameNode, you'll have to manually promote one of them to active before HDFS can be used.

Note that you need to use the logical name of the NameNode in hdfs haadmin commands, not the hostname. The puppet-cdh module uses the fqdn of the node with dots replaced with dashes as the logical NameNode names.
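
You can check which state each NameNode is currently in with hdfs haadmin -getServiceState. A quick sketch, assuming the analytics100[12] NameNodes and their dash-separated logical names:

 sudo -u hdfs /usr/bin/hdfs haadmin -getServiceState analytics1001-eqiad-wmnet
 sudo -u hdfs /usr/bin/hdfs haadmin -getServiceState analytics1002-eqiad-wmnet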

Transition to Active

If all of your NameNodes are currently in standby state, you can choose one to transition to active:

 sudo -u hdfs /usr/bin/hdfs haadmin -transitionToActive analytics1001-eqiad-wmnet

Manual Failover

To identify the current NameNode machines, read the puppet configs (analytics100[12] at the moment of writing). If you want to move the active status to a different NameNode, you can force a manual failover:

 sudo -u hdfs /usr/bin/hdfs haadmin -failover analytics1001-eqiad-wmnet analytics1002-eqiad-wmnet

(That command assumes that analytics1001-eqiad-wmnet is currently active and should become standby, while analytics1002-eqiad-wmnet is standby and should become active.)

Migrating to new HA NameNodes

This section describes how to make an existing cluster use new hardware for new NameNodes. This requires a full cluster restart.

Put new NameNode(s) into 'unknown standby' mode

HDFS HA does not allow for hdfs-site.xml to specify more than two dfs.ha.namenodes at a time. HA is intended to only work with a single standby NameNode.

An unknown standby NameNode (I just made this term up!) is a standby NameNode that knows about the active master and the JournalNodes, but that is not known by the rest of the cluster. That is, it will be configured to know how to read edits from the JournalNodes, but it will not be a fully functioning standby NameNode. You will not be able to promote it to active. Configuring your new NameNodes as unknown standbys allows them to sync their name data from the JournalNodes before shutting down the cluster and configuring them as the new official NameNodes.

Configure the new NameNodes exactly as you would a normal NameNode, but make sure that the following properties only reference the current active NameNode and the hostname of the new NameNode. In this example, let's say you have NameNodes nn1 and nn2 currently operating, and you want to migrate them to nodes nn3 and nn4.


nn3 should have the following set in hdfs-site.xml

  <property>
    <name>dfs.ha.namenodes.cluster-name</name>
    <value>nn1,nn3</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.cluster-name.nn1</name>
    <value>nn1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster-name.nn3</name>
    <value>nn3:8020</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.cluster-name.nn1</name>
    <value>nn1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster-name.nn3</name>
    <value>nn3:50070</value>
  </property>

Note that there is no mention of nn2 in hdfs-site.xml on nn3. nn4 should be configured the same, except with reference to nn4 instead of nn3.

Once this is done, bootstrap nn3 and nn4 and start hadoop-hdfs-namenode on them. Do the following on both new nodes:

sudo -u hdfs hdfs namenode -bootstrapStandby
service hadoop-hdfs-namenode start

NOTE: If you are using the puppet-cdh module, this will be done for you. You should probably just conditionally configure your new NameNodes differently than the rest of your cluster and apply that for this step, e.g.:

        namenode_hosts => $::hostname ? {
            'nn3'   => ['nn1', 'nn3'],
            'nn4'   => ['nn1', 'nn4'],
            default => ['nn1', 'nn2'],
        }
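
Before moving on, it's worth verifying that the new unknown standbys came up in standby state. A minimal check, run locally on each new node (nn3 and nn4 are the hypothetical hostnames/logical names from the example above):

 sudo -u hdfs hdfs haadmin -getServiceState nn3   # on nn3
 sudo -u hdfs hdfs haadmin -getServiceState nn4   # on nn4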

Put all NameNodes in standby and shut down Hadoop.

Once your new unknown standby NameNodes are up and bootstrapped, transition your active NameNode to standby so that all writes to HDFS stop. At this point all 4 NameNodes will be in sync. Then shut down the whole cluster:

sudo -u hdfs hdfs haadmin -transitionToStandby nn1
# Do the following on every Hadoop node.
# Since you will be changing global configs, you should also
# shutdown any 3rd party services too (e.g. Hive, Oozie, etc.).
# Anything that has a reference to the old NameNodes should
# be shut down.

shutdown_service() {
    test -f /etc/init.d/$1 && echo "Stopping $1" && service $1 stop
}

shutdown_hadoop() {
    shutdown_service hue
    shutdown_service oozie
    shutdown_service hive-server2
    shutdown_service hive-metastore
    shutdown_service hadoop-yarn-resourcemanager
    shutdown_service hadoop-hdfs-namenode
    shutdown_service hadoop-hdfs-httpfs
    shutdown_service hadoop-mapreduce-historyserver
    shutdown_service hadoop-hdfs-journalnode
    shutdown_service hadoop-yarn-nodemanager
    shutdown_service hadoop-hdfs-datanode
}

shutdown_hadoop

Reconfigure your cluster with the new NameNodes and start everything back up.

Next, edit hdfs-site.xml everywhere (and anywhere else that mentions nn1 or nn2) and replace those references with nn3 and nn4. If you are moving YARN services as well, now is a good time to reconfigure them with the new hostnames too.
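
A quick way to double-check that no stale references remain (a sketch; adjust the path if your config directory differs):

 grep -rl -e nn1 -e nn2 /etc/hadoop/conf/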

Restart your JournalNodes with the new configs first. Once that is done, you can start all of your cluster services back up in any order. Once everything is back up, transition your new primary NameNode to active:

sudo -u hdfs hdfs haadmin -transitionToActive nn3

JournalNodes

JournalNodes should be provisioned in odd numbers of 3 or greater. They should be balanced across rows for greater resiliency against potential datacenter-related failures.

Adding a new JournalNode to a running HA Hadoop Cluster

See https://groups.google.com/a/cloudera.org/d/msg/cdh-user/M4OvE-oimOk/dyGfSoI_3TgJ

1. Create journal partition. This should be RAID 1. Existing JournalNodes use RAID 1 LVM ext4.

# create the RAID 1 array
md_name=/dev/md/2
mdadm --create ${md_name} --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Update mdadm.conf so the new array is reassembled on boot.
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

# Create the logical volume on ${md_name}
pvcreate ${md_name}
vgcreate $HOSTNAME-vg ${md_name}
lvcreate -L10G -n journalnode $HOSTNAME-vg

# create an ext4 filesystem
mkfs.ext4 /dev/mapper/$HOSTNAME--vg-journalnode
tune2fs -m 0 /dev/mapper/$HOSTNAME--vg-journalnode

# add the partition to fstab
echo "# Hadoop JournalNode partition
/dev/mapper/$HOSTNAME--vg-journalnode	/var/lib/hadoop/journal	ext4	defaults,noatime	0	2" >> /etc/fstab

# create the journal directory and mount it
mkdir -p /var/lib/hadoop/journal
mount /var/lib/hadoop/journal
chown hdfs:hdfs /var/lib/hadoop/journal
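
Before copying data over, it is worth confirming that the new partition is mounted where Hadoop expects it:

 df -h /var/lib/hadoop/journal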

2. Copy the journal directory from an existing JournalNode.

# shut down one of your existing JournalNodes
service hadoop-hdfs-journalnode stop

# copy the journal data over
rsync -avP /var/lib/hadoop/journal/ newhost.eqiad.wmnet:/var/lib/hadoop/journal/

# Start the JournalNode back up
service hadoop-hdfs-journalnode start

3. Puppetize and start the new JournalNode. Now puppetize the new node as a JournalNode. In role/analytics/hadoop.pp edit the role::analytics::hadoop::production class and add this node's fqdn to the $journalnode_hosts array.

   $journalnode_hosts        = [
       'analytics1011.eqiad.wmnet',  # Row A2
       'analytics1014.eqiad.wmnet',  # Row C7
       'oldnode.eqiad.wmnet',
       'newnode.eqiad.wmnet', # Brand new row!
   ]

Run puppet on the new JournalNode. This should install and start the JournalNode daemon.

4. Apply puppet on NameNodes and restart them. Once the new JournalNode is up and running, we need to reconfigure and restart the NameNodes to get them to recognize the new JournalNode. Note that we don't currently use hot failover between NameNodes, so if you want to keep Hadoop online while you do this, make sure you only restart a NameNode when it is in standby mode. These instructions will show how to do that, but you could just restart the NameNodes willy-nilly if you like.

Run puppet on each of your NameNodes. The new JournalNode will be added to the list in dfs.namenode.shared.edits.dir.
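
You can confirm that the change landed by looking for the property in the rendered config (a quick sketch; the path assumes the default /etc/hadoop/conf):

 grep -A 1 'dfs.namenode.shared.edits.dir' /etc/hadoop/conf/hdfs-site.xml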

Restart NameNode on any of your standby NameNodes first. Once that's done, check their dfshealth status to see if the new JournalNode registers. Once all of your standby NameNodes see the new JournalNode, go ahead and promote one of them to active so we can restart the primary active NameNode:

 sudo -u hdfs /usr/bin/hdfs haadmin -failover analytics1010-eqiad-wmnet analytics1009-eqiad-wmnet

Note that you need to use the logical name of the NameNode here, not the hostname. The puppet-cdh4 module uses the fqdn of the node with dots replaced with dashes as the logical NameNode names.

Once you've failed over to a different active NameNode, you may restart hadoop-hdfs-namenode on your usual active NameNode. Check the dfshealth status page again and be sure that the new JournalNode is up and writing transactions.

Cool! You'll probably want to promote your usual NameNode back to active:

 sudo -u hdfs /usr/bin/hdfs haadmin -failover analytics1009-eqiad-wmnet analytics1010-eqiad-wmnet 

Done!

Worker Nodes (DataNode & NodeManager)

New Worker Installation

Note: There is a bug in older versions of the Linux kernel affecting busy JVM processes. Our Ubuntu Trusty image ships with this bug, so make sure to apt-get upgrade when you provision a new node.
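
For example (a sketch; the reboot is needed to actually boot into the upgraded kernel):

 sudo apt-get update && sudo apt-get -y upgrade
 sudo reboot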

12 disk, no SSDs (analytics1011-analytics1020)

These nodes have 12 2TB disks. The first two disks have a 30GB RAID 1 /, and a 1GB RAID 1 swap. Partman will create these partitions when the node is installed. Partman is not fancy enough to do the rest, so you will have to do this manually.

Our desired layout looks like this:

Device Size Mount Point
/dev/md0 (sda1,sdb1 RAID 1) 30 G /
/dev/md1 (sda2,sdb2 RAID 1) 1G swap
/dev/md2 (sda3,sdb3 RAID1) 10G /var/lib/hadoop/journal (only if journalnode)
/dev/sda4 Fill Disk /var/lib/hadoop/data/a
/dev/sdb4 Fill Disk /var/lib/hadoop/data/b
/dev/sdc1 Fill Disk /var/lib/hadoop/data/c
/dev/sdd1 Fill Disk /var/lib/hadoop/data/d
/dev/sde1 Fill Disk /var/lib/hadoop/data/e
/dev/sdf1 Fill Disk /var/lib/hadoop/data/f
/dev/sdg1 Fill Disk /var/lib/hadoop/data/g
/dev/sdh1 Fill Disk /var/lib/hadoop/data/h
/dev/sdi1 Fill Disk /var/lib/hadoop/data/i
/dev/sdj1 Fill Disk /var/lib/hadoop/data/j
/dev/sdk1 Fill Disk /var/lib/hadoop/data/k
/dev/sdl1 Fill Disk /var/lib/hadoop/data/l

Note that the JournalNode partition won't be used on all of these nodes. We set up a small JournalNode partition to ease installation and movement of JournalNodes onto any worker node.

Copy/pasting the following should do everything necessary to set up the JournalNode and DataNode partitions.

#!/bin/bash

# Delete partition 3 and 4 if you have them left over from a previous installation.
for disk in /dev/sd{a,b}; do
fdisk $disk <<EOF
d
3
d
4
w
EOF
done

# Delete DataNode partitions if leftover from previous installation.
for disk in /dev/sd{c,d,e,f,g,h,i,j,k,l}; do
fdisk $disk <<EOF
d
1
w
EOF
done

# Create partition 3 for JournalNode data, and partition 4 spanning the rest
# of the disk for DataNode data.
for disk in /dev/sd{a,b}; do
fdisk $disk <<EOF
n
p
3

+10G
t
3
fd
n
p
4


w
EOF
done


# Create a single partition spanning the full disk on each of the remaining 10 disks.
for disk in /dev/sd{c,d,e,f,g,h,i,j,k,l}; do
fdisk $disk <<EOF
n
p
1


w
EOF
done

# run partprobe to refresh partition table
partprobe


# Create mirrored RAID 1 on sda3 and sdb3 for JournalNode data.
md_name=/dev/md/2
mdadm --create ${md_name} --level 1 --raid-devices=2 /dev/sda3 /dev/sdb3 <<EOF
y
EOF
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

# Now make ext4 filesystems everywhere
mkfs.ext4 ${md_name} &
for disk_letter in a b c d e f g h i j k l; do
    if [ "${disk_letter}" = 'a' -o  "${disk_letter}" = 'b' ]; then
        partition_number=4
    else
        partition_number=1
    fi
    partition="/dev/sd${disk_letter}${partition_number}"

    mkfs.ext4 $partition &
done

### IMPORTANT!
# Wait for all the above ext4 filesystems to be formatted
# before running the following loop.
#

journal_directory=/var/lib/hadoop/journal
tune2fs -m 0 ${md_name}
mkdir -pv $journal_directory

# use the partition's UUID in fstab, in case mdadm's name for this changes on boot.
uuid=$(blkid -o value ${md_name} | head -n 1)
grep -q $journal_directory /etc/fstab || echo -e "# Hadoop JournalNode partition (was originally ${md_name})\nUUID=${uuid}\t${journal_directory}\text4\tdefaults,noatime\t0\t2" | tee -a /etc/fstab

data_directory=/var/lib/hadoop/data
for disk_letter in a b c d e f g h i j k l; do
    if [ "${disk_letter}" = 'a' -o  "${disk_letter}" = 'b' ]; then
        partition_number=4
    else
        partition_number=1
    fi
    partition="/dev/sd${disk_letter}${partition_number}"
    mount_point="${data_directory}/${disk_letter}"

    # Don't reserve any blocks for OS on these partitions.
    tune2fs -m 0 $partition

    # Make the mount point.
    mkdir -pv $mount_point
    grep -q $mount_point /etc/fstab || echo -e "# Hadoop DataNode partition ${disk_letter}\n${partition}\t${mount_point}\text4\tdefaults,noatime\t0\t2" | tee -a /etc/fstab

    mount -v $mount_point
done
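
Once the script has finished, a quick sanity check that all the Hadoop partitions are mounted (12 data partitions plus the journal partition):

 df -h | grep /var/lib/hadoop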


12 disk, 2 flex bay drives (analytics1028-analytics1059)

These nodes come with 2 x 2.5" drives on which the OS and JournalNode partitions are installed. This leaves all of the space on the 12 4TB HDDs for DataNode use.


Device Size Mount Point
/dev/mapper/analytics1029--vg-root (LVM on /dev/sda5 Hardware RAID 1 on 2 flex bays) 30 G /
/dev/mapper/analytics1029--vg-swap_1 (LVM on /dev/sda1 Hardware RAID 1 on 2 flex bays) 1G swap
/dev/mapper/analytics1029--vg-journalnode (LVM on /dev/sda1 Hardware RAID 1 on 2 flex bays) 10G /var/lib/hadoop/journal
/dev/sdb1 Fill Disk /var/lib/hadoop/data/a
/dev/sdc1 Fill Disk /var/lib/hadoop/data/b
/dev/sdd1 Fill Disk /var/lib/hadoop/data/c
/dev/sde1 Fill Disk /var/lib/hadoop/data/d
/dev/sdf1 Fill Disk /var/lib/hadoop/data/e
/dev/sdg1 Fill Disk /var/lib/hadoop/data/f
/dev/sdh1 Fill Disk /var/lib/hadoop/data/g
/dev/sdi1 Fill Disk /var/lib/hadoop/data/h
/dev/sdj1 Fill Disk /var/lib/hadoop/data/i
/dev/sdk1 Fill Disk /var/lib/hadoop/data/j
/dev/sdl1 Fill Disk /var/lib/hadoop/data/k
/dev/sdm1 Fill Disk /var/lib/hadoop/data/l

The analytics-flex.cfg partman recipe will have created the root and swap logical volumes. We need to create the JournalNode and DataNode partitions manually. Copy/pasting the following commands should do this for you.

# Create a logical volume for JournalNode data.
# There should only be one VG, look up its name:
vgname=$(vgdisplay -C --noheadings -o vg_name | head -n 1 | tr -d ' ')
lvcreate -n journalnode -L 10G $vgname

# make an ext4 filesystem
mkfs.ext4 /dev/$vgname/journalnode

# Don't reserve any blocks for OS on this partition.
tune2fs -m 0 /dev/$vgname/journalnode

mount_point=/var/lib/hadoop/journal
mkdir -pv $mount_point
grep -q $mount_point /etc/fstab || echo -e "# Hadoop JournalNode partition\n/dev/$vgname/journalnode\t${mount_point}\text4\tdefaults,noatime\t0\t2" | tee -a /etc/fstab

mount -v $mount_point


# Now make ext4 filesystems on each DataNode partition.
# These disks are larger than 2TB, so we must use
# parted to create a GUID partition table on them.
for disk_letter in b c d e f g h i j k l m; do
    disk=/dev/sd${disk_letter}
    parted ${disk} --script mklabel gpt
    parted ${disk} --script mkpart primary ext4 0% 100%

    partition=${disk}1
    mkfs.ext4 $partition &
done

### IMPORTANT!
# Wait for all the above ext4 filesystems to be formatted
# before running the following loop.
#
data_directory=/var/lib/hadoop/data
for disk_letter in b c d e f g h i j k l m; do
    partition_number=1
    partition="/dev/sd${disk_letter}${partition_number}"
    mount_point="${data_directory}/${disk_letter}"
    # Don't reserve any blocks for OS on these partitions.
    tune2fs -m 0 $partition

    # Make the mount point.
    mkdir -pv $mount_point
    grep -q $mount_point /etc/fstab || echo -e "# Hadoop DataNode partition ${disk_letter}\n${partition}\t${mount_point}\text4\tdefaults,noatime\t0\t2" | tee -a /etc/fstab
 
    mount -v $mount_point
done

Decommissioning

To decommission a Hadoop worker node, you will need to edit hosts.exclude on all of the NameNodes, and then tell both HDFS and YARN to refresh the list of nodes.

Edit /etc/hadoop/conf/hosts.exclude on all NameNodes (analytics1001 and analytics1002 as of 2015-07) and add the FQDN of each node you intend to decommission, one hostname per line. NOTE: YARN and HDFS both use this file to exclude nodes. It seems that HDFS expects hostnames, and YARN expects FQDNs. To be safe, you can put both in this file, one per line. E.g. if you wanted to decommission analytics1012:

 analytics1012.eqiad.wmnet
 analytics1012

Once done, run the hdfs dfsadmin -refreshNodes command against each NameNode FS URI:

 sudo -u hdfs hdfs dfsadmin -fs hdfs://analytics1001.eqiad.wmnet:8020 -refreshNodes
 sudo -u hdfs hdfs dfsadmin -fs hdfs://analytics1002.eqiad.wmnet:8020 -refreshNodes

Run this on each ResourceManager host:

 sudo -u hdfs yarn rmadmin -refreshNodes

Now check both the YARN and HDFS web UIs to double check that your node is listed as decommissioning for HDFS, and no longer listed among the active nodes for YARN.
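
If you prefer the command line over the web UIs, roughly equivalent checks are (a sketch; run from any Hadoop client node):

 sudo -u hdfs hdfs dfsadmin -report | grep -B 2 -i decommission
 yarn node -list -all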

Balancing HDFS

Over time, some DataNodes may have almost all of their HDFS disk space filled up, while others have lots of free HDFS disk space. hdfs balancer can be used to re-balance HDFS.

In March 2015 ~0.8TB/day needed to be moved to keep HDFS balanced. (On 2015-03-13T04:40, a balancing run ended and HDFS was balanced. And on 2015-03-15T09:10 already 1.83TB needed to get balanced again)

hdfs balancer is now run regularly as a cron job, so hopefully you won't have to do this manually.

Checking balanced-ness on HDFS

To see how balanced or unbalanced HDFS is, you can run sudo -u hdfs hdfs dfsadmin -report on stat1002. This command outputs DFS Used% per DataNode. If that number is equal for each DataNode, disk space utilization for HDFS is proportionally alike across the whole cluster. If the DFS Used% numbers do not align, the cluster is somewhat unbalanced; by default, hdfs balancer considers a cluster balanced once each DataNode is within 10 percentage points (in terms of DFS Used%) of the total DFS Used%.

Here's a pipeline that'll format a report for you:

sudo -u hdfs hdfs dfsadmin -report | cat <(echo "Name: Total") - | grep '^\(Name\|Total\|DFS Used\)' | tr '\n' '\t' | sed -e 's/\(Name\)/\n\1/g' | sort --field-separator=: --key=5,5n

Name: Total     DFS Used: 479802829437520 (436.38 TB)   DFS Used%: 56.48%
Name: 10.64.53.16:50010 (analytics1037.eqiad.wmnet)     DFS Used: 24728499759105 (22.49 TB)     DFS Used%: 52.34%
Name: 10.64.53.15:50010 (analytics1036.eqiad.wmnet)     DFS Used: 24795956362795 (22.55 TB)     DFS Used%: 52.48%
Name: 10.64.36.130:50010 (analytics1030.eqiad.wmnet)    DFS Used: 24824351783406 (22.58 TB)     DFS Used%: 52.54%
Name: 10.64.53.20:50010 (analytics1041.eqiad.wmnet)     DFS Used: 24841574653962 (22.59 TB)     DFS Used%: 52.58%
Name: 10.64.53.19:50010 (analytics1040.eqiad.wmnet)     DFS Used: 24904078674773 (22.65 TB)     DFS Used%: 52.71%
Name: 10.64.36.128:50010 (analytics1028.eqiad.wmnet)    DFS Used: 24996433223579 (22.73 TB)     DFS Used%: 52.90%
Name: 10.64.36.129:50010 (analytics1029.eqiad.wmnet)    DFS Used: 25038245415956 (22.77 TB)     DFS Used%: 52.99%
Name: 10.64.53.18:50010 (analytics1039.eqiad.wmnet)     DFS Used: 25078144032077 (22.81 TB)     DFS Used%: 53.08%
Name: 10.64.53.17:50010 (analytics1038.eqiad.wmnet)     DFS Used: 25086314642814 (22.82 TB)     DFS Used%: 53.10%
Name: 10.64.53.14:50010 (analytics1035.eqiad.wmnet)     DFS Used: 25096872644001 (22.83 TB)     DFS Used%: 53.12%
Name: 10.64.36.131:50010 (analytics1031.eqiad.wmnet)    DFS Used: 25162887940986 (22.89 TB)     DFS Used%: 53.26%
Name: 10.64.36.132:50010 (analytics1032.eqiad.wmnet)    DFS Used: 25579248652515 (23.26 TB)     DFS Used%: 54.14%
Name: 10.64.36.134:50010 (analytics1034.eqiad.wmnet)    DFS Used: 25952623444683 (23.60 TB)     DFS Used%: 54.93%
Name: 10.64.36.133:50010 (analytics1033.eqiad.wmnet)    DFS Used: 26577437902292 (24.17 TB)     DFS Used%: 56.25%
Name: 10.64.53.11:50010 (analytics1019.eqiad.wmnet)     DFS Used: 15823398196956 (14.39 TB)     DFS Used%: 67.21%
Name: 10.64.53.12:50010 (analytics1020.eqiad.wmnet)     DFS Used: 15834595307796 (14.40 TB)     DFS Used%: 67.25%
Name: 10.64.36.115:50010 (analytics1015.eqiad.wmnet)    DFS Used: 15868752703281 (14.43 TB)     DFS Used%: 67.40%
Name: 10.64.36.117:50010 (analytics1017.eqiad.wmnet)    DFS Used: 15895973328317 (14.46 TB)     DFS Used%: 67.52%
Name: 10.64.36.116:50010 (analytics1016.eqiad.wmnet)    DFS Used: 15898802110703 (14.46 TB)     DFS Used%: 67.53%
Name: 10.64.36.114:50010 (analytics1014.eqiad.wmnet)    DFS Used: 15914265676257 (14.47 TB)     DFS Used%: 67.59%
Name: 10.64.5.13:50010 (analytics1013.eqiad.wmnet)      DFS Used: 15944320425203 (14.50 TB)     DFS Used%: 67.72%
Name: 10.64.5.11:50010 (analytics1011.eqiad.wmnet)      DFS Used: 15960093714655 (14.52 TB)     DFS Used%: 67.79%

Here, the total DFS Used% is 56.48%. Any DataNode that is more than 10 percentage points (per default) away from that is considered over- or under-utilized. So here, the bottom 8 DataNodes are above 66.48%, hence over-utilized, and need to have data moved onto the other nodes.

Note that the absolute DFS Used (no % at the end) varies greatly. It is already low for the bottom 8 DataNodes, even though they are over-utilized per the previous paragraph and need to move data onto other nodes. The reason is that the total disk space reserved for HDFS is lower on those nodes.

Re-balancing HDFS

To rebalance:

  1. Check that no other balancer is running. (Ask Ottomata. He would know. It's recommended to run at most 1 balancer per cluster)
  2. Decide which node to run the balancer on. (Several pages on the web recommend running the balancer on a node that does not host core Hadoop services. Hence (as of 2015-03-15) analytics1027 seems like a sensible choice.)
  3. Adjust the balancing bandwidth. (The default balancing bandwidth of 1 MiB/s is too little to keep up with the rate at which the cluster becomes unbalanced. When running in March 2015, 40 MiB/s was sufficient to decrease unbalancedness by about 7 TB/day without grinding the cluster to a halt. Higher values might or might not speed things up.)
    sudo -u hdfs hdfs dfsadmin -setBalancerBandwidth $((40*1048576))
  4. Start the balancing run by
    sudo -u hdfs hdfs balancer
    (Each block move that the balancer performs is permanent, so if you abort a balancer run, the balancing work done up to that point stays effective.)
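
By default the balancer uses the 10 percentage point threshold mentioned above. If you need a tighter balance, the balancer accepts a -threshold option (a sketch; smaller values mean longer balancing runs):

 sudo -u hdfs hdfs balancer -threshold 5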

HDFS Balancer Cron Job

analytics1027 has a cron job installed that should run the HDFS Balancer daily. However, we have noticed that the balancer will often run for much more than 24 hours, because it is never satisfied with the distribution of blocks around the cluster. Also, it seems that a long-running balancer starts to slow down and stops balancing well. If this happens, you should probably restart the balancer using the same cron job command.

First, check if a balancer is running that you want to restart:

 ps aux | grep '\-Dproc_balancer' | grep -v grep
 # hdfs     12327  0.7  3.2 1696740 265292 pts/5  Sl+  13:56   0:10 /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -Dproc_balancer -Xmx1000m -Dhadoop.log.dir=/usr/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.hdfs.server.balancer.Balancer


Login as hdfs user, kill the balancer, delete the balancer lock file, and then run the balancer cron job command:

 sudo -u hdfs -i
 kill 12327
 rm /tmp/hdfs-balancer.lock
 crontab -l | grep balancer
 # copy paste the balancer full command:
 (lockfile-check /tmp/hdfs-balancer && echo "$(date '+%y/%m/%d %H:%M:%S') WARN Not starting hdfs balancer, it is already running (or the lockfile exists)." >> /var/log/hadoop-hdfs/balancer.log) || (lockfile-create /tmp/hdfs-balancer && hdfs dfsadmin -setBalancerBandwidth $((40*1048576)) && /usr/bin/hdfs balancer 2>&1 >> /var/log/hadoop-hdfs/balancer.log; lockfile-remove /tmp/hdfs-balancer)

Fixing HDFS mount at /mnt/hdfs

/mnt/hdfs is mounted on some hosts as a way of accessing files in hdfs via the filesystem. There are several production monitoring and rsync jobs that rely on this mount, including rsync jobs that copy pagecount* data to http://dumps.wikimedia.org. If this data is not copied, the community notices.

/mnt/hdfs is mounted using fuse_hdfs, and is not very reliable. If you get hung filesystem commands on /mnt/hdfs, the following worked once to refresh it:

 sudo umount -f /mnt/hdfs
 sudo fusermount -uz /mnt/hdfs
 sudo mount /mnt/hdfs
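
Afterwards, you can verify that the mount responds (using a timeout in case it is still hung):

 timeout 10 ls /mnt/hdfs && echo '/mnt/hdfs looks OK'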