Apache Hadoop is a well-known, de facto framework for processing large data sets through distributed, parallel computing. YARN (Yet Another Resource Negotiator) allowed Hadoop to evolve from a simple MapReduce engine into a big data ecosystem that can run heterogeneous (MapReduce and non-MapReduce) applications concurrently. This has resulted in larger clusters with more workloads and users than ever before. Traditional recommendations encourage provisioning, isolation, and tuning to increase performance and avoid resource contention, but they result in highly underutilized clusters.
Tuning is an essential part of maintaining a Hadoop cluster. Cluster administrators must interpret system metrics and optimize for specific workloads (e.g., high CPU utilization versus high I/O). To know what to tune, Hadoop operators often rely on monitoring software for insight into cluster activity. Tools like Ganglia, Cloudera Manager, or Apache Ambari will give us near real-time statistics at the node level, and many provide after-the-fact reports for particular jobs.
Here we will take a quick look at an 11-point tuning checklist for admins and developers:
- Number of mappers: If we find that mappers are only running for a few seconds, try to use fewer mappers that run longer (a minute or two). Increase mapred.min.split.size to decrease the number of mappers allocated.
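For example, the minimum split size can be raised in mapred-site.xml; the 256 MB value below is only illustrative, and the right size depends on block size and mapper runtime:

```xml
<!-- mapred-site.xml: raise the minimum split size so fewer,
     longer-running mappers are created (256 MB is illustrative) -->
<property>
  <name>mapred.min.split.size</name>
  <value>268435456</value> <!-- in bytes -->
</property>
```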
- Mapper output: Mappers should output as little data as possible. Hence try filtering out records on the mapper side and use minimal data to form the map output key and map output value.
- Number of reducers: Reduce tasks should run for five minutes or so and each produce at least one HDFS block's worth of data.
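A cluster-wide default can be sketched in mapred-site.xml (jobs can also override it per job); the value 8 below is illustrative and should be sized so each reducer meets the guideline above:

```xml
<!-- mapred-site.xml: default number of reduce tasks per job
     (8 is illustrative; size it so each reducer runs ~5 minutes
     and writes at least one block's worth of output) -->
<property>
  <name>mapred.reduce.tasks</name>
  <value>8</value>
</property>
```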
- Combiners: We can specify a combiner to cut the amount of data shuffled between the mappers and the reducers.
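In a real job the combiner is often the reducer class itself, registered on the job configuration. The standalone sketch below (plain Java, not the Hadoop API; class and method names are hypothetical) only illustrates the idea: local aggregation of one mapper's (word, 1) output shrinks the data before the shuffle:

```java
import java.util.*;

public class CombinerSketch {
    // Local aggregation of one mapper's (word, 1) output,
    // analogous to what a word-count combiner does before the shuffle.
    static Map<String, Integer> combine(List<String> mapOutputKeys) {
        Map<String, Integer> partial = new HashMap<>();
        for (String key : mapOutputKeys) {
            partial.merge(key, 1, Integer::sum);
        }
        return partial;
    }

    public static void main(String[] args) {
        List<String> mapOutput = Arrays.asList("the", "cat", "the", "the", "cat", "sat");
        Map<String, Integer> combined = combine(mapOutput);
        // 6 map output records shrink to 3 partial sums,
        // so far less data crosses the network in the shuffle
        System.out.println(combined.size()); // prints 3
    }
}
```

Because the combiner may run zero, one, or many times, it must be commutative and associative (sums and counts qualify; averages do not, unless restructured).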
- Compression: We can enable map output compression to improve job execution time.
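Map output compression can be turned on in mapred-site.xml; the codec choice below is illustrative (Snappy trades a lower compression ratio for speed, which usually suits intermediate data):

```xml
<!-- mapred-site.xml: compress intermediate map output before the shuffle -->
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```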
- Custom serialization: If we use custom Writable objects or custom comparators, we can implement a RawComparator so that keys are compared in their serialized form, without being deserialized first.
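In Hadoop this is done by extending WritableComparator and overriding its byte-level compare method. The standalone sketch below (plain Java, hypothetical class and method names, not the Hadoop API) shows the core trick for 4-byte big-endian integers; it assumes non-negative values, where unsigned byte order matches numeric order:

```java
import java.nio.ByteBuffer;

public class RawIntComparator {
    // Serialize a non-negative int as 4 big-endian bytes.
    static byte[] serialize(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    // Compare two serialized ints byte by byte, without deserializing.
    // Valid for non-negative values only: big-endian unsigned byte
    // order then matches numeric order (sign handling is omitted here).
    static int compareRaw(byte[] b1, int s1, byte[] b2, int s2) {
        for (int i = 0; i < 4; i++) {
            int x = b1[s1 + i] & 0xff;
            int y = b2[s2 + i] & 0xff;
            if (x != y) return x < y ? -1 : 1;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(compareRaw(serialize(3), 0, serialize(10), 0)); // prints -1
    }
}
```

Skipping deserialization matters because the framework compares keys many times while sorting the shuffle data.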
- Disks per node: We can adjust the number of disks per node (mapred.local.dir, dfs.name.dir, dfs.data.dir) and test how scaling affects execution time.
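A sketch of spreading these directories across disks (the paths below are illustrative; each entry should live on a separate physical disk so I/O is parallelized):

```xml
<!-- mapred-site.xml / hdfs-site.xml: comma-separated lists spread
     intermediate and HDFS data across disks (paths illustrative) -->
<property>
  <name>mapred.local.dir</name>
  <value>/disk1/mapred/local,/disk2/mapred/local</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/disk1/hdfs/data,/disk2/hdfs/data</value>
</property>
```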
- JVM reuse: We should consider enabling JVM reuse (mapred.job.reuse.jvm.num.tasks) for workloads with lots of short-running tasks.
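For example, in mapred-site.xml a value of -1 removes the limit on tasks per JVM, so short tasks skip repeated JVM startup cost:

```xml
<!-- mapred-site.xml: let one JVM run multiple tasks of a job;
     -1 means no limit (the default of 1 gives each task a fresh JVM) -->
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>-1</value>
</property>
```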
- Maximize memory for the shuffle: We can generally maximize memory for the shuffle, while still giving the map and reduce functions enough memory to operate. So set the mapred.child.java.opts property as large as the memory on the task nodes allows.
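A sketch of the two knobs involved (both values are illustrative and must fit within the per-node memory budget discussed below): the task heap size, and the in-memory sort buffer the map side fills before spilling:

```xml
<!-- mapred-site.xml: task JVM heap and map-side sort buffer
     (1 GB heap and 256 MB buffer are illustrative values) -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>io.sort.mb</name>
  <value>256</value>
</property>
```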
- Minimize disk spilling: One spill to disk is optimal; more than one means map output is re-read and re-written. The MapReduce counter SPILLED_RECORDS is a useful metric, as it counts the total number of records spilled to disk during a job.
- Adjust memory allocation: Budget the node's total memory across map slots, reduce slots, the TaskTracker (TT), the DataNode (DN), other services, and the OS: Total Memory = Map Slots + Reduce Slots + TT + DN + Other Services + OS. For example, on a 24 GB node, reserving 2 GB for the OS, 1 GB each for the TT and DN daemons, and 2 GB for other services leaves 18 GB to divide among map and reduce slots.
Originally appeared on DataOttam.