Basics of Big Data – Part 2 – Hadoop

15th Apr '14, 03:07 PM

Guest Contributor

As discussed in Part 1 of this series, Hadoop is the foremost among the tools currently being used to derive value from Big Data. The process of gaining insights from data through Business Intelligence and analytics remains essentially the same. However, the huge variety, volume and velocity of Big Data (the 3Vs) have made it necessary to rethink the data management infrastructure. Hadoop, originally designed around the MapReduce programming model to overcome the parallel-processing constraints of distributed architectures at web giants such as Yahoo and Google (e.g. for web indexing), has become the de facto standard platform for large-scale, data-intensive Big Data analytics.

What is Hadoop?

Think of Hadoop as an operating system for Big Data. It is essentially a flexible, highly available architecture for large-scale computation and data processing across a cluster of commodity hardware.

Conceptually, the key components of the Java-based Hadoop framework are a file store and a distributed processing system:

- The Hadoop Distributed File System (HDFS), which splits data into large blocks and replicates them across the cluster's nodes for fault tolerance.
- MapReduce, the programming model and execution engine that ships computation to the nodes holding the data and processes it in parallel as map and reduce tasks.
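To make the two components concrete, here is a minimal sketch of the classic WordCount job written against the Hadoop MapReduce Java API. The input and output paths are hypothetical HDFS directories supplied on the command line; the rest follows the standard Mapper/Reducer pattern.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: runs on the node holding each HDFS block and emits (word, 1)
  // for every word in its input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: receives all counts for a given word and sums them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged into a jar, a job like this would typically be launched with something along the lines of hadoop jar wordcount.jar WordCount /user/demo/input /user/demo/output, where the two paths are placeholders for real HDFS directories.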
