Academic Journals Database
Disseminating quality controlled scientific knowledge

Big Data Processing with Hadoop-MapReduce in Cloud Systems

Author(s): Rabi Prasad Padhy

Journal: International Journal of Cloud Computing and Services Science (IJ-CLOSER)
ISSN 2089-3337

Volume: 2
Issue: 1
Start page: 16
Date: 2012

ABSTRACT
Today, data surrounds us like oxygen. Its exponential growth first presented challenges to cutting-edge businesses such as Google, Yahoo, Amazon, Microsoft, Facebook, and Twitter. The data volumes to be processed by cloud applications are growing much faster than computing power, and this growth demands new strategies for processing and analyzing information. Hadoop-MapReduce has become a powerful computation model that addresses these problems. Hadoop HDFS has become the most popular of the Big Data tools because it is open source, scales flexibly, has a lower total cost of ownership, and allows data of any form to be stored without predefined data types or schemas. Hadoop MapReduce is a programming model and software framework for writing applications that rapidly process vast amounts of data in parallel on large clusters of compute nodes. In this paper I provide an overview of the architecture and components of Hadoop, HDFS (the Hadoop Distributed File System), and the MapReduce programming model, together with its various applications and implementations in cloud environments.
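The MapReduce programming model the abstract describes is usually introduced with a word-count example: the map phase emits (word, 1) pairs, the framework shuffles pairs by key, and the reduce phase sums each key's values. A minimal in-memory sketch of that data flow in plain Python (the function names `map_phase`, `shuffle`, and `reduce_phase` are illustrative, not the Hadoop Java API):

```python
from collections import defaultdict

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in the input split.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group values by key; in Hadoop the framework does this
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data on hadoop", "hadoop processes big data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # "big", "data", and "hadoop" each counted twice
```

In a real Hadoop cluster the same three stages run distributed: mappers process HDFS blocks in parallel on many nodes, and the shuffle moves intermediate pairs across the network to the reducers.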