Hadoop (hadoop.apache.org/core/) is a tool that makes it easy to run programs on clusters. It uses the MapReduce framework: it distributes a computation over individual records (such as data points) across a cluster and then combines the results of that computation in a reduce step. There is a very good tutorial at hadoop.apache.org/core/docs/current/mapred_tutorial.html that covers the basics of Hadoop operation.
To use Hadoop, either connect to a machine that has it installed or install it on your own machine. Once it is installed, run the main executable by changing to the installation directory and running
bin/hadoop
This will list all the different options for running Hadoop. See the README in the code linked below for example usages.
Writing Hadoop Programs for ML
Many ML programs follow this pattern:
1. Initialize parameters
2. For each data point
2a. Do something (compute a gradient, sufficient statistics, etc.)
2b. Combine it with the results from previous data points (add to the gradient, etc.)
3. Update parameters based on the result of step 2.
4. Go to step 2.
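The steps above can be sketched serially, using logistic regression as a running example. This is our own illustrative code, not the demo's; all class and method names here are made up for this sketch:

```java
import java.util.Arrays;

// A minimal serial version of the loop above for logistic regression.
public class SerialLogistic {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static double[] train(double[][] x, int[] y, double lr, int iters) {
        int d = x[0].length;
        double[] w = new double[d];                  // step 1: initialize parameters
        for (int it = 0; it < iters; it++) {         // step 4: repeat
            double[] grad = new double[d];
            for (int i = 0; i < x.length; i++) {     // step 2: for each data point
                double p = sigmoid(dot(w, x[i]));    // step 2a: per-point computation
                for (int j = 0; j < d; j++)
                    grad[j] += (y[i] - p) * x[i][j]; // step 2b: combine (sum gradients)
            }
            for (int j = 0; j < d; j++)
                w[j] += lr * grad[j];                // step 3: update parameters
        }
        return w;
    }

    public static void main(String[] args) {
        // Toy 1-D data (second column is a bias feature): label 1 iff x > 0.
        double[][] x = {{-2, 1}, {-1, 1}, {1, 1}, {2, 1}};
        int[] y = {0, 0, 1, 1};
        System.out.println(Arrays.toString(train(x, y, 0.1, 1000)));
    }
}
```

The inner loop over data points is the part that step 2 describes, and it is the part Hadoop parallelizes.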
Step 2 usually dominates the compute time and is easy to parallelize, since each data point is processed independently. This is where Hadoop comes in: it distributes your data and the per-record computation across the cluster, letting you compute and sum gradients in parallel.
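Here is a sketch of how step 2 decomposes into map and reduce for the logistic regression case. A Java parallel stream stands in for the cluster; in real Hadoop code the map and reduce bodies would live in Mapper and Reducer classes instead. All names here are our own, not from the demo:

```java
import java.util.stream.IntStream;

// Sketch: the gradient sum as a map step (per-point gradient) followed by a
// reduce step (vector sum). The reduce is associative, so partial sums can be
// combined in any order across machines.
public class MapReduceGradient {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // "Map": the gradient contribution of a single (x, y) record under weights w.
    static double[] pointGradient(double[] w, double[] x, int y) {
        double p = sigmoid(dot(w, x));
        double[] g = new double[w.length];
        for (int j = 0; j < w.length; j++) g[j] = (y - p) * x[j];
        return g;
    }

    // "Reduce": elementwise sum of two gradient vectors.
    static double[] add(double[] a, double[] b) {
        double[] s = new double[a.length];
        for (int j = 0; j < a.length; j++) s[j] = a[j] + b[j];
        return s;
    }

    // Full gradient = reduce(map(records)); each pointGradient call is
    // independent, which is exactly what lets Hadoop distribute the work.
    static double[] fullGradient(double[] w, double[][] x, int[] y) {
        return IntStream.range(0, x.length)
                .parallel()
                .mapToObj(i -> pointGradient(w, x[i], y[i]))
                .reduce(new double[w.length], MapReduceGradient::add);
    }
}
```

The driver program then performs steps 3 and 4 on a single machine: take the summed gradient, update the parameters, and launch the next MapReduce pass with the new parameters.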
The code presented at the ML Tea can be downloaded from hadoop_example.tar.gz. It is a simple demonstration of how to parallelize logistic regression, and we hope you will adapt it to write your own programs. NOTE: you need Java 1.6 to run this demo.