
How to use Pentaho MapReduce to convert raw weblog data into parsed, delimited records.


The steps in this guide include:

  1. Loading the sample data file into HDFS
  2. Developing a PDI transformation that will serve as a mapper
  3. Developing a PDI job that uses a Pentaho MapReduce job entry to run a map-only job with the mapper transformation
  4. Executing and reviewing output

Prerequisites

To follow along with this how-to guide you will need a working Pentaho Data Integration (PDI) installation, access to a running Hadoop cluster, and the sample file described below.

Sample Files

The sample data file needed for this guide is:

File Name                  Content
weblogs_rebuild.txt.zip    Unparsed, raw weblog data


NOTE: If you have completed the Loading Data into HDFS guide, the necessary file is already in the proper location and you can skip this step.
Otherwise, extract weblogs_rebuild.txt from the zip archive and place it in HDFS at /user/pdi/weblogs/raw using the following commands:

hadoop fs -mkdir /user/pdi/weblogs
hadoop fs -mkdir /user/pdi/weblogs/raw
hadoop fs -put weblogs_rebuild.txt /user/pdi/weblogs/raw/
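
Before moving on, you can confirm that the upload worked; a simple directory listing should show weblogs_rebuild.txt in the target folder:

hadoop fs -ls /user/pdi/weblogs/raw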

Step-By-Step Instructions

Setup

Start Hadoop if it is not already running.
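
If you are not sure whether the cluster is up, a quick sanity check is to list the running Hadoop daemons and try a trivial HDFS command (this assumes the daemons run as Java processes on the machine you are logged into; adjust accordingly for a remote cluster):

jps
hadoop fs -ls /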

Create a PDI Job to Execute a Map Only MapReduce Process

In this task you will create a job that will execute a "map-only" MapReduce process using the mapper transformation you created in the previous section.

If you would rather not build the job yourself, you can download the completed Kettle job weblog_parse_mr.kjb.

  1. Within PDI, choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.

  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' node onto the job canvas. Your canvas should look like:


  3. Add a Pentaho MapReduce Job Entry: Expand the 'Big Data' section of the Design palette and drag a 'Pentaho MapReduce' job entry onto the job canvas. Your canvas should look like:


  4. Connect the Start and MapReduce Job Entries: Hover the mouse over the 'Start' job entry and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Pentaho MapReduce' job entry.
    Your canvas should look like this:


  5. Edit the MapReduce Job Entry: Double-click on the 'Pentaho MapReduce' job entry to edit its properties. Enter this information:
    1. Hadoop Job Name: Enter 'Web Log Parser'
    2. Mapper Transformation: Enter <PATH>/weblog_parse_mapper.ktr
      <PATH> is the folder path you saved the mapper in.
    3. Mapper Input Step Name: Enter 'MapReduce Input'
    4. Mapper Output Step Name: Enter 'MapReduce Output'
      When you are done the window should look like:


  6. Configure the MapReduce Job: Switch to the 'Job Setup' tab. Enter this information:
    1. Check 'Suppress Output of Map Key'
    2. Input Path: Enter '/user/pdi/weblogs/raw'
    3. Output Path: Enter '/user/pdi/weblogs/parse'
    4. Input Format: Enter 'org.apache.hadoop.mapred.TextInputFormat'
    5. Output Format: Enter 'org.apache.hadoop.mapred.TextOutputFormat'
    6. Check 'Clean output path before execution' (the equivalent manual HDFS cleanup is shown just after this list)
      When you are done your window should look like:
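
      The 'Clean output path before execution' option deletes the output directory before the job starts, which matters because Hadoop will refuse to write into an output path that already exists. If you ever need to do the same cleanup by hand, the equivalent HDFS command is sketched below (older Hadoop 1.x shells use -rmr, newer ones use -rm -r):

      hadoop fs -rm -r /user/pdi/weblogs/parse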


  7. Configure the Cluster Properties: Switch to the 'Cluster' tab. Enter this information:
    1. Hadoop distribution: Select your Hadoop Distribution
    2. Working Directory: Enter '/tmp'
    3. HDFS Hostname, HDFS Port, Job Tracker Hostname, Job Tracker Port: Your connection information.
    4. Number of Mapper Tasks: Enter '3'. You can adjust this value to find the best performance for the size of your data and the number of nodes in your cluster.
    5. Number of Reducer Tasks: Enter '0'
    6. Check 'Enable Blocking'
    7. Logging Interval: Enter '10'. This is the number of seconds between status checks against Hadoop while the job runs.
      When you are done your window should look like:

      Click 'OK' to close the window.
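
      If you are unsure of the HDFS hostname and port, one way to verify them before running the job is to point the Hadoop client directly at the NameNode; the hostname 'namenode' and port '8020' below are placeholders for your own connection details:

      hadoop fs -ls hdfs://namenode:8020/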

  8. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'weblogs_parse_mr.kjb' into a folder of your choice.

  9. Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and it will show you the progress of the job as it runs. After a few seconds the job should finish successfully:

If any errors occurred, the job entry that failed will be highlighted in red, and you can use the 'Logging' tab to view error messages.
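
Because the actual parsing runs on the cluster, the PDI log may only report that the Hadoop job failed. For task-level details, check the JobTracker web UI (by default on port 50030 in Hadoop 1.x) or list recent jobs from the command line with the Hadoop 1.x job client:

hadoop job -list all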

Check Hadoop for Parsed Weblog Data

  1. Run the following command to check the cluster for the parsed files:
    hadoop fs -ls /user/pdi/weblogs/parse

    This should return:
    drwxrwxrwx - demo demo 1 2012-01-04 16:52 /user/pdi/weblogs/parse/_logs
    -rwxrwxrwx 3 demo demo 0 2012-01-04 16:52 /user/pdi/weblogs/parse/_SUCCESS
    -rwxrwxrwx 3 demo demo 27147417 2012-01-04 16:52 /user/pdi/weblogs/parse/part-00000
    -rwxrwxrwx 3 demo demo 27132365 2012-01-04 16:52 /user/pdi/weblogs/parse/part-00001
    -rwxrwxrwx 3 demo demo 27188268 2012-01-04 16:52 /user/pdi/weblogs/parse/part-00002
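
    To spot-check that the records were actually parsed into delimited fields, you can print the first few lines of one of the part files; part-00000 is used here simply because it appears in the listing above:

    hadoop fs -cat /user/pdi/weblogs/parse/part-00000 | head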

Summary

In this guide you learned how to create and execute a Pentaho MapReduce job to parse raw weblog data.
