
PLEASE NOTE: This documentation applies to Pentaho 6.1 and earlier versions. For Pentaho 7.0 and later, see Spark Submit in the most recent documentation on the Pentaho Enterprise Edition documentation site.

Description

Apache Spark is an open-source cluster computing framework that is an alternative to the Hadoop MapReduce paradigm. The Spark Submit job entry allows you to submit Spark jobs to CDH 5.3 and later, HDP 2.3 and later, MapR 5.1 and later, and EMR 3.10 and later clusters.
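
To make the options described below concrete, here is a minimal sketch of the kind of Spark application this entry submits. The class and application names are hypothetical, not part of Pentaho's documentation: compiled and bundled with its dependencies into a JAR, this is what the Jar, Class Name, and Arguments options would point at.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical example class; bundle it (with its dependencies) into the JAR
// referenced by the Jar option, and set Class Name to its fully qualified name.
public final class SimpleSparkJob {
    public static void main(String[] args) {
        // No master URL is hard-coded here: spark-submit injects the value
        // chosen in the Spark Master URL option at launch time.
        SparkConf conf = new SparkConf().setAppName("SimpleSparkJob");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // A trivial distributed computation: count the even numbers.
        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6));
        long evens = numbers.filter(n -> n % 2 == 0).count();

        System.out.println("Even numbers: " + evens);
        sc.stop();
    }
}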

...

Before you use this entry, you will need to install and configure a Spark client on any node from which you will run Spark jobs.  

Installation Prerequisites

...

Options

The Spark Submit window includes the following options:

Entry Name

Name of the entry. You can customize this, or leave it as the default.

Spark-Submit Utility

Script that launches the Spark job.

Spark Master URL

The master URL for the cluster. Two options are supported (a sketch of where the selected value surfaces at runtime follows this list):

  • Yarn-Cluster, which runs the driver program as a thread of the YARN application master, on one of the node managers in the cluster. This is very similar to the way MapReduce works.
  • Yarn-Client, which runs the driver program on the YARN client. Tasks are still executed in the node managers of the YARN cluster.
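
As a minimal sketch of the difference (the class name is hypothetical, not from the Pentaho docs): spark-submit hands the selected value to the application as the spark.master property, so the output of the print below appears on the submitting machine in Yarn-Client mode, but in the YARN application master's logs in Yarn-Cluster mode.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical class illustrating the two master URL modes.
public final class DeployModeCheck {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("DeployModeCheck"));
        // spark.master carries the Spark Master URL value chosen in the entry,
        // e.g. "yarn-cluster" or "yarn-client" on Spark 1.x.
        System.out.println("Master: " + sc.getConf().get("spark.master"));
        sc.stop();
    }
}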

Jar

Path to a bundled JAR containing your application and all dependencies. The URL must be globally visible inside your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes.

Class Name

The entry point for your application: the fully qualified name of the class whose main method starts the job.

Arguments

Arguments passed to the main method of your main class, if any.
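
For instance, if the Arguments field contained /user/pentaho/input.txt 10 (hypothetical values), the main method would receive them positionally:

// Hypothetical sketch of how the Arguments field reaches the application.
public final class ArgsDemo {
    public static void main(String[] args) {
        String inputPath = args[0];                  // "/user/pentaho/input.txt"
        int iterations = Integer.parseInt(args[1]);  // 10
        System.out.println(inputPath + " will be processed " + iterations + " times");
    }
}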

Executor Memory

Amount of memory to use per executor process, in JVM memory string format (for example, 512m or 2g).

Driver Memory

Amount of memory to use for the driver process, in JVM memory string format (for example, 512m or 2g).

Block Execution

This option is enabled by default. If it is selected, the job entry waits until the Spark job finishes running. If it is not, the job proceeds with its own execution as soon as the Spark job is submitted.

Help

Displays documentation on this entry.

OK

Saves the information and closes the window.

Cancel

Closes the window without saving changes.

...