Search: "pig"
Step 3: Create a cluster - E-MapReduce Documentation
... Hive, Spark, Spark Streaming, Flink, Storm, Presto, Impala, Oozie, and Pig. Hadoop ...
Step 1: Select configuration - E-MapReduce Documentation
... requirements. If you need to process large amounts of data but do not require real-time processing, you can use MapReduce, Pig, and Spark ...
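The snippet above recommends Pig (alongside MapReduce and Spark) for large batch workloads with no real-time requirement. As a loose illustration of the kind of batch job this refers to, here is a minimal Pig Latin word-count sketch; it is not taken from the linked documentation, and the input/output paths and relation names are hypothetical:

    -- Minimal batch word count in Pig Latin (sketch; paths are hypothetical)
    lines  = LOAD 'hdfs:///tmp/input.txt' AS (line:chararray);
    -- Split each line into words; TOKENIZE returns a bag, FLATTEN unnests it
    words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word:chararray;
    grpd   = GROUP words BY word;
    -- Count occurrences of each distinct word
    counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS n;
    STORE counts INTO 'hdfs:///tmp/wordcount_out';

A script like this runs as a scheduled batch job over data at rest, which is why Pig fits the "large volume, no real-time processing" case the documentation describes.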
Edit a job - E-MapReduce Documentation
... can create Shell, Hive, Hive SQL, Spark, SparkSQL, MapReduce, Sqoop, Pig, Spark Streaming, and Flink jobs ...
Gateway clusters - E-MapReduce Documentation
... recommended that you associate it with a cluster on which Hadoop (HDFS and YARN), Hive, Spark, Sqoop, Pig, or other clients have ...
Total: 45 results