Hadoop Interview Questions


Total available count: 27
Subject - Apache
Subsubject - Hadoop

How do you define SparkContext?

SparkContext is the entry point for a Spark job. Every Spark application starts by instantiating a SparkContext; in that sense, a running Spark application corresponds to one SparkContext instance, or you can say that a SparkContext constitutes a Spark application.

SparkContext represents the connection to a Spark execution environment (deployment mode).
A SparkContext can be used to create RDDs, accumulators, and broadcast variables, access Spark services, and run jobs.

A SparkContext is essentially a client of Spark's execution environment (deployment mode) and acts as the master of your Spark application.
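
A minimal sketch of these points, assuming a local run (the application name, master setting "local[*]", and variable names are illustrative, not prescribed by the question):

import org.apache.spark.{SparkConf, SparkContext}

object SparkContextExample {
  def main(args: Array[String]): Unit = {
    // Instantiating the SparkContext is the first step of every Spark application;
    // the master URL selects the execution environment (here, local mode).
    val conf = new SparkConf().setAppName("SparkContextExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Create an RDD from an in-memory collection.
    val numbers = sc.parallelize(1 to 100)

    // Broadcast variable: a read-only value shared with all executors.
    val factor = sc.broadcast(10)

    // Accumulator: a counter that executors add to and the driver reads back.
    val evenCount = sc.longAccumulator("evenCount")

    val scaled = numbers.map { n =>
      if (n % 2 == 0) evenCount.add(1)
      n * factor.value
    }

    // Calling an action submits a job through the SparkContext.
    println(s"Sum = ${scaled.sum()}, even numbers seen = ${evenCount.value}")

    sc.stop()
  }
}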




Next 2 interview questions

1. Why are both Spark and Hadoop needed?
2. Why use Spark when Hadoop already exists?