Introduction to Spark Interview Questions And Answers
Apache Spark is an open-source framework. Because Spark is an open platform, we can write applications in multiple programming languages such as Java, Python, Scala, and R. Compared with the MapReduce process, Spark improves execution performance; its in-memory execution can be up to 100 times faster than MapReduce. Because of this processing power, industries today prefer Spark.
So you have finally found your dream job in Spark but are wondering how to crack the Spark interview and what the probable Spark interview questions for 2021 could be. Every interview is different, and the scope of a job is different too. Keeping this in mind, we have designed the most common Spark interview questions and answers for 2021 to help you succeed in your interview.
These questions are divided into two parts:
Part 1 – Spark Interview Questions (Basic)
This first part covers basic Spark interview questions and answers.
1. What is Spark?
Answer:
Apache Spark is an open-source framework. It improves execution performance over the MapReduce process. It is an open platform where we can use multiple programming languages such as Java, Python, Scala, and R. Spark provides in-memory execution that is up to 100 times faster than MapReduce. It uses the concept of an RDD. An RDD (Resilient Distributed Dataset) allows Spark to transparently store data in memory and persist it to disk only when needed, which reduces access time because data is read from memory instead of disk. Today's industry prefers Spark because of this processing power.
2. Difference Between Hadoop and Spark?
Answer:
| Feature Criteria | Apache Spark | Hadoop |
|---|---|---|
| Speed | 10 to 100 times faster than Hadoop | Normal speed |
| Processing | Real-time and batch processing; in-memory with caching | Batch processing only; disk-dependent |
| Difficulty | Easy to learn because of high-level modules | Difficult to learn |
| Recovery | Allows recovery of lost partitions using RDDs | Fault-tolerant through data replication |
| Interactivity | Has interactive and iterative modes | No interactive mode except Pig and Hive; no iterative mode |
The standard Hadoop architecture follows the basic MapReduce model; for the same workload, Spark provides in-memory execution. Instead of reading from and writing to the hard drive at each stage, as MapReduce does, Spark reads from and writes to memory.
Let us move on to the next Spark interview question.
3. What are the Features of Spark?
Answer:
- Hadoop integration: Spark integrates with Hadoop and with files on HDFS. It can run on top of Hadoop using YARN for resource management, and it has the capacity to replace Hadoop's MapReduce engine.
- Polyglot: Spark provides high-level APIs for Java, Python, Scala, and R, so Spark code can be written in any of these four languages. It also provides an independent shell for Scala (the language in which Spark is written) and a Python interpreter, which help you interact with the Spark engine. The Scala shell can be accessed through ./bin/spark-shell and the Python shell through ./bin/pyspark from the installation directory.
- Speed: The Spark engine is up to 100 times faster than Hadoop MapReduce for large-scale data processing. This speed is achieved through partitioning, which parallelizes distributed data processing with minimal network traffic. Spark provides RDDs (Resilient Distributed Datasets), which can be cached across the computing nodes in a cluster.
- Multiple formats: Spark has a data source API, which provides a mechanism to access structured data through Spark SQL. A data source can be almost anything; Spark simply provides a mechanism to convert the data and pull it into Spark. Spark supports multiple data sources such as Hive, HBase, Cassandra, JSON, Parquet, and ORC (see the sketch after this list).
- Built-in libraries: Spark provides inbuilt libraries to perform multiple kinds of tasks from the same core, such as batch processing, streaming, machine learning, and interactive SQL queries, whereas Hadoop supports only batch processing. Spark provides MLlib (a machine learning library) that helps big data developers process data, removing the dependency on multiple tools for different purposes. Spark offers data engineers and data scientists a common, powerful platform with both fast performance and ease of use.
- Lazy evaluation: Apache Spark delays process execution until an action is necessary; this is one of the key features of Spark. Spark adds each transformation to a DAG (Directed Acyclic Graph), and when an action is invoked, it actually triggers the DAG to be processed.
- Real-time streaming: Apache Spark provides real-time computation and low latency because of its in-memory execution. Spark is designed for large-scale deployments, such as clusters of a thousand nodes, and supports several computation models.
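As a minimal sketch of the data source API, the example below reads JSON and writes the same data back out as Parquet; the file paths are hypothetical, and it assumes the `spark` session predefined by ./bin/spark-shell:

```scala
// In spark-shell, `spark` (a SparkSession) is already available.
// Read structured data from a JSON source (the path is hypothetical).
val events = spark.read.json("hdfs:///data/events.json")

// The same DataFrame API applies regardless of the underlying format.
events.printSchema()

// Write the same data back out in a different format (Parquet).
events.write.mode("overwrite").parquet("hdfs:///data/events_parquet")
```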
4. What is YARN?
Answer:
This is a basic Spark interview question. YARN (Yet Another Resource Negotiator) is a cluster resource manager. Spark is a platform that provides fast execution, and it can use YARN to schedule jobs on the cluster rather than relying on its own built-in manager. There are several configurations for running on YARN, including master, deploy-mode, driver-memory, executor-memory, executor-cores, and queue. This is a common Spark interview question; the advantages of Spark over MapReduce follow below.
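First, a rough sketch of those YARN settings supplied through SparkConf; every value shown is a placeholder, and in practice these are usually passed as spark-submit flags such as --master yarn and --executor-cores:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// A minimal sketch of configuring a Spark application to run on YARN.
// All values are illustrative placeholders.
val conf = new SparkConf()
  .setAppName("YarnExample")
  .setMaster("yarn")                          // run on the YARN resource manager
  .set("spark.submit.deployMode", "cluster")  // driver runs inside the cluster
  .set("spark.executor.cores", "4")           // cores per executor
  .set("spark.yarn.queue", "default")         // YARN scheduler queue

val spark = SparkSession.builder().config(conf).getOrCreate()
```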
Advantages of Spark over MapReduce
Spark has the following advantages over MapReduce:
- Because of its in-memory processing, Spark can execute 10 to 100 times faster than MapReduce, which persists data to disk at the map and reduce stages.
- Apache Spark provides high-level inbuilt libraries to process multiple kinds of tasks at the same time, such as batch processing, real-time streaming, Spark SQL, Structured Streaming, and MLlib, while Hadoop provides only batch processing.
- The Hadoop MapReduce process is disk-dependent, whereas Spark provides caching and in-memory execution.
- Spark is both iterative (it can perform multiple computations on the same dataset) and interactive (it can perform computations across different datasets), whereas Hadoop does not support iterative computation; see the sketch after this list.
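Here is a minimal sketch of iterative computation, runnable in ./bin/spark-shell: the dataset is cached once, and several computations then reuse the in-memory copy (the data is illustrative):

```scala
// In spark-shell, `sc` (the SparkContext) is already available.
// Cache the dataset once so repeated computations reuse the in-memory copy.
val numbers = sc.parallelize(1 to 1000000).cache()  // illustrative data

// Several independent computations over the same cached dataset;
// only the first action pays the cost of materializing it.
val total = numbers.sum()
val evens = numbers.filter(_ % 2 == 0).count()
val max   = numbers.max()

println(s"sum=$total evens=$evens max=$max")
```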
5. What are the languages supported by Spark?
Answer:
Spark supports Scala, Python, R, and Java. In the market, big data developers mostly prefer Scala and Python. To compile Scala code, we need to set the path to the scala/bin directory or build a jar file.
6. What is RDD?
Answer:
RDD stands for Resilient Distributed Dataset. It is an abstraction that provides a collection of elements partitioned across the nodes of the cluster, which helps execute multiple processes in parallel. Using RDDs, a developer can keep data in memory (caching) so that it can be reused efficiently across parallel operations. An RDD can also be recovered easily from node failure, because Spark can recompute lost partitions from the RDD's lineage.
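A minimal sketch, runnable in ./bin/spark-shell (where sc is predefined); the input words are illustrative:

```scala
// In spark-shell, `sc` (the SparkContext) is already available.
// Create an RDD partitioned across the cluster (4 partitions here, sample data).
val words = sc.parallelize(Seq("spark", "hadoop", "spark", "rdd"), 4)

// Keep the RDD in memory so parallel operations can reuse it efficiently.
words.cache()

// Transformations run in parallel per partition; if a node fails,
// a lost partition is recomputed from the RDD's lineage.
val counts = words.map(w => (w, 1)).reduceByKey(_ + _)
counts.collect().foreach(println)
```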
Part 2 – Spark Interview Questions (Advanced)
Let us now have a look at the advanced Spark Interview Questions.
7. What are the factors responsible for the execution of Spark?
Answer:
1. Spark provides in-memory execution instead of being disk-dependent like Hadoop MapReduce.
2. RDDs (Resilient Distributed Datasets), which are responsible for the parallel execution of multiple operations across all nodes of a cluster.
3. Shared variables for parallel execution. These variables help reduce data transfer between nodes and share a copy across all nodes. There are two kinds (see the sketch after this list):
- Broadcast variable: can be used to cache a value in memory on all nodes.
- Accumulator variable: can only be "added" to, and is used for counters and sums.
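A minimal sketch of both shared variables, runnable in ./bin/spark-shell; the country-code lookup data is illustrative:

```scala
// In spark-shell, `sc` is predefined.
// Broadcast variable: a read-only copy cached on every node (illustrative lookup).
val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))

// Accumulator: a counter that executor tasks can only add to.
val unknownCodes = sc.longAccumulator("unknownCodes")

val codes = sc.parallelize(Seq("IN", "US", "XX", "IN"))  // sample input
val resolved = codes.map { code =>
  countryNames.value.getOrElse(code, {
    unknownCodes.add(1)   // count codes missing from the lookup table
    "Unknown"
  })
}
resolved.collect().foreach(println)
println(s"Unknown codes seen: ${unknownCodes.value}")
```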
8. What is Executor Memory?
Answer:
This is a frequently asked Spark interview question. Executor memory is the heap size allocated to a Spark executor. It is controlled by the spark.executor.memory property, or equivalently the --executor-memory flag. Each Spark application has one executor on each worker node, and this property defines how much of the worker node's memory the application will be allocated.
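A minimal sketch of setting it when building a session; the 4g value is a placeholder, and the same setting can be passed to spark-submit as --executor-memory 4g:

```scala
import org.apache.spark.sql.SparkSession

// Allocate a 4 GB heap to each executor (placeholder value).
val spark = SparkSession.builder()
  .appName("ExecutorMemoryExample")
  .config("spark.executor.memory", "4g")
  .getOrCreate()
```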
9. How do you use Spark Stream? Explain One use case?
Answer:
Spark Streaming is one of the features useful for real-time use cases. We can use Flume and Kafka with Spark for this purpose. Flume triggers the data from a source, and Kafka persists the data into a topic. Spark then pulls the data from Kafka as a stream, represents it as a DStream, and performs transformations on it.
We can use this process for detecting suspicious transactions in real time, making real-time offers, and so on; a rough sketch follows.
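A rough sketch of such a pipeline using the spark-streaming-kafka-0-10 integration; the broker address, the transactions topic, and the 10,000 "suspicious amount" threshold are all assumptions for illustration:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object SuspiciousTxnStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SuspiciousTxnStream")
    val ssc  = new StreamingContext(conf, Seconds(5)) // 5-second micro-batches

    // Kafka consumer settings; broker address and group id are placeholders.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "localhost:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "txn-consumers"
    )

    // DStream pulling records from the hypothetical "transactions" topic.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Seq("transactions"), kafkaParams)
    )

    // Transformation: assume each record value is a numeric amount and
    // flag anything over the (assumed) 10,000 threshold as suspicious.
    stream.map(_.value)
      .filter(amount => amount.toDouble > 10000.0)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```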
Let us move on to the next Spark interview question.
10. Can we use Spark for the ETL process?
Answer:
Yes, we can use the Spark platform for the ETL process.
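A minimal extract-transform-load sketch: read raw CSV, clean and enrich it, and load it as Parquet (the paths and column names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object SalesEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SalesEtl").getOrCreate()

    // Extract: read raw CSV (path, header, and schema inference are illustrative).
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///raw/sales.csv")

    // Transform: drop rows with nulls and derive a total (hypothetical columns).
    val cleaned = raw.na.drop()
      .withColumn("total", col("quantity") * col("unit_price"))

    // Load: write the curated data out as Parquet.
    cleaned.write.mode("overwrite").parquet("hdfs:///curated/sales")

    spark.stop()
  }
}
```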
11. What is Spark SQL?
Answer:
Spark SQL is a special Spark component that supports SQL queries over structured data.
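A minimal sketch: register a DataFrame as a temporary view and query it with plain SQL, runnable in ./bin/spark-shell (the data is illustrative):

```scala
// In spark-shell, `spark` is predefined; this import enables .toDF on local Seqs.
import spark.implicits._

// Build a small DataFrame (sample data) and expose it as a temporary view.
val people = Seq(("Alice", 34), ("Bob", 28)).toDF("name", "age")
people.createOrReplaceTempView("people")

// Run a plain SQL query against the view.
spark.sql("SELECT name FROM people WHERE age > 30").show()
```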
12. What is Lazy Evaluation?
Answer:
When we are working with Spark, transformations are not evaluated until you perform an action; this helps optimize the overall data processing workflow. When a transformation is defined, it is added to the DAG (Directed Acyclic Graph), and when an action is called, Spark starts executing the transformations step by step. This is a useful Spark interview question often asked in interviews.
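A minimal sketch, runnable in ./bin/spark-shell, showing that transformations only build up the DAG and the action triggers execution:

```scala
// In spark-shell, `sc` is predefined.
// These transformations only record steps in the DAG; nothing runs yet.
val data     = sc.parallelize(1 to 10)  // sample data
val doubled  = data.map(_ * 2)          // transformation: not evaluated
val filtered = doubled.filter(_ > 10)   // transformation: not evaluated

// The action triggers the whole DAG to execute in one pass.
println(filtered.collect().mkString(", "))
```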
Recommended Articles
This has been a guide to the list of Spark interview questions and answers. Here we have listed the 12 best sets of interview questions so that the jobseeker can crack the interview with ease. You may also look at the following articles to learn more.