Introduction to Spark Dataset
Dataset is a data structure in Spark SQL that provides compile-time type safety, an object-oriented programming interface, and Spark SQL's automatic optimization.
Conceptually, it is an in-memory tabular structure with rows and columns, distributed across multiple nodes, much like a DataFrame.
It is an extension of the DataFrame API. The main difference between a Dataset and a DataFrame is that Datasets are strongly typed.
[ Dataset ] = [ Dataframe + Compile-time type safety ]
Dataset was released in Spark 1.6 as an experimental API. DataFrame and Dataset were unified in Spark 2.0, and DataFrame became an alias for Dataset[Row].
Dataframe = Dataset[Row]
Why do we need Spark Dataset?
To have a clear understanding of Dataset, we must begin with a bit of the history of Spark and its evolution.
RDD is the core abstraction of Spark. Inspired by SQL, and to make things easier, DataFrame was created on top of RDD. A DataFrame is conceptually equivalent to a table in a relational database or a pandas DataFrame in Python.
RDD provides compile-time type safety but lacks automatic optimization.
Dataframe provides automatic optimization but it lacks compile-time type safety.
Dataset was added as an extension of the DataFrame. It combines the strengths of both: the compile-time type safety of RDD and the Spark SQL automatic optimization of DataFrame.
[RDD(Spark 1.0)] -> [Dataframe(Spark1.3)] -> [Dataset(Spark1.6)]
Because Dataset relies on compile-time type safety, it is supported only in the compiled languages (Java & Scala), not in the interpreted languages (R & Python). The Spark DataFrame API, however, is available in all four languages supported by Spark (Java, Scala, Python & R).
| Language supported by Spark | DataFrame API | Dataset API |
| --- | --- | --- |
| Compiled languages (Java & Scala) | YES | YES |
| Interpreted languages (R & Python) | YES | NO |
How to Create a Spark Dataset?
There are multiple ways of creating a Dataset, depending on the use case:
1. First Create SparkSession
SparkSession is the single entry point to a Spark application. It allows interacting with the underlying Spark functionality and programming Spark with the DataFrame and Dataset APIs.
// The app name and local master below are illustrative values.
val spark = SparkSession
  .builder()
  .appName("SparkDatasetExamples")
  .master("local[*]")
  .getOrCreate()
- To create a Dataset from basic data structures like Range, Sequence, List, etc.:
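A minimal sketch of this approach, assuming a local SparkSession and the `spark-sql` dependency on the classpath (the app name and `local[*]` master are illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("BasicDsDemo").master("local[*]").getOrCreate()
import spark.implicits._

val dsRange = spark.range(0, 5)         // Dataset[java.lang.Long] from a range
val dsSeq   = Seq(1, 2, 3).toDS()       // Dataset[Int] from a Sequence
val dsList  = List("a", "b").toDS()     // Dataset[String] from a List
```

Note that `import spark.implicits._` is what brings the `.toDS()` method into scope on local collections.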
- To create a Dataset from a sequence of case classes by calling the .toDS() method:
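A sketch of this approach; the `Employee` case class and its sample values are illustrative, and a local SparkSession is assumed:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative case class; any product type with encodable fields works.
case class Employee(name: String, age: Int)

val spark = SparkSession.builder().appName("CaseClassDsDemo").master("local[*]").getOrCreate()
import spark.implicits._

// A sequence of case-class instances becomes a typed Dataset[Employee].
val ds = Seq(Employee("Alice", 30), Employee("Bob", 25)).toDS()
ds.show()
```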
- To create a Dataset from an RDD using .toDS():
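A sketch of converting an RDD, again assuming a local SparkSession and an illustrative `Employee` case class:

```scala
import org.apache.spark.sql.SparkSession

case class Employee(name: String, age: Int)  // illustrative

val spark = SparkSession.builder().appName("RddToDsDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Build an RDD of case-class objects, then convert it with .toDS().
val rdd = spark.sparkContext.parallelize(Seq(Employee("Alice", 30), Employee("Bob", 25)))
val ds = rdd.toDS()
```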
- To create a Dataset from a DataFrame using a case class:
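A sketch using `.as[T]`; the column names of the DataFrame must match the case-class field names (`Employee` and the sample data are illustrative):

```scala
import org.apache.spark.sql.SparkSession

case class Employee(name: String, age: Int)  // illustrative

val spark = SparkSession.builder().appName("DfToDsDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Column names "name" and "age" must match the Employee fields.
val df = Seq(("Alice", 30), ("Bob", 25)).toDF("name", "age")
val ds = df.as[Employee]  // untyped DataFrame -> typed Dataset[Employee]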
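A sketch using `.as[T]`; the column names of the DataFrame must match the case-class field names (`Employee` and the sample data are illustrative):

```scala
import org.apache.spark.sql.SparkSession

case class Employee(name: String, age: Int)  // illustrative

val spark = SparkSession.builder().appName("DfToDsDemo").master("local[*]").getOrCreate()
import spark.implicits._

// Column names "name" and "age" must match the Employee fields.
val df = Seq(("Alice", 30), ("Bob", 25)).toDF("name", "age")
val ds = df.as[Employee]  // untyped DataFrame -> typed Dataset[Employee]
```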
- To create a Dataset from a DataFrame using tuples:
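The same `.as[T]` conversion can target a tuple type instead of a case class; a minimal sketch with illustrative data:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("DfToTupleDsDemo").master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(("Alice", 30), ("Bob", 25)).toDF("name", "age")

// Typed as a (String, Int) tuple rather than a case class.
val ds = df.as[(String, Int)]
```

Tuples are convenient for quick, ad hoc typing, but case classes give the fields meaningful names.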
2. Operations on Spark Dataset
- Word Count Example
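One way to sketch a typed word count with the Dataset API, assuming a local SparkSession and illustrative input lines:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("WordCountDemo").master("local[*]").getOrCreate()
import spark.implicits._

val lines = Seq("spark dataset example", "spark rdd example").toDS()

// Split each line into words, group by the word itself, and count per group.
val counts = lines
  .flatMap(line => line.split(" "))
  .groupByKey(word => word)
  .count()           // Dataset[(String, Long)]

counts.show()
```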
- Convert Spark Dataset to Dataframe
We can also convert a Spark Dataset to a DataFrame and then use the DataFrame APIs, as below:
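A minimal sketch of the conversion via `.toDF()`, assuming a local SparkSession and an illustrative `Employee` case class:

```scala
import org.apache.spark.sql.SparkSession

case class Employee(name: String, age: Int)  // illustrative

val spark = SparkSession.builder().appName("DsToDfDemo").master("local[*]").getOrCreate()
import spark.implicits._

val ds = Seq(Employee("Alice", 30), Employee("Bob", 25)).toDS()

// Dataset[Employee] -> DataFrame (i.e. Dataset[Row]); untyped column APIs are now available.
val df = ds.toDF()
df.select("name").show()
```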
Features of Spark Dataset
- Type Safety
Dataset provides compile-time type safety: syntax and analysis errors in the application are caught at compile time, before it runs.
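A small sketch of what this buys you in practice (the `Employee` case class is illustrative; the commented-out line shows the kind of error the compiler would reject):

```scala
import org.apache.spark.sql.SparkSession

case class Employee(name: String, age: Int)  // illustrative

val spark = SparkSession.builder().appName("TypeSafetyDemo").master("local[*]").getOrCreate()
import spark.implicits._

val ds = Seq(Employee("Alice", 30)).toDS()

// Typed access: the compiler knows each element is an Employee.
val ages = ds.map(e => e.age + 1)   // compiles: `age` is a known field

// ds.map(e => e.salary)            // would NOT compile: Employee has no field `salary`
// By contrast, df.select("salary") on a DataFrame would fail only at runtime.
```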
Like RDD and DataFrame, a Dataset is immutable: once created, it cannot be changed. Every transformation applied to a Dataset produces a new Dataset.
Dataset is an in-memory tabular structure that has rows and named columns.
- Performance and Optimization
Like Dataframe, the Dataset also uses Catalyst Optimization to generate an optimized logical and physical query plan.
- Programming language
The Dataset API is present only in Java and Scala, which are compiled languages, but not in Python or R, which are interpreted languages.
- Lazy Evaluation
Like RDD and DataFrame, the Dataset is evaluated lazily: computation happens only when an action is performed. During the transformation phase, Spark only builds an execution plan.
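A minimal sketch of the transformation/action distinction, assuming a local SparkSession with illustrative data:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("LazyEvalDemo").master("local[*]").getOrCreate()
import spark.implicits._

val ds = Seq(1, 2, 3, 4).toDS()

// A transformation only extends the plan; nothing is computed yet.
val doubled = ds.map(_ * 2)

// An action triggers actual execution of the accumulated plan.
val result = doubled.collect()
```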
- Serialization and Garbage Collection
Spark Dataset does not use standard serializers (Kryo or Java serialization). Instead, it uses Tungsten's fast in-memory encoders, which understand the internal structure of the data and can efficiently transform objects into a compact internal binary format. Because this serialized data lives off-heap, garbage-collection overhead is greatly reduced.
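A small sketch showing that an encoder carries the schema Spark uses for its internal binary representation (the `Employee` case class is illustrative):

```scala
import org.apache.spark.sql.{Encoder, Encoders}

case class Employee(name: String, age: Int)  // illustrative

// An encoder maps the JVM object to Spark's internal (Tungsten) binary format.
val enc: Encoder[Employee] = Encoders.product[Employee]

// The encoder exposes the schema derived from the case-class fields.
println(enc.schema)
```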
Dataset offers the best of both RDD and DataFrame: RDD provides compile-time type safety but no automatic optimization, while DataFrame provides automatic optimization but no compile-time type safety. Dataset provides both. Hence, it is the best choice for Spark developers using Java or Scala.
This is a guide to Spark Dataset. Here we discussed how to create a Spark Dataset in multiple ways, with examples, along with its features. You may also have a look at the following articles to learn more –