Spark SQL Dataframe

By Priya Pedamkar

Introduction to Spark SQL Dataframe

A Spark SQL DataFrame is a distributed collection of data stored in a tabular, structured format. Like an RDD (resilient distributed dataset), a DataFrame is a data abstraction, but it also carries a schema. The Spark DataFrame API is optimized and is available for the R, Python, Scala, and Java languages. Spark SQL DataFrames can be sourced from existing RDDs, Hive tables, structured data files, and databases. Spark provides select and filter query functionality for data analysis. Spark SQL DataFrames support fault tolerance and in-memory processing as advanced features, and they are highly scalable, capable of processing very high volumes of data.

The different sources from which a DataFrame can be created are:

  • Existing RDD
  • Structured data files and databases
  • Hive Tables

Need of Dataframe

The Spark community has always tried to bring structure to data, and Spark SQL DataFrames are a step taken in that direction. The initial API of Spark, the RDD, is for unstructured data, where both the computations and the data are opaque to the engine. Thus there was a requirement to create an API that could provide additional optimization benefits. Below are a few requirements which formed the basis of the DataFrame API:

  • Process structured and semi-structured data
  • Multiple data sources
  • Integration with multiple programming languages
  • A rich set of operations that can be performed on the data, such as select and filter

How to Create Spark SQL Dataframe?

Before understanding the ways of creating a DataFrame, it is important to understand the concept through which Spark applications create DataFrames from different sources: the SparkSession, the entry point for all Spark functionality. Earlier we had to create a SparkConf, SparkContext, or SQLContext individually, but with SparkSession all of these are encapsulated under one session, exposed to the application as a SparkSession object (conventionally named spark).


// Create (or reuse) a SparkSession, the entry point for Spark functionality
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("SampleWork")
  .config("config.option", "value")
  .getOrCreate()
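
Once the session exists, it can be used directly to load data into a DataFrame. The snippet below is a minimal illustrative sketch; the JSON path is a hypothetical placeholder, not from the original example.

// Hypothetical usage of the SparkSession created above:
// read a JSON file into a DataFrame and inspect it
val df = spark.read.json("/path/to/people.json")  // illustrative path
df.printSchema()
df.show()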

Ways of creating a Spark SQL Dataframe

Let’s discuss the two ways of creating a dataframe.


1. From Existing RDD

There are two ways in which a DataFrame can be created from an RDD. One way uses reflection, which automatically infers the schema of the data; the other approach is to create a schema programmatically and then apply it to the RDD.

  • By Inferring the Schema

Spark SQL's Scala interface provides an easy way of converting an RDD to a DataFrame when the RDD contains case classes. The arguments of the case class are fetched using reflection and become the column names of the table. Sequences and Arrays can also be used in case classes. The RDD built from the case class can be implicitly converted to a DataFrame using the toDF() method.

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
// Case class whose fields become the column names of the DataFrame
case class Transport(AutoName: String, year: Int)
// Split each line on commas, build Transport objects, and convert the RDD to a DataFrame
val Vehicle = sc.textFile("//path//").map(_.split(",")).map(p => Transport(p(0), p(1).trim.toInt)).toDF()

A DataFrame named Vehicle is created, and it can be registered as a table against which SQL statements can be executed.
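
As an illustrative sketch (the temporary table name and query below are assumptions, not from the original example), registering the DataFrame makes it queryable with plain SQL:

// Register the DataFrame as a temporary table and query it with SQL
Vehicle.registerTempTable("vehicle")
val recentVehicles = sqlContext.sql("SELECT AutoName FROM vehicle WHERE year > 2010")
recentVehicles.show()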

  • By programmatically specifying the Schema

There may be cases where we are not aware of the schema beforehand, or scenarios where case classes cannot be used because they cannot take more than 22 fields (in Scala 2.10). In such conditions we create the schema programmatically. First, an RDD of rows is created from the original RDD, i.e. the RDD object is converted from RDD[T] to RDD[Row]. Then a schema is created using StructType (representing the table) and StructField (representing a field) objects. Finally, this schema is applied to the RDD of Rows using the createDataFrame method, since the schema matches the structure of the RDD[Row] created earlier.

import org.apache.spark.sql._
import org.apache.spark.sql.types._
val vehicle = sc.textFile("//path")
// Define the schema as a StructType of StructFields
val schema = StructType(Array(StructField("AutoName", StringType, true), StructField("Year", IntegerType, true)))
// Convert the RDD of lines into an RDD of Rows that matches the schema
val rowRDD = vehicle.map(_.split(",")).map(p => Row(p(0), p(1).trim.toInt))
// Apply the schema to the RDD of Rows
val vehicleSchemaRDD = sqlContext.createDataFrame(rowRDD, schema)
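
As a quick illustrative follow-up (using the column names defined in the schema above), the resulting DataFrame can be inspected and queried like any other:

// Inspect the schema and filter on the Year column defined above
vehicleSchemaRDD.printSchema()
vehicleSchemaRDD.filter(vehicleSchemaRDD("Year") > 2010).show()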

2. Through Data Sources

Spark allows the creation of DataFrames from multiple data sources, such as Hive tables and JSON, Parquet, CSV, and text files.

// Read structured files directly into DataFrames
val jsonDF = sqlContext.read.json("path to the json file")
val csvDF = sqlContext.read.csv("path to the csv file")
val textDF = sqlContext.read.text("path to the text file")
// Read a Hive table through a HiveContext
val hiveData = new org.apache.spark.sql.hive.HiveContext(sc)
val hiveDF = hiveData.sql("select * from tablename")
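
As a hedged aside (the options shown are standard DataFrameReader options; the path is still a placeholder), file readers also accept per-format options, for example treating the first CSV line as a header and inferring column types:

// Illustrative: CSV reader options for header handling and schema inference
val csvWithHeader = sqlContext.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("path to the csv file")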

DataFrame Operations

As the data is stored in a tabular format along with its schema, there are a number of operations that can be performed on DataFrames.

Consider file, a DataFrame created from a CSV file with two columns: FullName and AgePerPA.

1. printSchema() - To view the schema structure

file.printSchema()
// root
// |-- AgePerPA: long (nullable = true)
// |-- FullName: string (nullable = true)

2. select - Similar to the SELECT statement in SQL, it displays the data for the columns mentioned in the select statement.

file.select("FullName").show()
// +--------+
// |FullName|
// +--------+
// |     Sam|
// |    Jodi|
// |    Bala|
// +--------+

3. filter - To view the rows of the DataFrame that satisfy the condition mentioned in the command.

file.filter($"AgePerPA" > 18).show()

4. groupBy - To group rows by the values of a column, typically combined with an aggregation such as count().

file.groupBy("AgePerPA").count().show()

5. show() - To display the contents of the DataFrame.

file.show()
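
As a brief illustrative sketch combining the operations above (the column names are the ones assumed earlier; the threshold value is arbitrary), several of these calls can be chained:

import org.apache.spark.sql.functions.col
// Chain DataFrame operations: keep adults, count rows per age, and order the result by age
file.filter(col("AgePerPA") >= 18)
  .groupBy("AgePerPA")
  .count()
  .orderBy("AgePerPA")
  .show()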

Limitations

Though with DataFrames you can catch SQL syntax errors at compile time, analysis-related errors are not caught until runtime. For example, if a non-existent column name is referred to in the code, the mistake will not be noticed until runtime. This wastes developer time and increases project cost.
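
To illustrate (the misspelled column name below is an assumed typo, continuing the file example above):

// This compiles, but fails at runtime with an AnalysisException,
// because "FulName" (a typo) is not a column of the DataFrame
file.select("FulName").show()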

Conclusion

This article gives an overall picture (need, creation, operations, and limitations) of the DataFrame API of Spark SQL. Due to the popularity of the DataFrame API, Spark SQL remains one of the widely used Spark libraries. Just like an RDD, a DataFrame provides features like fault tolerance, lazy evaluation, and in-memory processing, along with some additional benefits. It can be defined as data distributed across the cluster in a tabular form. Thus a DataFrame has a schema associated with it and can be created through multiple sources via the SparkSession object.

Recommended Articles

This is a guide to Spark SQL Dataframe. Here we discuss the basic concept, the need for DataFrames, two ways of creating them, and their limitations. You may also look at the following articles to learn more –

  1. Spark Shell Commands
  2. Cursors in SQL
  3. SQL Constraints
  4. Database in SQL