Spark Parquet

By Priya Pedamkar


Introduction to Spark Parquet

Parquet is a columnar file format supported by many data processing systems and is an integral part of Spark-driven analytics. Its advantages are compelling: efficient storage, fast and query-efficient access to data, and a rich set of accompanying tools. The major bottlenecks of distributed analytics, the cost of communication (input/output bound) and of decoding the data (CPU bound), are both reduced by Spark SQL's Parquet support. Spark SQL provides support for reading and writing Parquet files, and it automatically preserves the schema of the original data.

Syntax:

dataframe.write.parquet(path)

This is the syntax for writing a Spark DataFrame to a Parquet file.


How Does Apache Spark Parquet Work?

Parquet uses a binary format: the files are columnar, encoded, and compressed, which makes this side of Spark SQL machine friendly rather than human readable. APIs are available for the JVM, Hadoop, and C++. Let us consider an example to understand how Apache Spark works with Parquet. First, we need to create a Spark DataFrame from a Seq object. The toDF() function only becomes available on a sequence once the implicits are imported via spark.sqlContext.implicits._


// Sample data: (first_name, mid_name, last_name, date_of_birth, gender, income)
val entry = Seq(("John ","Hen","Kennedy","36636","M",8000),
("Michael ","","Rose","40288","F",10000),
("Robert ","Patrick","Williams","42114","M",5000),
(" ","Anne","Frank","39192","F",9000),
("Robert","Downey","Junior","","M",12000))
// Parquet rejects spaces in column names, so use underscores
val cols = Seq("first_name","mid_name","last_name","date_of_birth","gender","income")
// Makes toDF() available on the Seq
import spark.sqlContext.implicits._
val dataframe_ = entry.toDF(cols:_*)
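
Before writing, the schema that Parquet will preserve can be checked on the DataFrame built above (a small sketch; note that, as discussed below, the non-nullable income column becomes nullable once written to Parquet):

// Inspect the schema that the Parquet write will preserve
dataframe_.printSchema()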

1. Writing a DataFrame to a Parquet file:

dataframe_.write.parquet("C:/temp/file_output/personal.parquet")
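
Note that re-running this write fails by default once the output path exists; if overwriting is acceptable, a save mode can be set explicitly (a minimal sketch using Spark's standard SaveMode options):

// Overwrite existing output instead of failing on re-runs
dataframe_.write.mode("overwrite").parquet("C:/temp/file_output/personal.parquet")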

2. For compatibility reasons, all columns are automatically marked as nullable when writing to Parquet. The write preserves the column names and data types and places all the data in the specified folder. Reading the Parquet file back into a DataFrame:

val parquetDF = spark.read.parquet("C:/temp/file_output/personal.parquet")

3. Using SQL queries on a Parquet file: A temporary view can be created over the Parquet data and used in SQL statements.

parquetDF.createOrReplaceTempView("parquet_table")
val parquetSQL = spark.sql("select * from parquet_table where income >= 4000")
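
The query returns a DataFrame, so the matching rows can be displayed directly (a small usage sketch):

// Show the rows with income of at least 4000
parquetSQL.show()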

4. The predicate above still triggers a scan of all the Parquet files, just like a traditional database table scan, which becomes a performance bottleneck on large data. Using partitions to improve performance is a necessity.

5. Improving performance through partitioning: Partitioning is a feature of many databases and data processing frameworks, and it is what allows Spark jobs to work at scale.

dataframe_.write.partitionBy("gender","income").parquet("C:/temp/file_output/personal2.parquet")

Writing with partitionBy creates a hierarchy of partition folders under the output path, one subfolder per column value (for example gender=M/income=8000/), with the Parquet part files in the leaf folders.

The above shows how to write a Spark SQL DataFrame into Parquet files while preserving the partitioning on the gender and income columns.

val parquetDF2 = spark.read.parquet("C:/temp/file_output/personal2.parquet")
parquetDF2.createOrReplaceTempView("parquet_table2")
val parquetSQL2 = spark.sql("select * from parquet_table2 where gender = 'M' and income >= 4000")

Executing the query against the partitioned data is much faster than against the unpartitioned data, because Spark reads only the partitions that match the predicate. A specific partition can also be read directly from the Parquet output:

val parquetGenderM = spark.read.parquet("C:/temp/file_output/personal2.parquet/gender=M")

This retrieves only the data in the gender=M partition.
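
The same partition pruning can be obtained through the DataFrame API instead of hard-coding the partition path; a sketch, assuming the partitioned output written above:

// Spark prunes the non-matching gender partitions based on the filter predicate
val males = spark.read.parquet("C:/temp/file_output/personal2.parquet")
.filter($"gender" === "M")
males.show()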


Advantages

Notable advantages of using Apache Parquet with Spark:

  • Columnar storage limits input-output operations, since only the data that is needed is read.
  • Columnar storage allows fetching just the specific columns one needs to access (see the sketch after this list).
  • Columnar storage consumes less storage space.
  • Columnar storage, along with type-specific encoding, gives better summarization of the data.
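
Column pruning is easy to observe: selecting only a subset of columns from a Parquet-backed DataFrame makes Spark read just those column chunks from disk. A minimal sketch against the file written earlier:

// Only the first_name and income column chunks are read from disk
val slim = spark.read.parquet("C:/temp/file_output/personal.parquet")
.select("first_name", "income")
slim.show()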

Conclusion

We have seen how Spark works with the Parquet format. Parquet is a compressed, columnar binary format that Spark SQL reads and writes natively while preserving the schema of the original data. Full file scans become a bottleneck at scale, but partitioning the output lets queries read only the partitions they need, and columnar storage keeps input-output operations and storage space low.

Recommended Articles

This is a guide to Spark Parquet. Here we discuss the introduction to Spark Parquet, its syntax, and how it works, with examples of its implementation. You can also go through our other related articles to learn more –

  1. Spark Tools
  2. Spark Shell Commands
  3. Spark Functions
  4. RDD In Spark