PySpark DataFrame | coalesce method

Last updated: Jul 1, 2022
Tags: PySpark

PySpark DataFrame's coalesce(~) method reduces the number of partitions of the PySpark DataFrame without shuffling.

Parameters

1. num_partitions | int

The target number of partitions for the PySpark DataFrame's data.

Return Value

A new PySpark DataFrame.

Examples

Consider the following PySpark DataFrame:

df = spark.createDataFrame([["Alex", 20], ["Bob", 30], ["Cathy", 40]], ["name", "age"])
df.show()
+-----+---+
| name|age|
+-----+---+
| Alex| 20|
| Bob| 30|
|Cathy| 40|
+-----+---+

The default number of partitions is governed by your PySpark configuration. In this case, the default number of partitions is 8:

df.rdd.getNumPartitions()
8

We can see the actual content of each partition of the PySpark DataFrame by using the underlying RDD's glom() method:

df.rdd.glom().collect()

[[],
[],
[Row(name='Alex', age=20)],
[],
[],
[Row(name='Bob', age=30)],
[],
[Row(name='Cathy', age=40)]]

We can see that we indeed have 8 partitions, only 3 of which contain a Row.

Reducing the number of partitions of a PySpark DataFrame without shuffling

To reduce the number of partitions of the DataFrame without shuffling, use coalesce(~):

df_new = df.coalesce(2)
df_new.rdd.glom().collect()
[[Row(name='Alex', age=20)],
[Row(name='Bob', age=30), Row(name='Cathy', age=40)]]

Here, we can see that we now only have 2 partitions!
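The reason no shuffle is needed is that coalesce(~) assigns each existing partition, wholesale, to one of the fewer target partitions and merges them locally. The following is a simplified pure-Python sketch of this idea (not Spark's actual partition-assignment algorithm, which also accounts for data locality):

```python
def coalesce_sketch(partitions, num_partitions):
    # Simplified model of coalesce: each parent partition is assigned
    # in its entirety to one target partition, so rows are merged
    # locally and never redistributed row-by-row (no shuffle).
    n = len(partitions)
    merged = [[] for _ in range(num_partitions)]
    for i, part in enumerate(partitions):
        merged[i * num_partitions // n].extend(part)
    return merged

# The 8 partitions from the glom() output above:
parts = [[], [], ["Alex"], [], [], ["Bob"], [], ["Cathy"]]
coalesce_sketch(parts, 2)  # [['Alex'], ['Bob', 'Cathy']]
```

Note how the result matches the glom() output of df_new above: the first four parent partitions collapse into the first new partition, and the last four into the second.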

NOTE

Both the methods repartition(~) and coalesce(~) are used to change the number of partitions, but here are some notable differences:

  • repartition(~) generally results in a shuffling operation while coalesce(~) does not. This makes coalesce(~) less costly than repartition(~), since less data has to travel across the worker nodes.

  • coalesce(~) can only reduce the number of partitions, whereas repartition(~) can either increase or decrease it.
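To make the contrast concrete, here is the same kind of simplified pure-Python sketch for repartition(~) (a conceptual model, not Spark's actual implementation): every row is read and reassigned to a new partition, and this full row-by-row redistribution is exactly what a shuffle entails:

```python
def repartition_sketch(partitions, num_partitions):
    # Simplified model of repartition: all rows are collected and
    # redistributed (here, round-robin) across the target partitions.
    # Any row may land in a different partition than before -- this
    # full redistribution is what makes the shuffle costly.
    rows = [row for part in partitions for row in part]
    redistributed = [[] for _ in range(num_partitions)]
    for i, row in enumerate(rows):
        redistributed[i % num_partitions].append(row)
    return redistributed

parts = [[], [], ["Alex"], [], [], ["Bob"], [], ["Cathy"]]
repartition_sketch(parts, 2)  # [['Alex', 'Cathy'], ['Bob']]
```

Unlike the coalesce model, rows here are not guaranteed to stay alongside the rows they were originally partitioned with, which in a real cluster means data moving between worker nodes.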

Published by Isshin Inada