PySpark DataFrame | repartition method

Last updated: Aug 12, 2023

PySpark DataFrame's repartition(~) method returns a new PySpark DataFrame with the data split into the specified number of partitions. This method also allows partitioning by column values.
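
Note that repartition(~) performs a full shuffle of the data across the cluster. If the goal is only to reduce the number of partitions, the coalesce(~) method avoids a full shuffle and is typically cheaper.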

Parameters

1. numPartitions | int

The number of partitions into which to split the DataFrame.

2. cols | str or Column | optional

The columns by which to partition the DataFrame.

Return Value

A new PySpark DataFrame.

Examples

Partitioning a PySpark DataFrame

Consider the following PySpark DataFrame:

df = spark.createDataFrame([("Alex", 20), ("Bob", 30), ("Cathy", 40)], ["name", "age"])
df.show()
+-----+---+
| name|age|
+-----+---+
| Alex| 20|
| Bob| 30|
|Cathy| 40|
+-----+---+

By default, the number of partitions depends on the parallelism level of your PySpark configuration, which we can check using the getNumPartitions() method of the underlying RDD:
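
df.rdd.getNumPartitions()
8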

In our case, the PySpark DataFrame is split into 8 partitions by default.

We can see how the rows of our DataFrame are partitioned using the glom() method of the underlying RDD:

df.rdd.glom().collect()
[[],
[],
[Row(name='Alex', age=20)],
[],
[],
[Row(name='Bob', age=30)],
[],
[Row(name='Cathy', age=40)]]

Here, we can see that we indeed have 8 partitions, but only 3 of them contain a Row.

Now, let's repartition our DataFrame such that the Rows are divided into only 2 partitions:

df_new = df.repartition(2)
df_new.rdd.getNumPartitions()
2

The distribution of the rows in our repartitioned DataFrame is now:

df_new.rdd.glom().collect()
[[Row(name='Alex', age=20),
Row(name='Bob', age=30),
Row(name='Cathy', age=40)],
[]]

As demonstrated here, there is no guarantee that the rows will be evenly distributed in the partitions.
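
This is because, in current Spark versions, repartition(~) without columns distributes rows in a round-robin fashion (each input partition starts writing at an arbitrary output partition), so the final layout does not depend on the row values at all.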

Partitioning a PySpark DataFrame by column values

Consider the following PySpark DataFrame:

df = spark.createDataFrame([("Alex", 20), ("Bob", 30), ("Cathy", 40), ("Alex", 50)], ["name", "age"])
df.show()
+-----+---+
| name|age|
+-----+---+
| Alex| 20|
| Bob| 30|
|Cathy| 40|
| Alex| 50|
+-----+---+

To repartition this PySpark DataFrame by the column name into 2 partitions:

df_new = df.repartition(2, "name")
df_new.rdd.glom().collect()
[[Row(name='Alex', age=20),
Row(name='Cathy', age=40),
Row(name='Alex', age=50)],
[Row(name='Bob', age=30)]]

Here, notice how the rows with the same value for name ('Alex' in this case) end up in the same partition.
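
To verify this assignment directly, we can tag each row with the ID of the partition it lives in using the spark_partition_id() function (a quick sanity check, not part of repartition(~) itself):

from pyspark.sql.functions import spark_partition_id
# Add a column holding the partition ID assigned to each row
df_new.withColumn("partition_id", spark_partition_id()).show()

Given the glom() output above, the two 'Alex' rows and the 'Cathy' row would report one partition ID, and the 'Bob' row the other. Repartitioning by column uses hash partitioning, which is why distinct values ('Alex' and 'Cathy' here) can still land in the same partition.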

We can also repartition by multiple column values:

df_new = df.repartition(4, "name", "age")
df_new.rdd.glom().collect()
[[Row(name='Alex', age=20)],
[Row(name='Bob', age=30)],
[Row(name='Alex', age=50)],
[Row(name='Cathy', age=40)]]

Here, we are repartitioning by the name and age columns into 4 partitions.
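
Since the partition is determined by hashing the combination of both columns, only rows sharing the same (name, age) pair are guaranteed to end up in the same partition.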

We can also use the default number of partitions by specifying column labels only:

df_new = df.repartition("name")
1
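
When numPartitions is omitted, the number of shuffle partitions is governed by the spark.sql.shuffle.partitions configuration (200 by default); the single partition reported here is presumably the result of adaptive query execution coalescing the shuffle partitions of this tiny DataFrame.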