# PySpark RDD | repartition method

Last updated: Aug 12, 2023

Tags: PySpark

PySpark RDD's `repartition(~)` method splits the RDD into the specified number of partitions.

NOTE

When an RDD is first created, it is already partitioned under the hood. This method is called `repartition(~)` (emphasis on the `re`) because we are changing that existing partitioning.

# Parameters

1. `numPartitions` | `int`

The number of partitions in which to split the RDD.

# Return Value

A PySpark RDD (`pyspark.rdd.RDD`).

# Examples

## Repartitioning an RDD into a certain number of partitions

Consider the following RDD:

```
rdd = sc.parallelize(["A","B","C","A","A","B"], numSlices=3)
rdd.collect()
# ['A', 'B', 'C', 'A', 'A', 'B']
```

Here, we are using the `parallelize(~)` method to create an RDD with 3 partitions.

We can use the `glom()` method to see the actual content of the partitions:

```
rdd.glom().collect()
# [['A', 'B'], ['C', 'A'], ['A', 'B']]
```

To repartition our RDD into 2 partitions:

```
new_rdd = rdd.repartition(2)
new_rdd.glom().collect()
# [['A', 'B', 'A', 'B'], ['C', 'A']]
```

Notice how even after we repartition our RDD:

• the same values do not necessarily end up in the same partition (`'A'` can be found in both partitions)

• the number of elements in each partition may not be balanced - here we have 4 elements in the first partition but only 2 in the second.

WARNING

The `repartition(~)` method involves shuffling, even when reducing the number of partitions. To avoid shuffling when reducing the number of partitions, use the RDD's `coalesce(~)` method instead.
