PySpark RDD | reduceByKey method

Last updated: Aug 12, 2023
Tags: PySpark

PySpark RDD's reduceByKey(~) method aggregates the RDD data by key and performs a reduction operation. A reduction operation is one in which multiple values are reduced to a single value (e.g. summation, multiplication).
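As a plain-Python analogy (this uses the standard library's functools.reduce, not the PySpark API), a reduction folds values pairwise until a single value remains:

from functools import reduce
# Pairwise summation: ((3 + 1) + 2) = 6, so three values reduce to one
reduce(lambda a, b: a + b, [3, 1, 2])
6

This is exactly what reduceByKey(~) does to the values of each key.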

Parameters

1. func | function

The reduction function to apply.

2. numPartitions | int | optional

By default, the number of partitions will be equal to the number of partitions of the parent RDD. If the parent RDD does not have the partition count set, then the parallelism level in the PySpark configuration will be used.

3. partitionFunc | function | optional

The partitioner to use. The input is a key and the return value must be its hash (an integer). By default, a hash partitioner is used.
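As a minimal sketch of a custom partitionFunc (the sample data and hash function here are illustrative), the returned hash is taken modulo the number of partitions to pick each key's target partition:

rdd = sc.parallelize([("A",1),("B",1),("A",1)], numSlices=3)
new_rdd = rdd.reduceByKey(lambda a, b: a+b, numPartitions=2, partitionFunc=lambda key: ord(key))
# ord("A")=65 is odd and ord("B")=66 is even, so the two keys land in different partitions.
# glom() groups the elements of the RDD by partition:
new_rdd.glom().collect()
[[('B', 1)], [('A', 2)]]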

Return Value

A PySpark RDD (pyspark.rdd.PipelinedRDD).
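For instance, as a quick check (assuming the same sc SparkContext used in the examples below):

new_rdd = sc.parallelize([("A",1),("B",1)]).reduceByKey(lambda a, b: a+b)
type(new_rdd)
<class 'pyspark.rdd.PipelinedRDD'>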

Examples

Consider the following Pair RDD:

rdd = sc.parallelize([("A",1),("B",1),("C",1),("A",1)], numSlices=3)
rdd.collect()
[('A', 1), ('B', 1), ('C', 1), ('A', 1)]

Here, the parallelize(~) method creates an RDD with 3 partitions.
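We can confirm this with the RDD's getNumPartitions() method:

rdd.getNumPartitions()
3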

Grouping by key in pair RDD and performing a reduction operation

To group by key and sum the values within each group:

rdd.reduceByKey(lambda a, b: a+b).collect()
[('B', 1), ('C', 1), ('A', 2)]
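The reduction function is not limited to summation. As a quick variation (with illustrative sample data), taking the maximum value of each key works the same way:

rdd = sc.parallelize([("A",5),("B",2),("A",3)], numSlices=3)
rdd.reduceByKey(lambda a, b: max(a, b)).collect()
[('B', 2), ('A', 5)]

Note that the ordering of the output pairs is not guaranteed.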

Setting number of partitions after reducing by key in pair RDD

By default, the number of partitions of the resulting RDD will be equal to the number of partitions of the parent RDD:

# Create an RDD using 3 partitions
rdd = sc.parallelize([("A",1),("B",1),("C",1),("A",1),("D",1)], numSlices=3)
new_rdd = rdd.reduceByKey(lambda a, b: a+b)
new_rdd.getNumPartitions()
3

Here, rdd is the parent RDD of new_rdd.

We can control the number of partitions of the resulting RDD via the numPartitions parameter:

rdd = sc.parallelize([("A",1),("B",1),("C",1),("A",1),("D",1)], numSlices=3)
new_rdd = rdd.reduceByKey(lambda a, b: a+b, numPartitions=2)
new_rdd.getNumPartitions()
2
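To peek at how the reduced pairs are distributed across the two partitions, we can again use glom(), which groups the RDD's elements by partition. The grouping shown below is just one possibility, since it depends on each key's hash:

new_rdd.glom().collect()
[[('B', 1), ('D', 1)], [('C', 1), ('A', 2)]]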