PySpark SQL Functions | collect_set method

Last updated: Aug 12, 2023
Tags: PySpark

PySpark SQL Functions' collect_set(~) method returns the set of unique values in a column. Null values are ignored.

NOTE

Use collect_list(~) instead to obtain a list of values that allows for duplicates.
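For example, a minimal sketch contrasting the two (assuming a SparkSession named spark, as in the examples below; element order may vary):

import pyspark.sql.functions as F
df_demo = spark.createDataFrame([("A",), ("A",), ("B",)], ["group"])
df_demo.select(F.collect_list("group"), F.collect_set("group")).show()
+-------------------+------------------+
|collect_list(group)|collect_set(group)|
+-------------------+------------------+
|          [A, A, B]|            [A, B]|
+-------------------+------------------+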

Parameters

1. col | string or Column object

The column label or a Column object.

Return Value

A PySpark SQL Column object (pyspark.sql.column.Column).
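For instance, a quick sanity check (a sketch, assuming pyspark.sql.functions is imported as F):

import pyspark.sql.functions as F
type(F.collect_set("group"))
<class 'pyspark.sql.column.Column'>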

WARNING

The order of the elements in the returned set may vary from run to run, since the order is affected by shuffle operations.
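If you need a deterministic ordering, one option is to sort the resulting array with array_sort(~) (a sketch, using the DataFrame defined in the examples below):

import pyspark.sql.functions as F
df.select(F.array_sort(F.collect_set("group"))).show()
+------------------------------+
|array_sort(collect_set(group))|
+------------------------------+
|                     [A, B, C]|
+------------------------------+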

Examples

Consider the following PySpark DataFrame:

data = [("Alex", "A"), ("Alex", "B"), ("Bob", "A"), ("Cathy", "C"), ("Dave", None)]
df = spark.createDataFrame(data, ["name", "group"])
df.show()
+-----+-----+
| name|group|
+-----+-----+
| Alex|    A|
| Alex|    B|
|  Bob|    A|
|Cathy|    C|
| Dave| null|
+-----+-----+

Getting a set of column values in PySpark

To get the unique set of values in the group column:

import pyspark.sql.functions as F
df.select(F.collect_set("group")).show()
+------------------+
|collect_set(group)|
+------------------+
|         [C, B, A]|
+------------------+

Equivalently, you can pass in a Column object to collect_set(~):

import pyspark.sql.functions as F
df.select(F.collect_set(df.group)).show()
+------------------+
|collect_set(group)|
+------------------+
|         [C, B, A]|
+------------------+

Notice how the null value does not appear in the resulting set.

Getting the set as a standard list

To get the set as a standard list:

list_rows = df.select(F.collect_set(df.group)).collect()
list_rows[0][0]
['C', 'B', 'A']

Here, the PySpark DataFrame's collect() method returns a list of Row objects. Since collect_set(~) aggregates the entire column into a single row, this list is guaranteed to have length one. The Row object itself holds the set as a list, so we index into it with another [0].
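If you prefer accessing the value by name rather than by position, one option is to give the aggregate column an alias first (a sketch; the label groups is an arbitrary choice, and the element order may again vary):

list_rows = df.select(F.collect_set(df.group).alias("groups")).collect()
list_rows[0]["groups"]
['C', 'B', 'A']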

Getting a set of column values of each group in PySpark

The method collect_set(~) is often used in the context of aggregation. Consider the same PySpark DataFrame as before:

df.show()
+-----+-----+
| name|group|
+-----+-----+
| Alex|    A|
| Alex|    B|
|  Bob|    A|
|Cathy|    C|
| Dave| null|
+-----+-----+

To flatten the group column into a single set for each name:

import pyspark.sql.functions as F
df.groupby("name").agg(F.collect_set("group")).show()
+-----+------------------+
| name|collect_set(group)|
+-----+------------------+
| Alex|            [B, A]|
|  Bob|               [A]|
|Cathy|               [C]|
| Dave|                []|
+-----+------------------+

Note how Dave ends up with an empty set, since his only group value is null and collect_set(~) ignores nulls.
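As a follow-up, to count the number of distinct groups per name, you can wrap the set in size(~) (a sketch; the column label n_groups is an arbitrary choice):

import pyspark.sql.functions as F
df.groupby("name").agg(F.size(F.collect_set("group")).alias("n_groups")).show()
+-----+--------+
| name|n_groups|
+-----+--------+
| Alex|       2|
|  Bob|       1|
|Cathy|       1|
| Dave|       0|
+-----+--------+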
Published by Isshin Inada