PySpark SQL Functions | collect_list method

Last updated: Jul 1, 2022
Tags: PySpark

PySpark SQL's collect_list(~) method aggregates the values of a column into a single list. Unlike collect_set(~), the returned list can contain duplicate values. Null values are ignored.

Parameters

1. col | string or Column object

The column label or a Column object.

Return Value

A PySpark SQL Column object (pyspark.sql.column.Column).

WARNING

The order of the returned list may be non-deterministic, since it depends on how rows are shuffled across the cluster.
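
If a deterministic order is needed, one common workaround (a minimal sketch using the example DataFrame df defined in the Examples section below) is to wrap the result in sort_array(~):

import pyspark.sql.functions as F
# sort_array(~) imposes an ascending order on the collected list;
# the alias sorted_groups is just an illustrative name
df.select(F.sort_array(F.collect_list("group")).alias("sorted_groups")).show()
+-------------+
|sorted_groups|
+-------------+
| [A, A, B, C]|
+-------------+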

Examples

Consider the following PySpark DataFrame:

data = [("Alex", "A"), ("Alex", "B"), ("Bob", "A"), ("Cathy", "C"), ("Dave", None)]
df = spark.createDataFrame(data, ["name", "group"])
df.show()
+-----+-----+
| name|group|
+-----+-----+
| Alex|    A|
| Alex|    B|
|  Bob|    A|
|Cathy|    C|
| Dave| null|
+-----+-----+

Getting a list of column values in PySpark

To get a list of values in the group column:

import pyspark.sql.functions as F
df.select(F.collect_list("group")).show()
+-------------------+
|collect_list(group)|
+-------------------+
|       [A, B, A, C]|
+-------------------+

Notice the following:

  • the returned list contains duplicate values (A appears twice).

  • null values (Dave's group) are ignored.
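
For comparison, here is a minimal sketch of collect_set(~) on the same column, which drops the duplicate A (as with collect_list(~), the order of the elements is not guaranteed):

import pyspark.sql.functions as F
# collect_set(~) removes duplicates and still ignores null values;
# the element order shown here is illustrative only
df.select(F.collect_set("group")).show()
+------------------+
|collect_set(group)|
+------------------+
|         [C, B, A]|
+------------------+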

Equivalently, you can pass in a Column object to collect_list(~):

import pyspark.sql.functions as F
df.select(F.collect_list(df.group)).show()
+-------------------+
|collect_list(group)|
+-------------------+
|       [A, B, A, C]|
+-------------------+

Obtaining a standard list

To obtain a standard Python list instead:

list_rows = df.select(F.collect_list(df.group)).collect()
list_rows[0][0]
['A', 'B', 'A', 'C']

Here, the collect() method returns the content of the PySpark DataFrame returned by select(~) as a list of Row objects. This list is guaranteed to be of length one because collect_list(~) collects the values into a single list. Finally, we access the content of the Row object using [0].
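
Equivalently, since the result always has exactly one row, you could fetch it with first() instead of collect() (a small sketch on the same DataFrame):

# first() returns the single Row directly, so no intermediate list is needed
df.select(F.collect_list("group")).first()[0]
['A', 'B', 'A', 'C']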

Getting a list of column values for each group in PySpark

The method collect_list(~) is often used in the context of aggregation. Consider the same PySpark DataFrame as above:

df.show()
+-----+-----+
| name|group|
+-----+-----+
| Alex|    A|
| Alex|    B|
|  Bob|    A|
|Cathy|    C|
| Dave| null|
+-----+-----+

To collect the group column values into a single list for each name:

import pyspark.sql.functions as F
df.groupby("name").agg(F.collect_list("group")).show()
+-----+-------------------+
| name|collect_list(group)|
+-----+-------------------+
| Alex|             [A, B]|
|  Bob|                [A]|
|Cathy|                [C]|
| Dave|                 []|
+-----+-------------------+
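
To bring these per-name lists into the driver as a plain Python dictionary, one approach (a minimal sketch; the alias groups is just an illustrative name) is:

import pyspark.sql.functions as F
# Collect one Row per name, then build a name -> list-of-groups mapping
rows = df.groupby("name").agg(F.collect_list("group").alias("groups")).collect()
name_to_groups = {row["name"]: row["groups"] for row in rows}
name_to_groups
{'Alex': ['A', 'B'], 'Bob': ['A'], 'Cathy': ['C'], 'Dave': []}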