PySpark SQL Functions | count_distinct method

Last updated: Jul 1, 2022
Tags: PySpark

PySpark SQL Functions' count_distinct(~) method counts the number of distinct values in the specified columns.

Parameters

1. *cols | string or Column

The columns in which to count the number of distinct values.

Return Value

A PySpark Column holding an integer.

Examples

Consider the following PySpark DataFrame:

df = spark.createDataFrame([["Alex", "A"], ["Bob", "A"], ["Cathy", "B"]], ["name", "class"])
df.show()
+-----+-----+
| name|class|
+-----+-----+
| Alex| A|
| Bob| A|
|Cathy| B|
+-----+-----+

Counting the number of distinct values in a single column in PySpark

To count the number of distinct values in the class column:

from pyspark.sql import functions as F
df.select(F.count_distinct("class").alias("c")).show()
+---+
| c|
+---+
| 2|
+---+

Here, we are giving the name "c" to the Column returned by count_distinct(~) via alias(~).

Note that we could also supply a Column object to count_distinct(~) instead:

df.select(F.count_distinct(df["class"]).alias("c")).show()
+---+
| c|
+---+
| 2|
+---+

Obtaining an integer count

By default, count_distinct(~) returns a PySpark Column. To get an integer count instead:

df.select(F.count_distinct(df["class"])).collect()[0][0]
2

Here, we use the select(~) method to convert the Column into a PySpark DataFrame. We then use the collect(~) method to convert the DataFrame into a list of Row objects. Since this list contains only one Row, and that Row holds only one value, we use [0][0] to access the integer count.

Counting the number of distinct values in a set of columns in PySpark

To count the number of distinct values for the columns name and class:

df.select(F.count_distinct("name", "class").alias("c")).show()
+---+
| c|
+---+
| 3|
+---+
Published by Isshin Inada