PySpark SQL Functions | countDistinct method

Last updated: Jul 1, 2022
PySpark SQL Functions' countDistinct(~) method returns the number of distinct rows for the specified columns.

Parameters

1. col | string or Column

The column to consider when counting distinct rows.

2. *cols | string or Column | optional

The additional columns to consider when counting distinct rows.

Return Value

A PySpark Column (pyspark.sql.column.Column).
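For instance, the snippet below (a minimal sketch, using the df DataFrame created in the Examples section) illustrates that countDistinct(~) merely builds a Column expression, which is only evaluated once it is placed inside select(~) or agg(~):

import pyspark.sql.functions as F

expr = F.countDistinct("name")   # no computation happens yet
print(type(expr))                # <class 'pyspark.sql.column.Column'>
df.select(expr).show()           # the aggregation is evaluated here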

Examples

Consider the following PySpark DataFrame:

df = spark.createDataFrame([["Alex", 25], ["Bob", 30], ["Alex", 25], ["Alex", 50]], ["name", "age"])
df.show()
+----+---+
|name|age|
+----+---+
|Alex| 25|
| Bob| 30|
|Alex| 25|
|Alex| 50|
+----+---+

Counting the number of distinct values in a single PySpark column

To count the number of distinct values in the column name:

import pyspark.sql.functions as F
df.select(F.countDistinct("name")).show()
+--------------------+
|count(DISTINCT name)|
+--------------------+
|                   2|
+--------------------+

Note that instead of passing in the column label ("name"), you can pass in a Column object like so:

# the following is equivalent: df.select(F.countDistinct(df.name)).show()
df.select(F.countDistinct(F.col("name"))).show()
+--------------------+
|count(DISTINCT name)|
+--------------------+
|                   2|
+--------------------+
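
Since show(~) only prints the result, you may want the distinct count as a plain Python number instead. One possible approach (a sketch, not the only way) is to grab the first row of the aggregated DataFrame:

n = df.select(F.countDistinct("name")).first()[0]
print(n)   # 2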

Counting the number of distinct values in multiple PySpark columns

To consider the columns name and age when counting distinct rows:

df.select(F.countDistinct("name", "age")).show()
+-------------------------+
|count(DISTINCT name, age)|
+-------------------------+
|                        3|
+-------------------------+
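
Because countDistinct(~) returns a Column, you can also chain alias(~) to replace the auto-generated label count(DISTINCT name, age) with a name of your choosing (here n_unique is just an illustrative choice):

df.select(F.countDistinct("name", "age").alias("n_unique")).show()
+--------+
|n_unique|
+--------+
|       3|
+--------+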

Counting the number of distinct rows in a PySpark DataFrame

To consider all columns when counting distinct rows, pass in "*":

df.select(F.countDistinct("*")).show()
+-------------------------+
|count(DISTINCT name, age)|
+-------------------------+
|                        3|
+-------------------------+
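
Note that a rough alternative for counting distinct rows across all columns is to chain distinct(~) with count(~), which returns the result directly as a Python integer rather than as a one-row DataFrame:

df.distinct().count()   # returns 3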