Counting number of negative values in PySpark DataFrame

Last updated: Aug 12, 2023
Tags: PySpark

Consider the following PySpark DataFrame:

# assumption: a SparkSession named spark; create one if it does not already exist
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

rows = [[-3, -4], [-5, 6]]
df = spark.createDataFrame(rows, ["A", "B"])
df.show()
+---+---+
|  A|  B|
+---+---+
| -3| -4|
| -5|  6|
+---+---+

Counting the number of negative values in a single column

To count the number of negative values in a single column:

df.filter('A < 0').count()
2

Here, the df.filter(~) method returns a new PySpark DataFrame holding only the rows where the value for A is negative. We then use the returned DataFrame's count() method to fetch the number of rows as an integer.
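
Equivalently, the filter condition can be written as a Column object rather than a SQL string. Here is a minimal sketch, assuming the same df as above; filter(~) accepts both forms:

from pyspark.sql import functions as F

# same count as above, expressed with a Column condition instead of a SQL string
df.filter(F.col('A') < 0).count()
2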

Counting the number of negative values in multiple columns

To count the number of negative values in multiple columns, we can use the selectExpr(~) method, which accepts SQL expressions as its arguments:

cols = ['A', 'B']
sql_expressions = [f'count(CASE WHEN ({col} < 0) THEN 1 END) AS {col}_count' for col in cols]
df.selectExpr(sql_expressions).show()
+-------+-------+
|A_count|B_count|
+-------+-------+
|      2|      1|
+-------+-------+

Here, we are using list comprehension to convert a list of column names to a list of SQL expressions:

cols = ['A', 'B']
sql_expressions = [f'count(CASE WHEN ({col} < 0) THEN 1 END) AS {col}_count' for col in cols]
print(sql_expressions)
['count(CASE WHEN (A < 0) THEN 1 END) AS A_count',
 'count(CASE WHEN (B < 0) THEN 1 END) AS B_count']

Here, note the following:

  • we are using CASE WHEN to map negative values to 1 and all other values to null. The count(~) SQL function only counts non-null values, which is why this yields the number of negative values.

  • we are assigning a label to each column returned by count(~) using the alias clause AS. A DataFrame-API equivalent is sketched after this list.
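
The same counts can also be obtained without SQL strings by staying within the DataFrame API. The following is a minimal sketch, assuming the same df and cols as above; F.count(~) skips nulls, so F.when(~) without an otherwise(~) clause mirrors the CASE WHEN above:

from pyspark.sql import functions as F

cols = ['A', 'B']
# F.when(~) returns null when the condition fails, and F.count(~) ignores nulls
df.select([F.count(F.when(F.col(c) < 0, 1)).alias(f'{c}_count') for c in cols]).show()
+-------+-------+
|A_count|B_count|
+-------+-------+
|      2|      1|
+-------+-------+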

Counting the total number of negative values in multiple columns

To count the total number of negative values in multiple columns:

cols = ['A', 'B']
sql_expressions = [f'count(CASE WHEN ({col} < 0) THEN 1 END) AS {col}_count' for col in cols]
df_res = df.selectExpr(sql_expressions)
row_object = df_res.collect()[0]
sum(list(row_object))
3

Here, we first obtain the number of negative values of each column, as we have done in the previous section:

cols = ['A', 'B']
sql_expressions = [f'count(CASE WHEN ({col} < 0) THEN 1 END) AS {col}_count' for col in cols]
df.selectExpr(sql_expressions).show()
+-------+-------+
|A_count|B_count|
+-------+-------+
|      2|      1|
+-------+-------+

We then convert this PySpark DataFrame into a list of Row objects using the collect() method:

df_res.collect()
[Row(A_count=2, B_count=1)]

The resulting list will always be of length one because the aggregation collapses the DataFrame into a single row, and we access that inner Row object using [0]. We then convert the Row object into a list:

list(df_res.collect()[0])
[2, 1]

Finally, we use the built-in sum(~) function to compute the total number of negative values. Note that Row objects are iterable (Row is a subclass of tuple), so the explicit list(~) conversion is optional.
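
As an aside, the total can also be computed in a single pass by adding the per-column counts inside one SQL expression, which avoids the intermediate Row handling. A minimal sketch, assuming the same df and cols as above (the alias total_negatives is our own choice of label):

cols = ['A', 'B']
# add the per-column counts together inside a single SQL expression
total_expr = ' + '.join(f'count(CASE WHEN ({c} < 0) THEN 1 END)' for c in cols)
df.selectExpr(f'{total_expr} AS total_negatives').show()
+---------------+
|total_negatives|
+---------------+
|              3|
+---------------+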

Published by Isshin Inada