PySpark DataFrame | selectExpr method

Last updated: Jul 1, 2022
Tags: PySpark

PySpark DataFrame's selectExpr(~) method returns a new DataFrame based on the specified SQL expressions.

Parameters

1. *expr | string

One or more SQL expressions, each passed as a separate string argument.

Return Value

A new PySpark DataFrame.

Examples

Consider the following PySpark DataFrame:

df = spark.createDataFrame([["Alex", 20], ["Bob", 30], ["Cathy", 40]], ["name", "age"])
df.show()
+-----+---+
| name|age|
+-----+---+
| Alex| 20|
| Bob| 30|
|Cathy| 40|
+-----+---+

Selecting data using SQL expressions in PySpark DataFrame

To get a new DataFrame where the values of the name column are uppercased and the values of the age column are doubled:

df.selectExpr("upper(name) AS upper_name", "age * 2").show()
+----------+---------+
|upper_name|(age * 2)|
+----------+---------+
|      ALEX|       40|
|       BOB|       60|
|     CATHY|       80|
+----------+---------+

We should use selectExpr(~) rather than select(~) when extracting columns while performing simple transformations on them, just as we have done here.
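
For comparison, here is a minimal sketch of the same transformation written with select(~) and the pyspark.sql.functions library (assuming the usual F alias):

from pyspark.sql import functions as F
# Equivalent to df.selectExpr("upper(name) AS upper_name", "age * 2")
df.select(F.upper("name").alias("upper_name"), F.col("age") * 2).show()
+----------+---------+
|upper_name|(age * 2)|
+----------+---------+
|      ALEX|       40|
|       BOB|       60|
|     CATHY|       80|
+----------+---------+

The extra import and the alias(~) call are exactly what selectExpr(~) lets us avoid.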

NOTE

There exists a similar method expr(~) in the pyspark.sql.functions library. expr(~) also takes a SQL expression as argument, but it returns a PySpark Column rather than a DataFrame. The following usages of selectExpr(~) and expr(~) are equivalent:

from pyspark.sql import functions as F
# The below is the same as df.selectExpr("upper(name)").show()
df.select(F.expr("upper(name)")).show()
+-----------+
|upper(name)|
+-----------+
|       ALEX|
|        BOB|
|      CATHY|
+-----------+

In general, you should use selectExpr(~) rather than expr(~) because:

  • you won't have to import the pyspark.sql.functions library.

  • the syntax is shorter and clearer.
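
That said, because expr(~) returns a Column, you can chain Column methods onto the parsed expression, which selectExpr(~) does not allow. A minimal sketch (the alias upper_name is just for illustration):

from pyspark.sql import functions as F
# expr(~) returns a Column, so Column methods like alias(~) can be chained
df.select(F.expr("upper(name)").alias("upper_name")).show()
+----------+
|upper_name|
+----------+
|      ALEX|
|       BOB|
|     CATHY|
+----------+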

Parsing more complex SQL expressions

Consider the following PySpark DataFrame:

df = spark.createDataFrame([["Alex", 20], ["Bob", 60]], ["name", "age"])
df.show()
+----+---+
|name|age|
+----+---+
|Alex| 20|
| Bob| 60|
+----+---+

We can use classic SQL clauses like AND and LIKE to formulate more complicated expressions:

df.selectExpr('age < 30 AND name LIKE "A%" AS result').show()
+------+
|result|
+------+
|  true|
| false|
+------+

Here, we are computing, for each row, whether age is less than 30 and name starts with the letter A.

Note that we can implement the same logic like so:

col = ((df.age < 30) & (F.col("name").startswith("A"))).alias("result")
df.select(col).show()
+------+
|result|
+------+
|  true|
| false|
+------+

I personally prefer using selectExpr(~) because the syntax is cleaner and the meaning is intuitive to anyone familiar with SQL.

Checking for the existence of values in a PySpark column

Another application of selectExpr(~) is to check for the existence of values in a PySpark column. Please check out the recipe here.
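
As a rough sketch of that recipe, Spark SQL's aggregate function any(~) (available in Spark 3.0+) can be embedded in selectExpr(~) to test whether any row satisfies a condition; the column name has_alex here is just for illustration:

# any(~) aggregates the per-row booleans into a single true/false value
df.selectExpr('any(name = "Alex") AS has_alex').show()
+--------+
|has_alex|
+--------+
|    true|
+--------+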

Published by Isshin Inada