PySpark DataFrame | fillna method

Last updated: Aug 12, 2023
Tags: PySpark

PySpark DataFrame's fillna(~) method replaces null values with the value you specify. We can also restrict the fill to specific columns.

Parameters

1. value | int or float or string or boolean or dict

The value to replace the null values with. For a dict, the keys are the column labels and the values are the fill values for those columns. If a dict is passed, then subset is ignored (see the sketch after this parameter list).

2. subset | string or tuple or list | optional

The columns to consider for filling. By default, all columns that are of the same type as value will be considered.
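
As noted above, subset is ignored whenever a dict is passed for value. Here is a minimal sketch of that behavior, assuming an active SparkSession named spark (as in the examples below):

df = spark.createDataFrame([["Alex", 25, None], ["Bob", None, 200]], ["name", "age", "salary"])
# Both age and salary are filled even though subset only lists "age",
# because subset is ignored when value is a dict.
df.fillna({"age": 0, "salary": 0}, subset=["age"]).show()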

Return Value

A PySpark DataFrame (pyspark.sql.dataframe.DataFrame).
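
Note that fillna(~) returns a new DataFrame rather than modifying the original in place. As a quick sketch, assuming a DataFrame df like the one created in the examples below:

filled_df = df.fillna(0)
print(type(filled_df))  # <class 'pyspark.sql.dataframe.DataFrame'>
df.show()               # the original DataFrame still contains its null values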

Examples

Consider the following PySpark DataFrame:

df = spark.createDataFrame([["Alex", 25, None], [None, 30, 200], ["Cathy", None, 100]], ["name", "age", "salary"])
df.show()
+-----+----+------+
| name| age|salary|
+-----+----+------+
| Alex|  25|  null|
| null|  30|   200|
|Cathy|null|   100|
+-----+----+------+

Filling missing values in entire PySpark DataFrame

To fill all missing values with 50:

df.fillna(50).show()
+-----+---+------+
| name|age|salary|
+-----+---+------+
| Alex| 25|    50|
| null| 30|   200|
|Cathy| 50|   100|
+-----+---+------+

Here, notice how the null value in the name column is left intact. This is because we passed in 50 for the value argument, which is numeric, whereas the name column is of string type. Because of this mismatch in data types, the null value in the name column was not filled.
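
Conversely, passing a string value fills only the string-typed columns. As a small sketch, using an arbitrary placeholder "unknown" leaves the numeric columns untouched:

df.fillna("unknown").show()
+-------+----+------+
|   name| age|salary|
+-------+----+------+
|   Alex|  25|  null|
|unknown|  30|   200|
|  Cathy|null|   100|
+-------+----+------+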

Filling missing values with different values for different columns

To fill the null values in age with 50, and those in salary with 300:

df.fillna({"age":50, "salary":300}).show()
+-----+---+------+
| name|age|salary|
+-----+---+------+
| Alex| 25|   300|
| null| 30|   200|
|Cathy| 50|   100|
+-----+---+------+

Filling missing values with the same value for different columns

To fill null values for the age and salary columns with 50:

df.fillna(50, ["age","salary"]).show()
+-----+---+------+
| name|age|salary|
+-----+---+------+
| Alex| 25|    50|
| null| 30|   200|
|Cathy| 50|   100|
+-----+---+------+

Filling missing values using values of another column

Unfortunately, the fillna(~) method does not allow for imputing missing values with the values of another column.

Consider the following PySpark DataFrame:

df = spark.createDataFrame([["Alex", 25, None], [None, 30, 200], ["Cathy", None, 100]], ["name", "age", "salary"])
df.show()
+-----+----+------+
| name| age|salary|
+-----+----+------+
| Alex|  25|  null|
| null|  30|   200|
|Cathy|null|   100|
+-----+----+------+

To impute missing values in age with the values in salary, we can use PySpark's when(~) method:

from pyspark.sql import functions as F

df.withColumn("age", F.when(F.col("age").isNull(), F.col("salary")).otherwise(F.col("age"))).show()
+-----+---+------+
| name|age|salary|
+-----+---+------+
| Alex| 25|  null|
| null| 30|   200|
|Cathy|100|   100|
+-----+---+------+
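
An equivalent alternative is PySpark's coalesce(~) function, which returns the first non-null value among its arguments. This sketch produces the same result as the when(~) approach above:

# Using the functions alias F imported above, null ages fall back to the salary value.
df.withColumn("age", F.coalesce(F.col("age"), F.col("salary"))).show()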
Published by Isshin Inada