PySpark RDD | filter method

Last updated: Jun 19, 2022
Tags: PySpark

PySpark RDD's filter(~) method returns a new RDD containing only the elements for which the given function returns True.

Parameters

1. f | function

A function that takes an item of the RDD as input and returns a boolean, where:

  • True indicates that the item should be kept

  • False indicates that the item should be filtered out.
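The keep/ignore semantics of the predicate can be sketched in plain Python (a local analogy for illustration, not Spark itself): filter keeps exactly the items for which f returns True.

```python
# Pure-Python analogy of filter(~)'s predicate semantics:
# keep exactly the items for which f(item) is True.
def local_filter(f, items):
    return [item for item in items if f(item)]

data = [4, 2, 5, 7]
kept = local_filter(lambda x: x > 3, data)
print(kept)  # [4, 5, 7]
```

In Spark, the same predicate is applied to each partition's items in parallel rather than to a local list.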

Return Value

A PySpark RDD (pyspark.rdd.PipelinedRDD).

Examples

Consider the following RDD:

rdd = sc.parallelize([4,2,5,7])
rdd
ParallelCollectionRDD[7] at readRDDFromInputStream at PythonRDD.scala:413

Filtering elements of an RDD

To obtain a new RDD where the values are all strictly larger than 3:

new_rdd = rdd.filter(lambda x: x > 3)
new_rdd.collect()
[4, 5, 7]

Here, the collect() method is used to retrieve the contents of the RDD as a single list on the driver.
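A named function can be passed to filter(~) in place of a lambda. As a sketch, the hypothetical predicate is_even below keeps only even values; its keep/ignore logic is checked locally here, without a SparkContext:

```python
# A named predicate (hypothetical example) that could be passed to
# filter(~) instead of a lambda.
def is_even(x):
    return x % 2 == 0

data = [4, 2, 5, 7]
# Locally, filtering with is_even keeps exactly these values:
kept = [x for x in data if is_even(x)]
print(kept)  # [4, 2]
```

With an RDD, the equivalent call would be rdd.filter(is_even), which returns a new RDD rather than a list.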

Published by Isshin Inada