PySpark DataFrame | intersectAll method

Last updated: Aug 12, 2023
Tags: PySpark

PySpark DataFrame's intersectAll(~) method returns a new PySpark DataFrame with rows that also exist in the other PySpark DataFrame. Unlike intersect(~), the intersectAll(~) method preserves duplicates.

NOTE

The intersectAll(~) method is equivalent to the INTERSECT ALL statement in SQL.
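
For reference, here is a minimal sketch (not part of the original recipe) of the SQL equivalent. It assumes the same df and df_other DataFrames defined in the Examples section below, registers them as temporary views, and runs INTERSECT ALL through spark.sql(~):

# Register the example DataFrames as temporary views
df.createOrReplaceTempView("df")
df_other.createOrReplaceTempView("df_other")
# INTERSECT ALL keeps duplicate matching rows, just like intersectAll(~)
spark.sql("SELECT * FROM df INTERSECT ALL SELECT * FROM df_other").show()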

Parameters

1. other | PySpark DataFrame

The other PySpark DataFrame.

Return Value

A new PySpark DataFrame containing the rows that appear in both DataFrames, with duplicates preserved.

Examples

Consider the following PySpark DataFrame:

df = spark.createDataFrame([("Alex", 20), ("Alex", 20), ("Bob", 30), ("Cathy", 40)], ["name", "age"])
df.show()
+-----+---+
| name|age|
+-----+---+
| Alex| 20|
| Alex| 20|
| Bob| 30|
|Cathy| 40|
+-----+---+

Suppose the other PySpark DataFrame is:

df_other = spark.createDataFrame([("Alex", 20), ("Alex", 20), ("David", 80), ("Eric", 80)], ["name", "age"])
df_other.show()
+-----+---+
| name|age|
+-----+---+
| Alex| 20|
| Alex| 20|
|David| 80|
| Eric| 80|
+-----+---+

Here, note the following:

  • the only matching row is Alex's row

  • Alex's row appears twice in both df and df_other

Getting rows that also exist in the other PySpark DataFrame while preserving duplicates

To get rows that also exist in the other PySpark DataFrame while preserving duplicates:

df_res = df.intersectAll(df_other)
df_res.show()
+----+---+
|name|age|
+----+---+
|Alex| 20|
|Alex| 20|
+----+---+

Note the following:

  • Alex's row appears twice in the result because it appears twice in both df and df_other.

  • if Alex's row appeared only once in one DataFrame but multiple times in the other, it would be included only once in the resulting DataFrame.

  • if you want duplicate rows to be included only once, use the intersect(~) method instead (see the comparison below).
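
To illustrate the difference, here is a short sketch (using the same df and df_other as above) comparing the two methods:

# intersectAll(~) preserves duplicates: Alex's row appears twice
df.intersectAll(df_other).show()

# intersect(~) removes duplicates: Alex's row appears only once
df.intersect(df_other).show()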

Published by Isshin Inada