Uploading a new dataset on Databricks

Last updated: Aug 12, 2023
Tags: PySpark

Prerequisites

To follow along, make sure you have created a cluster and a notebook on Databricks. To learn how to do so, follow our guide here.

Uploading a new dataset via the Databricks GUI

To upload a new dataset, we need to create a Table. A Table in Databricks stores structured data - think of it as a SQL table or a Pandas DataFrame. To create a new Table, click on Data in the left sidebar, and then click on Create Table:

Now, we upload a CSV file (which can be downloaded here) called iris_dataset.csv from our local machine:

Once uploaded, Databricks will tell you that the file has been uploaded to /FileStore/tables/iris_dataset.csv. At this point, the raw file has been uploaded to Databricks, but the table has not yet been created.
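If you want to confirm the upload from a notebook, you could list the target directory with dbutils, which Databricks notebooks expose by default:

# Quick sanity check from a notebook cell: list the upload directory.
# `dbutils` and `display` are available in Databricks notebooks by default.
display(dbutils.fs.ls("/FileStore/tables/"))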

Click on Create Table with UI, and select the cluster with which to read the data. Next, click on Preview Table:
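As an aside, the UI route is not the only option: the same table can be registered from a notebook with Spark SQL. A rough sketch - the path follows the upload step above, but treat the table name and options as assumptions mirroring the UI choices:

# Sketch: register the uploaded CSV as a table with Spark SQL.
# `spark` is the SparkSession that Databricks notebooks provide by default.
spark.sql("""
    CREATE TABLE IF NOT EXISTS iris_dataset_csv
    USING CSV
    OPTIONS (
        path '/FileStore/tables/iris_dataset.csv',
        header 'true',
        inferSchema 'true'
    )
""")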

Here, we can specify information such as the file type and column delimiter of our dataset. Since our file carries the .csv extension, Databricks already infers that the file type is CSV and that the column delimiter is a comma.

What we might want to specify are the column names as well as the column types. By default, the column names are set to _c0, _c1 and so on, while the column types are all set to string. This is rarely what we want, so let's change these default settings:
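If you would rather control the schema in code than in the UI, you could also read the raw uploaded file directly with PySpark. A minimal sketch, assuming the upload path from earlier:

# Sketch: read the raw CSV with an explicit schema instead of the UI defaults.
from pyspark.sql.types import StructType, StructField, DoubleType, StringType

schema = StructType([
    StructField("sepal_length", DoubleType()),
    StructField("sepal_width", DoubleType()),
    StructField("petal_length", DoubleType()),
    StructField("petal_width", DoubleType()),
    StructField("species", StringType()),
])

df = (spark.read
      .option("header", "true")   # first row holds the column names
      .schema(schema)             # overrides the all-string default
      .csv("/FileStore/tables/iris_dataset.csv"))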

Once the table has been created, you can find it under Data in the left sidebar:

Note that you can click the pin button next to Create Table to keep this panel visible while a notebook is open.

Reading the new dataset in a notebook on Databricks

Now that we have created a table holding our Iris dataset, we can read this table in our Databricks notebooks. Head over to your Databricks notebook, and type in the following command:

# Load the table we just created into a PySpark DataFrame
df = sqlContext.sql("SELECT * FROM iris_dataset_csv")
df.show(5)
+------------+-----------+------------+-----------+-------+
|sepal_length|sepal_width|petal_length|petal_width|species|
+------------+-----------+------------+-----------+-------+
| null| null| null| null|species|
| 5.1| 3.5| 1.4| 0.2| setosa|
| 4.9| 3.0| 1.4| 0.2| setosa|
| 4.7| 3.2| 1.3| 0.2| setosa|
| 4.6| 3.1| 1.5| 0.2| setosa|
+------------+-----------+------------+-----------+-------+
only showing top 5 rows
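Notice the first row, which holds nulls in every numeric column and the literal value species: this typically means the header row was ingested as data (for instance, if the header option was left unchecked in the table-creation UI). You can re-create the table with the header option enabled, or filter the stray row out; a quick sketch:

# The leaked header row contains nulls in the numeric columns,
# so dropping rows that contain any null removes it:
df_clean = df.na.drop()
df_clean.show(5)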

Note the following:

  • We used SQL to load our dataset.

  • The table name iris_dataset_csv is what we specified when we uploaded the dataset.
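As a side note, sqlContext is a legacy entry point kept for backwards compatibility; on recent Databricks runtimes you can run the same query through the SparkSession object (spark), which notebooks provide by default. A minimal sketch:

# Equivalent reads using the modern SparkSession entry point (`spark`)
df = spark.sql("SELECT * FROM iris_dataset_csv")
# or skip SQL entirely and reference the table directly:
df = spark.table("iris_dataset_csv")
df.show(5)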

Published by Isshin Inada