4 Ways to Create a Data Frame in Spark

Learn how to create or transform your data to a Spark DataFrame object

Gustavo Santos
5 min read · Mar 6, 2024




Spark is known as an engine for processing parallelized data and working with Big Data. Even so, it is not uncommon to need to create a data frame from scratch using PySpark.

There are a handful of cases where creating a data frame can be useful: making some data for testing purposes, turning a filtered list into a data frame to use as the base for a join operation, or transforming other objects into a data frame. Well… you name it. You might be thinking of other use cases right now.

The fact is that at one time or another we will need to create a data frame, and in this post we will learn four ways to do that in Spark, using Databricks.

As you may or may not know, Spark can be used in a Jupyter Notebook after a local installation, or you can take advantage of Databricks, a platform built on top of Apache Spark. In this post, we will use Databricks.

Let’s get to work.

Data Frame from a List

The first option we will cover is creating a data frame from a Python list.


