PySpark’s Delta Storage Format

Recently Databricks open-sourced a very useful new storage format for use with Spark called Delta. Delta is an extension to the Parquet format, and as such basic creation and reading of Delta files follows a very similar syntax.

# to write parquet
df.write.format('parquet').mode('overwrite').save(path)

# to write to delta we can do exactly the same
# note this OVERWRITES the Delta, not upserts
df.write.format('delta').mode('overwrite').save(path)

# to read parquet
df = \
  spark.read.format('parquet').load(path)

# to read delta we can do exactly the same
df = \
  spark.read.format('delta').load(path)

However, Delta offers three additional benefits over Parquet which make it a much more attractive and easy-to-use format.

Firstly, Delta allows an unusual method of writing to an existing Delta file. After loading the Delta file into a variable as a data frame and registering it as a view, you can write directly to the underlying Delta file using SQL commands.
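As a minimal sketch of that pattern (the view name 'events', the columns, and the helper function name are mine, not part of any Delta API; a live SparkSession named spark and an existing Delta file at path are assumed, and SQL DML against a view like this requires a Databricks runtime):

```python
def update_sql(view, set_clause, predicate):
    # assemble the UPDATE statement; when run against a view registered
    # over a Delta file, the change lands in the Delta file itself,
    # not just in the in-memory dataframe
    return "UPDATE {v} SET {s} WHERE {p}".format(v=view, s=set_clause, p=predicate)

# usage (needs a live Spark session, so shown commented out):
# df = spark.read.format('delta').load(path)
# df.createOrReplaceTempView('events')
# spark.sql(update_sql('events', "status = 'archived'", "event_date < '2019-01-01'"))
```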

Secondly, Delta allows upserting of records into existing data. That is, new records will be inserted, while old records can be updated. This is an improvement over Parquet, where it was not possible to update an existing partition. Instead the partition in question had to be read, merged with the new data, deleted and then rewritten, which required some rather careful handling and was not especially efficient. In Delta this process has been replaced by a single SQL command which is quicker and easier.

These two features combine together to allow for exceptionally easy updating of Delta files:

import pyspark
from pyspark.sql import SparkSession

def upsert_to_delta(df_update, path, key_column):
  """
  Upserts the rows in df_update to the existing Delta file
  found at path using key_column as the upsert key

  IMPORTANT : This upsert will be applied directly to the
  Delta file in place, not just to df_baseline

  Args:
    df_update (dataframe): the dataframe to be upserted
    path (string) : path to Delta file resource
    key_column (string) : name of the key column (common
              between Delta file and upserted dataframe)

  Returns:
    None : (the upsert is applied to the Delta in place)
  """
  # connect to our Delta
  df_baseline = spark.read.format('delta').load(path)
  # give the dataframes view names
  df_baseline.createOrReplaceTempView('baseline')
  df_update.createOrReplaceTempView('updates')
  # set up our upsert merge SQL statement
  sql = """
    MERGE INTO baseline b
    USING updates u
    ON b.{id} = u.{id}
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
  """
  sql = sql.format(id=key_column)
  # execute the upsert
  spark.sql(sql)
  # finally optimise the Delta file for future use
  spark.sql("OPTIMIZE baseline")

# to use the function
upsert_to_delta(df, path, "UniqueID")
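To make the MERGE semantics concrete, the same upsert logic can be sketched in plain Python with no Spark required: rows whose key already exists in the baseline are updated, and rows with new keys are inserted.

```python
def upsert(baseline, updates, key):
    # baseline and updates are lists of dict rows; key is the upsert key column
    merged = {row[key]: row for row in baseline}
    for row in updates:
        merged[row[key]] = row  # existing key: update; new key: insert
    return list(merged.values())

baseline = [{"UniqueID": 1, "value": "a"}, {"UniqueID": 2, "value": "b"}]
updates = [{"UniqueID": 2, "value": "B"}, {"UniqueID": 3, "value": "c"}]
result = upsert(baseline, updates, "UniqueID")
# row 2 is updated to "B", row 3 is inserted, row 1 is untouched
```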

Thirdly, Delta allows you to view data as it was at some earlier state. This can be extremely useful if an incorrect update has been pushed to the Delta file. At a basic level the time travel feature is extremely simple to use. We can also roll back the existing Delta file to an earlier state using SQL, as shown below.

# to load the dataframe from an earlier state
df = \
  spark.read.format('delta')\
  .option("timestampAsOf", "2019-10-10")\
  .load(path)

# to rollback a delta file
# first choose the number of days
days = 1
# connect to our Delta
delta = spark.read.format('delta').load(path)
# create a view so we can use SQL
delta.createOrReplaceTempView("rollback")
# define our rollback
sql = """
  INSERT INTO rollback
  SELECT * FROM rollback TIMESTAMP AS OF date_sub(current_date(), {days})
"""
sql = sql.format(days=days)
# and rollback
spark.sql(sql)
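The timestampAsOf option also accepts a date computed on the fly; a small helper (the name rollback_timestamp is mine, not part of the Delta API) keeps the "N days ago" choice in one place:

```python
from datetime import date, timedelta

def rollback_timestamp(days):
    # date string for "the state of the Delta file `days` days ago",
    # in the YYYY-MM-DD form accepted by the timestampAsOf option
    return (date.today() - timedelta(days=days)).isoformat()

# usage (assumes a live SparkSession `spark` and a Delta file at `path`):
# df_old = spark.read.format('delta')\
#     .option("timestampAsOf", rollback_timestamp(3))\
#     .load(path)
```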

The Delta format is new and its documentation is still evolving. Documentation on upserting can be found here, and documentation on time travel, including information on using rollback with SQL commands, can be found here.
