11

What is the correct way to install the delta module in Python?

In the example they import the module

from delta.tables import *

but I did not find the correct way to install the module in my virtual env.

Currently I am using this Spark param:

"spark.jars.packages": "io.delta:delta-core_2.11:0.5.0"

zsxwing
ofriman
  • See my answer on how to do this with Delta 1.2 & PySpark 3.2. The other answers are outdated. – Powers Jun 01 '22 at 02:17

6 Answers

9

As the correct answer is hidden in the comments of the accepted solution, I thought I'd add it here.

You need to create your Spark session with some extra settings, and then you can import delta:

spark_session = SparkSession.builder \
    .master("local") \
    .config("spark.jars.packages", "io.delta:delta-core_2.12:0.8.0") \
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog") \
    .getOrCreate()

from delta.tables import *

Annoyingly, your IDE will of course shout at you about this, as the package isn't installed, and you will also be operating without autocomplete and type hints. I'm sure there's a workaround and I will update if I come across it.

The package itself is on their GitHub here, and the readme suggests you can pip install it, but that doesn't work. In theory you could clone it and install it manually.
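If you are on a newer Delta release, the delta-spark package on PyPI (the one the answers below use with configure_spark_with_delta_pip) should also solve the IDE problem, because pip installs the actual Python module into your environment. A minimal sketch, assuming compatible delta-spark and pyspark versions are installed in the virtual env:

# Assumes `pip install pyspark delta-spark` in the virtual env, with versions
# that are compatible with each other.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable  # resolvable by the IDE once pip-installed

builder = SparkSession.builder.appName("delta-local") \
    .master("local[*]") \
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")

# configure_spark_with_delta_pip adds the matching delta-core Maven coordinate
# to spark.jars.packages before the session is created.
spark = configure_spark_with_delta_pip(builder).getOrCreate()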

DataMacGyver
  • Did you find the workaround for autocomplete? – AhmedRana Jun 28 '21 at 08:49
  • No, I was just hacking and had to put this down. Theoretically you can go grab the package from their github (link in answer) and then install it but there's not a setup.py so that's not a simple task. An alternative (and hacky) solution may be to just pull the tables code (https://github.com/delta-io/delta/blob/master/python/delta/tables.py) and put it in your app. – DataMacGyver Jun 30 '21 at 12:20
  • There is any way of installing the module with a package manager like `poetry`? – mxmrpn Sep 01 '23 at 20:59
7

Because Delta's Python code is stored inside a jar and loaded by Spark, the delta module cannot be imported until the SparkSession/SparkContext is created.
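A minimal sketch of the ordering, assuming pyspark was started with the delta-core package available (for example pyspark --packages io.delta:delta-core_2.11:0.5.0, or the builder config from the accepted answer):

from pyspark.sql import SparkSession

# from delta.tables import *   # fails here: the jar holding the module is not loaded yet

spark = SparkSession.builder.getOrCreate()

# Works only after the session exists and Spark has loaded the delta-core jar:
from delta.tables import *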

zsxwing
  • I created a SparkSession, but still get that error. Do you have code that works? – Franz657587 Jun 03 '20 at 08:18
  • I am not 100% sure, but I don't think `from delta.tables import *` will work outside of a Databricks Runtime. You can however use delta tables, just not specific delta table utilities. – Clay Jul 18 '20 at 21:33
  • How did you start pyspark? If you run a command like `pyspark --packages io.delta:delta-core_2.11:0.5.0 ...`, it should work. – zsxwing Jul 19 '20 at 04:53
  • started python then `SparkSession.builder.config("spark.jars.packages",'io.delta:delta-core_2.11:0.6.1').config("spark.delta.logStore.class","org.apache.spark.sql.delta.storage.S3SingleDriverLogStore").config("spark.sql.extensions","io.delta.sql.DeltaSparkSessionExtension").config("spark.sql.catalog.spark_catalog","org.apache.spark.sql.delta.catalog.DeltaCatalog").getOrCreate()` Reading and writing Delta Tables works, `from delta.tables import *` does not. However, it does when I start the pyspark REPL as you do. - I'll have to figure this out. – Clay Jul 19 '20 at 11:45
  • Now `from delta.tables import *` is working from SparkSession started after python, and `spark-submit --properties-file /path/to/my/spark-defaults.conf` with `spark.jars.packages io.delta:delta-core_2.11:0.6.1` in the `.conf` file. I have no idea what was the issue before. – Clay Jul 19 '20 at 12:07
  • `spark.jars.packages` is handled by `org.apache.spark.deploy.SparkSubmitArguments/SparkSubmit`. So it must be passed as an argument of `spark-submit`. When `SparkSession.builder.config` is called, `SparkSubmit` has already done its job. So `spark.jars.packages` is a no-op at that point. – zsxwing Jul 19 '20 at 21:21
  • https://issues.apache.org/jira/browse/SPARK-21752 looks like there is already a ticket for this. – zsxwing Jul 19 '20 at 21:32
  • @ofriman This works for me with delta.io, outside of Databricks. spark = SparkSession \ .builder \ .config("spark.jars.packages", "io.delta:delta-core_2.12:0.7.0") \ .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \ .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog") \ .enableHiveSupport() \ .getOrCreate() – Boris Dec 11 '20 at 13:28
  • Just follow the official documentation, https://docs.delta.io/latest/quick-start.html#python, there is a nice example how to run it in python. You need to import after spark initialization, that is all! – Andrej Feb 15 '21 at 15:13
7

To run Delta locally with PySpark, you need to follow the official documentation.

This works for me, but only when executing the script directly (python <script_file>), not with pytest or unittest.

To solve this problem, you need to add this environment variable:

PYSPARK_SUBMIT_ARGS='--packages io.delta:delta-core_2.12:1.0.0 pyspark-shell'

Use the Scala and Delta versions that match your case. With this environment variable, I can run pytest or unittest via the CLI without any problem:

from unittest import TestCase

from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession


class TestClass(TestCase):

    builder = SparkSession.builder.appName("MyApp") \
        .master("local[*]") \
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
        .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")

    spark = configure_spark_with_delta_pip(builder).getOrCreate()

    def test_create_delta_table(self):
        self.spark.sql("""CREATE TABLE IF NOT EXISTS <tableName> (
                          <field1> <type1>)
                          USING DELTA""")

The configure_spark_with_delta_pip function appends the matching Maven coordinate to the builder's spark.jars.packages option, roughly equivalent to:

.config("spark.jars.packages", "io.delta:delta-core_<scala_version>:<delta_version>")
Dharman
davidretana
1

Here's how you can install Delta Lake & PySpark with conda.

  • Make sure you have Java installed (I use SDKMAN to manage multiple Java versions)
  • Install Miniconda
  • Pick Delta Lake & PySpark versions that are compatible. For example, Delta Lake 1.2 is compatible with PySpark 3.2.
  • Create a YAML file with the required dependencies; here is an example from the delta-examples repo I created.
  • Create the environment with a command like conda env create -f envs/mr-delta.yml
  • Activate the conda environment with conda activate mr-delta
  • Here is an example notebook. Note that it starts with the following code:
import pyspark
from delta import *

builder = pyspark.sql.SparkSession.builder.appName("MyApp") \
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")

spark = configure_spark_with_delta_pip(builder).getOrCreate()
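A quick smoke test for the environment, tying back to the original question (the path is just an illustration):

from delta.tables import DeltaTable

spark.range(5).write.format("delta").mode("overwrite").save("/tmp/delta-demo")
delta_table = DeltaTable.forPath(spark, "/tmp/delta-demo")
delta_table.toDF().show()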
Powers
0

If you are facing issues with Jupyter Notebook, add the environment variable below:

from pyspark.sql import SparkSession
import os
from delta import *

os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-avro_2.12:3.4.1,io.delta:delta-core_2.12:2.4.0 pyspark-shell'
# RUN spark-shell --packages org.apache.spark:spark-avro_2.12:3.4.1
# RUN spark-shell --packages io.delta:delta-core_2.12:2.4.0 --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog"

builder = SparkSession.builder.appName("SampleSpark") \
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
        .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")

spark = builder.getOrCreate()
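With both spark-avro and delta-core on the classpath, the same session can, for example, read Avro files and write them back out as a Delta table (the paths are illustrative):

# Illustrative paths; replace with your own data.
df = spark.read.format("avro").load("/tmp/input/events.avro")
df.write.format("delta").mode("overwrite").save("/tmp/output/events_delta")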
Akhilraj N S
-1

In my case the issue was that I had a cluster running on a Databricks Runtime lower than 6.1.

https://docs.databricks.com/delta/delta-update.html

The Python API is available in Databricks Runtime 6.1 and above.

After changing the Databricks Runtime to 6.4, the problem disappeared.

To do that: click Clusters -> pick the one you are using -> Edit -> pick Databricks Runtime 6.1 or above.

matkurek
  • Thank you for answer, but I guess the question was related to "pure" python without Databricks – Andrej Feb 15 '21 at 14:55
  • @Andrej No it wasn't, it has a "databricks" tag – matkurek Feb 16 '21 at 10:01
  • I guess the databricks tag is there by error and should be removed; Delta Lake is configured in Databricks out of the box - https://docs.databricks.com/delta/intro-notebooks.html. You need to tamper with `spark.jars.packages` when you are setting up Spark on your local machine, for instance. – Andrej Feb 19 '21 at 06:00