
I am trying to load a local file as shown below:

File = sc.textFile('file:///D:/Python/files/tit.csv')
File.count()

Full traceback

IllegalArgumentException                  Traceback (most recent call last)
<ipython-input-72-a84ae28a29dc> in <module>()
----> 1 File.count()

/databricks/spark/python/pyspark/rdd.pyc in count(self)
   1002         3
   1003         """
-> 1004         return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
   1005 
   1006     def stats(self):

/databricks/spark/python/pyspark/rdd.pyc in sum(self)
    993         6.0
    994         """
--> 995         return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
    996 
    997     def count(self):

/databricks/spark/python/pyspark/rdd.pyc in fold(self, zeroValue, op)
    867         # zeroValue provided to each partition is unique from the one provided
    868         # to the final reduce call
--> 869         vals = self.mapPartitions(func).collect()
    870         return reduce(op, vals, zeroValue)
    871 

/databricks/spark/python/pyspark/rdd.pyc in collect(self)
    769         """
    770         with SCCallSiteSync(self.context) as css:
--> 771             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    772         return list(_load_from_socket(port, self._jrdd_deserializer))
    773 

/databricks/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
    811         answer = self.gateway_client.send_command(command)
    812         return_value = get_return_value(
--> 813             answer, self.gateway_client, self.target_id, self.name)
    814 
    815         for temp_arg in temp_args:

/databricks/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
     51                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     52             if s.startswith('java.lang.IllegalArgumentException: '):
---> 53                 raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
     54             raise
     55     return deco

IllegalArgumentException: u'java.net.URISyntaxException: Expected scheme-specific part at index 2: D:'

What's wrong? I followed the usual approach, e.g. load a local file to spark using sc.textFile() or How to load local file in sc.textFile, instead of HDFS. Those examples are for Scala, but as far as I can tell the same approach should work in Python.

But when I try the Scala syntax:

val File = 'D:\\\Python\\files\\tit.csv'


SyntaxError: invalid syntax
  File "<ipython-input-132-2a3878e0290d>", line 1
    val File = 'D:\\\Python\\files\\tit.csv'
           ^
SyntaxError: invalid syntax
Edward
  • Did you try `textFile('D:/` or using escaped back slashes because you are on Windows? – OneCricketeer Sep 17 '16 at 22:25
  • Although, seeing `/databricks/spark/` makes me think that you aren't using a Windows machine at all, and instead some Databricks platform – OneCricketeer Sep 17 '16 at 22:26
  • i am on Windows & i tried sc.textFile('file://D:/Python/files/tit.csv') & sc.textFile('file:/D:/Python/files/tit.csv') & sc.textFile('D:/Python/files/tit.csv') – Edward Sep 17 '16 at 22:28
  • How would you open the same file in a regular python interpreter? `open()`, probably? Does that work? – OneCricketeer Sep 17 '16 at 22:30
  • with open(), it works – Edward Sep 17 '16 at 22:32
  • Possible duplicate of [How to access local files in Spark on Windows?](http://stackoverflow.com/questions/30520176/how-to-access-local-files-in-spark-on-windows) – OneCricketeer Sep 17 '16 at 22:32
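For reference, a well-formed `file:` URI for a Windows path has three slashes before the drive letter. Rather than hand-writing the slashes, it can be built with the standard library; a minimal sketch (`PureWindowsPath` applies Windows path rules on any platform):

```python
from pathlib import PureWindowsPath

# Build a file: URI from a Windows path instead of writing it by hand.
# PureWindowsPath does not touch the filesystem, so this runs anywhere.
uri = PureWindowsPath("D:/Python/files/tit.csv").as_uri()
print(uri)  # file:///D:/Python/files/tit.csv
```

Whether Spark can then read that URI still depends on where the executors run: a path on a local Windows disk is not visible to a Databricks cluster.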

2 Answers

Update: There seems to be a known issue with ":" in Hadoop paths:

filenames with ':' colon throws java.lang.IllegalArgumentException

https://issues.apache.org/jira/browse/HDFS-13

and

Path should handle all characters

https://issues.apache.org/jira/browse/HADOOP-3257

In this Q&A, someone managed to work around it with Spark 2.0:

Spark 2.0: Relative path in absolute URI (spark-warehouse)
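The workaround in that linked Q&A amounts to setting `spark.sql.warehouse.dir` to a well-formed URI when the session is created. A minimal sketch (the warehouse path here is illustrative, not from the thread):

```python
from pyspark.sql import SparkSession

# Sketch only: give Spark a well-formed file: URI up front so it does not
# try to parse a bare Windows path such as "D:..." as a URI.
spark = (SparkSession.builder
         .appName("local-file-demo")
         .config("spark.sql.warehouse.dir", "file:///D:/tmp/spark-warehouse")
         .getOrCreate())
```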


There are several issues in the question:

1) Python access to local files on Windows

File = sc.textFile('file:///D:/Python/files/tit.csv')
File.count()

Can you please try:

import os
inputfile = sc.textFile(os.path.normpath("file://D:/Python/files/tit.csv"))
inputfile.count()

os.path.normpath(path)

Normalize a pathname by collapsing redundant separators and up-level references so that A//B, A/B/, A/./B and A/foo/../B all become A/B. This string manipulation may change the meaning of a path that contains symbolic links. On Windows, it converts forward slashes to backward slashes. To normalize case, use normcase().

https://docs.python.org/2/library/os.path.html#os.path.normpath

The output is:

>>> os.path.normpath("file://D:/Python/files/tit.csv")
'file:\\D:\\Python\\files\\tit.csv'
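This conversion can be reproduced on any platform with the `ntpath` and `posixpath` modules (a sketch for illustration):

```python
import ntpath
import posixpath

p = "file://D:/Python/files/tit.csv"

# ntpath applies Windows rules: forward slashes become backslashes.
print(ntpath.normpath(p))     # file:\D:\Python\files\tit.csv
# posixpath applies POSIX rules: it only collapses the doubled slash.
print(posixpath.normpath(p))  # file:/D:/Python/files/tit.csv
```

Either way the `file:` scheme prefix gets mangled, because `normpath` treats the whole string as a filesystem path, not as a URI.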

2) Scala code run in Python:

val File = 'D:\\\Python\\files\\tit.csv'
SyntaxError: invalid syntax

This code doesn't run in Python because it is Scala code; `val` is a Scala keyword, not Python.
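The Python equivalent drops the `val` keyword; a raw string is the idiomatic way to write a Windows path literal without doubling every backslash (a sketch):

```python
# Python has no `val`; a plain assignment works. The r-prefix keeps the
# backslashes literal, so none of them are treated as escape sequences.
file_path = r"D:\Python\files\tit.csv"
print(file_path)  # D:\Python\files\tit.csv
```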

Yaron

I've done the following:

import os
os.path.normpath("file:///D:/Python/files/tit.csv")
Out[131]: 'file:/D:/Python/files/tit.csv'

Then:

inputfile = sc.textFile(os.path.normpath("file:/D:/Python/files/tit.csv"))
inputfile.count()
IllegalArgumentException: u'java.net.URISyntaxException: Expected scheme-specific part at index 2: D:'

If I do it like this:

inputfile = sc.textFile(os.path.normpath("file:\\D:\\Python\\files\\tit.csv"))
inputfile.count()
IllegalArgumentException: u'java.net.URISyntaxException: Relative path in absolute URI: file:%5CD:%5CPython%5Cfiles%5Ctit.csv'

And if I do this:

os.path.normcase("file:///D:/Python/files/tit.csv")
Out[136]: 'file:///D:/Python/files/tit.csv'
inputfile = sc.textFile(os.path.normpath("file:///D:/Python/files/tit.csv"))
inputfile.count()
IllegalArgumentException: u'java.net.URISyntaxException: Expected scheme-specific part at index 2: D:'
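The `%5C` sequences in the second error are the URI percent-encoding of backslashes: Hadoop's `Path` treats the string as a URI, and a backslash is not a legal separator there. This can be checked with the standard library (a sketch):

```python
from urllib.parse import quote

# A backslash is not a legal path separator in a URI, so it gets
# percent-encoded as %5C, which is exactly what the error message shows.
print(quote("\\"))                     # %5C
print(quote("file:\\D:\\", safe=":"))  # file:%5CD:%5C
```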
Edward
  • it might be a global issue - http://stackoverflow.com/questions/38669206/spark-2-0-relative-path-in-absolute-uri-spark-warehouse – Yaron Sep 18 '16 at 11:58