
Spark 2.1 PySpark Bug: sc.textFile("test.txt").repartition(2).collect()


Using a vanilla text file:

echo "a\nb\nc\nd" >> test.txt

Using the vanilla spark-2.1.0-bin-hadoop2.7.tgz, the following fails. The same test works fine with older versions of Spark:

$ bin/pyspark

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Python version 2.7.13 (default, Dec 18 2016 07:03:39)
SparkSession available as 'spark'.

>>> sc.textFile("test.txt").collect()
[u'a', u'b', u'c', u'd']

>>> sc.textFile("test.txt").repartition(2).collect()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/admin/opt/spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/rdd.py", line 810, in collect
    return list(_load_from_socket(port, self._jrdd_deserializer))
  File "/Users/admin/opt/spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/rdd.py", line 140, in _load_from_socket
    for item in serializer.load_stream(rf):
  File "/Users/admin/opt/spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/serializers.py", line 529, in load_stream
    yield self.loads(stream)
  File "/Users/admin/opt/spark/spark-2.1.0-bin-hadoop2.7/python/pyspark/serializers.py", line 524, in loads
    return s.decode("utf-8") if self.use_unicode else s
  File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte

Using the same vanilla Spark 2.1 local installation and the same text file, but in the Scala-based spark-shell, the exact same commands work:

$ bin/spark-shell

Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sc.textFile("test.txt").collect()
res0: Array[String] = Array(a, b, c, d)

scala> sc.textFile("test.txt").repartition(2).collect()
res1: Array[String] = Array(a, c, d, b)

Answer

This is a known bug. It will be fixed in a release after Spark 2.1.0.
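Note that 0x80 is also the opcode that opens a pickle protocol-2 stream, so the traceback looks like collect() applying the UTF-8 deserializer to data that was re-pickled during the shuffle. On that assumption (unverified, so test it on your installation), a possible stopgap until you can upgrade is a no-op map(), which moves the RDD onto PySpark's pickle serializer before the repartition so that collect() deserializes with a matching serializer:

>>> rdd = sc.textFile("test.txt").map(lambda line: line)  # no-op, but switches the serializer
>>> rdd.repartition(2).collect()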

UPDATE: Confirmed fixed in Spark 2.1.1.
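(To confirm which release a given shell is running, sc.version is a standard SparkContext property in both PySpark and spark-shell; after upgrading it should report the patched version, e.g.:)

>>> sc.version
u'2.1.1'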

source: stackoverflow.com