Error installing PySpark in Google colab - TypeError: an integer is required (got type bytes)


I am trying to install PySpark in Google Colab and got the following error:

TypeError: an integer is required (got type bytes)

I also tried the latest Spark 3.3.1 and it did not resolve the problem: https://dlcdn.apache.org/spark/spark-3.3.1/spark-3.3.1-bin-hadoop3.tgz

Below is the code:

!apt-get update
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://archive.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz
!tar xf spark-2.3.1-bin-hadoop2.7.tgz
!pip install -q findspark


import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.3.1-bin-haoop2.7"




import findspark
findspark.init()
from pyspark import SparkContext

sc = SparkContext.getOrCreate()
sc

Error code below:

TypeError                                 Traceback (most recent call last)
<ipython-input-4-6a9e5a844c87> in <module>
      1 import findspark
      2 findspark.init()
----> 3 from pyspark import SparkContext
      4 
      5 sc = SparkContext.getOrCreate()

4 frames
/content/spark-2.3.1-bin-hadoop2.7/python/pyspark/cloudpickle.py in _make_cell_set_template_code()
    125         )
    126     else:
--> 127         return types.CodeType(
    128             co.co_argcount,
    129             co.co_kwonlyargcount,

TypeError: an integer is required (got type bytes)

Can anyone help with PySpark setup in Google Colab?

CodePudding user response:

You need to install pyspark from pip. The pip package bundles a Spark distribution that works with the Python version Colab runs, whereas the Spark 2.3.1 tarball ships an old cloudpickle that breaks on Python 3.8+, which is what raises this TypeError:

! pip install pyspark
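
After that, no SPARK_HOME, findspark, or separate download is needed, because the pip package includes its own Spark jars. A minimal check (the app name below is just a placeholder):

from pyspark.sql import SparkSession

# pip-installed PySpark runs in local mode without any extra environment setup
spark = SparkSession.builder.appName("colab_check").getOrCreate()
print(spark.version)   # prints the installed Spark version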

CodePudding user response:

I use the following steps to create a Spark notebook with the latest Spark v3.3 in Google Colab:

# Install Java 8, which Spark requires
!apt-get install openjdk-8-jdk-headless

# Download and unpack Spark 3.3.1 built for Hadoop 3
!wget https://dlcdn.apache.org/spark/spark-3.3.1/spark-3.3.1-bin-hadoop3.tgz
!tar xf spark-3.3.1-bin-hadoop3.tgz

# findspark points the notebook's Python at the unpacked Spark
!pip install -q findspark

import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-3.3.1-bin-hadoop3"

import findspark
findspark.init()

from pyspark.sql import SparkSession

spark = SparkSession.builder\
        .master("local")\
        .appName("hello_spark")\
        .config('spark.ui.port', '4050')\
        .getOrCreate()

This works as of Dec-2022. The Spark download URL or required JDK version may change with future releases.
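
To confirm the session actually works, a quick sanity check with made-up sample data:

# Hypothetical sample data, only to verify that jobs run end to end
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()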
