I have installed OpenJDK 13.0.1, Python 3.8, and Spark 2.4.4. The instructions for testing the installation are to run .\bin\pyspark from the root of the Spark installation. I'm not sure whether I missed a step in the Spark setup, such as setting some environment variable, but I can't find any more detailed instructions.
I can run the Python interpreter on my machine, so I'm confident it is installed correctly, and running "java -version" gives the expected response, so I don't think the problem lies with either of those.
I get a stack trace of errors from cloudpickle.py:
Traceback (most recent call last):
  File "C:\software\spark-2.4.4-bin-hadoop2.7\bin\..\python\pyspark\shell.py", line 31, in <module>
    from pyspark import SparkConf
  File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\__init__.py", line 51, in <module>
    from pyspark.context import SparkContext
  File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\context.py", line 31, in <module>
    from pyspark import accumulators
  File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\accumulators.py", line 97, in <module>
    from pyspark.serializers import read_int, PickleSerializer
  File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\serializers.py", line 71, in <module>
    from pyspark import cloudpickle
  File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 145, in <module>
    _cell_set_template_code = _make_cell_set_template_code()
  File "C:\software\spark-2.4.4-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 126, in _make_cell_set_template_code
    return types.CodeType(
TypeError: an integer is required (got type bytes)
This happens because you are using Python 3.8. The latest pip release of pyspark (2.4.4 at the time of writing) does not support Python 3.8. Downgrade to Python 3.7 for now and you should be fine.
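For context: Python 3.8 added a new posonlyargcount parameter to the types.CodeType constructor, while the cloudpickle bundled with Spark 2.4.4 still calls it with the pre-3.8 argument list, so the bytes co_code argument lands in a slot that expects an int, producing the TypeError above. As a minimal sketch (the guard and its message are illustrative, not part of pyspark), you could fail fast in your own startup script instead of hitting the cryptic error inside cloudpickle.py:

```python
import sys

# Illustrative guard: pyspark 2.4.4 depends on the pre-3.8
# types.CodeType signature, so refuse to run on Python 3.8+
# rather than crashing later inside cloudpickle.py.
if sys.version_info >= (3, 8):
    raise RuntimeError(
        "pyspark 2.4.4 does not support Python %d.%d; "
        "use Python 3.7 or earlier." % sys.version_info[:2]
    )
```

If you use conda, one common way to pin the interpreter without touching the system Python is something like `conda create -n spark python=3.7` and then pointing the PYSPARK_PYTHON environment variable at that environment's interpreter (the environment name here is just an example).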