CodePudding user response:
Each Spark node is also a Linux host, so you can place the file on every node and read it from your project through Java calls. Give it a try.
CodePudding user response:
That approach is too clumsy. What I want is to load the file through Spark code and have it distributed to all compute nodes, analogous to Hadoop DistributedCache's addFileToClassPath() function.
CodePudding user response:
I am using Python. In PySpark, SparkContext.addFile(path) can be used to distribute a file to the compute nodes ("Add a file to be downloaded with this Spark job on every node"), so calling it from the library is no problem; I have used it.
CodePudding user response:
May I ask how the OP solved this?
CodePudding user response:
How to solve it: http://blog.csdn.net/ddjj_1980/article/details/74940593