Problems running a compiled Scala program from a jar

Time:09-25

Hi everyone, I've written a Scala program myself and want to run it on a Linux server, but I've hit a very strange problem:

The command I run (from the directory containing the jars) is:

java -classpath "./spark-test.jar:./spark-assembly-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar:./hadoop-hdfs-2.5.0-cdh5.2.0.jar:." sparktest.Test

spark-test.jar is the Scala program I wrote myself; the two jars after it are ones I found on the server.

When I set the SparkContext master to local inside the program, it runs without any problem.

But when I change the master to the cluster address spark://shzx002:18080, I get these errors:

14/12/04 17:17:41 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/12/04 17:17:42 INFO AppClient$ClientActor: Connecting to master spark://shzx002:18080...
14/12/04 17:17:42 WARN AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@shzx002:18080: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@shzx002:18080]
14/12/04 17:17:42 WARN AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@shzx002:18080: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@shzx002:18080]
14/12/04 17:17:42 WARN AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@shzx002:18080: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@shzx002:18080]
14/12/04 17:17:42 WARN AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@shzx002:18080: akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkMaster@shzx002:18080]

The cluster isn't one I configured myself, and I've only been working with it for a short time, so I don't know it well... but the cluster address should be fine.

I don't know what's causing this or how to solve it. I've searched a lot online, but nothing has helped... Thanks in advance.

CodePudding user response:

Why not use spark-submit?
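
A typical invocation would look roughly like this (a sketch based on the jar and class names in the original post; the master URL and port are assumptions to adapt to your cluster, and spark-submit supplies the Spark assembly itself, so it doesn't need to be on the classpath):

spark-submit \
  --class sparktest.Test \
  --master spark://shzx002:7077 \
  --jars ./hadoop-hdfs-2.5.0-cdh5.2.0.jar \
  ./spark-test.jar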

CodePudding user response:

I tried spark-submit and got the same problem! Please help.

CodePudding user response:

Hello OP, I've run into the same problem. Could you tell me how you solved it? If you see this, please reply. My QQ is 389923309; you can add me as a friend to discuss.

CodePudding user response:

Post your Spark program.

Your setMaster call is probably in the wrong format.
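
For comparison, a minimal driver setup looks roughly like this (a sketch assuming the Spark 1.x API; the app name and host/port are placeholders to adapt):

package sparktest

import org.apache.spark.{SparkConf, SparkContext}

object Test {
  def main(args: Array[String]): Unit = {
    // The master URL must have exactly the form spark://<master-host>:<master-port>;
    // the standalone master normally listens on 7077, not on a web UI port.
    val conf = new SparkConf()
      .setAppName("spark-test")
      .setMaster("spark://shzx002:7077")
    val sc = new SparkContext(conf)
    // ... job code ...
    sc.stop()
  }
}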

CodePudding user response:

"Check your cluster UI to ensure that workers are registered and have sufficient memory"
Your URI is probably wrong; check the configuration.

CodePudding user response:

If the SPARK_EXECUTOR_MEMORY parameter is not configured, executors default to 1 GB of memory each, so memory can run short, which produces the warning shown in the log above.

So the solution is to add the following parameter in spark-env.sh:

export SPARK_EXECUTOR_MEMORY=100m
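
The same setting can also be made from the driver (a sketch, assuming the Spark 1.x API; spark.executor.memory is the configuration property behind that environment variable):

val conf = new SparkConf()
  .setAppName("spark-test")
  .set("spark.executor.memory", "100m") // same effect as SPARK_EXECUTOR_MEMORY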

CodePudding user response:

In standalone deployment mode, the Spark master's default port is 7077 for client mode and 6066 for cluster mode. You're using the wrong port: 18080 is a Spark web UI port, not the master port.
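
So, assuming your cluster keeps the default standalone master port, the call should read something like:

new SparkConf().setMaster("spark://shzx002:7077") // 7077, not the 18080 web UI port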

CodePudding user response:

The reply above points out a very important error. The various other issues are most likely cluster configuration problems; I suggest posting the configuration entries you're having trouble with or are unsure about.

CodePudding user response:

1. A wrong port.

2. A HOST (domain name) configuration problem (see the example after this list).

3. Memory problems.
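
For cause 2: every worker node and the client machine must resolve the master's hostname consistently, for example with an /etc/hosts entry like the following (the IP address here is a made-up placeholder):

192.168.1.100   shzx002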

CodePudding user response:

Cause 2 is the most likely.