I've been having quite a struggle getting my Python script to start on a remote Raspberry Pi. Either it runs but blocks the terminal in Jenkins, preventing the deployment from ever finishing, or it doesn't start at all. The weird thing, in my opinion, is that when I run the exact same commands from my Windows machine, or even directly from the Jenkins node (Ubuntu Server), the script starts absolutely fine and without blocking the terminal. It behaves like the good background process I want it to be. But not through the Jenkins pipeline itself.
I'll summarize the setup:
- Jenkins controller: Docker container in my Unraid Server with SWAG reverse proxy for GitHub hook.
- Jenkins node: VM in Unraid Server.
- Target machine: Raspberry Pi, currently available on my local network (will set up VPN for this later).
- Target script: A Python script that updates a small screen through GPIO with information fetched via an HTTP GET request; it has a while True loop as the main loop and some time.sleep() calls.
The following is my pipeline:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sh 'ls'
                sshagent(credentials: ['jenkinsvm-to-pi']) {
                    // Clear any existing instances
                    sh """
                    ssh ${target} pkill -f ${filename} &
                    """
                    // Copy downloaded files to target machine.
                    sh """
                    scp lcd1602.py ${target}:home/pi/Documents/${filename}
                    """
                    // Run script on target machine.
                    sh """
                    ssh ${target} "nohup python3 home/pi/Documents/${filename}" &
                    """
                }
            }
        }
    }
}
This kills the process if I had started it through other means than the pipeline (the pkill -f command), but it does not start it again afterwards. I have tried lots of variations, with & and without.
The entire pipeline seems quite simple to me, and I can't for the life of me figure out what's causing this issue.
I would greatly appreciate some assistance with this. Thanks!
Edit:
Also tried this setup:
steps {
    sshagent(credentials: ['conn-to-mmrasp']) {
        sh "ssh ${target} pkill -f ${filename} || echo 'No process was running'"
        sh "scp lcd1602.py ${target}:~/Documents"
        sh "ssh ${target} nohup python3 Documents/${filename} &"
    }
}
A pretty weird thing is that it seemed to work the first time I updated the pipeline, but subsequent attempts didn't. The console log isn't very helpful either, but I'll include the output.
[ssh-agent] Using credentials pi
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-SSrqVoXbnr0v/agent.40947
SSH_AGENT_PID=40949
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/workspace/AlbinTracker_main@tmp/private_key_4492026892118281470.key (/home/jenkins/workspace/AlbinTracker_main@tmp/private_key_4492026892118281470.key)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
ssh [email protected] pkill -f lcd1602.py
echo No process was running
No process was running
[Pipeline] sh
scp lcd1602.py [email protected]:~/Documents
[Pipeline] sh
ssh [email protected] nohup python3 Documents/lcd1602.py
[Pipeline] }
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 40949 killed;
[ssh-agent] Stopped.
[Pipeline] // sshagent
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
I've done some more testing. It seems like something is wrong even when running the script from a terminal SSH session. When I run the command nohup python3 lcd1602.py & in an SSH session, nothing is written to the nohup.out file, while when I run it without nohup (python3 lcd1602.py &), I get the desired output directly in the terminal. If I run the command without the &, the output is directed to nohup.out like it should be, but this blocks the terminal, which clogs the deploy stage. I'm starting to think this is a problem related to nohup (my use of it) and not Jenkins?
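For reference, my understanding is that the usual way to fully detach a command like this over SSH is to redirect stdin, stdout and stderr explicitly and background it, roughly as below. This is just a sketch of that general pattern, not something taken from my pipeline, and the log file path is a placeholder:

# Close stdin, send stdout and stderr to a log file, and background the
# process so the ssh command can return immediately.
ssh [email protected] "nohup python3 /home/pi/Documents/lcd1602.py > /home/pi/Documents/lcd1602.log 2>&1 < /dev/null &"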
CodePudding user response:
Ok, so I seem to have fixed it, but I'm not entirely sure what has changed.
So I added the unbuffered flag -u to the python3 command, to make sure that the logging went through instead of sitting in the output buffer. I also added a bash script to the repository that does the executing, instead of running the Python script directly. Now I have some logging to tell me when the script has been running, etc. I do think the added bash script is what actually made the script start running, so I'll post it here in case anyone finds this thread later on; maybe it can help.
The final Jenkinsfile:
String target = '[email protected]'

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(credentials: ['conn-to-mmrasp']) {
                    sh "scp -r ./* ${target}:~"
                    sh "ssh ${target} bash startScript.sh &"
                }
            }
        }
    }
}
The bash script:
filename=lcd1602.py
pkill -f $filename || echo No process running
nohup python3 -u $filename >> nohup.out
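The logging I mention above isn't actually visible in the script as posted, so for completeness, here is roughly what I mean. The log file name and the message format are just an example, not the exact lines I use:

#!/bin/bash
# startScript.sh with the logging spelled out (example only).
filename=lcd1602.py

# Stop any previous instance of the script.
pkill -f $filename || echo "No process running"

# Record when the script was (re)started, so I can tell it has been running.
echo "$(date): starting $filename" >> deploy.log

# -u keeps Python's output unbuffered so it reaches nohup.out right away.
nohup python3 -u $filename >> nohup.out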