I have a Jenkins build which uses:
stage("Build docker images and start containers") {
    steps {
        sh "docker-compose build"
        sh "docker-compose up -d"
    }
}
The docker compose file has quite a lot of containers, networks, and volumes which would normally be quite difficult to maintain and to integrate into a Docker-based pipeline.
On the other hand, the docker object in Jenkins offers some nice functionality which I would also like to benefit from.
But no matter what I try, it seems impossible to create a docker object in the pipeline that references an already existing/running Docker container. For example I tried (a minimal example for the sake of this question):
node {
    checkout scm
    def runningContainer = docker.build("my-image:${env.BUILD_ID}")
    runningContainer.inside {
        sh 'make test'
    }
}
But in this case it builds an image from scratch and then runs the command. I also tried:
node {
    checkout scm
    docker.image('mysql:5').withRun('-p 3306:3306') {
        /* do things */
    }
}
But this case is the same: an image is created/used and then the code is run.
Is there any way I can create an object from an existing docker container, something like:
node {
    checkout scm
    runningContainer = docker.reference('already_running_container_name')
    runningContainer.inside {
        sh 'make test'
    }
}
Thank you in advance for your help!
CodePudding user response:
The two tools you have available here are that docker.image().withRun() can take arbitrary extra docker run options, and that you can sh 'docker ...' to run arbitrary commands for things the standard Jenkins Docker integration doesn't support. It also helps that Compose creates normal Docker objects with predictable (or manually settable) names.
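For example, extra options can attach a short-lived container to an existing network, and the sh step can drive the docker CLI directly (a sketch; the my_default network name and the ci-mysql container name are assumptions for illustration):

```groovy
node {
    // Attach a throwaway MySQL container to a pre-existing network via
    // extra "docker run" options, then use the raw CLI against it by name.
    docker.image('mysql:5').withRun('--network my_default --name ci-mysql') { c ->
        sh "docker logs ${c.id}"       // c.id is the container ID
        sh 'docker inspect ci-mysql'   // plain CLI for anything the plugin lacks
    }
}
```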
Conversely, if you look at the Jenkins logs from a docker.image().inside() { ... } command, you can see that it injects a lot of settings: enough bind mounts and environment variables that the environment inside the container looks more or less like the environment outside it. I wouldn't try to reproduce this elsewhere, or try to merge Compose and Jenkins container settings.
Practically, I'd expect that most of what you'd need from Compose is its default network. You wouldn't normally need volume mounts, because Jenkins mounts the workspace directory itself. The other configuration you get from Compose can be ignored while you're running its tests.
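Compose names that default network &lt;project&gt;_default, so with a known project name you can locate it from the pipeline (a sketch; myproj is an assumed project name):

```groovy
node {
    // Compose's default network is "<project name>_default";
    // "-p myproj" therefore yields a network called "myproj_default".
    sh 'docker network inspect myproj_default --format "{{.Name}}"'
}
```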
So a Jenkinsfile might look like (scripted pipeline syntax):
// Assign a unique (and known) Compose project name
def projectName = env.BUILD_TAG
try {
    // Start up the Compose stack
    sh "docker-compose -p ${projectName} up --build -d"
    // (Consider limiting this to dependencies only, without --build)
    // sh "docker-compose -p ${projectName} up -d mysql redis"

    // Build an image out of the service we're testing
    def image = docker.build("my-image:${env.BUILD_ID}")

    // Run the integration tests, attached to the Compose-provided network
    // (this is the `docker run --network` option)
    image.inside("--network ${projectName}_default") {
        sh 'make test'
    }
} finally {
    // Tear down the Compose stack
    sh "docker-compose -p ${projectName} down"
}
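Since your original Jenkinsfile uses declarative syntax, the same idea could be wrapped like this (a sketch; the stage name is made up, and note that env.BUILD_TAG can contain characters Compose rejects in project names, so env.BUILD_ID is used here instead):

```groovy
pipeline {
    agent any
    stages {
        stage('Integration tests') {
            steps {
                // Bring up the Compose stack under a unique project name
                sh "docker-compose -p ${env.BUILD_ID} up --build -d"
                script {
                    // Build the image under test and attach it to the
                    // Compose-provided default network
                    docker.build("my-image:${env.BUILD_ID}")
                          .inside("--network ${env.BUILD_ID}_default") {
                        sh 'make test'
                    }
                }
            }
            post {
                always {
                    // Tear the stack down even if the tests fail
                    sh "docker-compose -p ${env.BUILD_ID} down"
                }
            }
        }
    }
}
```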
You could in principle sh 'docker-compose exec ...', but you wouldn't have any of the Jenkins-provided volume mounts, and correspondingly you'd have trouble getting out things like a JUnit-format test report.
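If you did go the exec route, it might look like this (a sketch; the app service name, the Compose-v1-style &lt;project&gt;_app_1 container name, and the report path are all assumptions), copying the report out by hand since the Jenkins workspace isn't mounted inside:

```groovy
// Run tests in an already-running Compose service, then copy the
// report out manually (no Jenkins workspace mount in that container).
sh "docker-compose -p ${projectName} exec -T app make test"   // -T: no TTY, needed under sh
sh "docker cp ${projectName}_app_1:/app/junit.xml junit.xml"  // container name is an assumption
junit 'junit.xml'
```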