I am a little bit confused about how containers run. I am developing on a Mac, and when I copy my compiled sources into a Docker image based on Debian, I get an error that the file cannot be executed. I googled it: it has to do with different CPU architectures, and I would need to cross-compile. That makes sense.
This, however, works:
FROM rust:1.65 AS builder
WORKDIR app
COPY . .
RUN cargo build --release
FROM debian:buster-slim
COPY --from=builder ./app/target/release/hello ./app/myapp
CMD ["./app/myapp"]
I can build a binary without knowing in advance which architecture I am compiling for, right? After all, I just run cargo build on a builder based on rust:1.65. I am curious how it knows the binary will run on Debian and on the correct CPU. How does FROM rust:1.65 compile for the correct architecture? Or is it all just the same default architecture in a Dockerfile?
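For example, just to make my question concrete, I assume I could compare what the two images default to by running uname in each (no --platform flag given):

docker run --rm rust:1.65 uname -m
docker run --rm debian:buster-slim uname -m

I would expect both to print the same machine architecture, since without --platform Docker pulls the image variant matching its own engine.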
CodePudding user response:
You can compile for a given architecture.
Run the following command to see all available targets (see the rustc docs):
docker run --rm -ti rust:1.65 rustc --print target-list
and in your .cargo/config.toml you set up the build target (see the Cargo docs):
[build]
target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"]
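If you pick a target other than the image's default, the target's standard library has to be installed too. A minimal sketch, assuming the x86_64 glibc target (for other combinations you may also need a matching cross-linker):

# inside the rust:1.65 container
rustup target add x86_64-unknown-linux-gnu
cargo build --release --target x86_64-unknown-linux-gnu
# the binary then lands in target/x86_64-unknown-linux-gnu/release/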
CodePudding user response:
Which operating system the binary targets is (likely) a more significant variable than which processor architecture.
The Docker core doesn't run natively on macOS. Docker Desktop runs a hidden Linux virtual machine. If you compile the binary on the host, you get a macOS binary, but then you try to run it in a Linux container, which results in an error. If you do the compilation in a container too, it's all Linux.
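You can see this split directly, assuming Docker Desktop on a Mac; uname reports the kernel the command actually runs under:

uname -sm                                      # on the host: typically Darwin arm64 (or Darwin x86_64)
docker run --rm debian:buster-slim uname -sm   # in a container: typically Linux aarch64 (or Linux x86_64)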
More generally, there are also lurking problems around shared libraries, support files, permissions, and so on. Unless you're confident in what you're doing, I would not try to build binaries on the host and copy them into an image or container. Install them in the image instead, either compiling them yourself or using the base image distribution's package manager, as in the sketch below.
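A sketch of the "install in the image" approach: compile in a Linux builder stage and pull any runtime shared libraries from the distribution's package manager. libssl1.1 here is a hypothetical dependency, standing in for whatever your binary actually links against:

FROM rust:1.65 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:buster-slim
# hypothetical runtime dependencies; replace with what your binary needs
RUN apt-get update \
 && apt-get install -y --no-install-recommends libssl1.1 ca-certificates \
 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/hello /app/myapp
CMD ["/app/myapp"]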