I use libusb's libusb_control_transfer
function to send data to a USB device.
My host is Windows 10 and my guest is Ubuntu 20.04 running in VMware Workstation 16.2.3.
The problem is that communication through libusb_control_transfer
is about 20 times slower than when I run the same program on Ubuntu without VMware. For example, sending 180 KB takes about 20 seconds inside VMware Workstation, but only 1-2 seconds without it.
I made sure that USB is set up correctly in Settings -> USB Controller.
ldd myBinary:
linux-vdso.so.1
libusb-1.0.so.0 => /lib/x86_64-linux-gnu/libusb-1.0.so.0
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
/lib64/ld-linux-x86-64.so.2
My questions:
Why is using libusb_control_transfer in VMware Workstation so much slower than using it on Ubuntu without a VM?
Can someone recommend a way to fix this?
CodePudding user response:
You are in a virtual machine. All hardware "plugged" into your VM is bridged (in one way or another) to your host operating system, and that costs time.
You can't expect a VM to run at the same performance as a native machine. That isn't even the case on a bare-metal hypervisor like ESX, which doesn't carry the extra cost of a host operating system, unless you use ONLY CPU, RAM, SATA disks, and network. Those devices are easily and natively shared across applications, which is why virtual servers on bare-metal hypervisors run very well, particularly when each server gets a dedicated physical mass storage (and sometimes even its own network card).
Things get more complicated with resources and hardware that have exclusive or expensive access: GPUs / screens, sound cards, communication ports (from serial to USB by way of parallel ports), optical drives / slow disks, human interface devices in general (mouse, keyboard, ...), etc.
It means that your execution path is, more or less: VM application (VM user) -> VM drivers (VM kernel) -> VM-side VMware bridges (VM kernel) -> host-side VMware bridges (host user) -> host VMware application (host user) -> host drivers (host kernel) -> hardware.
Then add the return path to get acknowledgements and read data... Note that some of these steps can be simple pass-through wrappers, so they can be quite fast, but it is still a lot of steps.
"A bit" longer than: application (user) -> OS drivers (kernel) -> hardware, which is what happens for a native application.
For USB it can be even worse, since the drivers live partly in user land (the USB device driver itself) and partly in kernel land (the USB controller). So you have one more layer to cross than with most other hardware, and you pay that cost on both the guest and the host operating systems.