I am controlling an industrial robot and some test equipment with a Python 3 script. Nearly all of my timing-critical functions are handled by hardware, but these two statements do need to execute with consistent timing:
    if True == daq.readChannelsStart(channel_settings):
        titan_rtde.Move(setp_list[step])
There is 10-15 ms of variation between when the DAQ starts and when the robot moves.
Both functions communicate over a private Ethernet network with 4 devices that are polled only by the PC running the Python script, so I expect the latency to be consistent (although I suppose I haven't sniffed the traffic to prove that). The data passed to these methods does not vary much, so I would expect their execution time to be consistent as long as they aren't interrupted by the OS (Windows 10 Enterprise 2016 LTSB).
Is there anything I can do to reduce this variability?
Answer:
Is there anything I can do to reduce this variability?
tl;dr: No, there isn't.
This looks like good code, in the sense that you're executing almost nothing between the check and the move command, just a de-ref and a call.
You could unconditionally assign temp = setp_list[step] prior to the check, and then pass in the cached result when the check succeeds. That could reduce some variability in L2 cache misses, but I can't imagine it will add up to much of an effect.
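For example, a minimal sketch of that caching, reusing the names from the question:

    next_setp = setp_list[step]                    # de-reference ahead of time
    if daq.readChannelsStart(channel_settings):
        titan_rtde.Move(next_setp)                 # pass the cached result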
Look inside the move command to verify there aren't conditional expressions which increase the variance.
You might be able to make the whole thing run faster, with reduced variance, by pushing it into a Cython or Rust module. But again, without profiling where the jitter comes from, it's hard to imagine there would be much of an effect size.
The most important change you could make is to add scheduling to the API. Preferably on the robot end, if it is changeable. The idea is to embrace delay, and do things according to a schedule.
Imagine the move code looks like this:
    def Move(self, sl):
        if random_choice():      # sometimes do important housekeeping,
            do_stuff()
            sleep(random())      # which takes a varying amount of time
        send_move_command(sl)
Let's put it on a schedule:
    def Move(self, sl, latency=.015):
        t0 = time()
        t1 = t0 + latency            # scheduled time to execute the move
        if random_choice():          # same optional housekeeping as above
            do_stuff()
            sleep(random())
        sleep(max(0, t1 - time()))   # wait out whatever time remains
        send_move_command(sl)
Now, as long as that housekeeping consumes less than latency seconds, we can successfully hide it from external observers, exhibiting low variance.
In an ideal world, the robot would be sent a "move starting at timestamp t1" command.
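Something like this hypothetical call (RTDE offers no such method as far as I know; this is purely illustrative):

    # Hypothetical API -- not a real call in your robot library.
    t1 = time() + 0.015
    titan_rtde.MoveAt(setp_list[step], start_time=t1)  # "move at timestamp t1"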
Imagine that the robot sometimes performs a 4 msec operation which cannot be interrupted, thereby increasing the variance. You are unlucky if you try moving during a moment when it's executing. Suppose that tens of milliseconds will go by before it attempts to repeat the operation.
Probe the robot to identify when that operation is currently happening. Use that to predict a "good" time to send the move command.
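One way to implement that probing, assuming the controller answers some cheap status query, is to time a no-op request and treat an elevated round trip as "busy". Everything below (the probe callable, the thresholds) is hypothetical:

    from time import sleep, time

    def wait_for_quiet_window(probe, threshold=0.002, timeout=0.1):
        """Poll a cheap status request until its round trip drops below
        `threshold` seconds, suggesting the housekeeping window has passed.
        `probe` is any no-op query against the robot controller."""
        deadline = time() + timeout
        while time() < deadline:
            t0 = time()
            probe()
            if time() - t0 < threshold:
                return True          # quiet window found; move now
            sleep(0.001)
        return False                 # give up and send the move anyway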
Impose an external clock signal.
Use a 555 timer to produce pulses at some convenient interval, perhaps 1 PPS, one pulse per second. Make the timing signal available to the DAQ and to the robot.
Inhibit robot motion while the clock signal is low. Only send move commands while it is low. Now motion will only begin on the clock's rising edge. Consider doing the DAQ interaction on the falling edge.
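Here is a sketch of the PC side of that gating; read_clock is a hypothetical callable returning the logic level of the shared timing line (e.g. a DAQ digital-input read):

    def wait_for_falling_edge(read_clock):
        # `read_clock` is hypothetical: True while the timing line is high.
        while not read_clock():      # wait until the line is high
            pass
        while read_clock():          # then wait for the high -> low edge
            pass

    # Do the DAQ interaction on the falling edge, then send the move while
    # the line is still low; the robot's hardware inhibit releases motion
    # on the next rising edge.
    wait_for_falling_edge(read_clock)
    daq.readChannelsStart(channel_settings)
    titan_rtde.Move(setp_list[step])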