I created a high precision multitasking basic C code, what is the algorithm implementation called?


So I have always wanted to implement basic multitasking code, specifically asynchronous code (not concurrent code), without using interrupts, Boost, complex threading, or complex multitasking implementations or algorithms.

I have done some programming on MCUs such as the ATmega328. In most cases, to get the most efficient use out of the MCU, multitasking is required, where functions run at the same time ("perceived" to run at the same time) without halting the MCU and blocking other functions.

For example, "function_a" may require a delay, but that delay should not halt the MCU, so that other functions like "function_b" can still run asynchronously.

To do this on microcontrollers that have only one CPU/thread, an algorithm that uses timers and keeps track of elapsed time is used to implement multitasking. It's really simple and always works. I have taken the concept from MCUs and applied it to desktop PCs in C++ using high-precision timers; the code is given below.
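For reference, the MCU-side version of the idea looks roughly like the following Arduino-style sketch built around millis() (the intervals and messages here are only illustrative, not my actual firmware):

// Rough MCU-side illustration (Arduino-style), not the actual firmware:
// each task keeps its own "last run" timestamp and runs when its interval elapses.
unsigned long lastRun_a = 0;
unsigned long lastRun_b = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long now = millis();          // milliseconds since power-up

  if (now - lastRun_a >= 1000) {         // function_a every 1 second
    lastRun_a = now;
    Serial.println("function_a");
  }

  if (now - lastRun_b >= 3000) {         // function_b every 3 seconds
    lastRun_b = now;
    Serial.println("function_b");
  }
}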

I am really surprised that no one seems to use this form of asynchronous algorithm in C++, and I haven't seen any examples of it on the internet for C++.

My question now is: what exactly is this algorithm and implementation called in computer science or computer engineering? I read that this implementation is called a "state machine", but when I googled that I did not see any code similar to mine that works purely with the help of timers directly in C++.

The code below does the following: it runs function 1, but at the same time also runs function 2, without needing to halt the application. Both functions also need to execute so that they do not simply run flat out in a tight loop; instead, each function runs repeatedly at a specified interval (function_1 runs every 1 second and function_2 every 3 seconds). Finding a similar implementation for these requirements on the internet for C++ is really complex. The code below is simple in nature and works as intended:

// Asynchronous state machine using one CPU, C++ example:
// Tested working multitasking code:

#include <iostream>
#include <cstdio>   // for printf
#include <ctime>
#include <ratio>
#include <chrono>

using namespace std::chrono;

// At the first execution of the program, capture the time as the zero reference for each task and store it in "t2" and "t3".
auto t2 = high_resolution_clock::now();
auto t3 = high_resolution_clock::now();

int main() 
{
  while (1)
  {
    // Always update the time reference variable "t1" to the current time:
    auto t1 = high_resolution_clock::now();

    // Always check the difference of the zero reference time with the current time and see if it is greater than the set time specified in the "if" argument:
    duration<double> time_span_1 = duration_cast<duration<double>>(t1 - t2);
    duration<double> time_span_2 = duration_cast<duration<double>>(t1 - t3);
      
    if(time_span_1.count() >= 1)
    {
      printf("This is function_1:\n\n");
      std::cout << time_span_1.count() << " Secs (t1-t2)\n\n";
      
      // Set t2 to capture the current time again as zero reference.
      t2 = high_resolution_clock::now();
      
      std::cout << "------------------------------------------\n\n";
    }

    else if (time_span_2.count() >= 3)
    {
      printf("This is function_2:\n\n");
      std::cout << time_span_2.count() << " Secs (t1-t3)\n\n";
      
// Set t3 to capture the current time again as zero reference.
      t3 = high_resolution_clock::now();

      std::cout << "------------------------------------------\n\n";
    }
  }
  return 0;      
}

CodePudding user response:

I would describe the posted code as "microcontroller code", because it is assuming that it is the only program that will be running on the CPU and that it can therefore burn as many CPU-cycles as it wants to without any adverse consequence. That assumption is often valid for programs running on microcontrollers (since usually a microcontroller doesn't have any OS or other programs installed on it), but "spinning the CPU" is not generally considered acceptable behavior in the context of a modern PC/desktop OS where programs are expected to be efficient and share the computer's resources with each other.

In particular, "spinning" the CPU on a modern PC (or Mac) introduces the following problems:

  1. It uses up 100% of the CPU cycles on a CPU core, which means those CPU cycles are unavailable to any other programs that might otherwise be able to make productive use of them.
  2. It prevents the CPU from ever going to sleep, which wastes power -- that's bad on a desktop or server because it generates unwanted/unnecessary heat, and it's worse on a laptop because it quickly drains the battery.
  3. Modern OS schedulers keep track of how much CPU time each program uses, and if the scheduler notices that a program is continuously spinning the CPU, it will likely respond by drastically reducing that program's scheduling-priority, in order to allow other, less CPU-hungry programs to remain responsive. Having a reduced CPU priority means that the program is less likely to be scheduled to run at the moment when it wants to do something useful, making its timing less accurate than it otherwise might be.
  4. Users who run system-monitoring utilities like Task Manager (in Windows) or Activity Monitor (under MacOS/X) will see the program continuously taking 100% of a CPU core and will likely assume the program is buggy and kill it. (and unless the program actually needs 100% of a CPU core to do its job, they'll be correct!)

In any case, it's not difficult to rewrite the program to use almost no CPU cycles instead. Here's a version of the posted program that uses approximately 0% of a CPU core, but still calls the desired functions at the desired intervals (and also prints out how close it came to the ideal timing -- which is usually within a few milliseconds on my machine, but if you need better timing accuracy than that, you can get it by running the program at higher/real-time priority instead of as a normal-priority task):

#include <iostream>
#include <ctime>
#include <chrono>
#include <thread>
#include <algorithm>   // for std::min

using namespace std::chrono;

int main(int argc, char ** argv)
{
   // These variables will contain the times at which we next want to execute each task.
   // Initialize them to the current time so that each task will run immediately on startup
   auto nextT1Time = high_resolution_clock::now();
   auto nextT3Time = high_resolution_clock::now();

   while (1)
   {
      // Compute the next time at which we need to wake up and execute one of our tasks
      auto nextWakeupTime = std::min(nextT1Time, nextT3Time);

      // Sleep until the desired time
      std::this_thread::sleep_until(nextWakeupTime);
  
      bool t1Executed = false, t3Executed = false;
      high_resolution_clock::duration t1LateBy, t3LateBy;

      auto now = high_resolution_clock::now();
      if (now >= nextT1Time)
      {
         t1Executed = true;
         t1LateBy = now-nextT1Time;
    
         // schedule our next execution to be 1 second later
         nextT1Time = nextT1Time + seconds(1);
      }

      if (now >= nextT3Time)
      {
         t3Executed = true;
         t3LateBy = now-nextT3Time;
    
         // schedule our next execution to be 3 seconds later
         nextT3Time = nextT3Time + seconds(3);
      }

      // Since the calls to std::cout can be slow, we'll execute them down here, after the functions have been called but before
      // (nextWakeupTime) is recalculated on the next go-around of the loop.  That way the time spent printing to stdout during the T1
      // task won't potentially hold off execution of the T3 task
      if (t1Executed) std::cout << "function T1 was called (it executed " << duration_cast<microseconds>(t1LateBy).count() << " microseconds after the expected time)" << std::endl;
      if (t3Executed) std::cout << "function T3 was called (it executed " << duration_cast<microseconds>(t3LateBy).count() << " microseconds after the expected time)" << std::endl;
  }
  return 0;
}
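Regarding the "higher/real-time priority" option mentioned above: on a POSIX system that can be requested with sched_setscheduler(). The sketch below is only an illustration (the helper name, policy, and priority value are arbitrary choices, and the call usually requires elevated privileges):

#include <sched.h>
#include <cstdio>

// Illustration only: ask the OS to schedule this process with a real-time
// round-robin policy.  Usually requires root (or CAP_SYS_NICE on Linux).
static bool requestRealtimePriority(int priority)
{
   sched_param sp{};
   sp.sched_priority = priority;            // 1..99 for SCHED_RR on Linux
   if (sched_setscheduler(0, SCHED_RR, &sp) != 0)
   {
      std::perror("sched_setscheduler");
      return false;
   }
   return true;
}

int main()
{
   if (requestRealtimePriority(10))         // 10 is an arbitrary example value
      std::printf("Running with real-time priority\n");

   // ... the timed loop from the program above would go here ...
   return 0;
}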

CodePudding user response:

What is the algorithm...called?

Some people call it "super loop." I usually write it like this:


while (1) {
    if ( itsTimeToPerformTheHighestPriorityTask() ) {
        performTheHighestPriorityTask();
        continue;
    }
    if ( itsTimeToPerformTheNextHighestPriorityTask() ) {
        performTheNextHighestPriorityTask();
        continue;
    }
    ...
    if ( itsTimeToPerformTheLowestPriorityTask() ) {
        performTheLowestPriorityTask();
        continue;
    }
    waitForInterrupt();
}

The waitForInterrupt() call at the bottom is optional. Most processors have an op-code that puts the processor into a low-power state (basically, it halts the processor for some definition of "halt") until an interrupt occurs.

Halting the CPU when there's no work to be done can greatly improve battery life if the device is battery powered, and it can help with thermal management if that's an issue. But the price you pay for using it is that your timers and all of your I/O must be interrupt driven.
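For example, on an AVR like the ATmega328 mentioned in the question, waitForInterrupt() can be built from the avr-libc sleep macros. The sketch below is only meant to illustrate the shape of the idea (it assumes a 16 MHz clock, uses Timer0 for a 1 ms tick, and the task intervals are placeholders):

// Illustrative super loop for an ATmega328 at 16 MHz (avr-libc), not production code.
// A 1 ms timer interrupt provides the time base; the CPU idles between interrupts
// instead of spinning.
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

volatile unsigned long g_ticks = 0;      // incremented every 1 ms by the timer ISR

ISR(TIMER0_COMPA_vect)
{
    ++g_ticks;
}

static void waitForInterrupt(void)
{
    set_sleep_mode(SLEEP_MODE_IDLE);     // idle mode keeps the timers running
    sleep_mode();                        // halt the core until the next interrupt
}

int main(void)
{
    // Timer0, CTC mode, prescaler 64, compare value 250 -> 1 ms tick at 16 MHz.
    TCCR0A = (1 << WGM01);
    TCCR0B = (1 << CS01) | (1 << CS00);
    OCR0A  = 249;
    TIMSK0 = (1 << OCIE0A);
    sei();

    unsigned long next1 = 0, next3 = 0;  // next due times, in ticks (ms)
    for (;;)
    {
        unsigned long now;
        cli(); now = g_ticks; sei();     // read the multi-byte tick count atomically

        if (now >= next1) { /* highest-priority task goes here */ next1 += 1000; continue; }
        if (now >= next3) { /* next-priority task goes here */    next3 += 3000; continue; }

        waitForInterrupt();
    }
}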
