TIMER1 to measure the delay accuracy in AVR ATmega328P?

Time:06-22

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/delay.h>

unsigned long slptime = 0;
volatile unsigned long wdttime_count = 0; // shared with the ISR, so volatile


void timer1_init()
{
    //TCNT1=0xFF4E;//16ms
    TCNT1=0xFFF5;//1ms
    // TCNT1=0xFF9B; //10ms
    TIMSK1=0x01;
    TCCR1A &= ~(1<<WGM10); // RV09_H, Date: 05-May-2022, set Normal mode operation
    TCCR1A &= ~(1<<WGM11);
    TCCR1B &= ~(1<<WGM13);
    TCCR1B &= ~(1<<WGM12);
    TCCR1B |= (1<<CS12) | (1<<CS10); // /1024 prescaler; fosc=11059200Hz; ftimer=fosc/1024=10800Hz; tick=0.0926ms
}

void timer1_stop()
{
    TCCR1B = 0x00;
    TIMSK1 = 0x00;
}
ISR(TIMER1_OVF_vect)
{
    //TCNT1=0xFF4E;//16ms
    TCNT1=0xFFF5;
    //TCNT1=0xFF9B;
    wdttime_count = wdttime_count + 1;
}

int main(void)
{
    timer1_init();
    sei(); // enable global interrupts so the overflow ISR can run
    _delay_ms(250);
    timer1_stop();
    sendtimediff((wdttime_count*1000)/1080);
}

Timer1 is configured for 1080 Hz by counting up to 10 at 10800 Hz. I was just checking the timer accuracy, but the above code returns 227 ms instead of 250 ms.

What am I missing here? Or is _delay_ms() causing the error?

CodePudding user response:

Classic bug for free-running timers: TCNT1 = interval; in the ISR won't work. It needs to be something like:

volatile uint16_t next_TCNT1 = TCNT1;
next_TCNT1 += interval;
TCNT1 = next_TCNT1;

The reason is this: you have set the interrupt to trigger when the timer reaches a certain value. That is the point when the interrupt flag is set, but from then until the first instruction inside the ISR actually executes, a lot of time passes: interrupt latency. This was particularly nasty on the various old, crap-architecture 8-bitters.

So by the time you update the timer counter, it isn't sitting at "interval", but rather at "interval plus interrupt latency plus execution overhead", which means the next interrupt won't come when you expect.

By reading the current value inside the ISR and adding the timer interval to it, you compensate for interrupt latency. Now the only real-time delay you have is those few lines inside the ISR, which are likely negligible.

CodePudding user response:

  1. When you set the timer value to 0xFFF5, it increments 11 times before overflowing.

11059200 / 1024 / 11 = 981.8 Hz == 1.0185 ms.

So in 250 ms it counts 245 times.

245 * 1000 / 1080 ≈ 227

You probably want to set the reload value to 0xFFF6 instead (10 ticks per overflow).

  2. There is no need to reload the timer in each interrupt. Instead, you can use CTC mode, making the timer count from zero up to a value in OCR1A (Mode 4) or ICR1 (Mode 12). E.g.:
void timer1_init()
{
    TCNT1=0;
    OCR1A = 9; // from 0 to 9 == 10 timer (prescaled) clock cycles

    TIMSK1 = (1 << OCIE1A); // Use Output Compare interrupt
    TCCR1A &= ~(1<<WGM10); // Set Mode 4 (CTC)
    TCCR1A &= ~(1<<WGM11);
    TCCR1B &= ~(1<<WGM13);
    TCCR1B |= (1<<WGM12);
    TCCR1B |= (1<<CS12) | (1<<CS10); // /1024 prescaler; fosc=11059200Hz; ftimer=fosc/1024=10800Hz; tick=0.0926ms
}

ISR(TIMER1_COMPA_vect) // use Output Compare A vector, instead of Overflow
{
    // No need to reset the timer
    wdttime_count = wdttime_count + 1;
}

...
  3. Keep in mind that the _delay_ms macro just counts CPU cycles, so if interrupts fire during the delay, the delay can take longer. The _delay_ms and _delay_us macros generate a plain CPU busy-loop, which is accurate down to a single CPU clock cycle, but only when that loop is not interrupted.

  4. There is no point in comparing _delay_ms against a timer clocked from the same main clock as the CPU. The comparison result will always be the same, no matter what the actual CPU speed is.
