
I am getting acquainted with the MicroC/OS-II kernel and multi-tasking. I have programmed the following two tasks, which use semaphores:

#define TASK1_PRIORITY      6  // highest priority
#define TASK2_PRIORITY      7

void task1(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task1, 0, &err);    

    if (sharedAddress >= 0)
    {
        printText(text1);
        printDigit(++sharedAddress);
    }
    else
    {
        printText(text2);
        printDigit(sharedAddress);                      
    }  
    OSTimeDlyHMSM(0, 0, 0, 11);  
    OSSemPost(aSemaphore_task2);  
  }
}

void task2(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task2, 0, &err);    
    sharedAddress *= -1; 
    OSTimeDlyHMSM(0, 0, 0, 4);                                 
    OSSemPost(aSemaphore_task1);
  }
}

Now I want to measure the context switch time, i.e., the time it takes for the processor to switch between these two tasks.

Is this done just by using a function timer(), like this:

void task1(void* pdata)
{
  while (1)
  { 
    INT8U err;
    OSSemPend(aSemaphore_task1, 0, &err);    

    if (sharedAddress >= 0)
    {
        printText(text1);
        printDigit(++sharedAddress);
    }
    else
    {
        printText(text2);
        printDigit(sharedAddress);                      
    }    
     OSTimeDlyHMSM(0, 0, 0, 11);
     OSSemPost(aSemaphore_task2);
     timer(start);
  }
}

void task2(void* pdata)
{
  while (1)
  { 
    timer(stop);
    INT8U err;
    OSSemPend(aSemaphore_task2, 0, &err);    
    sharedAddress *= -1;  
    OSTimeDlyHMSM(0, 0, 0, 4);                                
    OSSemPost(aSemaphore_task1);
  }
}

or have I gotten this completely wrong?

Smajjk
  • First, can you guarantee these two tasks run on the same CPU core all the time? Then let's think about measuring the context switch duration. – lashgar Oct 06 '12 at 12:06
  • Yes, it runs on the same core all the time. – Smajjk Oct 06 '12 at 13:18
  • There is a function clock() returning the current clock ticks elapsed since the application started. You can use it instead of time. Refer to this link for usage: http://www.cplusplus.com/reference/clibrary/ctime/clock/ – lashgar Oct 06 '12 at 14:55
  • Does the kernel have instrumentation for this? It may well. One of the really handy things about VxWorks when I last used it was the WindView tool - it let you capture kernel events and then displayed them on a graph, so you could see precisely when context switches happened and how long they took. It's a safe bet that other RTOSs have gained similar functionality in the intervening years. – marko Oct 06 '12 at 21:02
  • Can you find the context switching function? You might be able to look at the number of instructions required for the switch and deduce the time from that. – Josh Petitt Feb 04 '13 at 05:16

3 Answers


I'm afraid you won't be able to measure the context switch time with any of the µC/OS primitives. A context switch is far too fast to be measured by the µC/OS soft timers, which are most likely based on a multiple of the system tick (hence a few milliseconds) - even though this depends on the specific µC/OS port to your CPU architecture.

You will have to access a hardware timer of your processor directly; you probably want to configure it to run at the highest frequency it supports. Set it up as a free-running timer (you don't need any interrupt) and use its counter value as the time base for measuring the switching time.

Or you can read the assembly of OS_TASK_SW() for your architecture and count the cycles it requires ;)

Pepito

For performance measurements, the standard approach is to first calibrate your tools. In this case that is your timer, or the suggested clock() (if you are using C/C++).

To calibrate it, call it many times (e.g. 1000) and see how long each call takes on average. Now you know the cost of measuring the time itself. In this case, that cost is likely to be in a similar range (at best) to the thing you are trying to measure - the context switch.

So the calibration is important.

Let us know how you go.

andy256

You can use the OSTimeGet() API to get the elapsed time; µC/OS-II does not provide a timer() function.