
I have a thread which runs a while(1) loop. During that loop I keep checking the time, as I need to perform certain tasks at certain times. However, when I print the time to the screen I see that once every few seconds I get a "hole" of almost 700 ms. I tried setting the process priority:

struct sched_param param;
int policy = SCHED_FIFO;
param.sched_priority = 18;
// 'id' is the pid to modify (0 would mean the calling process)
if (sched_setscheduler(id, policy, &param) == -1)
{
    printf("Error setting scheduler/priority!\n");
}

as well as the thread priority:

pthread_attr_t attr;
struct sched_param param;
pthread_attr_init(&attr);
pthread_attr_setschedpolicy(&attr, SCHED_RR);
param.sched_priority = 50;
pthread_attr_setschedparam(&attr, &param);
m_INVThreadID = pthread_create(&m_BaseStationLocatorsThread, &attr,
                               ThreadBaseStationLocatorsHandler,
                               (void*)this); // Linux

But it didn't help.
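A side note on the second attempt: as far as I understand, pthread_create() makes the new thread inherit the creator's scheduling by default, and the policy/priority set on the attributes are silently ignored unless explicit scheduling is requested. A sketch of that variant, not verbatim from my code:

pthread_attr_t attr;
struct sched_param param;
pthread_attr_init(&attr);
// Without this call the policy/priority below are silently ignored,
// because the new thread inherits the creator's scheduling by default.
pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
pthread_attr_setschedpolicy(&attr, SCHED_RR);
param.sched_priority = 50;
pthread_attr_setschedparam(&attr, &param);
// ... pthread_create(...) with &attr as above; note that real-time
// policies generally require root (or CAP_SYS_NICE) to take effect.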

The way I get the time is either with:

double GetCurrentTimeMilliSeconds()
{
    struct timespec start;

    clock_gettime(CLOCK_MONOTONIC_RAW, &start);
    //gettimeofday(&tim, NULL);
    //wInitTime = tim.tv_sec*1000 + tim.tv_usec/1000.0;
    double x = start.tv_sec;   // seconds
    double y = start.tv_nsec;  // nanoseconds
    x = x * 1000;              // seconds -> milliseconds
    y = y / 1000000;           // nanoseconds -> milliseconds
    double result = x + y;
    return result;
}
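For reference, the same conversion can be done purely in integer math, which avoids any floating-point rounding concerns. A sketch, with a hypothetical name to keep it distinct from the helper above:

#include <stdint.h>
#include <time.h>

// Sketch: monotonic milliseconds using integer math only.
uint64_t GetMonotonicMilliSeconds(void)   // hypothetical name
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return (uint64_t)ts.tv_sec * 1000u + (uint64_t)ts.tv_nsec / 1000000u;
}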

Or:

STime TimeHandler::GetTime()
{
    STime tmpt;
    time_t rawtime;
    tm* timeinfo;
    time(&rawtime);
    timeinfo = localtime(&rawtime);
    tmpt.day_of_month = timeinfo->tm_mday;
    tmpt.month = timeinfo->tm_mon + 1;
    tmpt.year = timeinfo->tm_year + 1900;
    tmpt.Hours = timeinfo->tm_hour;
    tmpt.Min = timeinfo->tm_min;
    tmpt.Sec = timeinfo->tm_sec;
    tmpt.MilliSeconds = GetCurrentTimeMilliSeconds();
    return tmpt;
}

and then print the time with:

STime timeinfo = GetTime();
string curTime;
int datePart;
std::ostringstream convert;

datePart = timeinfo.day_of_month;
convert << datePart;
convert << "/";
datePart = timeinfo.month;
convert << datePart;
convert << "/";
datePart = timeinfo.year;
convert << datePart;

convert << " ";
datePart = timeinfo.Hours;
if (timeinfo.Hours < 10)
    convert << 0;
convert << datePart;
convert << ":";
datePart = timeinfo.Min;
if (timeinfo.Min < 10)
    convert << 0;
convert << datePart;
convert << ":";
datePart = timeinfo.Sec;
if (timeinfo.Sec < 10)
    convert << 0;
convert << datePart;
convert << ":";
datePart = timeinfo.MilliSeconds;
if (timeinfo.MilliSeconds < 100)
    convert << 0;
if (timeinfo.MilliSeconds < 10)
    convert << 0;
convert << datePart;
curTime.append(convert.str());

return curTime;
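I know the manual zero-padding could be written more compactly with stream manipulators; a sketch, assuming the same STime fields, in case that makes the logic easier to check:

#include <iomanip>
#include <sstream>
#include <string>

// Sketch: setw()/setfill() replace the manual "if (x < 10) convert << 0;" checks.
std::ostringstream convert;
convert << timeinfo.day_of_month << "/" << timeinfo.month << "/" << timeinfo.year
        << " " << std::setfill('0')
        << std::setw(2) << timeinfo.Hours << ":"
        << std::setw(2) << timeinfo.Min << ":"
        << std::setw(2) << timeinfo.Sec << ":"
        << std::setw(3) << timeinfo.MilliSeconds;
std::string curTime = convert.str();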

Any ideas?

Thanks!

user1997268
  • What's the longest pause you are willing to tolerate? – NPE Sep 11 '13 at 10:29
  • Maybe some task execution takes a long time? – Alex F Sep 11 '13 at 10:30
  • Going back to the original problem you seem to have (instead of focusing on your solution, see e.g. [what is the XY problem](http://meta.stackexchange.com/questions/66377/what-is-the-xy-problem)): how do you get the time? And how do you present it? And what are you doing in your thread? Is it something that sometimes takes longer? – Some programmer dude Sep 11 '13 at 10:30
  • You are aware that Linux is not a real-time OS? http://en.wikipedia.org/wiki/Real-time_operating_system – Sep 11 '13 at 10:33
  • Also, is there any regularity with these "pauses", or is it just "every few seconds" (where "few" can be anything from one to five seconds or similar)? And is the "hole" always "almost 700 ms", or can that differ too? – Some programmer dude Sep 11 '13 at 10:35
  • How accurate is your timer? – doctorlove Sep 11 '13 at 10:44
  • did you include a flush when "printing the time to screen"? For example, std::endl contains a flush, and this may make your printing operation take a significant amount of time depending on outside factors – b.buchhold Sep 11 '13 at 10:44
  • I tried it with an empty while loop (just with the printf) to make sure it is not anything that I am doing. The longest pause I can tolerate is no longer than 50 ms. I've added the way I get the time to the original post. – user1997268 Sep 11 '13 at 11:06
  • @user1997268 I think this question could use a bit more context. It sounds like you're trying to use desktop Linux as a real-time system, and this will not work. It looks a bit like you're trying to write a device driver in user space (that's a wild guess, hence why I think a bit more context would be useful). – elmo Sep 11 '13 at 11:24
  • @elmo I'm writing for a controller SOM running TI Sitara Linux. – user1997268 Sep 11 '13 at 11:36
  • @user1997268 Then I think you either need to write a device driver (free book about that http://lwn.net/Kernel/LDD3/ ) or use some flavour of RT Linux. – elmo Sep 11 '13 at 11:55
  • Also, are you sure it's really a pause, and not, say, the network time daemon adjusting your time-of-day clock? And when you print from this embedded Sitara, what are you outputting via? If it's something over JTAG (like TI's CIO for bare metal apps), that introduces its own lengthy pauses. – Joe Z Sep 11 '13 at 12:46
  • I think the line tmpt.MilliSeconds = GetCurrentTimeMilliSeconds(); is definitely wrong, as the MilliSeconds field is expecting a value between 0 and 999, while GetCurrentTimeMilliSeconds() is probably returning a value-since-epoch, which will almost always be much larger than that. – Jeremy Friesner Sep 11 '13 at 15:17

2 Answers


First of all, it could be nice to employ cron for job scheduling instead of manually waiting for the right moment in a loop and then manually starting a job.

Regarding the "time hole": as jdv-Jan de Vaan said in a comment, Linux is not a real-time OS (and neither are Windows and most other consumer-oriented OSes). Given that, you can never be sure that your thread will be active at the expected slice of time with millisecond precision. The OS scheduler, system activity, and even CPU throttling/energy saving may cause your app to sleep longer than a handful of milliseconds. So, instead of expecting a fixed time, it's better to allow for some threshold interval.
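For example, a minimal sketch of the threshold idea (the helper names here are hypothetical placeholders for your own code):

#include <stdint.h>

// Sketch: fire a task once its deadline has passed, tolerating jitter.
static uint64_t next_due_ms = 0;          // next deadline, in milliseconds
static const uint64_t period_ms = 1000;   // example: a 1-second task period
static const uint64_t tolerance_ms = 50;  // acceptable lateness window

void PollScheduledTask(void)
{
    uint64_t now_ms = GetCurrentTimeMilliSeconds();  // hypothetical helper
    if (now_ms >= next_due_ms)                       // due, possibly a bit late
    {
        RunScheduledTask();                          // hypothetical task entry
        if (now_ms - next_due_ms > tolerance_ms)     // late beyond the threshold
            LogLateWakeup(now_ms - next_due_ms);     // hypothetical diagnostic
        next_due_ms += period_ms;                    // arm the next occurrence
    }
}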

Yury Schkatula

Well, this could be a scheduler-delay problem, but on the other hand 700 ms is an awfully long time on a modern computer, and I wouldn't expect that much scheduler delay unless the machine was really overloaded or underpowered.

The other possibility is that there isn't a time gap so much as an error in the logic you are using to print out the current time. I suggest you have your program convert the timespec value into microseconds-since-epoch, and then have it print out the elapsed time in that format instead. That way you'll avoid the vagaries of calendar dates. Something like this:

uint64_t GetMicrosecondsSinceEpoch()
{
   struct timespec ts;
   if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0) return 0;  // error!
   return (((uint64_t)ts.tv_sec)*1000000) + (((uint64_t)ts.tv_nsec)/1000);
}

[...]

static uint64_t prevMicros = 0;
uint64_t nowMicros = GetMicrosecondsSinceEpoch();
int64_t elapsedMicros = nowMicros - prevMicros;
if (prevMicros != 0)
{
   printf("Elapsed microseconds since previous time is %lli\n", (long long) elapsedMicros);
   if (elapsedMicros >= 500000) printf("ERROR, more than 500 ms passed since last time!?\n");
}
prevMicros = nowMicros;

If the above shows errors, then it's probably a scheduling problem; if not, it's probably a date-conversion problem.

Btw if you calculate the microseconds-since-epoch value for each of the events you want to wake up for, you can take the minimum of all of those values, subtract the current value returned by GetMicrosecondsSinceEpoch() from that minimum, and sleep for that amount of time. That way you will wake up only when it's (approximately) time to handle the next event, which gives you better timing accuracy and less CPU/power usage than waking up on a regular basis only to go back to sleep again.
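A minimal sketch of that wait loop, reusing the GetMicrosecondsSinceEpoch() helper from above (the event-bookkeeping functions are hypothetical placeholders):

#include <unistd.h>   // usleep()

for (;;)
{
   uint64_t nextEvent = GetEarliestEventTimeMicros();  // min over all pending events
   uint64_t now = GetMicrosecondsSinceEpoch();
   if (nextEvent > now)
      usleep((useconds_t)(nextEvent - now));           // sleep until (roughly) due
   HandleDueEvents(GetMicrosecondsSinceEpoch());       // run whatever is now due
}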

Jeremy Friesner
  • Btw make sure you are using the appropriate clock for your needs... see here: http://stackoverflow.com/questions/3523442/difference-between-clock-realtime-and-clock-monotonic – Jeremy Friesner Sep 11 '13 at 15:20
  • I tried your suggestion and indeed the "holes" decreased to about 334 milliseconds, which is still too much... Thanks, – user1997268 Sep 12 '13 at 06:21
  • Is there any chance that your program also blocks in some other part of its event loop, and/or takes a significant amount of time to do some I/O or a long computation somewhere? Either of those could hold off the thread and cause the symptoms you see... – Jeremy Friesner Sep 12 '13 at 06:43
  • FWIW, on the Linux systems I work with (2GHz quad-core Xeons), scheduler latency is typically no worse than, say, 10-20 ms, unless there are active threads chewing on all available cores. So I think there must be something else going on here, not just Linux being non-real-time. – Jeremy Friesner Sep 12 '13 at 06:46