
I'm new to using HAL functions. The description of the function HAL_GetTick() says that it "provides a tick value in millisecond".

I don't understand whether this function returns ticks or milliseconds. Of course, to convert from ticks to milliseconds I need to know how many ticks there are in a millisecond, and that's CPU-specific.

So what does HAL_GetTick() exactly return?


Edit:

My real problem is knowing how to measure time in microseconds. So I thought I'd get ticks from HAL_GetTick() and convert them to microseconds. This is addressed in the comments and in at least one of the answers, so I'm mentioning it here too, and I've edited the title accordingly.

Alaa M.

6 Answers


HAL_GetTick() should return the number of milliseconds elapsed since startup, since a lot of HAL functions depend on it. How you achieve that is up to you. By default, HAL_Init() queries the system clock speed and configures SysTick to interrupt once every millisecond (a reload value of 1/1000th of the core clock):

__weak HAL_StatusTypeDef HAL_InitTick(uint32_t TickPriority)
{
  /*Configure the SysTick to have interrupt in 1ms time basis*/
  HAL_SYSTICK_Config(SystemCoreClock /1000);

  /*Configure the SysTick IRQ priority */
  HAL_NVIC_SetPriority(SysTick_IRQn, TickPriority ,0);

   /* Return function status */
  return HAL_OK;
}     

Then the default SysTick interrupt handler calls HAL_IncTick() to increment an internal counter once every ms, and HAL_GetTick() returns the value of that counter.
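
For reference, the SysTick handler that CubeMX generates in stm32xxxx_it.c boils down to something like this (depending on the Cube version it may also call HAL_SYSTICK_IRQHandler()):

void SysTick_Handler(void)
{
  HAL_IncTick();   /* advance the HAL's internal millisecond counter */
}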

All these functions are defined as weak, so you can override them; as long as your version of HAL_GetTick() returns the elapsed time in milliseconds, it'll be OK. You can e.g. replace HAL_InitTick() to let SysTick run at 10 kHz, but then you must ensure that HAL_IncTick() gets called only on every 10th interrupt, as sketched below. On a 216 MHz STM32F7 controller (or the recently released 400 MHz STM32H743) you can actually go down to a 1 MHz SysTick, but then you should be very careful to return from the handler as quickly as possible. Even so, it would be a horrible waste of precious processor cycles unless you do something in the handler that a hardware counter can't.
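
A rough sketch of that 10 kHz variant, not tested (the handler lives in your stm32xxxx_it.c; the divider is the important part):

HAL_StatusTypeDef HAL_InitTick(uint32_t TickPriority)
{
  HAL_SYSTICK_Config(SystemCoreClock / 10000U);       /* SysTick interrupt at 10 kHz */
  HAL_NVIC_SetPriority(SysTick_IRQn, TickPriority, 0);
  return HAL_OK;
}

void SysTick_Handler(void)
{
  static uint32_t subtick = 0;
  if (++subtick >= 10U)        /* only every 10th interrupt... */
  {
    subtick = 0;
    HAL_IncTick();             /* ...advances the 1 ms HAL tick */
  }
}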

Or you may do it without configuring SysTick at all (override HAL_InitTick() with an empty function), set up a 32-bit hardware timer with a suitable prescaler to count every microsecond, and let HAL_GetTick() return that counter converted to milliseconds.
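
A sketch of that variant, assuming a 32-bit TIM5 is already free-running at 1 MHz as shown further below; note that the returned value then wraps after roughly 71 minutes instead of 49 days:

HAL_StatusTypeDef HAL_InitTick(uint32_t TickPriority)
{
  (void)TickPriority;          /* no SysTick interrupt needed at all */
  return HAL_OK;
}

uint32_t HAL_GetTick(void)
{
  return TIM5->CNT / 1000U;    /* microsecond counter -> milliseconds for the HAL */
}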


Getting back to your real problem, measuring time on the order of microseconds: there are better ways.

If you have a 32-bit timer available, then you can put the MHz value of the respective APB clock into the prescaler, start it, and there is your microsecond clock, taking no processing time away from your application at all. This code should enable it (not tested) on an STM32F4:

__HAL_RCC_TIM5_CLK_ENABLE();                       /* enable the TIM5 peripheral clock */
TIM5->PSC = HAL_RCC_GetPCLK1Freq() / 1000000 - 1;  /* prescale the timer clock down to 1 MHz */
TIM5->EGR = TIM_EGR_UG;                            /* update event so the new prescaler value is actually loaded */
TIM5->CR1 = TIM_CR1_CEN;                           /* start the counter */

then get its value anytime by reading TIM5->CNT.
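
Measuring an interval with it is then as simple as, for example:

uint32_t start = TIM5->CNT;
/* ... code being timed ... */
uint32_t elapsed_us = TIM5->CNT - start;   /* unsigned arithmetic copes with a single wrap-around */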

Check your reference manual to see which hardware timers have 32-bit counters and where they get their clock from. It varies a lot across the STM32 series, but a suitable timer should be there on an F4.

If you can't use a 32-bit timer, then there is the core cycle counter. Just enable it once with

CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
DWT->CYCCNT = 0;
DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;

and then read the value from DWT->CYCCNT. Note that since it counts processor cycles, it overflows after 2^32 cycles, i.e. within seconds to a couple of minutes depending on the core clock.
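
To turn the cycle count into microseconds, divide by the core clock in MHz, for example (this assumes SystemCoreClock is a whole number of MHz):

uint32_t start = DWT->CYCCNT;
/* ... code being timed ... */
uint32_t cycles = DWT->CYCCNT - start;
uint32_t elapsed_us = cycles / (SystemCoreClock / 1000000U);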

EDIT:

I've just noticed that you're using an STM32L0. So forget 32-bit timers and 200+ MHz cores. Use DWT->CYCCNT (if your core implements it), or think very carefully about how long the intervals you'd like to measure are and with what accuracy, then take a 16-bit timer. You could post it as a separate question, describing in more detail what your hardware looks like and what it should do. There might be a way to trigger a counter start/stop directly by the events you'd like to time.

lazzy_ms

It's both. Most of the time the function which increments the HAL tick counter is hooked to the SysTick interrupt, which is configured to fire every 1 ms. Therefore HAL_GetTick() returns the number of milliseconds since the SysTick interrupt was configured (essentially since program start). This can also be thought of as "the number of times the SysTick interrupt has 'ticked'".
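
A typical use is simple non-blocking scheduling, for example running a task every 500 ms (the unsigned subtraction also survives the counter wrapping around):

static uint32_t last = 0;

if ((HAL_GetTick() - last) >= 500U)   /* has 500 ms elapsed? */
{
    last = HAL_GetTick();
    /* do the periodic work here */
}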

J_S
  • Aha, I understand. Is there a way to take time in microsecond resolution using HAL functions? **edit:** I see now that `HAL_IncTick()`, **"in its default implementation"**, increments the timer each 1ms. So can I change this to increment each 1us? – Alaa M. Mar 12 '17 at 12:21
  • If you want the number of milliseconds since the system has started, `HAL_GetTick()` does exactly what you want. If you want the current date and time, the RTC can work with sub-second precision. I recommend referring to a reference manual such as [this](http://www.st.com/content/ccc/resource/technical/document/reference_manual/3d/6d/5a/66/b4/99/40/d4/DM00031020.pdf/files/DM00031020.pdf/jcr:content/translations/en.DM00031020.pdf) if you want to figure out what the maximum precision is. When working with the RTC, keep in mind the LSI (internal oscillator) is not suitable as it's not precise enough. – J_S Mar 12 '17 at 12:32
  • I don't want the date and time. I just need to measure passed time, but I need it in microseconds. – Alaa M. Mar 12 '17 at 12:35
  • I don't see a reason why SysTick couldn't be configured this way. Search for the `HAL_SYSTICK_Config` function call and change the parameter to be 1000 times smaller (i.e. divide the system clock by 1000000 instead of 1000). From then on, the value returned by `HAL_GetTick()` will mean "number of microseconds since the system started". If you want to keep `HAL_GetTick()` indicating milliseconds as it normally does, you can always configure your own 1 microsecond timer and increment your own variable each time it ticks, which is essentially what HAL does with SysTick. – J_S Mar 12 '17 at 12:41
  • @AlaaM. and Jacek Ślimok - this is very bad advice. Sure - you can configure SysTick to generate an **interrupt** every 1us, but I'm more than certain that you really don't want to. `HAL_GetTick()` is interrupt based and `HAL_SYSTICK_Config()` configures the interrupt period. Given that entry and exit from an interrupt on this core take over 20 clock cycles, and that the max clock for these chips is 32MHz, you would get an overhead in the range of 70-100%! That's right - up to 100%! Also keep in mind that a 32-bit counter of microseconds overflows after just over an hour. – Freddie Chopin Mar 12 '17 at 13:31
  • @FreddieChopin Thanks for the heads-up! Any better way to measure time in us resolution? – Alaa M. Mar 12 '17 at 13:59
  • @AlaaM. the method should definitely **NOT** be based on interrupts. I suggest you use a free-running timer with a single cycle of 1us (obtained by configuring the prescaler, for example to the value 31). This can even be SysTick. If you want to measure just "short" durations (less than the timer overflow), then that's all you need. Otherwise you should combine that with an interrupt at the timer overflow to get a bigger range. Try to start a new question for that, as this is off-topic here (: – Freddie Chopin Mar 12 '17 at 15:31

Although the question has already been answered, I think it would be helpful to see how HAL uses HAL_GetTick() to count milliseconds. This can be seen in HAL's HAL_Delay(uint32_t Delay) function.

Implementation of HAL_Delay(uint32_t Delay) from stm32l0xx_hal.c:

/**
  * @brief This function provides minimum delay (in milliseconds) based
  *        on variable incremented.
  * @note In the default implementation , SysTick timer is the source of time base.
  *       It is used to generate interrupts at regular time intervals where uwTick
  *       is incremented.
  * @note This function is declared as __weak to be overwritten in case of other
  *       implementations in user file.
  * @param Delay specifies the delay time length, in milliseconds.
  * @retval None
  */
__weak void HAL_Delay(uint32_t Delay)
{
  uint32_t tickstart = HAL_GetTick();
  uint32_t wait = Delay;

  /* Add a period to guaranty minimum wait */
  if (wait < HAL_MAX_DELAY)
  {
    wait++;
  }

  while((HAL_GetTick() - tickstart) < wait)
  {
  }
}
Eliahu Aaron

I had the same problem, but then found a library function in PlatformIO which returns the number of microseconds since startup.

uint32_t getCurrentMicros(void)
{
  /* Ensure COUNTFLAG is reset by reading SysTick control and status register */
  LL_SYSTICK_IsActiveCounterFlag();
  uint32_t m = HAL_GetTick();
  const uint32_t tms = SysTick->LOAD + 1;
  __IO uint32_t u = tms - SysTick->VAL;
  if (LL_SYSTICK_IsActiveCounterFlag()) {
    m = HAL_GetTick();
    u = tms - SysTick->VAL;
  }
  return (m * 1000 + (u * 1000) / tms);
}

It is located in ~/.platformio/packages/framework-arduinoststm32/libraries/SrcWrapper/src/stm32/clock.c

But it looks like STM32CubeIDE doesn't have it, so I just copied it from PlatformIO. I also had to copy the LL_SYSTICK_IsActiveCounterFlag() function:

static inline uint32_t LL_SYSTICK_IsActiveCounterFlag(void)
{
  return ((SysTick->CTRL & SysTick_CTRL_COUNTFLAG_Msk) == (SysTick_CTRL_COUNTFLAG_Msk));
}
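
Note how the function reads HAL_GetTick() and SysTick->VAL, then re-reads both if COUNTFLAG shows that SysTick rolled over in between; without that check the result could occasionally jump backwards by up to a millisecond when the read happens to straddle a tick.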
Sergey

When viewing my debugger, I can see that the global variable uwTick is available, and it appears to hold the same value as the result of calling HAL_GetTick(), which I had stored in my own global variable for comparison.

As per the docs:

void HAL_IncTick(void)

This function is called to increment a global variable "uwTick" used as application time base.

Note:
In the default implementation, this variable is incremented each 1ms in Systick ISR.
This function is declared as __weak to be overwritten in case of other implementations in user file.
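
In the HAL sources the matching getter is, in its default (weak) implementation, essentially just:

__weak uint32_t HAL_GetTick(void)
{
  return uwTick;   /* the counter that HAL_IncTick() increments every 1 ms */
}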

scottc11

I needed a timestamp at 1 µs precision, and using TIM5 as described above worked, but a few tweaks were necessary. Here's what I came up with.

/* Initialization */
__HAL_RCC_TIM5_CLK_ENABLE();
TIM5->PSC = HAL_RCC_GetPCLK1Freq() / 500000;
TIM5->CR1 = TIM_CR1_CEN;
TIM5->CNT = -10;

/* Reading the time */
uint32_t microseconds = TIM5->CNT << 1;

I did not fully explore why I had to do what I did, but I realized two things very quickly. (1) The prescaler scaling was not working, although it looked right; this was one of several things I tried to get it to work (basically a half-µs clock, and divide the result by 2). (2) The clock was already running and gave strange results at first. I tried several unsuccessful things to stop, reprogram and restart it, and setting the count to -10 was a crude but effective way to just let it complete its current cycle and then very quickly start working as desired. There are certainly better ways of achieving this. But overall this is a simple way of getting an accurate event timestamp with very low overhead.

  • It is an example of very bad coding, DV. – 0___________ Jul 14 '18 at 13:13
  • And wrong in at least two ways: A `PSC` value of `0` means prescaling of factor `1`, so your calculated prescaler value is off by 1 (needs a `-1`). Your code doesn't *divide* the `CNT` value but *multiplies* by two via a *left* shift. - So you 'tweaked' the correct solution given to one that gives a slightly off (prescaler!) value which is 4x what it should be and say that this works (better) for you? – JimmyB Jul 20 '18 at 13:34