
To compute the loss while training a model we can use the cntk.squared_error() function, like this:

loss = cntk.squared_error(z, l)

But I am interested in computing the loss as an absolute error. The code below doesn't work:

loss = cntk.absolute_error(z, l)

It gives this error:

AttributeError: module 'cntk' has no attribute 'absolute_error'

Is there any built-in function in the CNTK toolkit to compute the absolute error? I am new to deep learning, so I don't know much. Thanks for the help!

Ank

1 Answer


There's no out-of-the-box L1 loss function in CNTK, but you can provide a custom one:

def absolute_error(z, l):
    # mean absolute error between the prediction z and the label l
    return cntk.reduce_mean(cntk.abs(z - l))
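In case it helps, here is a minimal sketch of how such a custom loss can be wired into a Trainer, reusing the absolute_error defined above. It assumes CNTK 2.x and a made-up one-feature regression model; the variable names and data are purely illustrative:

import numpy as np
import cntk

# toy one-feature regression model (hypothetical setup)
x = cntk.input_variable(1)
y = cntk.input_variable(1)
z = cntk.layers.Dense(1)(x)

loss = absolute_error(z, y)           # the custom L1 loss from above
metric = cntk.squared_error(z, y)     # keep squared error as an eval metric

lr = cntk.learning_rate_schedule(0.01, cntk.UnitType.minibatch)
learner = cntk.sgd(z.parameters, lr)
trainer = cntk.Trainer(z, (loss, metric), [learner])

features = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)
labels = np.array([[2.0], [4.0], [6.0]], dtype=np.float32)
trainer.train_minibatch({x: features, y: labels})
print(trainer.previous_minibatch_loss_average)

Any differentiable CNTK expression works as a loss, so the custom function behaves just like the built-in ones during training.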
Maxim
  • Looks cool, but I get some pretty funny results. I have a small dataset on which I run the squared_error. This gives me a value of `5.8343764679341374`. Now when I use the absolute_error function above, I get the value `17.3852909909019` for the same data samples. Which is weird in my opinion. Squared should be much higher. – Willem Meints Jan 08 '19 at 15:11