The numeric extension for boost::gil contains channel-level functors like this one:
    template <typename Channel1, typename Channel2, typename ChannelR>
    struct channel_plus_t : public std::binary_function<Channel1, Channel2, ChannelR> {
        ChannelR operator()(typename channel_traits<Channel1>::const_reference ch1,
                            typename channel_traits<Channel2>::const_reference ch2) const {
            return ChannelR(ch1) + ChannelR(ch2);
        }
    };
When called with two uint8 channel values, the result will overflow if ChannelR is also uint8: the sum is computed as an int after integral promotion, but converting it back to ChannelR on return silently wraps it around.
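
To make the failure mode concrete, here is a minimal plain-C++ snippet (no GIL involved) that reproduces the wrap-around:

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t a = 200, b = 100;
        // a + b is computed as int (300), but narrowing it back to
        // uint8_t keeps only the low 8 bits: 300 % 256 == 44.
        std::uint8_t r = static_cast<std::uint8_t>(a + b);
        std::cout << static_cast<int>(r) << '\n'; // prints 44, not 255
    }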
I think the calculation should

- use a different type for the processing (how can this type be derived from the templated channel types?), and
- clip the result to the range of ChannelR to get a saturated result (using boost::gil::channel_traits<ChannelR>::min_value() / ...max_value()?),

as sketched below.
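
Here is a minimal sketch of what I have in mind. The promote trait is hypothetical (not part of GIL) and only covers narrow integer and floating-point channels; channel_traits and its min_value()/max_value() are the real GIL facilities:

    #include <boost/gil/channel.hpp>
    #include <type_traits>

    // Hypothetical promotion trait: map each channel type to a wider
    // working type. The int default is fine for 8/16-bit channels only;
    // a real trait would need more cases (e.g. 32-bit channels).
    template <typename Ch> struct promote         { using type = int;    };
    template <>            struct promote<float>  { using type = float;  };
    template <>            struct promote<double> { using type = double; };

    template <typename Channel1, typename Channel2, typename ChannelR>
    struct channel_plus_sat_t {
        ChannelR operator()(typename boost::gil::channel_traits<Channel1>::const_reference ch1,
                            typename boost::gil::channel_traits<Channel2>::const_reference ch2) const {
            using work_t = typename std::common_type<
                typename promote<Channel1>::type,
                typename promote<Channel2>::type>::type;
            const work_t sum = work_t(ch1) + work_t(ch2);
            // Clamp to the representable range of the result channel.
            const work_t lo = work_t(boost::gil::channel_traits<ChannelR>::min_value());
            const work_t hi = work_t(boost::gil::channel_traits<ChannelR>::max_value());
            return ChannelR(sum < lo ? lo : (sum > hi ? hi : sum));
        }
    };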
How can this be done in a way that still allows for performance-optimized results?

- Convert to the biggest possible type? That sounds counterproductive...
- Provide an arsenal of template specializations (see the sketch below)? Any better ideas?
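
For what it's worth, the specialization route might look like the following; channel_plus_sat_t is my hypothetical saturating functor from above, and only the uint8 specialization is shown:

    #include <cstdint>

    // Generic (clamping) primary template, as sketched above.
    template <typename Channel1, typename Channel2, typename ChannelR>
    struct channel_plus_sat_t;

    // Fast path for the common 8-bit case: do the math in unsigned int
    // and saturate with a single comparison. Compilers can turn this
    // pattern into hardware saturating adds (e.g. PADDUSB on SSE2).
    template <>
    struct channel_plus_sat_t<std::uint8_t, std::uint8_t, std::uint8_t> {
        std::uint8_t operator()(std::uint8_t ch1, std::uint8_t ch2) const {
            const unsigned sum = unsigned(ch1) + unsigned(ch2);
            return std::uint8_t(sum > 255u ? 255u : sum);
        }
    };

    // usage: channel_plus_sat_t<std::uint8_t, std::uint8_t, std::uint8_t>()(200, 100) == 255

But writing one such specialization per channel-type combination quickly becomes exactly the arsenal I would like to avoid.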