Example C++ solution:
#include <algorithm>
#include <iterator>
#include <iostream>
#include <string>
#include <boost/iterator/counting_iterator.hpp>
#include <boost/lexical_cast.hpp>

namespace {
    bool is_palindrome(unsigned int i) {
        const std::string s = boost::lexical_cast<std::string>(i);
        return std::equal(s.begin(), s.end(), s.rbegin());
    }
    const unsigned int stop = 1000000;
}

int main() {
    std::remove_copy_if(boost::counting_iterator<unsigned int>(1),
                        boost::counting_iterator<unsigned int>(stop),
                        std::ostream_iterator<unsigned int>(std::cout, "\n"),
                        std::not1(std::ptr_fun(is_palindrome)));
}
(I used std::remove_copy_if to make up for the lack of std::copy_if, which arrives in C++0x.)
For completeness' sake, I also implemented a version that generates the palindromes directly rather than testing candidates against a predicate:
#include <boost/lexical_cast.hpp>
#include <boost/iterator/counting_iterator.hpp>
#include <boost/iterator/transform_iterator.hpp>
#include <algorithm>
#include <iterator>
#include <iostream>
#include <string>
#include <cassert>

namespace {
    // ver == 1 mirrors all but the last digit (odd-length palindromes),
    // ver == 0 mirrors every digit (even-length palindromes)
    template <int ver>
    unsigned int make_palindrome(unsigned int i) {
        std::string s = boost::lexical_cast<std::string>(i);
        assert(s.size());
        s.reserve(s.size()*2);
        std::reverse_copy(s.begin(), s.end()-ver, std::back_inserter(s));
        return boost::lexical_cast<unsigned int>(s);
    }
}

int main() {
    typedef boost::counting_iterator<unsigned int> counter;
    // Each transformed sequence is already sorted, so std::merge interleaves
    // them in order. The end counter is 1000 so that i runs 1..999 and the
    // output reaches 99999 and 999999, matching the first program's limit.
    std::merge(boost::make_transform_iterator(counter(1),    make_palindrome<1>),
               boost::make_transform_iterator(counter(1000), make_palindrome<1>),
               boost::make_transform_iterator(counter(1),    make_palindrome<0>),
               boost::make_transform_iterator(counter(1000), make_palindrome<0>),
               std::ostream_iterator<unsigned int>(std::cout, "\n"));
}
(I think I could have used Boost.Range instead of std::merge for this.)
The discussion point, I guess, is: "is this a better* way to write it?". What I like about writing problems like the palindrome in this style is that you get the "if it compiles, it's probably correct" heuristic on your side. Even if there is a bug, it will still be handled sensibly at run time (e.g. as an exception from lexical_cast).
It's a markedly different way of thinking from C programming (but strangely similar to Haskell in some ways). It brings lots of extra safety, but the compiler error messages can be terrible, and shifting the way you think about problems is hard.
At the end of it all, though, what matters is "is it less work for fewer bugs?". I can't answer that without some metrics to help.
* For some definition of better.