To work correctly with Unicode you always need to know the encoding of your strings.
The code below doesn't specify an encoding, so it is a bad starting point if you want portable code:
std::string currency = "€";
With C++11 the simplest solution is to use an encoding prefix; for UTF-8 it looks like this:
std::string currency = u8"€";
Now your string is effectively encoded as UTF-8 on all platforms, and accessing the individual chars in the string gives you the individual UTF-8 bytes.
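For illustration, a minimal sketch (assuming a C++11/14/17 compiler; since C++20 the u8 literal has type char8_t and would need a conversion) that prints the three UTF-8 bytes of the Euro sign:

#include <cstdio>
#include <string>

int main() {
    // With the u8 prefix the literal is UTF-8 encoded on every platform.
    std::string currency = u8"€";

    // Iterating over the chars yields the individual UTF-8 bytes;
    // for the Euro sign these are 0xE2 0x82 0xAC.
    for (unsigned char byte : currency) {
        std::printf("0x%02X ", byte);
    }
    std::printf("\n");
    return 0;
}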
If you don't have C++11, then you will probably use wide strings:
std::wstring currency = L"€";
And then use Unicode-specific libraries (ICU, iconv, Qt, MultiByteToWideChar, etc.) to convert your string to UTF-8.
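For example, on Windows a minimal sketch of a wide-string-to-UTF-8 conversion with WideCharToMultiByte could look like this (the function name is just for illustration, and error handling is omitted):

#include <windows.h>
#include <string>

std::string wide_to_utf8(const std::wstring& wide)
{
    if (wide.empty()) return std::string();

    // First call: ask for the required buffer size in bytes.
    int size = WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), (int)wide.size(),
                                   NULL, 0, NULL, NULL);
    std::string utf8(size, '\0');

    // Second call: perform the actual conversion into the buffer.
    WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), (int)wide.size(),
                        &utf8[0], size, NULL, NULL);
    return utf8;
}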
Personally, if you want to write cross-platform code, I would stick with C++11 and use std::string with UTF-8 encoding (via u8"...") internally for all your strings. It's so much easier.
Now about converting your UTF-8 string to Windows-1252: if you only need to convert the € and a few other characters, you can certainly do it yourself with a string compare. But if the requirements (or the list of strings to convert) grow, it's probably better to use one of the libraries already mentioned. The choice is strongly influenced by the platforms on which you want to run your code.
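As a rough idea of what such a hand-rolled conversion could look like (the function name is illustrative, only the € is handled, and a real converter would need a table for the other characters plus a fallback such as '?'):

#include <cstddef>
#include <string>

// Replace the UTF-8 byte sequence of € (0xE2 0x82 0xAC) with the
// Windows-1252 byte 0x80; all other bytes are copied unchanged.
std::string utf8_to_cp1252_euro_only(const std::string& utf8)
{
    std::string result;
    for (std::size_t i = 0; i < utf8.size(); ) {
        if (i + 2 < utf8.size() &&
            (unsigned char)utf8[i]     == 0xE2 &&
            (unsigned char)utf8[i + 1] == 0x82 &&
            (unsigned char)utf8[i + 2] == 0xAC) {
            result += '\x80';  // € in Windows-1252
            i += 3;
        } else {
            result += utf8[i];
            ++i;
        }
    }
    return result;
}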
The Unicode world contains over 100,000 characters. There exist, for example, many variants of the "C" character. Do you want to ignore all of them (e.g. convert them to a question mark) and consider only the plain old "C" and "c", or do you also want to convert a "Ć" into a "C", so that your conversion offers more compatibility?
You may want to give a look at these questions:
Portable and simple unicode string library for C/C++? and
How well is Unicode supported in C++11?