A character set is a set of characters, i.e. "glyphs", i.e. visual symbols representing units of communication. The letter a is a glyph, and so is € (the euro sign). A character set usually maps an integer (a codepoint) to each character, but it's the encoding that dictates the binary/byte-level representation of each character.
I'm a Ruby programmer, so here are some examples to help you understand the concepts.
The following shows how the Unicode character set maps codepoints to characters, but not how those characters are stored as bytes. (Ruby 1.9 defaults to Unicode strings.)
>> 'a'.codepoints.to_a
=> [97]
>> '€'.codepoints.to_a
=> [8364]
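As a small aside, the decimal 8364 is simply the hexadecimal 20AC, i.e. the codepoint you'll usually see written as U+20AC:

>> '€'.ord
=> 8364
>> '%04X' % '€'.ord
=> "20AC"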
Since 8364 (base 10) is too large to fit in one byte, various encoding strategies exist to translate Unicode codepoints into one or more bytes. UTF-8 is probably the most popular of these encodings. (Wikipedia describes the UTF-8 encoding algorithm, if you want to delve into the implementation.) Note that the UTF-8 encoding only makes sense in the context of the Unicode character set.
The following shows how the UTF-8 encoding stores each Unicode character as bytes (0 through 255 in base 10). (Ruby 1.9's default encoding is UTF-8.)
>> 'a'.bytes.to_a
=> [97]
>> '€'.bytes.to_a
=> [226, 130, 172]
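If you're curious where those three bytes come from, here is a rough sketch of the UTF-8 rules for codepoints up to U+FFFF (a simplification: real UTF-8 also has a 4-byte form for higher codepoints and rejects the surrogate range):

# Minimal sketch of UTF-8 encoding, assuming codepoint < 0x10000.
def utf8_bytes(codepoint)
  if codepoint < 0x80        # 1 byte:  0xxxxxxx (the ASCII range)
    [codepoint]
  elsif codepoint < 0x800    # 2 bytes: 110xxxxx 10xxxxxx
    [0xC0 | (codepoint >> 6), 0x80 | (codepoint & 0x3F)]
  else                       # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
    [0xE0 | (codepoint >> 12),
     0x80 | ((codepoint >> 6) & 0x3F),
     0x80 | (codepoint & 0x3F)]
  end
end

utf8_bytes(97)    # => [97]
utf8_bytes(8364)  # => [226, 130, 172]

The 97 case also shows why plain ASCII text is already valid UTF-8.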
Here's the same thing in the ISO-8859-15 character set:
>> 'a'.encode('iso-8859-15').codepoints.to_a
=> [97]
>> '€'.encode('iso-8859-15').codepoints.to_a
=> [164]
And the ISO-8859-15 encoding:
>> 'a'.encode('iso-8859-15').bytes.to_a
=> [97]
>> '€'.encode('iso-8859-15').bytes.to_a
=> [164]
Notice that the ISO-8859-15 codepoints match the byte representation: ISO-8859-15 is a single-byte encoding, so every codepoint fits directly into a single byte.
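To tie the two together: the same character round-trips between the encodings, and only the bytes differ. A minimal illustrative snippet:

euro_utf8   = '€'                          # Ruby 1.9 default: UTF-8
euro_latin9 = euro_utf8.encode('iso-8859-15')

euro_utf8.bytes.to_a                       # => [226, 130, 172]  three bytes
euro_latin9.bytes.to_a                     # => [164]            one byte
euro_latin9.encode('utf-8') == euro_utf8   # => true, same character either way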
Here's a blog entry that might be helpful: http://graysoftinc.com/character-encodings/what-is-a-character-encoding. Entries 1 through 3 are good if you don't want to get too Ruby-specific.