The class is defined in core.js
within the CryptoJS library:
/**
* An array of 32-bit words.
*
* @property {Array} words The array of 32-bit words.
* @property {number} sigBytes The number of significant bytes in this word array.
*/
var WordArray = C_lib.WordArray = Base.extend({
Byte values placed in a WordArray occupy the most significant bits of each word first (I've checked this against the source code).
For instance, if you put the string "he" into it as UTF-8 (or Latin-1 or ASCII), you get a one-element words array containing the value 0x68650000, with sigBytes set to 2. This is because UTF-8 encodes these characters as single 8-bit bytes, and those two bytes are packed into the topmost 16 bits of the first 32-bit word.
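A minimal sketch of that packing (this is not CryptoJS's actual code; the function name bytesToWordArray is mine, chosen for illustration):

```javascript
// Pack UTF-8 bytes into big-endian 32-bit words, the way CryptoJS's
// Utf8 parser lays them out. Illustrative sketch, not library code.
function bytesToWordArray(bytes) {
  const words = [];
  for (let i = 0; i < bytes.length; i++) {
    // Each byte goes into the most significant free byte of its word.
    words[i >>> 2] = (words[i >>> 2] | 0) | (bytes[i] << (24 - (i % 4) * 8));
  }
  return { words: words, sigBytes: bytes.length };
}

const he = bytesToWordArray([0x68, 0x65]); // "he" as UTF-8 bytes
console.log(he.words[0].toString(16), he.sigBytes); // 68650000 2
```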
Generally, (symmetric) cryptographic algorithms are specified to operate on bits. However, implementations are usually optimized to work on 32- or 64-bit words, because those are the native sizes of 32- and 64-bit machines such as x86 or x64. So any library in any language will internally convert the input to words before the operations are performed.
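To illustrate why word-at-a-time processing pays off, here is a hedged sketch (using modern JavaScript typed arrays, which are not what CryptoJS itself uses) that XORs two byte buffers four bytes per step:

```javascript
// Illustrative sketch of the word-at-a-time optimization: XOR two byte
// buffers by viewing them as 32-bit words, so one operation covers
// four bytes. Assumes equal lengths that are a multiple of 4; XOR is
// endian-agnostic, so the byte order within each word doesn't matter.
function xorWordwise(a, b) {
  const out = new Uint8Array(a.length);
  const wa = new Uint32Array(a.buffer, a.byteOffset, a.length >>> 2);
  const wb = new Uint32Array(b.buffer, b.byteOffset, b.length >>> 2);
  const wo = new Uint32Array(out.buffer);
  for (let i = 0; i < wa.length; i++) {
    wo[i] = wa[i] ^ wb[i]; // four bytes XORed in one operation
  }
  return out;
}

const x = xorWordwise(new Uint8Array([1, 2, 3, 4]),
                      new Uint8Array([255, 255, 255, 255]));
console.log(Array.from(x)); // [ 254, 253, 252, 251 ]
```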
Usually, though, libraries define their operations over bytes rather than words. CryptoJS is a bit special in that it operates directly on a buffer of words. That's rather logical, since classic JavaScript doesn't define a byte array type (typed arrays such as Uint8Array came later). It also skips a step: otherwise you would have to convert from UTF-8 to bytes, and then from bytes to words again within the algorithm implementation.
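Going the other way, here is a hedged sketch of unpacking a WordArray back into bytes, honoring sigBytes so that the zero padding in the last word is dropped (the helper name wordArrayToBytes is mine, not the library's):

```javascript
// Unpack a { words, sigBytes } structure into individual bytes.
// Illustrative sketch of what CryptoJS's byte-oriented encoders do.
function wordArrayToBytes(wordArray) {
  const bytes = [];
  for (let i = 0; i < wordArray.sigBytes; i++) {
    // Read each byte out of the big-endian word, most significant first.
    bytes.push((wordArray.words[i >>> 2] >>> (24 - (i % 4) * 8)) & 0xff);
  }
  return bytes;
}

const bytes = wordArrayToBytes({ words: [0x68650000], sigBytes: 2 });
console.log(bytes); // [ 0x68, 0x65 ] — "he" again
```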
CryptoJS also has a 64-bit word array, undoubtedly for algorithms such as SHA-512 that are specified in terms of 64-bit operations.
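Since JavaScript numbers cannot represent 64-bit integers exactly, a 64-bit word has to be modeled as a pair of 32-bit halves. The sketch below shows the idea with a single XOR operation; CryptoJS's x64 core takes a similar high/low approach, but this is a simplified illustration, not its actual API:

```javascript
// Model a 64-bit word as two unsigned 32-bit halves and XOR two of
// them half by half. Illustrative sketch only.
function x64Xor(a, b) {
  return {
    high: (a.high ^ b.high) >>> 0, // >>> 0 keeps the result unsigned
    low: (a.low ^ b.low) >>> 0,
  };
}

const r = x64Xor({ high: 0xffffffff, low: 0x0 },
                 { high: 0x0f0f0f0f, low: 0x1 });
console.log(r.high.toString(16), r.low.toString(16)); // f0f0f0f0 1
```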