Creating an index on a database table is a one-time operation, and it can be expensive depending on many factors: how many fields are included in the index and of what types, the size of the table being indexed, the hardware the database is running on, and more.
To give a reasonable answer on speeding things up, we would need to know more: the schema of the table, the definition of the index you are creating, whether the data is actually unique if the index enforces uniqueness, your server's hardware specs, your disk speeds and available disk space, whether you are using a RAID array and at what level, and how much RAM you have and how heavily it is utilized.
Now, all that said, the approach below might be faster, but I have not tested it (there is a SQL sketch after the steps):
1. Make a structural duplicate of the table you wish to index.
2. Add the index to the new, empty table.
3. Copy the data from the old table to the new table in chunks.
4. Drop the old table and rename the new one to take its place.
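Here is a minimal sketch of those steps, assuming MySQL, a table named orders with an integer primary key id, and a new index on a customer_id column; all of these names are placeholders for your own schema:

```sql
-- 1. Structural duplicate (CREATE TABLE ... LIKE also copies existing keys).
CREATE TABLE orders_new LIKE orders;

-- 2. Add the new index while the table is still empty.
CREATE INDEX idx_customer_id ON orders_new (customer_id);

-- 3. Copy rows over in chunks to keep each transaction small;
--    repeat with the next id range until all rows are copied.
INSERT INTO orders_new
SELECT * FROM orders
WHERE id >= 1 AND id < 100001;

-- 4. Swap the tables atomically, then drop the old one.
RENAME TABLE orders TO orders_old, orders_new TO orders;
DROP TABLE orders_old;
```

Note that if rows are still being written to the original table during the copy, you would also need some way to capture those changes before the swap.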
My theory is that it will be less expensive to maintain the index as the data is inserted than to dig through all of the existing data and build the index after the fact.