You could use a combination of "sort" and "join" from bash instead of having to write it in awk/sed, and it is likely to be even faster:
key.csv (id,name)
1,homer
2,marge
3,bart
4,lisa
5,maggie
data.csv (name,animal,owner,age)
snowball,dog,3,1
frosty,yeti,1,245
cujo,dog,5,4
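If you want to reproduce this at home, the two sample files can be created with here-documents:

cat > key.csv <<'EOF'
1,homer
2,marge
3,bart
4,lisa
5,maggie
EOF

cat > data.csv <<'EOF'
snowball,dog,3,1
frosty,yeti,1,245
cujo,dog,5,4
EOF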
Now, you need to sort both files on their id columns first (column 1 in key.csv, column 3 in data.csv):
sort -t, -k1,1 key.csv > sorted_keys.csv
sort -t, -k3,3 data.csv > sorted_data.csv
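One caveat worth adding here: join compares lines using the collation order of the current locale, and if sort and join disagree about ordering, join can complain that the input is not sorted or produce incomplete output. Forcing plain byte order for both commands sidesteps this:

LC_ALL=C sort -t, -k1,1 key.csv > sorted_keys.csv
LC_ALL=C sort -t, -k3,3 data.csv > sorted_data.csv

and run the join below with the same LC_ALL=C prefix.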
Now join the two files:
join -1 1 -2 3 -o "2.1 2.2 1.2 2.4" -t, sorted_keys.csv sorted_data.csv > replaced_data.csv
This should produce (ordered by the join key, since both inputs are sorted on it):
frosty,yeti,homer,245
snowball,dog,bart,1
cujo,dog,maggie,4
The -o "2.1 2.2 1.2 2.4" option specifies which fields from the two files appear in the final output, in file.field notation: 2.1 is the first field of the second file (the pet's name in sorted_data.csv), 1.2 is the second field of the first file (the owner's name in sorted_keys.csv), and so on.
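For example (purely as an illustration, not part of the pipeline above), if you also wanted to keep the numeric owner id next to the owner's name, you could add field 2.3 to the list:

join -1 1 -2 3 -o "2.1 2.2 2.3 1.2 2.4" -t, sorted_keys.csv sorted_data.csv

which would print lines like frosty,yeti,1,homer,245.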
This approach is pretty fast for finding and replacing across multiple gigabytes of data compared to other scripting approaches. I haven't done a direct comparison to sed/awk, but writing a bash script that wraps these commands is much easier (for me, at least) than writing the equivalent in sed/awk.
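As a rough sketch of what such a wrapper could look like (the script name, file names, column numbers, and cleanup behavior are assumptions based on the example above; adjust them for your data):

#!/bin/bash
# replace_ids.sh -- join a key file onto a data file, replacing ids with names.
# Usage: ./replace_ids.sh key.csv data.csv > replaced_data.csv
set -euo pipefail

key_file=$1
data_file=$2

# Use byte-order collation so sort and join agree on ordering.
export LC_ALL=C

# Sort the key file on its id column and the data file on its owner column.
sort -t, -k1,1 "$key_file"  > sorted_keys.csv
sort -t, -k3,3 "$data_file" > sorted_data.csv

# Join on the id, emitting name,animal,owner-name,age to stdout.
join -1 1 -2 3 -o "2.1 2.2 1.2 2.4" -t, sorted_keys.csv sorted_data.csv

# Clean up the intermediate files.
rm -f sorted_keys.csv sorted_data.csv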
Also, you can speed up the sort by using a recent version of GNU coreutils (8.6 or later), which lets you run the sort in parallel:
sort --parallel=4 -t, -k3,3 data.csv > sorted_data.csv
Here 4 is how many threads you want it to run. The advice I was given is that 2 threads per machine core will usually max out the machine, which is fine if the machine is dedicated to this task.
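For very large files, you may also want to give sort a bigger in-memory buffer so it spills to temporary files on disk less often (the 2G value here is just an illustration; size it to your machine):

sort --parallel=4 -S 2G -t, -k3,3 data.csv > sorted_data.csv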