In SQL Server, temp tables perform much better (in terms of execution time) than table variables when working with large data sets (say, inserting or updating 100,000 rows) (reference: SQL Server Temp Table vs Table Variable Performance Testing).
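To make concrete what I mean by "large data", here is roughly the kind of test I have in mind (the table and column names are just placeholders, and the row source is only one way to generate 100,000 rows):

```sql
-- Temp table version
CREATE TABLE #TempTest (Id INT, Val VARCHAR(50));

INSERT INTO #TempTest (Id, Val)
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),  -- sequential ids
       'x'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;                     -- enough rows to take TOP (100000) from

DROP TABLE #TempTest;

-- Table variable version of the same insert
DECLARE @VarTest TABLE (Id INT, Val VARCHAR(50));

INSERT INTO @VarTest (Id, Val)
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       'x'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;
```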
I've seen many articles comparing temp tables and table variables, but I still don't understand what exactly makes temp tables more efficient when working with large data. Is it just how they are designed to behave, or is there something else?