Disabling indexes is a good idea when it comes to loading large quantities of data, but... the big problem is clustered indexes. In SQL Server the clustered index is the table (its leaf level holds the actual data rows), so if you disable a clustered index, you’ve disabled the entire table.
Several options suggest themselves, and none of them are simple.
1) Loop through the system views (sys.indexes), extract the table and index names, then generate and execute dynamic SQL to disable each index. Have an “undo” routine to re-enable them, which in practice means rebuilding them. (Be wary--was it a unique index or a unique constraint? Disabling the index behind a primary key or unique constraint also disables any foreign keys that reference it.) This, alas, only works if you do not use clustered indexes. Good luck with that. (There's a sketch of the disabling loop after this list.)
2) As for 1, but skip any clustered indexes. When you load data, make sure it gets loaded in (clustered index) key order, otherwise you'll have poor load times and fragmented tables. (If your data providers are like mine, good luck with that one, too.) A sketch of an ordered bulk load also follows the list.
3) Create tables in your database containing definitions of the indexes on your “loading” tables. Build a routine that loops through them and drops all the indexes (clustered indexes last, since dropping the clustered index while nonclustered indexes remain forces a rebuild of every one of them). This will be fast if you truncate the tables first. Load your data, then loop through and recreate the indexes from scratch (clustered first, for the same reason: any nonclustered index that exists when the clustered index is built gets rebuilt). Use table partitioning to make this less horrible for the rest of the system (e.g. do all the above on the “loading” tables, then use partition switching to move the loaded data into your “live” tables). It took me no little time to build such a system, but it can and will work. A skeleton of the whole dance closes out the list below.
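First, a minimal sketch of the disabling loop from options 1 and 2, assuming SQL Server 2008 or later; the schema filter is illustrative. It uses the string-concatenation trick rather than an explicit cursor, and the WHERE clause skips clustered indexes (option 2) as well as constraint-backed indexes (the “be wary” caveat above):

    -- Build one batch of ALTER INDEX ... DISABLE statements for every
    -- plain nonclustered index, then execute it. Re-enabling is the same
    -- query with DISABLE replaced by REBUILD.
    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
                 + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                 + N' DISABLE;' + CHAR(13)
    FROM sys.indexes AS i
    JOIN sys.tables  AS t ON t.object_id = i.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE i.type = 2                  -- nonclustered only; 1 = clustered, 0 = heap
      AND i.is_primary_key = 0        -- skip constraint-backed indexes; disabling
      AND i.is_unique_constraint = 0  -- a PK also disables referencing foreign keys
      AND s.name = N'dbo';            -- illustrative: limit to the loading schema

    EXEC sys.sp_executesql @sql;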
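Second, for option 2's sequential load, a sketch that assumes the feed is a flat file already sorted by the clustered key; every name here is hypothetical. The ORDER hint lets the engine skip its own sort, but it trusts you, so the file really must be in that order:

    -- Hypothetical table, file path, and key columns throughout.
    BULK INSERT dbo.LoadingTable
    FROM 'C:\feeds\daily_extract.dat'
    WITH (
        TABLOCK,                         -- allows minimal logging under the right recovery model
        ORDER (LoadDate ASC, RowId ASC)  -- must match the clustered index key exactly
    );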
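Finally, a skeleton of option 3, under heavy assumptions: the metadata table, its columns, and every object name are invented for illustration, and a real version must also record whether each “index” is actually a constraint, plus included columns, filegroups, compression settings, and so on:

    -- Illustrative metadata: one row per index on a loading table.
    CREATE TABLE dbo.IndexCatalog (
        TableName   sysname       NOT NULL,
        IndexName   sysname       NOT NULL,
        IsClustered bit           NOT NULL,
        CreateSql   nvarchar(max) NOT NULL  -- the full CREATE INDEX statement
    );

    -- Drop phase: nonclustered first, clustered last, so nothing is
    -- pointlessly rebuilt along the way.
    DECLARE @sql nvarchar(max);
    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT N'DROP INDEX ' + QUOTENAME(IndexName)
             + N' ON dbo.' + QUOTENAME(TableName) + N';'
        FROM dbo.IndexCatalog
        ORDER BY IsClustered;             -- 0 (nonclustered) before 1 (clustered)
    OPEN cur;
    FETCH NEXT FROM cur INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM cur INTO @sql;
    END
    CLOSE cur;
    DEALLOCATE cur;

    -- ...truncate and bulk load the heap here...

    -- Recreate phase: the same loop over CreateSql, but ORDER BY
    -- IsClustered DESC so the clustered index goes on first.

    -- Then swap the loaded data into the live table. Source and target
    -- must be partition-aligned with identical indexes for this to work.
    ALTER TABLE dbo.LoadingTable
        SWITCH PARTITION 3 TO dbo.LiveTable PARTITION 3;  -- partition number illustrative

The switch itself is a metadata-only operation, which is what makes the whole dance worthwhile: the expensive indexing happens off to the side, and the live table is swapped in an instant.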