I've done something like this before but the constraints I had were:
The invocation was by a third party (using Control-M -- other scheduling tools are available including cron).
The target was an ETL into MySQL.
The platform was Linux at my end and (obviously) Windows for the MSSQLSERVER end.
To present the service, we used Apache and a trivial trio of PHP scripts that took a noun (the database schema name) and a verb (start/stop/status) and then queued, killed, or peeked at the job status for the relevant extract.
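The real front end was those three PHP scripts behind Apache, but the shape of the dispatch was roughly this (a shell sketch purely to illustrate; the spool layout is made up):

    #!/bin/sh
    # Hypothetical sketch of the noun/verb dispatcher. $1 = schema, $2 = verb.
    # The real thing was three small PHP scripts behind Apache.
    QUEUE=/var/spool/extract        # assumed spool directory (pending/kill/status)
    schema="$1"; verb="$2"
    case "$verb" in
      start)  touch "$QUEUE/pending/$schema" ;;   # queue the extract
      stop)   touch "$QUEUE/kill/$schema" ;;      # ask the monitor loop to kill it
      status) cat "$QUEUE/status/$schema" 2>/dev/null || echo idle ;;
      *)      echo "usage: $0 <schema> start|stop|status" >&2; exit 1 ;;
    esac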
A start pushed a task onto a queue that was monitored by a simple (probably cronnable) loop written in bash, which invoked isql to dump each of the schema's tables.
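The monitor loop looked something like the sketch below; the spool layout and the list_tables helper are assumptions, and the credentials are assumed to be set elsewhere:

    #!/bin/sh
    # Rough shape of the cronnable monitor loop. list_tables is a hypothetical
    # helper that prints one table name per line (see the discovery query below).
    QUEUE=/var/spool/extract
    for job in "$QUEUE"/pending/*; do
        [ -e "$job" ] || continue                 # nothing queued
        schema=$(basename "$job")
        echo running > "$QUEUE/status/$schema"
        for table in $(list_tables "$schema"); do
            # $username and $password are assumed to be set elsewhere
            echo "select * from $table" | isql -b -q -d\| "$schema" "$username" "$password" > "/tmp/$table.csv"
        done
        echo idle > "$QUEUE/status/$schema"
        rm -f "$job"
    done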
We needed to discover the schema and the table structure dynamically, so we used something like...
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = '<Schema>';
We needed to filter out temporary tables that matched simple patterns -- I think I used sed for that. If you are not worried about reimporting into a different RDBMS, you can skip the step of reconstructing a CREATE TABLE statement (we didn't find an MSSQLSERVER equivalent of SHOW CREATE TABLE and did something else).
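One way to wire that up with the same isql client, assuming the scratch tables matched a simple prefix such as TMP_ (the prefix, DSN naming and credentials are all assumptions):

    # List the schema's tables, dropping scratch/temporary ones and blank lines.
    echo "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = '$schema';" \
      | isql -b -d\| "$schema" "$username" "$password" \
      | sed -e '/^TMP_/d' -e '/^[[:space:]]*$/d' > "/tmp/$schema.tables"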
To dump the table, we simply invoked the isql client. If you are on Windows, you can use the native MSSQLSERVER client instead.
echo "select * from $table" | isql -b -q -d\| $schema $username $p/w > /tmp/$table.csv
This dumps using pipes (|) rather than commas, since we had a lot of commas in free-text data, and it quote-encapsulates string fields. I think I also edited the source for isql (it was open source and brilliant for the job) to escape embedded quotation marks within string types, to make the load into MySQL easier (I had to do this for our Oracle sources too).
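For completeness, the matching load of a pipe-delimited, quote-encapsulated dump on the MySQL side can look roughly like this; the path, database, table and escaping convention are assumptions, and the exact FIELDS clause depends on how the embedded quotes ended up being escaped:

    # Sketch of the downstream load; needs local_infile enabled on the server too.
    mysql --local-infile=1 -e "
      LOAD DATA LOCAL INFILE '/tmp/$table.csv'
      INTO TABLE $table
      FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '\"' ESCAPED BY '\\\\'
      LINES TERMINATED BY '\n';" "$target_db"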
The stop was similar: a job would issue a number of process kills (found by walking the process tree; more elegant methods are available, I'm sure). It needed to be brutal and immediate, because it was only invoked when an extract collided with the online day, and missing one day's extract was deemed less important than affecting the start of the business day. The script also tidied up the status and marked the extract as bad for the downstream services, so they ignored it and continued with the previous unload.
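The kill itself was essentially a recursive walk of the process tree, something like the sketch below (the PID file is an assumption; process groups or pkill against a session would be tidier):

    #!/bin/sh
    # Blunt process-tree kill: recurse down the children, then kill the parent.
    kill_tree() {
        for child in $(pgrep -P "$1"); do
            kill_tree "$child"
        done
        kill -9 "$1" 2>/dev/null
    }
    kill_tree "$(cat "/var/run/extract-$1.pid")"    # $1 = schema name, PID file assumed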
The status was a convenience for the client of this whole service. What we learned is that you should be prepared for them to issue a start followed by a busy loop on status (between 20 and 200 calls per second!) until, fifteen minutes to three hours later, the status returns idle and the client issues a stop (make sure you no-op a stop on an idle extract). We simply added a sleep to the status service, as it was sadly too difficult to convince the client to change their logic.
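Concretely, those two defensive tweaks amount to something like this (again a shell sketch against the hypothetical spool layout above; the real service was PHP):

    #!/bin/sh
    # Throttle status polling with a small sleep; make stop a no-op when idle.
    QUEUE=/var/spool/extract
    schema="$1"; verb="$2"
    state=$(cat "$QUEUE/status/$schema" 2>/dev/null || echo idle)
    case "$verb" in
      status) sleep 2; echo "$state" ;;                              # slows the 20-200/s busy loop
      stop)   [ "$state" = idle ] || touch "$QUEUE/kill/$schema" ;;  # no-op a stop on an idle extract
    esac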
This was for a central government enforcement agency; the client above was one of our outsourced IT service providers. The ecosystem was (is!) running against about half a dozen sources, including Oracle, MSSQLSERVER and SESAM.