Perhaps I'm not imaginative enough, but I think you'll need to download the file to your server and then upload it to the SFTP server.
The piece you're missing is reading from S3; with ruby-aws-sdk it's straightforward, see here: http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html
But if the files grow larger than 5MB, you can use IO streams.
As far as I know, Net::SFTP#upload! accepts an IO object as its input. That's one side of the equation.
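As a sanity check that plain IO duck-typing is all the upload side needs, here's a minimal sketch; `chunked_copy` is a hypothetical stand-in that consumes its source the same way an uploader would, and `StringIO` stands in for the streaming buffer:

```ruby
require 'stringio'

# Hypothetical stand-in for an uploader: consumes any object that
# responds to #read, which is how Net::SFTP#upload! can take an IO source.
def chunked_copy(source_io, dest_io, chunk_size = 16 * 1024)
  while (chunk = source_io.read(chunk_size))
    dest_io.write(chunk)
  end
end

source = StringIO.new("id,name\n1,alice\n2,bob\n")
dest   = StringIO.new
chunked_copy(source, dest)
# With the real gem this would look like: sftp.upload!(source, '/remote/file.csv')
```

The point is only that nothing here requires a file on disk: anything readable in chunks can be the source.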
Then use ruby-aws-sdk to download the CSVs using streaming reads (same reference: http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html). So in one thread, write to 'buffer' (an instance of a class deriving from IO):
s3 = AWS::S3.new
obj = s3.buckets['my-bucket'].objects['key']

# Streaming read: each chunk is yielded as it arrives from S3
obj.read do |chunk|
  buffer.write(chunk)
end
In another thread run the upload using the 'buffer' object as the source.
Note that I haven't used this solution myself but this should get you started.
Also note that you'll be buffering incoming data in memory. Unless you use a temporary file and have sufficient disk space on the server, you need to bound the amount of data held in 'buffer' (i.e. have #write block, or pause reading, once the buffer reaches its maximum size).
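One way to get a bounded buffer from the standard library, without writing an IO subclass yourself, is IO.pipe: the OS gives you backpressure for free, because writes block once the pipe's internal buffer is full. This sketch simulates the S3 side with canned chunks; in real code the producer loop would be the streaming `obj.read` block and the consumer would be something like `sftp.upload!(reader, remote_path)` (hedged — I haven't run this against S3 or SFTP):

```ruby
reader, writer = IO.pipe

# Producer thread: stands in for the S3 streaming read
# (obj.read do |chunk| writer.write(chunk) end).
producer = Thread.new do
  ["id,name\n", "1,alice\n", "2,bob\n"].each { |chunk| writer.write(chunk) }
  writer.close # signals EOF to the reading side
end

# Consumer thread: stands in for the SFTP upload,
# e.g. sftp.upload!(reader, '/remote/file.csv').
received = +""
consumer = Thread.new do
  while (chunk = reader.read(1024))
    received << chunk
  end
  reader.close
end

producer.join
consumer.join
```

Because the pipe blocks the writer when full, memory use stays bounded no matter how large the CSV is, which addresses the sizing concern above without any manual #write bookkeeping.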
This is Ruby; it's not as if it has first-class support for concurrency.
I'd personally either upload to S3 and to the SFTP server from the same code, or, if that's impossible, download the entire CSV file and then upload it to the SFTP server. I'd switch to streams only if it's necessary as an optimization. (Just my $.0002.)