Ideally, you'd use the UNLOAD command, which writes the query result to S3 storage; you can then pull the files down to your local machine. Here is an example:
UNLOAD ('select * from my_table')
TO 's3://bucket_name/path/to/my_filename_prefix'
WITH CREDENTIALS
'aws_access_key_id=<my_access_key>;aws_secret_access_key=<my_secret_key>'
MANIFEST
GZIP
ALLOWOVERWRITE
ESCAPE
NULL AS '\\N';
Reference: http://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD_command_examples.html
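Once the UNLOAD finishes, the files (gzipped parts plus a manifest, given the options above) can be pulled down with the AWS CLI or boto3. A minimal boto3 sketch, assuming the bucket name and prefix from the example; adjust them to your own paths:

import boto3

# assumed to match the UNLOAD target above
bucket = 'bucket_name'
prefix = 'path/to/my_filename_prefix'

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get('Contents', []):
        key = obj['Key']
        # save each unloaded part under its own file name in the current directory
        s3.download_file(bucket, key, key.rsplit('/', 1)[-1])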
If running the query from your local machine is the only or preferred option, two things help limit memory usage:

Minimize the operations performed on the data while it is in memory
Loop through the results of the query one row at a time
Here is an example snippet if you need to edit each row before writing it out:

result = conn.execute(s)
row = result.fetchone()

with open(outfile, 'a') as csvfile:
    while row is not None:
        # edit/transform the row here, then format it as a line of text
        txt = ','.join(str(value) for value in row) + '\n'
        csvfile.write(txt)
        row = result.fetchone()
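Note that with many drivers, fetchone() still pulls the entire result set to the client before you iterate over it. If conn is a SQLAlchemy connection over a PostgreSQL-compatible driver such as psycopg2, you can request a server-side cursor so rows are streamed in chunks instead. A small sketch, assuming engine is your SQLAlchemy engine and s is the same query as above:

# ask the driver for a server-side cursor so rows are fetched in batches
conn = engine.connect().execution_options(stream_results=True)
result = conn.execute(s)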
Otherwise, if no per-row edits are needed, just write all rows with the csv module:

import csv

# cursor is a DB-API cursor for the same query (e.g. from a psycopg2 connection)
with open(outfile, 'w', newline='') as f:
    writer = csv.writer(f)
    # dump column titles (optional)
    writer.writerow(x[0] for x in cursor.description)
    # dump rows
    writer.writerows(cursor.fetchall())
snippet credit: https://stackoverflow.com/a/2952829/7853322
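Keep in mind that fetchall() loads every row into memory at once; for a large result, fetching in batches keeps memory bounded. A sketch, assuming the same DB-API cursor and csv writer as above:

# write rows in batches so only one batch is held in memory at a time
while True:
    rows = cursor.fetchmany(10000)
    if not rows:
        break
    writer.writerows(rows)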