Snowflake's COPY INTO LOCATION statement can write newline-delimited JSON (ndjson) when unloaded with a JSON file format, which already makes it simple to split the records apart with a little local processing.
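For illustration, here is a minimal sketch of such an export from Python, assuming a hypothetical table MY_TABLE and stage @MY_STAGE (all connection details are placeholders), using OBJECT_CONSTRUCT(*) to serialize each row as one JSON object per line:

```python
# Sketch: unload a table as uncompressed ndjson onto a stage.
# MY_TABLE, MY_STAGE, and all connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="my_schema",
)

copy_sql = """
COPY INTO @MY_STAGE/export/
FROM (SELECT OBJECT_CONSTRUCT(*) FROM MY_TABLE)
FILE_FORMAT = (TYPE = JSON COMPRESSION = NONE)
OVERWRITE = TRUE
"""

cur = conn.cursor()
try:
    cur.execute(copy_sql)  # each output file contains one JSON object per line
finally:
    cur.close()
    conn.close()
```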
It appears you have already tried iterating row by row to perform single-row exports and found it, as expected, slow. That may still be a viable option if this is only a one-time operation.
Snowflake does not offer a parallel, per-row export (that I am aware of), so it may be simpler to export the entire table normally and then use a downstream parallel processing framework (such as a Spark job) to split the output into one file per record; see the sketch below. Because ndjson is newline-delimited, the exported files are trivially splittable and easy to process in distributed frameworks.
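A rough PySpark sketch of that splitting step, assuming the unloaded ndjson files sit under export/ and that per_row_out/ is a directory reachable from every executor (both paths are placeholders):

```python
# Sketch: split ndjson files into one file per record with PySpark.
# "export/*.json" and "per_row_out/" are placeholder paths.
import os
import uuid

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("split-ndjson").getOrCreate()

# Read the unloaded files as plain text: one DataFrame row per ndjson line.
lines = spark.read.text("export/*.json")

def write_partition(rows):
    # Each executor writes its own rows; swap the local writes for an
    # S3/GCS client if the output should land in object storage.
    os.makedirs("per_row_out", exist_ok=True)
    for row in rows:
        path = os.path.join("per_row_out", f"{uuid.uuid4()}.json")
        with open(path, "w") as f:
            f.write(row.value + "\n")

lines.foreachPartition(write_partition)
spark.stop()
```

If each file needs a meaningful name, include a key column in the exported JSON (for example via OBJECT_CONSTRUCT) and parse it inside write_partition to build the filename instead of using a random UUID.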
P.S. Setting the MAX_FILE_SIZE copy option to a very low value (below your smallest row size) will not guarantee one file per row, because writes are performed over sets of rows read together from the table.