
I'm using the CsvHelper library in an ASP.NET MVC project to export data to CSV. The exported data is either cut off, or, in the case of smaller lists, no data is written at all and I receive a blank CSV file.

My base controller has a method like this (which is called by controllers inheriting from this class to export lists of entities):

protected FileContentResult GetExportFileContentResult(IList data, string filename)
{
    using (var memoryStream = new MemoryStream())
    {
        using (var streamWriter = new StreamWriter(memoryStream))
        {
            using (var csvWriter = new CsvWriter(streamWriter))
            {
                csvWriter.WriteRecords(data);
                return File(memoryStream.ToArray(), "text/csv", filename);
            }
        }
    }
}

With exports of 1,000+ items, the last few items get cut off. When the list contains fewer than roughly 100 items, the returned CSV file is blank and contains no data.

I've tried writing straight to the response output stream instead of a MemoryStream, and received the same results.
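Roughly, that attempt looked like this (a reconstructed sketch, not the exact code; the method name is just illustrative):

// Reconstructed sketch of the direct-to-response attempt; details here
// are illustrative, not the original code.
protected ActionResult WriteCsvToResponse(IList data, string filename)
{
    Response.ContentType = "text/csv";
    Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);

    using (var streamWriter = new StreamWriter(Response.OutputStream))
    using (var csvWriter = new CsvWriter(streamWriter))
    {
        csvWriter.WriteRecords(data);
    }

    return new EmptyResult();
}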

I also tried removing the using statements in case the stream was being disposed too early, but that didn't result in any change either.

What is the correct way to use this library to create CSV files properly (i.e. so the file contains all rows and works regardless of the size of the list)?

Edit

I decided to scrap CsvHelper and went with a different library, CsvTools, instead. This works without any problems. My code is below for reference.

protected FileContentResult GetExportFileContentResult(IList data, string filename)
{
    using (var memoryStream = new MemoryStream())
    {
        using (var streamWriter = new StreamWriter(memoryStream))
        {
            var dt = DataTable.New.FromEnumerable(data);
            dt.SaveToStream(streamWriter);
            return File(memoryStream.ToArray(), "text/csv", filename);
        }
    }
}

On a side note, I tried Simon's suggestion below of returning the memory stream directly instead of calling ToArray, but got an error about the stream being closed, and haven't got round to debugging it yet.
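My guess (untested) is that the using block disposes the MemoryStream before MVC's FileStreamResult writes it to the response. A sketch of what should avoid that, by leaving disposal of the stream to the result:

// Untested sketch: no using block around memoryStream, so it is still
// open when MVC executes the FileStreamResult, which disposes it after
// writing the response. The writer is deliberately not disposed either,
// since disposing it would close the underlying stream.
var memoryStream = new MemoryStream();
var streamWriter = new StreamWriter(memoryStream);
var dt = DataTable.New.FromEnumerable(data);
dt.SaveToStream(streamWriter);
streamWriter.Flush();
memoryStream.Position = 0;
return File(memoryStream, "text/csv", filename);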

Mun
  • As an aside: Consider using the actual `MemoryStream` object to return from `File()`.. so that it becomes a `FileStreamResult`. This is streamed to the browser in chunks and doesn't allocate large objects. – Simon Whitehead Feb 20 '14 at 04:58
  • 1
    Hey this post is useful but how can we do this without MemoryStream ? – DharaPPatel Jul 11 '16 at 19:08

1 Answer


The reason is that you're not flushing the data in the writer to the stream. The writer flushes itself periodically when its buffer fills, but you need to make sure it also happens at the end.

Option 1:

using (var memoryStream = new MemoryStream())
using (var streamWriter = new StreamWriter(memoryStream))
using (var csvWriter = new CsvWriter(streamWriter))
{
    csvWriter.WriteRecords(data);
    // Push any data still buffered in the writer into memoryStream.
    streamWriter.Flush();
    // Return the copied bytes rather than the stream itself: the using
    // blocks dispose the stream before MVC writes the response, which
    // would leave a FileStreamResult holding a closed stream.
    return File(memoryStream.ToArray(), "text/csv", filename);
}

Option 2:

// No using block around memoryStream: the FileStreamResult returned by
// File() disposes it after the response has been written.
var memoryStream = new MemoryStream();
// leaveOpen: true (available from .NET 4.5) stops the writers from
// closing memoryStream when they are disposed; UTF8Encoding(false)
// matches StreamWriter's default of UTF-8 without a BOM.
using (var streamWriter = new StreamWriter(memoryStream, new UTF8Encoding(false), 1024, leaveOpen: true))
using (var csvWriter = new CsvWriter(streamWriter))
{
    csvWriter.WriteRecords(data);
} // The stream gets flushed here, when the writers are disposed.
memoryStream.Position = 0;
return File(memoryStream, "text/csv", filename);
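For reference, a complete version of the question's helper method combining Option 2's dispose-to-flush approach with ToArray (a sketch; it assumes a CsvHelper version whose CsvWriter constructor takes a TextWriter directly, as in the question):

protected FileContentResult GetExportFileContentResult(IList data, string filename)
{
    using (var memoryStream = new MemoryStream())
    {
        using (var streamWriter = new StreamWriter(memoryStream))
        using (var csvWriter = new CsvWriter(streamWriter))
        {
            csvWriter.WriteRecords(data);
        } // Disposing the writers flushes all buffered data into the stream.

        // ToArray copies the buffer and works even though disposing the
        // writer has closed the stream, and the byte[] overload of File()
        // matches the FileContentResult return type.
        return File(memoryStream.ToArray(), "text/csv", filename);
    }
}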
Josh Close
  • I'm using the CsvFactory class for CsvHelper so a bit different, but textWriter.Flush() did the trick for me! Thanks! – Karl Nov 21 '14 at 15:39
  • 1
    A nitpick, but in Option 2 instead of setting Position back to 0 you can just use memoryStream.ToArray() as the first parameter of File(). – Djorge Jul 20 '17 at 15:41