In my experience, in short: yes, NTFS can handle it, but avoid browsing the FILESTREAM directories (Windows Explorer can't cope with that volume of files and will hang or crash). Some white papers recommend using FILESTREAM for files of 256 KB or larger, but the performance benefit only really becomes evident with files larger than 1 MB.
Here are some recommended best practices:
- Disable the indexing service on the NTFS volumes where FILESTREAM data is stored.
- Use multiple data files (containers) for the FILESTREAM filegroup, placed on separate volumes (see the sketch after this list).
- Configure the correct NTFS cluster size (64 KB is recommended).
- Configure antivirus exclusions so it cannot delete or quarantine FILESTREAM files; losing even one of them corrupts the database.
- Disable the last access time attribute.
- Defragment the disks regularly.
- Disable short (8dot3) file names.
- Keep the FILESTREAM data containers on a disk volume separate from the mdf, ndf and log files.
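To make the volume layout concrete, here is a minimal T-SQL sketch of the last two points; the database name, logical file names and drive letters are hypothetical:

```sql
-- Hypothetical layout: FILESTREAM filegroup with two containers on volumes
-- F: and G:, kept separate from the data (D:) and log (E:) drives.
CREATE DATABASE FileUploadDemo
ON PRIMARY
    (NAME = FileUploadDemo_data, FILENAME = 'D:\Data\FileUploadDemo.mdf'),
FILEGROUP FSGroup CONTAINS FILESTREAM
    (NAME = FSContainer1, FILENAME = 'F:\FileStream\Container1'),
    (NAME = FSContainer2, FILENAME = 'G:\FileStream\Container2')
LOG ON
    (NAME = FileUploadDemo_log, FILENAME = 'E:\Log\FileUploadDemo.ldf');
```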
Right now we're running tests to migrate our FileUpload database (8 TB and growing, about 25 million records) from varbinary(max) to FileTable. Our approach is to split the single very large database into one database per year.
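As a rough sketch of what each per-year FileTable database can look like (all names here are hypothetical; it assumes FILESTREAM is enabled at the instance level and the database already has a FILESTREAM filegroup like the one above):

```sql
-- Allow non-transacted Win32 access and name the database-level directory.
ALTER DATABASE FileUpload2017
    SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL,
                    DIRECTORY_NAME = N'FileUpload2017');
GO
-- A FileTable exposes the stored files as a Windows share subfolder.
CREATE TABLE dbo.UploadedFiles AS FILETABLE
WITH (FILETABLE_DIRECTORY = 'UploadedFiles',
      FILETABLE_COLLATE_FILENAME = database_default);
```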
I'd like to know whether you're currently running this in a production environment, and to hear about your experience.
You can find more information in a free ebook: Art of FileStream.