This reminds me of a file system I came up with that loaded level files off CD in an amazingly short time (it improved the load time from tens of seconds to near-instantaneous), and it works on non-CD media as well. It consisted of three versions of a class wrapping the file IO functions, all with the same interface:
class IFile
{
public:
    IFile (class FileSystem &owner);
    virtual ~IFile ();
    virtual void Seek (...);
    virtual size_t Read (...);
    virtual long GetFilePosition ();
};
and an additional class:
class FileSystem
{
public:
    void BeginStreaming (const char *filename);
    void EndStreaming ();
    IFile *CreateFile (const char *filename);
};
and you'd write the loading code like:
void LoadLevel (const char *levelname)
{
    FileSystem fs;
    fs.BeginStreaming (levelname);
    IFile *file = fs.CreateFile (level_map_name);
    ReadLevelMap (fs, file);
    delete file;
    fs.EndStreaming ();
}
void ReadLevelMap (FileSystem &fs, IFile *file)
{
    read some data from file
    get the names of other files to load (like textures, object definitions, etc...)
    for each texture file
    {
        IFile *texture_file = fs.CreateFile (some other file name);
        CreateTexture (texture_file);
        delete texture_file;
    }
}
Then, you'd have three modes of operation: debug mode, stream file build mode and release mode.
In each mode, the FileSystem object would create different IFile objects.
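Something along these lines, with the three IFile variants sketched under each mode below (DebugFile, StreamBuildFile and StreamReadFile, like the mode enum and member names, are just illustrative):

enum FileSystemMode { MODE_DEBUG, MODE_STREAM_BUILD, MODE_RELEASE };

IFile *FileSystem::CreateFile (const char *filename)
{
    // mode and stream_fp would be members set up by BeginStreaming
    switch (mode)
    {
    case MODE_DEBUG:        return new DebugFile (*this, filename);
    case MODE_STREAM_BUILD: return new StreamBuildFile (*this, filename, stream_fp);
    case MODE_RELEASE:      return new StreamReadFile (*this, stream_fp);
    }
    return 0;
}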
In debug mode, the IFile object just wrapped the standard IO functions.
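Roughly something like this (the concrete Seek/Read signatures are left open by the (...)s above, so the ones used here are only illustrative: an absolute offset and a buffer/size pair):

#include <cstdio>
#include <cstddef>

// Concrete shape of the interface assumed for these sketches.
class FileSystem;

class IFile
{
public:
    explicit IFile (FileSystem &owner) : owner (owner) {}
    virtual ~IFile () {}
    virtual void Seek (long offset) = 0;
    virtual size_t Read (void *buffer, size_t size) = 0;
    virtual long GetFilePosition () = 0;
protected:
    FileSystem &owner;
};

// Debug mode: forward every call straight to the standard IO functions.
class DebugFile : public IFile
{
public:
    DebugFile (FileSystem &owner, const char *filename)
        : IFile (owner), fp (std::fopen (filename, "rb")) {}
    ~DebugFile () { if (fp) std::fclose (fp); }

    void Seek (long offset)                 { std::fseek (fp, offset, SEEK_SET); }
    size_t Read (void *buffer, size_t size) { return std::fread (buffer, 1, size, fp); }
    long GetFilePosition ()                 { return std::ftell (fp); }

private:
    std::FILE *fp;
};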
In stream file build mode, the IFile object also wrapped the standard IO functions, but additionally wrote every byte that was read out to the stream file (the owning FileSystem opened the stream file), and wrote the return value of any file position query as well (so if anything needed to know a file size, that information ended up in the stream file). This effectively concatenates the various files into one big file, but containing only the data that was actually read.
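Continuing the same sketch, the build-mode version might look like this (the stream file handle is assumed to be handed over by the owning FileSystem):

// Stream file build mode: still reads from the real file, but echoes every
// byte read (and every position query result) into the single stream file.
class StreamBuildFile : public IFile
{
public:
    StreamBuildFile (FileSystem &owner, const char *filename, std::FILE *stream_file)
        : IFile (owner), fp (std::fopen (filename, "rb")), stream (stream_file) {}
    ~StreamBuildFile () { if (fp) std::fclose (fp); }

    void Seek (long offset) { std::fseek (fp, offset, SEEK_SET); }

    size_t Read (void *buffer, size_t size)
    {
        size_t bytes_read = std::fread (buffer, 1, size, fp);
        std::fwrite (buffer, 1, bytes_read, stream);   // record the data that was actually read
        return bytes_read;
    }

    long GetFilePosition ()
    {
        long pos = std::ftell (fp);
        std::fwrite (&pos, sizeof (pos), 1, stream);   // record the query result so release mode can replay it
        return pos;
    }

private:
    std::FILE *fp;       // the real source file
    std::FILE *stream;   // the one big stream file, opened by the owning FileSystem
};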
The release mode would create an IFile that did not open files or seek within files; it just read from the stream file (as opened by the owning FileSystem object).
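Sketched the same way, the release version never opens or seeks; everything comes, in order, out of the stream file:

// Release mode: every Read comes sequentially from the stream file, and
// position queries are replayed from the values recorded at build time.
class StreamReadFile : public IFile
{
public:
    StreamReadFile (FileSystem &owner, std::FILE *stream_file)
        : IFile (owner), stream (stream_file) {}

    void Seek (long)    {}   // no-op: the data was recorded in exactly the order it will be read

    size_t Read (void *buffer, size_t size)
    {
        return std::fread (buffer, 1, size, stream);   // one long sequential read, no seeking
    }

    long GetFilePosition ()
    {
        long pos = 0;
        std::fread (&pos, sizeof (pos), 1, stream);    // read back the recorded query result
        return pos;
    }

private:
    std::FILE *stream;   // the stream file, opened once by the owning FileSystem
};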
This means that in release mode, all data is read in one sequential series of reads (the OS would buffer it nicely) rather than lots of seeks and reads. This is ideal for CDs, where seek times are really slow. Needless to say, this was developed for a CD-based console system.
A side effect is that the data is stripped of unnecessary metadata that would normally be skipped over during loading.
It does have drawbacks: all the data for a level is in one file. These files can get quite large, and data can't be shared between them: if you had a set of textures, say, that were common across two or more levels, that data would be duplicated in each stream file. Also, the load process must be the same every time the data is loaded; you can't conditionally skip or add elements to a level.