
OK, so I'm new to IndexedDB and not particularly experienced with JavaScript. Right now I've got code working such that the user can browse to a file on their file system and, when it's selected, the file is broken into chunks (using slice), those chunks are converted to hex strings, and the strings (along with a key) are stored in IndexedDB.
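For illustration, the chunk-and-hex step might look something like the sketch below (the helper names are mine, not the actual code; in the browser the bytes for each chunk would come from `file.slice(start, end)` read with `FileReader.readAsArrayBuffer`, so only one chunk is resident at a time):

```javascript
// Illustrative sketch: split a byte buffer into fixed-size chunks and
// hex-encode each chunk. In the browser, each chunk's bytes would come
// from file.slice(start, end) + FileReader.readAsArrayBuffer, so only
// one chunk is in memory at a time; a whole buffer is used here for brevity.
function toHex(bytes) {
  // Uint8Array -> lowercase hex string, two characters per byte
  return Array.from(bytes, (b) => b.toString(16).padStart(2, '0')).join('');
}

function chunkToHex(bytes, chunkSize) {
  const chunks = [];
  for (let start = 0; start < bytes.length; start += chunkSize) {
    // subarray clamps the end index, so the last chunk may be shorter
    chunks.push(toHex(bytes.subarray(start, start + chunkSize)));
  }
  return chunks;
}
```

Each hex string would then be stored as one IndexedDB record, keyed by something like (fileName, chunkIndex). For example, `chunkToHex(new Uint8Array([0, 1, 255, 16, 32]), 4)` yields `['0001ff10', '20']`.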

The nice thing about this approach is that the user can select a very large file and, since the file is read one slice at a time, the whole thing never has to be loaded into memory at once.

Now I'm attempting to read those hex strings back out and have the browser pop a "Save As" dialog when the user clicks a button. The issue is that I haven't managed to find a way to "stream" the data out of IndexedDB into the saved file. Are any of you aware of a way to do this? I'm coming up pretty short on my end.

Thanks

Cody S
  • I don't think you can stream it. I could easily figure out a way to just combine the data and save it, but streaming: Unless you're willing to upload to server and then stream it (which completely defeats the point), I don't think there's a way – markasoftware Dec 19 '13 at 00:05
  • Right. I've considered both scenarios, and I'm concerned that if the file is too large, concatenating all of the hex (or base64 or blobs or whatever) together will likely throw a Memory exception. The FileReader slice operation set me up so that I could deal with a file a chunk at a time going in...I was really hoping that there'd be a viable option for the file coming out. – Cody S Dec 19 '13 at 00:24
  • and also, if it's THAT big you might want to be worried about indexedDB's maximum storage space – markasoftware Dec 19 '13 at 01:03
  • According to http://stackoverflow.com/questions/5692820/max-size-in-indexeddb there isn't one...As for how big...well, big. I'd say up to a Gig to start. If we could pull that off, I'm sure my boss would be pleased. – Cody S Dec 19 '13 at 01:08
  • and a gigabyte? Are you sure this is feasible? I would recommend using the filesystem api instead of indexedDB – markasoftware Dec 19 '13 at 01:12
  • AFAIK, FileSystemAPI is only implemented by Chrome...And it's still pretty experimental there...http://caniuse.com/filesystem – Cody S Dec 19 '13 at 01:14
  • don't forget Opera. It's a browser too :( – markasoftware Dec 19 '13 at 01:26
  • I've had the same problem. Anything more than ~5 MB seems impossible to work with currently http://stackoverflow.com/questions/17783719/import-and-export-indexeddb-data/17790272#17790272 – dumbmatter Dec 19 '13 at 23:05
  • IMO, it's going to continue to be pretty hard to take Javascript seriously until they can offer something similar to streams. It's good that they've finally got binary representation pretty well set (between blobs and the CryptoJS WordArrays), as well as local storage...but all of that is taken out at the knees if I have to pull all the data into memory at once to manipulate it. – Cody S Dec 19 '13 at 23:17
  • I'm currently looking for something similar. The guys at mega.co.nz have implemented this, but I'm still trying to debug how it works exactly. – unwichtich Oct 28 '14 at 15:28

2 Answers


IndexedDB cannot do a partial (or projection) read/write on a record. The FileSystem API seems more likely, since it is designed for use cases like sequential seeks over a large file, but I am not sure.

You might want to check out this recent discussion, IndexedDB, Blobs and partial Blobs - Large Files. They discuss workarounds for your problem as well.

Kyaw Tun
  • The discussion you linked to dealt with getting data into IndexedDB, which is not my issue. My desire is to present the end user with a link that, when clicked, will cause the browser to prompt them to save a file somewhere on their local system, and then, when they select a location, to read data one entry at a time from IndexedDB, concatenating the entries one after the other to make up the resultant file. – Cody S Dec 19 '13 at 01:02

So I'm also looking to do this and will update this answer as the implementation proceeds.

The idea is to create two entity types in IndexedDB.

  1. file
  2. page

There will be an entry in the file entity for each file stored in IndexedDB.

The content of the file will be stored in the page entity. The content will be split up into 'pages', where each page is some (arbitrary) fixed size such as 16KB.
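In JavaScript terms, mapping an absolute byte offset to a page works out to a division and a remainder (the constant and helper name below are just illustrative):

```javascript
// Illustrative: map an absolute byte offset within the file to
// (page number, offset within that page).
const PAGE_SIZE = 16 * 1024; // 16KB pages, matching the scheme above

function locate(offset, pageSize = PAGE_SIZE) {
  return {
    pageNo: Math.floor(offset / pageSize),
    inPageOffset: offset % pageSize,
  };
}
```

For example, `locate(16385)` gives `{ pageNo: 1, inPageOffset: 1 }`.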

My code will be written in Dart (which compiles to JavaScript), but the logic should be easy to port directly to JS.

I will create a class that operates somewhat like a random access file.

Note: this is just pseudo code at this point.

    class RandomAccessFile {
        static const int pageSize = 16 * 1024; // 16KB pages

        int offset = 0;
        int page = 0;
        int inPageOffset = 0;
        List<int> currentPageData = [];
        String pathToFile;
        late int fileSize;

        RandomAccessFile(this.pathToFile) {
            var file = indexDb.read('File', this.pathToFile);
            this.fileSize = file.size;
        }

        void seekTo(int offset) {
            this.offset = offset;
            this.page = offset ~/ pageSize; // integer division
            this.inPageOffset = offset % pageSize;
        }

        Stream<int> read() async* {
            _loadPage();
            while (this.offset < fileSize) {
                if (this.inPageOffset >= currentPageData.length) {
                    _loadNextPage();
                }
                this.offset++;
                yield currentPageData[this.inPageOffset++];
            }
        }

        void _loadPage() {
            // pseudo query: fetch the page record for this file/page number
            currentPageData = indexDb.read('Page',
                where: 'file = $pathToFile and pageNo = $page');
        }

        void _loadNextPage() {
            page++;
            inPageOffset = 0;
            _loadPage();
        }
    }
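A rough JavaScript port of the same idea might look like this; here a plain `Map` stands in for the IndexedDB page store so the paging logic is visible on its own (in a real implementation, each `store.get(pageNo)` would be an asynchronous request against the 'page' object store):

```javascript
const PAGE_SIZE = 16 * 1024;

// Async generator that yields one page (a Uint8Array) at a time, so at
// most one page of the file is held in memory. `store` maps
// pageNo -> Uint8Array; with IndexedDB, store.get(pageNo) would instead
// be an awaited request against the 'page' object store.
async function* readPages(store, fileSize, pageSize = PAGE_SIZE) {
  const pageCount = Math.ceil(fileSize / pageSize);
  for (let pageNo = 0; pageNo < pageCount; pageNo++) {
    yield store.get(pageNo);
  }
}
```

A caller can then consume the file with `for await (const page of readPages(...))`, appending each page to the output as it arrives.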
Brett Sutton