
Our application uses Berkeley DB for temporary storage and persistence. A new issue has arisen: very large volumes of data now come in from various input sources, and the underlying file system does not support such large file sizes. Is there any way to split the Berkeley DB files into logical segments or partitions without losing the data inside them? I would also like to set this up using Berkeley DB properties rather than cumbersome programming for such a simple task.

Madusudanan

2 Answers


Modern BDB has means to spread an environment across additional directories, either via the DB_CONFIG file (recommended) or with API calls.

See if these directives (and their corresponding API calls) help:

- `add_data_dir`: add a directory the environment searches for database files
- `set_create_dir`: choose the directory in which new database files are created
- `set_data_dir`: older form of `add_data_dir`
- `set_lg_dir`: directory for transaction log files
- `set_tmp_dir`: directory for temporary files
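For example, a minimal DB_CONFIG sketch (the file lives in the environment's home directory and is read when the environment is opened; the directory paths are hypothetical). Existing database files are looked up in either data directory, new ones are created on the second volume, and logs and temporary files are kept elsewhere:

```
add_data_dir /mnt/vol1/data
add_data_dir /mnt/vol2/data
set_create_dir /mnt/vol2/data
set_lg_dir /mnt/vol1/logs
set_tmp_dir /tmp
```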

Note that adding these directives to an existing environment is unlikely to transparently "just work", but it shouldn't be too hard to use db_dump/db_load to recreate the database files under the new layout.
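A rough sketch of that migration with the standard utilities (the environment paths and file names are hypothetical):

```sh
# Dump the database from the old environment to a flat text file,
# then reload it into a new environment whose DB_CONFIG contains
# the directives above.
db_dump -h /old/env big.db > big.dump
db_load -h /new/env -f big.dump big.db
```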

Jeff Johnson

To my knowledge, BDB does not support this for you. You can, however, implement it yourself by creating multiple databases.

I did this before with BDB, programmatically: my code partitioned a potentially large index file into separate files and created a top-level master index over those sub-files.
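A minimal sketch of that idea against the Berkeley DB C API, routing each record to one of N smaller files. The partition count, file names, and the hash-based routing are my own assumptions; the hash stands in for the master index described above:

```c
#include <db.h>
#include <stdio.h>
#include <string.h>

#define NPARTS 4  /* number of partition files -- an arbitrary choice */

static DB *parts[NPARTS];

/* Map a key to a partition index; any stable hash works. */
static unsigned int part_of(const void *key, size_t len)
{
    const unsigned char *p = key;
    unsigned int h = 5381;
    while (len--)
        h = h * 33 + *p++;
    return h % NPARTS;
}

/* Open one small database file per partition. */
static int open_parts(void)
{
    char name[32];
    int i, ret;

    for (i = 0; i < NPARTS; i++) {
        if ((ret = db_create(&parts[i], NULL, 0)) != 0)
            return ret;
        snprintf(name, sizeof(name), "part.%d.db", i);
        if ((ret = parts[i]->open(parts[i], NULL, name, NULL,
                                  DB_BTREE, DB_CREATE, 0644)) != 0)
            return ret;
    }
    return 0;
}

/* Route a put to the partition file that owns the key. */
static int part_put(void *key, size_t klen, void *val, size_t vlen)
{
    DB *dbp = parts[part_of(key, klen)];
    DBT k, v;

    memset(&k, 0, sizeof(k));
    memset(&v, 0, sizeof(v));
    k.data = key;  k.size = (u_int32_t)klen;
    v.data = val;  v.size = (u_int32_t)vlen;
    return dbp->put(dbp, NULL, &k, &v, 0);
}
```

Gets route through the same hash, so point lookups need no extra bookkeeping; a real master index, as described above, additionally lets you track which key ranges live in which file.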

ScrollerBlaster