
I have a collection of text files containing anonymised medical data (age, country, symptoms, diagnosis etc). This data goes back at least 30 years, so as you can imagine I have quite a large data set. In total I have around 20,000 text files totalling approx. 1 TB.

Periodically I will be needing to search these files for occurrences of a particular string (not regex). What is the quickest way to search through this data?

I have tried using grep and recursively searching through the directory as follows:

LC_ALL=C fgrep -r -i "searchTerm" /Folder/Containing/Files

The only problem with doing the above is that it takes hours (sometimes half a day!) to search through this data.
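Before changing approach entirely, one cheap speedup is to fan the work out across both cores: let `find` list the files and `xargs` run several `grep` processes in parallel. This is only a sketch (the `psearch` helper name and the batch size of 64 are arbitrary choices, and reads are still bound by HDD speed):

```shell
# Hypothetical helper: parallel, literal, case-insensitive search.
# Usage: psearch "search term" /Folder/Containing/Files
psearch() {
  term=$1
  dir=$2
  # LC_ALL=C skips locale-aware matching; -F treats the term as a
  # fixed string; -P runs one grep per core; -n batches files so we
  # don't pay process-startup cost for every single file.
  find "$dir" -type f -print0 |
    LC_ALL=C xargs -0 -P "$(nproc)" -n 64 grep -F -i -H -n -- "$term"
}
```

`-H` and `-n` print the file name and line number with every hit, which covers the need to see each occurrence and the file it came from.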

Is there a quicker way to search through this data? At this moment I am open to different approaches such as databases, Elasticsearch, etc. If I do go down the database route, I will have approx. 1 billion records.

My only requirements are:

1) The search will be happening on my local computer (Dual-Core CPU and 8GB RAM)

2) I will be searching for strings (not regex).

3) I will need to see all occurrences of the search string and the file it was found in.

M9A
  • Parse the data and put it in a database. Index your data. Profit. – Tony Stark May 29 '20 at 22:56
  • @Tony Stark - the data within the text files is more or less in the correct format, so I'm not sure parsing is required. I can write a script that loops through the directory and sends a LOAD DATA INFILE query to rapidly upload the text files. 1) Would having a 1 TB table (plus more for an index) be a problem? 2) Would the search really be that much quicker, considering the database will still be on my local HDD? 3) I would need to return the file name to know which file the data belonged to; how could I do this with a database? – M9A May 29 '20 at 23:05
  • 1TB is nothing for today's db systems (you could even load that into memory on a server). I'd never do that on a laptop with a hdd but that's your choice and the performance would be better on dedicated hardware with SSDs (reading is much faster). While putting the data into your db you can always create columns with your data source (file name). – Tony Stark May 29 '20 at 23:10
  • @Tony Stark Unfortunately I can only do this on a local machine. My problem with creating a column for data source is the high number of duplications. For example if one file (1.txt) had 50,000 lines, then I’d have 50,000 rows in the database that all have 1.txt in the data source column. This will really drive up the size of the database – M9A May 29 '20 at 23:17
  • 2
    You can use the Elasticsearch single node with logstash. – Jinna Balu May 30 '20 at 10:52
  • @JinnaBalu will 8 GB of RAM be enough to store the indexes for 1 TB of data in Elasticsearch? – cassandrad Jun 01 '20 at 08:17
  • Your question is better suited to [Unix & Linux Stack Exchange](http://unix.stackexchange.com/tour). This page is dedicated to questions about software development. – Cyrus Jun 01 '20 at 13:00
  • @Matt9Atkins regarding this `For example if one file (1.txt) had 50,000 lines, then I’d have 50,000 rows in the database that all have 1.txt in the data source column. This will really drive up the size of the database` - don't be afraid, you can use denormalization to overcome this by just putting a small document_id that points to where to look up the file name (a database engine like Elasticsearch can even do that for you) – llytvynenko Jun 01 '20 at 16:40
  • You say you search for strings. Do these strings represent substrings, full words, partial sentences or full sentences? If they contain only words, you can index your entire database on those words. – kvantour Jun 03 '20 at 15:58
  • Related: https://stackoverflow.com/questions/13913014/grepping-a-huge-file-80gb-any-way-to-speed-it-up – kvantour Jun 03 '20 at 16:03
  • I have a similar task at hand; my approach is the oldest one - full-text exact search. I will soon release the C source of the fastest exact searcher; feel free to join the thread where I will share the results (in a few weeks). https://www.overclock.net/threads/cpu-benchmark-finding-linus-torvalds.1754066/page-2#post-28644885 – Georgi Oct 10 '20 at 04:39
  • @OpsterElasticsearchNinja In case you have access to AVX512 machine, it would be great to help me benchmark NyoTengu_XMM, NyoTengu_YMM and NyoTengu_ZMM. These days I cannot sit and write, no time, but the unfinished (benchmarks only) NyoTengu is here: http://www.sanmayce.com/Railgun/Benchmark_Linus-Torvalds_unfinished_Nyotengu.zip – Georgi Oct 10 '20 at 11:55
  • @OpsterElasticsearchNinja You see, my goal is to offer a 100% free open-source tool for traversing huge files - I already wrote it. It is called Kazahana and is capable not only of exact/wildcard searches but also of a unique exhaustive fuzzy search! See here if you are interested: https://www.overclock.net/threads/16-cores-extravaganza-stressing-l1-l2-caches-with-fuzzy-search.1773223/#post-28628095 AFAIK, Kazahana traverses the 50 GB Wikipedia at 3 GB/s, so an NVMe SSD would be utilized quite well: 1000 GB / 3 GB/s ≈ 5.5 minutes. – Georgi Oct 10 '20 at 11:57

8 Answers


There are a lot of answers already, I just wanted to add my two cents:

  1. Having this much data (1 TB) with just 8 GB of memory will not be good enough for any approach if you want fast search, be it Lucene, Elasticsearch (which internally uses Lucene), or a grep command. The reason is simple: all these systems hold data in the fastest memory in order to serve it quickly, and out of 8 GB (of which you should reserve 25% for the OS and another 25-50% for other applications), you are left with very few GB of RAM.
  2. Upgrading to an SSD and increasing the RAM on your system will help, but it's quite cumbersome, and if you hit performance issues again it will be difficult to keep scaling your system vertically.

Suggestion

  1. I know you already mentioned that you want to do this on your system, but as I said it wouldn't give any real benefit, and you might end up wasting a lot of time (on infrastructure and on code, given the many approaches mentioned in the various answers). Hence I would suggest the top-down approach mentioned in my other answer for determining the right capacity. It would help you quickly identify the correct capacity for whatever approach you choose.
  2. Implementation-wise, I would suggest doing it with Elasticsearch (ES), as it's very easy to set up and scale. You can even use AWS Elasticsearch, which is available in the free tier as well and can be scaled up quickly later. Although I am not a big fan of AWS ES, it saves a lot of setup time and you can get started quickly if you are already familiar with ES.

  3. To make search faster, you can split each file into multiple fields (title, body, tags, author etc) and index only the important fields, which reduces the inverted index size. If you are looking only for exact string matches (no partial or full-text search), then you can simply use the keyword field type, which is even faster to index and search.

  4. I could go on about why Elasticsearch is good and how to optimize it, but that's not the crux. The bottom line is that any search will need a significant amount of memory, CPU, and disk, and any one of these becoming a bottleneck would hamper your local searches and your other applications. Hence I advise you to really consider doing this on an external system, and Elasticsearch really stands out, as it's meant for distributed systems and is the most popular open-source search system today.
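To illustrate point 3 above, a minimal mapping could declare the exact-match fields as `keyword` types. This is only a sketch: the index layout and field names (`file`, `country`, `diagnosis`, `body`) are my own assumptions about how the records might be split, sent as the body of a `PUT` index-creation request:

```json
{
  "mappings": {
    "properties": {
      "file":      { "type": "keyword" },
      "country":   { "type": "keyword" },
      "diagnosis": { "type": "keyword" },
      "body":      { "type": "text" }
    }
  }
}
```

`keyword` fields support fast exact term matches, while the single `text` field keeps full-text search available only where it is actually needed.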
Amit

To speed up your searches you need an inverted index. To be able to add new documents without re-indexing all existing files, the index should be incremental.

One of the first open-source projects to introduce incremental indexing was Apache Lucene. It is still the most widely used indexing and search engine, although other tools that extend its functionality are more popular nowadays. Elasticsearch and Solr are both based on Lucene. But as long as you don't need a web frontend, support for analytical querying, filtering, grouping, indexing of non-text files, or infrastructure for a cluster setup over multiple hosts, Lucene is still the best choice.

Apache Lucene is a Java library, but it ships with a fully functional, command-line demo application. This basic demo should already provide all the functionality that you need.

With some Java knowledge it would also be easy to adapt the application to your needs. You will be surprised how simple the source code of the demo application is. If Java shouldn't be the language of your choice, its Python wrapper, PyLucene, may be an alternative. The indexing of the demo application is already reduced nearly to the minimum. By default, no advanced functionality is used, such as stemming or optimization for complex queries - features you most likely will not need for your use case but which would increase the size of the index and the indexing time.

rmunge
  • Lucene's index is stored in the file system. The size of the index depends on your data, but with enough redundancy it should be less than 30% of the size of the indexed documents. If you have the possibility to add a fast SSD to your system, you could move the index to another drive and speed up queries even more. – rmunge Jun 06 '20 at 15:42

You clearly need an index, as almost every answer has suggested. You could certainly improve your hardware, but since you have said it is fixed, I won't elaborate on that.

I have a few relevant pointers for you:

  1. Index only the fields in which you want to find the search term rather than indexing the entire dataset;
  2. Create a multilevel index (i.e. an index over the index) so that your index searches are quicker. This will be especially relevant if your index grows to more than 8 GB;
  3. I considered recommending caching of your searches as an alternative, but then a new search would again take half a day. So preprocessing your data to build an index is clearly better than processing the data as each query comes.

Minor Update:

A lot of answers here suggest putting the data in the cloud. Even for anonymized medical data, I'd highly recommend that you confirm with the source (unless you scraped the data from the web) that this is OK.

displayName

I see 3 options for you.

  1. You should really consider upgrading your hardware; an HDD-to-SSD upgrade can multiply the speed of search several times over.

  2. Increase the speed of your search as it stands. You can refer to this question for various recommendations. The main idea of this method is to optimize CPU load, but you will be limited by your HDD speed. The maximum speed multiplier is the number of cores you have.

  3. You can index your dataset. Because you're working with text, you need a full-text search database; Elasticsearch and Postgres are good options. This method requires more disk space (but usually less than 2x, depending on the data structure and the list of fields you want to index), and searches will be vastly faster (seconds). If you decide on this method, select the analyzer configuration carefully to match what is considered a single word for your task (here is an example for Elasticsearch).
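On that last point, the analyzer decides where one word ends and the next begins. An Elasticsearch settings fragment along these lines (the analyzer name is illustrative) keeps tokenization standard and lowercases terms so that case-insensitive exact matches work:

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "record_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```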


It is worth covering the topic at two levels: the approach, and the specific software to use.

Approach: Based on the way you describe the data, pre-indexing should help significantly. Pre-indexing performs a one-time scan of the data and builds a compact index that makes it possible to perform quick searches and identify where specific terms appear in the repository.

Depending on the queries, the index will reduce or completely eliminate the need to search through the actual documents, even for complex queries like 'find all documents where AAA and BBB appear together'.

Specific Tool

The hardware that you describe is relatively basic. Running complex searches benefits from large-memory, multi-core hardware. There are excellent solutions out there - Elasticsearch, Solr and similar tools can do magic, given strong hardware to support them.

I believe you want to look into two options, depending on your skills and the data (it would help if a sample of the data could be shared by the OP):

  • Build your own index, using a light-weight database (SQLite, PostgreSQL), OR
  • Use a light-weight search engine.
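As a sketch of the first option, SQLite's FTS5 extension gives you an inverted index with almost no setup. Everything below uses hypothetical demo data; for a real 1 TB load you would batch the INSERTs inside a single transaction rather than invoking sqlite3 per line:

```shell
# Build a tiny demo corpus (stand-in for the real 20,000 files).
mkdir -p /tmp/meddata
printf 'age: 45, diagnosis: asthma\n' > /tmp/meddata/1.txt
printf 'age: 60, diagnosis: diabetes\n' > /tmp/meddata/2.txt

db=/tmp/medsearch.db
rm -f "$db"
# One FTS5 row per line, with the source file name kept alongside.
sqlite3 "$db" "CREATE VIRTUAL TABLE records USING fts5(file, line);"
for f in /tmp/meddata/*.txt; do
  while IFS= read -r line; do
    esc=$(printf '%s' "$line" | sed "s/'/''/g")  # escape quotes for SQL
    sqlite3 "$db" "INSERT INTO records VALUES ('$f', '$esc');"
  done < "$f"
done

# Indexed search: returns file and matching line without rescanning files.
sqlite3 "$db" "SELECT file, line FROM records WHERE records MATCH 'asthma';"
```

The final query returns `file|line` pairs straight from the index, without touching the original files again.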

For the second approach, on the described hardware, I would recommend looking into 'glimpse' (and the supporting agrep utility). Glimpse provides a way to pre-index the data, which makes searches extremely fast. I've used it on big data repositories (a few GB, but never TB).

See: https://github.com/gvelez17/glimpse

Clearly it is not as modern and feature-rich as Elasticsearch, but it is much easier to set up, and it is server-less. The main benefit for the use case described by the OP is the ability to scan existing files without having to load the documents into a separate search-engine repository.

dash-o

Could you consider ingesting all this data into Elasticsearch, assuming the files have a consistent data structure?

If yes, here are the quick steps:
1. Install Filebeat on your local computer.
2. Install Elasticsearch and Kibana as well.
3. Have Filebeat send all the data to Elasticsearch.
4. Start searching it easily from Kibana.
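For step 3, a minimal `filebeat.yml` sketch might look like this (the input path comes from the question; the host assumes a default local Elasticsearch, and newer Filebeat versions would use the `filestream` input type instead of `log`):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /Folder/Containing/Files/*.txt
output.elasticsearch:
  hosts: ["localhost:9200"]
```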
Preyas

FSCrawler might help you index the data into Elasticsearch. After that, normal Elasticsearch queries can serve as your search engine.

Ani Guner

If you cache the most recently searched medical data, it might help performance-wise: instead of going through the whole 1 TB again, repeated queries can be served from Redis/Memcached.

Skerrepy