
I have an application that was developed in 2001. It uses a table named Locations; both my application and some of my business partners insert, update, and delete data in that table.

The table has around 1 million rows. Performance-wise it works fine; we have indexes and use other techniques to retrieve the data.

Now I want to implement full-text search on it. SQL Server full-text search is not fast enough, so I decided to use Elasticsearch. For this I need to ingest data from SQL Server into Elasticsearch, which I can do using Logstash (a minimal pipeline sketch follows the list below), but I'm confused about the following points:

  1. How will I update the Elasticsearch index when a new record is added?
  2. How will I update the Elasticsearch index when an existing record is updated?
  3. How will I update the Elasticsearch index when a record is deleted?
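
For reference, here is a minimal Logstash pipeline sketch of the kind I have in mind. It assumes the table has a LocationID primary key and a ModifiedAt datetime column that writers keep current (all names here are placeholders; if no such column exists, one would have to be added, or a rowversion column used instead):

    input {
      jdbc {
        jdbc_driver_library => "/path/to/mssql-jdbc.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://localhost:1433;databaseName=MyDb"
        jdbc_user => "logstash"
        jdbc_password => "${JDBC_PASSWORD}"
        schedule => "* * * * *"   # poll once a minute
        statement => "SELECT LocationID, Name, ModifiedAt FROM dbo.Locations WHERE ModifiedAt > :sql_last_value ORDER BY ModifiedAt"
        use_column_value => true
        tracking_column => "modifiedat"        # Logstash lower-cases column names
        tracking_column_type => "timestamp"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "locations"
        document_id => "%{locationid}"   # stable id so an update overwrites the old document
      }
    }

Because document_id is the table's primary key, re-ingesting a changed row overwrites the existing document, which covers points 1 and 2. Deletes (point 3) are invisible to a polling query like this, which is the part I'm unsure about.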

If there were a single source updating the Locations table, I could reason about it, but many different clients can update the table. What should I do in this case?
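
One pattern that would capture writes from every client, since it runs inside the database itself, is a trigger on Locations that records each change in a staging table, which a separate job then drains into Elasticsearch. A rough sketch (all object and column names are hypothetical):

    CREATE TABLE dbo.Locations_SyncQueue (
        QueueID    bigint IDENTITY PRIMARY KEY,
        LocationID int       NOT NULL,
        Operation  char(1)   NOT NULL,   -- 'I', 'U' or 'D'
        QueuedAt   datetime2 NOT NULL DEFAULT sysutcdatetime()
    );
    GO

    CREATE TRIGGER trg_Locations_Sync
    ON dbo.Locations
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- A row in both inserted and deleted is an update; only in inserted,
        -- an insert; only in deleted, a delete.
        INSERT INTO dbo.Locations_SyncQueue (LocationID, Operation)
        SELECT COALESCE(i.LocationID, d.LocationID),
               CASE WHEN i.LocationID IS NOT NULL AND d.LocationID IS NOT NULL THEN 'U'
                    WHEN i.LocationID IS NOT NULL THEN 'I'
                    ELSE 'D'
               END
        FROM inserted AS i
        FULL OUTER JOIN deleted AS d ON d.LocationID = i.LocationID;
    END;

Would something like this be a reasonable approach, or is there a better-established way?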

  • You can keep track of the changes in your Locations table by some means and sync the data with Elasticsearch. For example, https://stackoverflow.com/questions/45708928/how-to-sync-mssql-to-elasticsearch is a similar question. When we did this, we used very complicated, manually written code that uses Change Tracking to find the changes; your case is quite simple since you only have one table (see the Change Tracking sketch after these comments) – siggemannen May 13 '23 at 15:55
  • Btw, with 1 million rows, full-text search should be fast enough, or you're doing something wrong – siggemannen May 13 '23 at 15:57
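
For reference, a minimal sketch of the Change Tracking approach the first comment describes (database, table, and column names are placeholders): the sync job persists the last version it processed, replays everything newer into Elasticsearch, and uses the 'D' rows to delete documents.

    -- One-time setup
    ALTER DATABASE MyDb
        SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Locations ENABLE CHANGE_TRACKING;

    -- Each sync run: @last_sync_version is remembered from the previous run
    DECLARE @last_sync_version bigint = 0;

    SELECT ct.LocationID,
           ct.SYS_CHANGE_OPERATION,   -- 'I', 'U' or 'D'
           l.*                        -- NULL for deleted rows
    FROM CHANGETABLE(CHANGES dbo.Locations, @last_sync_version) AS ct
    LEFT JOIN dbo.Locations AS l ON l.LocationID = ct.LocationID;

    SELECT CHANGE_TRACKING_CURRENT_VERSION();  -- store as next run's @last_sync_version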

0 Answers