
Problem summary:

  • C# (MVC), Entity Framework 5.0 and Oracle.
  • I have a couple of million rows in a view which joins two tables.
  • I need to populate dropdownlists with filter possibilities.
  • The options in these dropdownlists should reflect the actual distinct contents of the view for that column.
  • I want to update the dropdownlists whenever you select something, so that the new options reflect the filtered content, preventing you from choosing something that would give 0 results.
  • It's slow.

Question: what's the right way to get these dropdownlists populated?

Now for more detail.

-- Goal of the page --

The user is presented with some dropdownlists that filter the data in a grid below. The grid represents a view (see "Database") whose results are filtered.

Each dropdownlist represents a filter for a column of the view. Once something is selected, the rest of the page updates. The other dropdownlists now contain the possible values for their corresponding columns that comply with the filter that was just applied in the first dropdownlist.

Once the user has selected a couple of filters, he/she presses the search button and the grid below the dropdownlists updates.

-- Database --

I have a view that selects almost all columns from two tables, nothing fancy there. Like this:

SELECT tbl1.blabla, tbl2.blabla -- etc.
FROM table1 tbl1, table2 tbl2
WHERE tbl1.bvz_id = tbl2.id AND tbl1.einddatum IS NULL;

There is a total of 22 columns: 13 VARCHARs (mostly small, 1 to 20 characters, but one of them has a size of 2000!), 6 DATEs and 3 NUMBERs (one of them precision 38 and one of them 15,2).

There are a couple of indexes on the tables, among which the relevant IDs for the WHERE clause.

Important thing to know: I cannot change the database. Maybe set an index here and there, but nothing major.

-- Entity Framework --

I created a database-first EDMX in my solution and also mapped the view. There are also classes for both tables, but since I need data from both of them, I don't know whether I need those. The problem with selecting from either table alone is that you can't apply half of the filtering, but maybe there are smart ways I haven't thought of yet.

-- View --

My view is strongly bound to a viewModel. In there I have an IEnumerable for each dropdownlist. The getters for these get their data from a single IEnumerable called NameOfViewObjects. Like this:

public string SelectedColumn1 { get; set; }

private IEnumerable<SelectListItem> column1Options;
public IEnumerable<SelectListItem> Column1Options
{
    get
    {
        if (column1Options == null)
        {
            column1Options = NameOfViewObjects
                .Select(item => item.Column1)
                .Distinct()
                .Select(item => new SelectListItem
                {
                    Value = item,
                    Text = item,
                    Selected = item.Equals(SelectedColumn1, StringComparison.InvariantCultureIgnoreCase)
                });
        }
        return column1Options;
    }
}

The two solutions I've tried are:

- 1 - Selecting all the columns I need for the dropdownlists in a LINQ query (the 2000-character VARCHAR is not one of them, and there are only 2 date columns), doing a distinct on them and putting the results into a HashSet. Then I point NameOfViewObjects at this HashSet. I have to wait about 2 minutes for that to complete, but after that, populating the dropdownlists is almost instant (maybe a second for each of them).

// Note: DistinctBy is not an EF operator (it comes from an extension such as
// MoreLINQ), so it runs in memory: every row is pulled from Oracle first.
model.Beslissingen = new HashSet<NameOfViewObject>(
    dbBes.NameOfViewObject.DistinctBy(item => new
    {
        item.VarcharColumn1,
        item.DateColumn1,
        item.DateColumn2,
        item.VarcharColumn2,
        item.VarcharColumn3,
        item.VarcharColumn4,
        item.VarcharColumn5,
        item.VarcharColumn6,
        item.VarcharColumn7,
        item.VarcharColumn8
    }));

The big problem here is that the NameOfViewObject object is probably quite large, and even though the distinct reduces it to fewer than 100,000 results, it still uses over 500 MB of memory. This is unacceptable, because there will be a lot of users using this screen (a lot would be... 10 max, 5 on average simultaneously).
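A possible variation on this approach, sketched with the placeholder names from the question (dbBes being the EF context): projecting to an anonymous type before Distinct() lets EF translate the distinct into SQL, so only the reduced row set ever reaches memory.

```csharp
// Sketch only: VarcharColumn1 etc. are the placeholder names from above.
// Unlike DistinctBy (which runs in memory), Select(...).Distinct() over an
// anonymous type is translated by EF into a SELECT DISTINCT in Oracle,
// so only the reduced rows are materialized, not millions of full entities.
var distinctRows = dbBes.NameOfViewObject
    .Select(item => new
    {
        item.VarcharColumn1,
        item.DateColumn1,
        item.DateColumn2,
        item.VarcharColumn2
        // ... and the remaining filter columns
    })
    .Distinct()
    .ToList();
```

The anonymous-type rows are also far smaller than full NameOfViewObject entities, which should help with the 500 MB problem.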

- 2 - The other solution is to use the same LINQ query and point NameOfViewObjects at the IQueryable it produces. This means that every time the view wants to bind a dropdownlist to an IEnumerable, it fires a query that finds the distinct values for that column in a table with millions of rows, where most likely the column it's getting the values from is not indexed. This takes around 1 minute per dropdownlist (I have 10), so that takes ages.
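If this route is kept, each per-dropdownlist query can at least be reduced to a single-column SELECT DISTINCT by projecting before Distinct() (a sketch with the placeholder names from the question; without an index on the column, Oracle still has to scan the view, which is where the minute per list goes):

```csharp
// Apply the filters that are already selected, then project the one column,
// so EF translates this to SELECT DISTINCT "Column1" instead of full rows.
IQueryable<NameOfViewObject> filtered = dbBes.NameOfViewObject;
if (!string.IsNullOrEmpty(SelectedColumn2))
    filtered = filtered.Where(item => item.Column2 == SelectedColumn2);

List<string> column1Values = filtered
    .Select(item => item.Column1)
    .Distinct()
    .ToList();
```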

Don't forget: I need to update the dropdownlists every time one of them has its selection changed.

-- Question --

So I'm probably going about this the wrong way. Or maybe one of these solutions should be combined with indexing all of the columns I use, or maybe I should store the data in memory in another way so that it only takes a little. But there must be someone out there who has done this before and figured out something smart. Can you please tell me what would be the best way to handle a situation like this?

Acceptable performance:

  • having to wait for a while (2 minutes) while the page loads, but everything is fast after that.
  • having to wait for a couple of seconds every time a dropdownlist changes
  • the page does not use more than 500mb of memory
Jon Koeter
  • So what if there are a lot of users on the screen. The list is stored on the client. What you have meets your acceptable performance. – paparazzo May 09 '16 at 12:04
  • You are going to run that on the server and have separate for each user? I can't help you. – paparazzo May 09 '16 at 12:19
  • @JonKoeter As others have said, I think you need to revisit your User Interface design. You could perhaps look into servicing a drop down list using ajax in an AutoComplete fashion. – Paul Zahra May 09 '16 at 13:03
  • That would still need to query the entire table wouldn't it? Or am I misinterpreting your suggestion? – Jon Koeter May 09 '16 at 13:08
  • When you want speed with large datasets then caching is the only option; you have to trade storage for speed. – MikeT May 09 '16 at 13:14
  • Yes, I'm afraid you are right. Right now I'm trying to store all (needed) data (which means: distinct on all columns I'm using for dropdownlists) in an IEnumerable, which will (I think) use a lot less space than a HashSet – Jon Koeter May 09 '16 at 13:23

4 Answers


Of course you should have indexes on all columns and combinations used in WHERE clauses. No index means a table scan and O(N) query times. That cannot scale under any circumstance.
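For the view in the question that might look something like this (index and column names are invented; one index per dropdownlist column):

```sql
-- Hypothetical names: one single-column index per filter column lets Oracle
-- answer SELECT DISTINCT via an index scan instead of a full table scan.
CREATE INDEX ix_table1_status ON table1 (status_column);
CREATE INDEX ix_table1_einddatum ON table1 (einddatum);
```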

You do not need millions of entries in a drop down. You need to be smarter about filtering the database down to manageable numbers of entries.

I'd take a page from Google. Their type ahead helps narrow down the entire Internet graph into groups of 25 or 50 per page, with the most likely at the top. Maybe you could manage that, too.

Perhaps a better answer is something like a search engine. If you were a Java developer you might try Lucene/Solr and indexing. I don't know what the .NET equivalent is.

duffymo
  • Do you think all problems would disappear if I put an index on all columns used in the dropdownlists? I'm not using all data in the dropdownlists, just the distinct per column. That leaves 3 - 15 per dropdownlist... – Jon Koeter May 09 '16 at 12:56
  • Disappear no, improve probably – MikeT May 09 '16 at 12:57
  • No. Millions of entries in a drop down is unusable by any GUI. You need to be smarter about filtering. Think "type ahead", not "SELECT". Indexing is your jam. http://searchengineland.com/how-google-instant-autocomplete-suggestions-work-62592 – duffymo May 09 '16 at 12:57
  • Maybe this SO question can help: http://stackoverflow.com/questions/3670831/how-does-google-instant-work – duffymo May 09 '16 at 12:59
  • No, I know... I'm not doing that... If a dropdownlists has the option to filter on status, it will not show all entries, just the unique ones. Which would leave about 5 of them for the dropdownlist. – Jon Koeter May 09 '16 at 12:59
  • Try it and see. You know your implementation. – duffymo May 09 '16 at 12:59
  • Google takes advantage of NoSql and pre cached tag lists, which are similar to the SO tags on this question, so would require a major rethink on their database and architecture – MikeT May 09 '16 at 13:02
  • Thinking never hurts anyone. I'd do the easy thing by adding indexes and thinking in the background. Solr wouldn't be hard to put in place. – duffymo May 09 '16 at 13:06
  • I agree. I will consult the DB admins and see if we can at least run a test. I don't know what the consequences would be of putting 6 or 7 indexes on a table with 6 million rows, but they will. – Jon Koeter May 09 '16 at 13:32
  • There'll be a one-time hit to calculate and store those indexes, so there are CPU and storage impacts. When you INSERT or UPDATE a row the indexes will have to be updated. That's it. – duffymo May 09 '16 at 15:07
  • I'm accepting this answer as the final solution was to revisit the index structure we had on the tables. You were the first to comment on that. Thank you! – Jon Koeter May 10 '16 at 09:48

The first point you need to check is your DB: make sure you have the right indexes and entity relations in place.

Next, if you want to dynamically build your filter options, you need to run the query with the existing filters applied to find out what the next filter can be. There are several ways to do this.

Firstly, you can query the data and extract the values from the result. This has a huge load time and wastes time returning data you don't want (unless you are live-updating the results with the filter and don't have paging, in which case you might as well just get all the data and use LINQ to Objects to filter).

A second option is to run a parallel query for each filter that returns its possible values: filter A = all possible values of A in the data, filter B = all possible values of B when filtered by A, C = all possible values of C when filtered by A & B, and so on. This is better than the first, but not by much.

Another option is to use aggregates to speed things up, i.e. you run parallel queries as above, but instead of returning the data you return how many records would be returned. Aggregate functions are always quicker, so this will cut your load time dramatically, but you are still repeatedly querying a huge dataset, so it won't be exactly nippy. You can tweak this further using EXISTS to just return a 0 or 1.

In this case you would look at a table with all possible filter values and then remove the ones for which the parallel query found no results.
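In EF terms the EXISTS variant might look roughly like this (all names are placeholders from the question; Any() is translated by EF into an EXISTS subquery):

```csharp
// For each candidate option, ask Oracle whether at least one row survives
// the filters chosen so far; keep only the options that yield results.
// candidateValues would come from a (cached) list of all distinct values.
var validOptions = candidateValues
    .Where(candidate => dbBes.NameOfViewObject
        .Any(item => item.Column1 == selectedColumn1
                  && item.Column2 == candidate))
    .ToList();
```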

The next option, which will be the fastest by a mile, is to cache the filters in the DB in a separate table. Then you can query that and say: from Cache where filter = ABC select D. The problem with this is maintaining the cache, which you would have to do in the DB as part of the save functions, triggers, etc.
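A minimal sketch of such a cache table, assuming it could be maintained by triggers on the underlying tables (all names hypothetical):

```sql
-- Hypothetical cache table holding the pre-computed filter combinations.
CREATE TABLE filter_cache (
    column1_value VARCHAR2(20),
    column2_value VARCHAR2(20),
    row_count     NUMBER
);

-- Populating a dropdownlist then only touches this small table.
SELECT DISTINCT column2_value
FROM   filter_cache
WHERE  column1_value = :selected_column1
AND    row_count > 0;
```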

MikeT

Another solution that can be added to the previous suggestions is to use the /*+ result_cache */ hint, if your version of Oracle supports it (Oracle 11g or later). If the output of the query is small enough for a drop-down list, then when a user enters criteria that match criteria another user already used, the results are returned in a few milliseconds instead of a few seconds or minutes. The result cache is wonderful for queries that return a small set of rows out of millions.

select /*+ result_cache */ item_desc from some_table where item_id ...

The result cache is automatically flushed when any insert/updates/deletes occur on the database tables.
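Since the question uses Entity Framework, the hint could presumably be issued without stored procedures via a raw SQL query (a sketch; Database.SqlQuery<T> is available in EF5, and OracleParameter is ODP.NET's parameter type; table and column names follow the example above):

```csharp
// Raw SQL through EF: the result_cache hint travels with the query text,
// and the scalar results are mapped straight onto a list of strings.
List<string> itemDescs = dbBes.Database
    .SqlQuery<string>(
        "SELECT /*+ result_cache */ item_desc FROM some_table WHERE item_id = :id",
        new OracleParameter("id", itemId))
    .ToList();
```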

Mark Stewart
  • Great idea. I would have to add some SPs though; not sure if that's going to be possible. I'm assuming I cannot apply this through EF alone. This looks neat though. Thank you. – Jon Koeter May 10 '16 at 09:46

I've done something 'kind of' similar in the past: if you can add a table to the database, then I'd explore introducing a 'scratchpad'-type table where results are temporarily stored as the user refines their search. Since multiple users could be working simultaneously, the table would have to have an additional column identifying the user.

I'd think you'd see some performance benefit since all processing is kept server-side and your app would simply be pulling data from this table. Since you're adding this table you would also have total control over it.

Essentially I'd imagine the program flow would go something like:

  1. User selects some filters and clicks 'Search'.
  2. Server populates scratchpad table with results from that search.
  3. App populates results grid from scratchpad table.
  4. User further refines search and clicks 'Search'.
  5. Server removes/adds rows to scratchpad table as necessary.
  6. App populates results grid from scratchpad table.
  7. And so on.

Rather than having all the users results in one 'scratchpad' table you could possibly explore having temporary 'scratchpad' tables per user.
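Steps 2 and 5 of the flow above might be sketched like this (table, column and bind-variable names are all invented for illustration):

```sql
-- Step 2: store one user's search results, keyed by a user identifier.
INSERT INTO scratchpad (user_id, col1, col2)
SELECT :user_id, v.col1, v.col2
FROM   the_view v
WHERE  v.status = :selected_status;

-- Step 5: a refined search removes the rows that no longer match.
DELETE FROM scratchpad
WHERE  user_id = :user_id
AND    col2 <> :selected_col2;
```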

grae22
  • I can't change the database. Is there any way to do this from within the webapp? Would EF support temp tables? – Jon Koeter May 09 '16 at 12:55
  • Sorry, I was banking on you being able to make changes that don't impact on the existing db design! Unfortunately (for you) I don't believe the entity framework supports the concept of temporary tables. Perhaps this would be useful: [link](http://stackoverflow.com/questions/17231943/create-temp-table-with-entity-framework). – grae22 May 09 '16 at 12:59
  • EF won't create DB temp tables, but once you instantiate your query you will have an in-memory copy of your data that you can manipulate like a temp table. However, once you move away from the database you lose much of the performance boost of using a database to manipulate large datasets. – MikeT May 09 '16 at 13:19
  • Good point @MikeT about the loss of performance; sounds like that would be significant in this case. – grae22 May 09 '16 at 13:29