
There is a table with some rows, and every row's value follows this format:

row 1: g_  
row 2: g_1_  
row 3: g_1_2_  
row 4: g_2_  
row 5: g_99999_  
row 6: g_99999_44_  

I want to use the LIKE command to select the rows that fall between g_1_ and g_99999_. In this example they are: g_1_, g_2_ and g_99999_.

Parsa Soroori

1 Answer


I'm pretty sure you could design your database a lot more efficiently, but if you can't change the schema, one slow way of solving it would be:

SELECT * FROM yourtable WHERE yourcolumn RLIKE '^g_[0-9]+_$'

I would instead suggest that you add a parent column, where parent holds the id of the group that the row is under. So the table would look something like this:

row 1: 0
row 2: 1
row 3: 2
row 4: 1
row 5: 1
row 6: 5

Then you add an index on that column, and lookups will be crazy fast.
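
A minimal sketch of that parent-column layout, assuming a hypothetical table named groups (the table and column names here are illustrative, not from the original post):

-- Hypothetical table: each row stores the id of its parent group (0 for the root)
CREATE TABLE groups (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name   VARCHAR(64)  NOT NULL,              -- e.g. 'g_1_2_' if you keep the old label
    parent INT UNSIGNED NOT NULL DEFAULT 0,    -- id of the group this row belongs to
    KEY idx_parent (parent)                    -- the index that makes child lookups fast
);

-- All direct subgroups of group 1 (g_1_ in the original labels):
SELECT * FROM groups WHERE parent = 1;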

WizKid
  • Thanks. How can I make it more efficient? Actually, every row points to a group or a subgroup. For example, g_ is the main group, g_1_ is a subgroup of g_, and g_1_1_ is a subgroup of g_1_ – Parsa Soroori May 15 '14 at 05:23
  • 3
    Does g_1_1_1_ also exists? Can it be nested forever? If not just create two columns. One for group and one for subgroup and create an index that is (group, subgroup). With the query I suggested the database would have to look at every single row every time. If you have a (group, subgroup) index the database would just need to look at the rows you need – WizKid May 15 '14 at 05:26
  • Infinite sequences in a single column indeed become a pain to query efficiently. However, much depends on what your searches are _usually_ like instead of 'any search'. If they are often level dependent, adding a subgroup_count column might already help a lot here. Also, if the 'number' of a group is limited to a certain size for the foreseeable future, a query could also take advantage of zero-padded values like 'g_000001_000123_' etc. (as it can use sorting and ranges). An index on the combination of the two could give yet another boost, but normalizing the schema into another table may be wiser. – Wrikken May 15 '14 at 06:07
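
A minimal sketch of the two-column idea from the comments, assuming the nesting never goes deeper than group/subgroup; the table name groups and the column names grp and subgrp are hypothetical ('group' is a reserved word in MySQL, hence the abbreviation):

-- Hypothetical flattened layout: one column per nesting level
CREATE TABLE groups (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    grp    INT UNSIGNED NOT NULL,          -- the '1' in g_1_
    subgrp INT UNSIGNED NULL,              -- the '2' in g_1_2_, NULL for a top-level group
    KEY idx_grp_subgrp (grp, subgrp)       -- composite index used by the lookups below
);

-- Top-level groups between 1 and 99999 (the original question):
SELECT * FROM groups WHERE subgrp IS NULL AND grp BETWEEN 1 AND 99999;

-- All subgroups of group 1:
SELECT * FROM groups WHERE grp = 1 AND subgrp IS NOT NULL;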