
The table structure is as follows:

[screenshot of the table structure: `id` is a `BIGINT(255)` primary key; the other columns include `TEXT` fields]

When I run this query, the execution time is about 2-3 minutes:

```sql
select id, name, infohash, files from tb_torrent where id between 0 and 10000;
```

There are just over 200,000 rows. Why is the execution so slow, and how can I fix it?

J.Joe

  • `20W+`? Does `W` here stand for "webscale"? – zerkms Aug 29 '17 at 03:15
  • just 200,000 .. sorry, I'll change this to make it clear. – J.Joe Aug 29 '17 at 03:20
  • 1
    is it 20 something or 2 hundred thousand? still not clear... – Jorge Campos Aug 29 '17 at 03:24
  • Does `id` really have to be a `BIGINT`? and does it really need 255 display digits? – user207421 Aug 29 '17 at 03:25
  • I think this is a Chinese counting unit, right @J.Joe? :P If so, see if my edit looks better. – icedwater Aug 29 '17 at 03:25
  • You know that [the length of an INT or BIGINT means nothing](https://stackoverflow.com/questions/3135804/types-in-mysql-bigint20-vs-int20/3135854#3135854), right? – Bill Karwin Aug 29 '17 at 03:26
  • And unless you're planning on having 10^20 rows in the table, an `INT UNSIGNED` would be more than sufficient for 200,000 rows, and several orders of magnitude beyond. – user207421 Aug 29 '17 at 03:29
  • thanks for editing, you are right, this is a Chinese counting unit. @icedwater – J.Joe Aug 29 '17 at 03:35
  • I'm not sure what you're saying. You mean the `id` type BIGINT is not suitable for this? And what type maps to INT UNSIGNED? If I change the type, will it help? @EJP – J.Joe Aug 29 '17 at 03:43
  • Smaller index columns are faster. Unless you're planning on > 4Gig rows over the life of the table you can use INT UNSIGNED. At least try it now, if it doesn't help by all means put it back to BIGINT. – user207421 Aug 29 '17 at 03:48
  • I've tried, but it's still slow. – J.Joe Aug 29 '17 at 03:53
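
(For reference, the column-type change discussed in these comments would look something like the sketch below; the `NOT NULL` and `AUTO_INCREMENT` attributes are assumptions to be matched against the real column definition.)

```sql
-- INT UNSIGNED tops out at 4,294,967,295: far beyond 200,000 rows.
-- Keep AUTO_INCREMENT only if the original column had it.
ALTER TABLE tb_torrent
    MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;
```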

1 Answer


The unnecessary use of BIGINT is not enough to explain the sluggishness. Let's look for other issues.

Does that "key" icon mean that there is an index on id? Perchance is it PRIMARY KEY?
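
One quick way to confirm, assuming the table name from the question:

```sql
-- Shows the full table definition, including the PRIMARY KEY and any other indexes
SHOW CREATE TABLE tb_torrent;
```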

What ENGINE is in use? If it is MyISAM, then you have the drawback of the PK not being 'clustered' with the data, thereby making the 10K lookups slower.
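
The engine is easy to check; for example (the LIKE pattern assumes the question's table name):

```sql
-- Reports the storage engine, approximate row count, and data/index sizes
SHOW TABLE STATUS LIKE 'tb_torrent';
```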

What will you do with 10K rows? Think of the network costs, and the memory costs in the client.
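
If the client only needs a slice at a time, paging keeps each transfer small. A minimal sketch; the page size of 100 is arbitrary:

```sql
-- Fetch one page at a time instead of 10K rows in one result set
SELECT id, name, infohash, files
FROM tb_torrent
WHERE id BETWEEN 0 AND 10000
ORDER BY id
LIMIT 100;  -- for the next page, use WHERE id > <last id seen> rather than OFFSET
```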

But maybe this is the real problem... If this is InnoDB, and if the TEXT columns are "big", then the values are stored "off record". This leads to another disk hit to get any big text values. Change them to some realistic max len of VARCHAR(...).
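
For instance, a BitTorrent infohash is a fixed-size SHA-1 digest (40 hex characters), so something like the sketch below could work. The length for `name` is a guess to adjust to the actual data, and `files` may genuinely need to stay TEXT if it holds long lists:

```sql
-- Hypothetical lengths; measure MAX(CHAR_LENGTH(...)) on the real data first
ALTER TABLE tb_torrent
    MODIFY infohash VARCHAR(40),
    MODIFY name     VARCHAR(255);
```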

How much RAM do you have? What is the value of innodb_buffer_pool_size? Did you time the query twice? (The first time would be I/O-bound; the second might hit the cache.) How big (in MB or GB) is the table?
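
Both numbers are easy to pull, for example:

```sql
-- Current buffer pool size, in bytes
SELECT @@innodb_buffer_pool_size;

-- Approximate on-disk size of the table, in MB
SELECT ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name = 'tb_torrent';
```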

Rick James
  • oh you're awesome!!! Thanks a lot. As you say, the problem is network costs. My server only supports 1M, so transferring the 10K rows is slow. :) – J.Joe Aug 29 '17 at 06:24