
I have a simple new API endpoint that queries my newly set up and populated table, inventory_products.

The schema of the inventory_products table is:

CREATE TABLE `inventory_products` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `inventory_product_id` binary(16) DEFAULT NULL,
  `product_id` binary(16) DEFAULT NULL,
  `status` enum('ACTIVE','INACTIVE','DELETED') DEFAULT 'ACTIVE',
  `priority` int(11) DEFAULT '0',
  `inventory_service_id` tinyint(3) DEFAULT NULL,
  `created_at` datetime DEFAULT NULL,
  `updated_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uc_inventory_product_id` (`inventory_product_id`),
  KEY `idx_product_status` (`product_id`,`status`)
) ENGINE=InnoDB AUTO_INCREMENT=76312817 DEFAULT CHARSET=utf8;

The API endpoint mainly does the below:

def lookup_inventory_service_id
    return render_error_response(AUTH_ERROR, {status: 403}) unless client.name == PERMITTED_NAME

    ip_fetch_start = Time.now
    inventory_product = InventoryProducts.find_by_inventory_product_id(resource_attributes[:inventory_product_id])
    Rails.logger.info({inventory_product_id: resource_attributes[:inventory_product_id], inventory_product_fetch_time_ms: (Time.now - ip_fetch_start)*1000}.as_json)
    return head :not_found unless inventory_product
....

Problem: The inventory_product lookup (find_by_inventory_product_id) is the standard dynamic finder provided by Rails's ActiveRecord (I have not overridden it in my model). This call takes anywhere from 10ms to sometimes even 650ms (found from the logs I added). Why would it take so much time in some cases and so little in others, in spite of a MySQL index existing on the column used in the lookup?
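As a side note on the measurement itself: Time.now is a wall clock and can jump with NTP adjustments, so a monotonic clock gives more trustworthy latency numbers. A minimal sketch (the measure_ms helper name is mine, not from the app; Process.clock_gettime needs Ruby 2.1+, so the JRuby 1.7.0 mentioned below would need a java.lang.System.nanoTime fallback):

```ruby
# Time a block with a monotonic clock so NTP adjustments to the wall
# clock cannot distort the reported latency.
def measure_ms
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  elapsed = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000.0
  [result, elapsed]
end

# Usage: wrap the lookup instead of subtracting Time.now values.
record, ms = measure_ms { sleep 0.02; :fake_record }
puts format("inventory_product fetch took %.1f ms", ms)
```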

I have declared inventory_product_id as a unique key in my schema, and per the EXPLAIN output below, the MySQL query triggered by the above function is using that index for the lookup.

SELECT  inventory_products.*
    FROM  inventory_products
    WHERE  inventory_products.inventory_product_id = 0x3a288cdce78d44618eadd72e240f26a4
    LIMIT  1

 id  select_type  table               type   key                      key_len  ref    rows  Extra
 1   SIMPLE       inventory_products  const  uc_inventory_product_id  17       const  1     NULL

Is there something wrong with my schema? Do I explicitly need to declare inventory_product_id as a MySQL index in the schema? Something like:

KEY `idx_inventory_product_id` (`inventory_product_id`)

Any help is greatly appreciated, as I have been struggling with this problem for a long time and am unable to find a fix. Thanks in advance!

I use MySQL 5.6 and Rails 3.2.8. Also, my Rails application runs on a Tomcat server (Apache Tomcat/7.0.16) under JRuby 1.7.0.

user3903418

3 Answers


I don't see an explicit index in your schema (I think that's a schema dump?), just a unique key constraint.

It's easy to double-check: run rails g migration AddIndexToInventoryProductIds. This generates a migration file you can populate with:

class AddIndexToInventoryProductIds < ActiveRecord::Migration

  def up
    add_index :inventory_products, :inventory_product_id
  end

  def down
    remove_index :inventory_products, :inventory_product_id
  end
end

After that, run rake db:migrate. If it completes without error, no index existed on that column before.

Mark
  • Hi Mark, thanks for your answer. If I am not wrong, the unique key constraint automatically creates an index right? Then do we still need to explicitly add an index? – user3903418 Oct 10 '19 at 03:57
  • https://stackoverflow.com/questions/9764120/does-a-unique-constraint-automatically-create-an-index-on-the-fields – user3903418 Oct 10 '19 at 06:03

You might want to look at the following things to understand why the index performance is inconsistent.

  • High swap usage and little freeable memory. You can easily get this data in the AWS console if you are using RDS. Your index pages could be getting constantly flushed out of main memory due to insufficient memory. Increasing the instance RAM or rethinking the indexes used by the application would fix this issue - ref
  • Increasing queue depth. Your instance might be maxing out its IOPS, so queries get throttled and the queue depth grows - check the burst balance in the RDS console for more info. Increasing the size of the instance or opting for provisioned IOPS would fix this issue.
  • Scale of writes to the table. Index pages need to be updated on every insert or update (also considering the locking needed to ensure uniqueness for a column with a unique index), so check the rate at which writes hit this table. If this is the issue, horizontal scaling can be a solution: read from multiple read replicas but write only to the master.
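The memory-pressure theory in the first bullet can be sanity-checked with the InnoDB buffer pool hit ratio, computed from two SHOW GLOBAL STATUS counters. The counter values below are invented for illustration; substitute your own:

```ruby
# Hypothetical values of Innodb_buffer_pool_read_requests (logical reads)
# and Innodb_buffer_pool_reads (reads that missed the pool and went to
# disk), as reported by: SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
read_requests = 1_000_000_000
disk_reads    =     5_000_000

hit_ratio = 100.0 * (read_requests - disk_reads) / read_requests
puts format("buffer pool hit ratio: %.2f%%", hit_ratio)
# A ratio well below ~99% suggests the working set does not fit in RAM.
```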
Sujan Adiga

You have UNIQUE(inventory_product_id), which is an optimal index for your query.

Even on a cold system, it would not take more than about 10 disk hits to fetch the desired row. That might take 100ms on HDD; faster on SSD. In a smoothly running system, most of the blocks would be cached in the buffer_pool, so 100ms would be rare.
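The "10 disk hits, maybe 100ms" figure above is just seek-time arithmetic; a rough sketch with typical per-read latencies (the numbers are order-of-magnitude assumptions, not measurements):

```ruby
# A cold point lookup walks a handful of B-tree levels plus the row page;
# each page read may cost one random I/O.
page_reads  = 10
hdd_seek_ms = 10.0   # typical random-read latency on a spinning disk
ssd_read_ms = 0.1    # typical random-read latency on an SSD

puts format("cold lookup, HDD: ~%.0f ms", page_reads * hdd_seek_ms)  # ~100 ms
puts format("cold lookup, SSD: ~%.0f ms", page_reads * ssd_read_ms)  # ~1 ms
```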

If something else is happening on the table, there could be interference from locks, CPU, I/O, etc. Still, it is a stretch to see 650ms. What is the average?

The binary(16) implies that you are using some kind of hash or UUID? That means that caching is not very useful, hence your min of 10ms.
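That caching point can be made concrete: random (v4) UUIDs begin with effectively random bytes, so as binary(16) keys they land on arbitrary leaf pages of the index B-tree, leaving no hot region for the buffer pool to favor. A quick illustration:

```ruby
require 'securerandom'

# Pack 10,000 random v4 UUIDs into 16-byte binary keys (the same shape
# as the binary(16) column) and see how scattered their leading bytes are.
keys = Array.new(10_000) { [SecureRandom.uuid.delete('-')].pack('H*') }

leading = keys.map { |k| k.bytes.first }.uniq.size
puts "10,000 random UUID keys start with #{leading} distinct leading bytes (max 256)"
```

With near-uniform leading bytes, successive lookups touch unrelated index pages, unlike an auto-increment key where recent rows cluster on a few hot pages.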

What is the value of innodb_buffer_pool_size? How much RAM do you have? What version of MySQL?

Rick James