47

How can I get row count values in MySQL the way @@ROWCOUNT does in MS SQL Server?

Eric Leschinski
abs

5 Answers

75

For SELECTs you can use the FOUND_ROWS construct (documented in the MySQL reference manual):

SELECT SQL_CALC_FOUND_ROWS something FROM your_table WHERE whatever;
SELECT FOUND_ROWS();

which will return the number of rows in the last SELECT query (or, if that query has a LIMIT clause, the number of rows there would have been without the LIMIT).
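
For example (a minimal sketch; the id column and the 100/10 values are placeholders, not from the question):

SELECT SQL_CALC_FOUND_ROWS * FROM your_table WHERE id > 100 LIMIT 10;
SELECT FOUND_ROWS();   -- total number of matching rows, not just the 10 returned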

For UPDATE/DELETE/INSERT, it's the ROW_COUNT() function:

INSERT INTO your_table VALUES (1,2,3);
SELECT ROW_COUNT();

which will return the number of affected rows.
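
The same applies after an UPDATE or DELETE; a quick sketch (some_column is a made-up column on the hypothetical your_table above):

UPDATE your_table SET some_column = 0 WHERE id > 100;
SELECT ROW_COUNT();   -- rows actually changed; for UPDATE this can be fewer than the rows matched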

AndiDog
  • @@ROWCOUNT gives the value right after the select is executed - which is what I want; I then do further calculations in a stored procedure using the total number of rows returned – abs Feb 09 '10 at 13:16
  • Sorry, must've mistaken that. Edited my answer. – AndiDog Feb 09 '10 at 13:27
  • > it returns the number of rows there would've been without the LIMIT – tymtam Dec 07 '11 at 07:43
  • 5
    From docs: In the absence of the SQL_CALC_FOUND_ROWS option (...) FOUND_ROWS() returns the number of rows in the result set returned by that statement. If the statement includes a LIMIT clause, FOUND_ROWS() returns the number of rows up to the limit (illustrated in the sketch after these comments). – tymtam Dec 07 '11 at 08:04
  • 1
    Yea, but in Oracle you can do this: delete from myTable where rowcount > 100. This will delete everything but the first 100 rows. How can you do this in a single SQL statement with MySQL? FYI, MySQL does not support LIMIT in a subquery, so you cannot say delete from myTable where id not in (select * LIMIT 0, 100)? – Mitja Gustin May 10 '13 at 08:48
  • Consider also http://stackoverflow.com/questions/23954516/is-rowcount-after-update-reliably-a-measure-of-matching-rows. MySQL may report fewer rows for UPDATE. – Irawan Soetomo Oct 11 '16 at 10:06
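
A minimal sketch of the behaviour tymtam quotes from the docs, using the same hypothetical your_table as above:

SELECT * FROM your_table WHERE id > 100 LIMIT 10;
SELECT FOUND_ROWS();   -- at most 10: without SQL_CALC_FOUND_ROWS, only the rows actually returned are counted
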
6
mysql> SELECT SQL_CALC_FOUND_ROWS * FROM tbl_name
    -> WHERE id > 100 LIMIT 10;
mysql> SELECT FOUND_ROWS();

Read more about this in the MySQL documentation for FOUND_ROWS().
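
If you need the count for further calculations (for example inside a stored procedure, as the question asks), you can capture it in a user variable first; a minimal sketch, where tbl_name and @total are placeholders:

mysql> SELECT SQL_CALC_FOUND_ROWS * FROM tbl_name WHERE id > 100 LIMIT 10;
mysql> SET @total = FOUND_ROWS();
mysql> SELECT @total;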

Lazarus
1

The simplest way would be to use a variable:

mysql> SELECT @rowcount:=COUNT(*) FROM my_table;
mysql> SELECT @rowcount;

Or you can use the FOUND_ROWS() construct after putting SQL_CALC_FOUND_ROWS in a SELECT statement.

mysql> SELECT SQL_CALC_FOUND_ROWS * FROM my_table;
mysql> SELECT FOUND_ROWS();
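
Either way, the value lives for the rest of the session, so it can be reused in later statements; a small sketch where the 100-row threshold and the size_hint alias are purely illustrative:

mysql> SELECT IF(@rowcount > 100, 'big', 'small') AS size_hint;
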
Yada
1

There is another way:

CREATE TEMPORARY TABLE `results` AS ( *** Your query without LIMIT *** );

Get the row count

SELECT COUNT(*) FROM `results`;

Get your subset

SELECT * FROM `results` LIMIT 5,10;

The temporary table exists only in the current session, but I would still clean up afterwards:

DROP TEMPORARY TABLE `results`;
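
Put together with a hypothetical customers table (purely illustrative, not part of the original answer), the whole flow looks like this:

CREATE TEMPORARY TABLE `results` AS (SELECT * FROM customers WHERE creditLimit > 0);
SELECT COUNT(*) FROM `results`;        -- total number of matching rows
SELECT * FROM `results` LIMIT 5,10;    -- one page of the result
DROP TEMPORARY TABLE `results`;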
Rolf
  • 1
    Would you please explain the benefits of your solution over the other, older solutions? Or let me give you a scenario: you have a table with 6 million records, 15 new records are added and another 10 updated every second. Now you want to run a query, copy its result to another table, count those results and then drop that temp table? Unless you have a very strong point to back it up, I would call your solution wasting server resources at best. – AaA Apr 17 '20 at 14:47
  • 1
    @AaA Actually it saves processing at the cost of RAM. `SELECT COUNT(*) FROM table` is especially fast and removes the cost of duplicating query complexity. It also saves you the cost of subsequent paging if you can persist the temp table until it's no longer needed somehow – That Realty Programmer Guy Jul 27 '20 at 11:19
  • @GaretClaborn, you are already running a query to filter your records and write them to an external table, which most probably will not sit in memory because of its size (refer to my scenario), and then your query result becomes invalid in just a few seconds, since there will be new records. Also, since the temp table only exists for the same session, if you are crunching your data from multiple clients you will require a very high-end server machine (even in MySQL clusters) – AaA Jul 28 '20 at 07:02
  • @AaA workloads holding imported data which do not update so frequently are much more common. Your scenario is far more narrow than the OP question; for ex, millions of records updating every 15 minutes is a common scenario in my work. In both cases, the actual count is faster and saves processing. Though in my case transactions can provide additional cost savings. Additionally, RAM is cheap. Even Digital Ocean's cheapest RAM-optimized droplets could handle 6 million rows in memory easily. https://www.digitalocean.com/pricing/#memory-optimized-droplets – That Realty Programmer Guy Jul 29 '20 at 11:59
  • @AaA in my use case it made sense because I would run other queries on the temporary data as well. Creating a temporary table is not necessarily the right solution. By the way, this code is more portable than FOUND_ROWS(). – Rolf Jul 29 '20 at 20:39
-1

Count the number of rows in a subquery in the WHERE clause, then test whether the total number of rows is greater than zero.

SELECT customerNumber, customerName, creditLimit FROM customers WHERE (SELECT COUNT(*) AS tot FROM customers) > 0;

Blake