
How can I trace MySQL queries on my Linux server as they happen?

For example I'd love to set up some sort of listener, then request a web page and view all of the queries the engine executed, or just view all of the queries being run on a production server. How can I do this?

Jeffrey Bosboom
barfoon
  • Depending on how bad the problem is, I would highly recommend trying MySQL Proxy. Because it can be placed on the app server, (a) it is scalable, and (b) it doesn't have to affect all traffic to the db. It's in 'alpha', but has been around a long time. http://dev.mysql.com/downloads/mysql-proxy/ – Jeff Maass Oct 04 '12 at 13:15
  • I've reworded this question to omit any reference to tools. I think this question is perfectly on-topic here, as "are we running the queries we should be?" is a great first step in debugging a database-related problem. – Jeffrey Bosboom Feb 20 '16 at 00:23
  • @MaasSql MySQL Proxy is not helpful for PHP developers using PDO, as the query and values get bound only at the server. – Ananda Jun 07 '16 at 09:58

13 Answers


You can log every query to a log file really easily:

mysql> SHOW VARIABLES LIKE "general_log%";

+------------------+----------------------------+
| Variable_name    | Value                      |
+------------------+----------------------------+
| general_log      | OFF                        |
| general_log_file | /var/run/mysqld/mysqld.log |
+------------------+----------------------------+

mysql> SET GLOBAL general_log = 'ON';

Do your queries (on any db). Grep or otherwise examine /var/run/mysqld/mysqld.log

Then don't forget to

mysql> SET GLOBAL general_log = 'OFF';

or the performance will plummet and your disk will fill!
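Once the log is captured, a quick way to pull out just the SQL text is to filter the "Query" lines. A minimal sketch, assuming the default general-log line format of timestamp, thread id, command type, then the statement (check a few lines of your own log first; older servers use a slightly different header):

```shell
# Filter a general query log down to just the SQL statements.
# Assumes the default line format: <timestamp> <thread-id> Query <sql ...>
extract_queries() {
  awk '$3 == "Query" { $1 = $2 = $3 = ""; sub(/^ +/, ""); print }' "$1"
}

# Demo on a fabricated two-line excerpt (real input is general_log_file):
printf '2020-09-18T05:16:01.123456Z\t12 Query\tSELECT name FROM person WHERE id = 7\n' >  /tmp/sample_general.log
printf '2020-09-18T05:16:02.000001Z\t12 Quit\t\n' >> /tmp/sample_general.log
extract_queries /tmp/sample_general.log
# → SELECT name FROM person WHERE id = 7
```

The same filter works on a `tail -f` stream if you pipe it through `awk` instead of passing a filename.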

Martin Konecny
artfulrobot
  • Good answer! You can use `tail -f -n300 /var/run/mysqld/mysqld.log` to keep live track of your log file – Claudio Bredfeldt Oct 01 '13 at 11:22
  • Note that MySQL 5.1.12 or higher is required for these variables. Before that, you'd have to restart MySQL to change these settings. – jlh Dec 08 '13 at 19:47
  • Is there any way to get the parameterized variables written to the log? I am seeing `SELECT name FROM person where id=?` but I don't know what `id` is. – Jeff Sep 16 '15 at 00:44
  • `SHOW VARIABLES` didn't work for me. However `SELECT @@GLOBAL.general_log_file;` works fine. (MariaDB 10.1.9) – phil pirozhkov Dec 25 '15 at 11:23
  • So very handy! I was hacking my way into seeing the MySQL logs by setting **slow_query_log** in the conf to very low and hoping it'd log in slow.log. – Siddhartha Jul 28 '16 at 00:23
  • This is the only correct answer so far that isn't Linux-specific, and so the only useful one. – nurettin Sep 28 '17 at 11:51
  • This really helped me resolve an issue with Dapper/MySQL on dotnet core. – moke Nov 03 '17 at 18:49
  • **Important** – you must check the logging output with `SHOW VARIABLES LIKE "log_output%"`. If it is set to `table`, logs will be saved into the database itself, in the table `mysql.general_log`, not in the file system. You can change it to `file` with `SET GLOBAL log_output = 'file';` – Arnis Juraga Mar 10 '18 at 14:02
  • Very good answer. +1 for cautioning about switching off the log :) – Parasu May 07 '18 at 15:52
  • I found it useful to flatten the output, putting each query on one line, i.e. to better list which queries were run very often: `awk '$2 ~ /^Query$/ { gsub(/[ \t][ \t]+/," ") ; sub(/^ [0-9]+ /,"") ; printf("\n%s ", $0) } $1 !~ /^Query$/ { gsub(/[ \t][ \t]+/," ") ; printf("%s ", $0) }' /var/run/mysqld/mysqld.log` – basically, remove the query id (first column), collapse spaces/tabs, print "not-Query" lines without line breaks, and print "Query" lines with an initial line break. – sastorsl Sep 18 '20 at 05:16

You can run the MySQL command SHOW FULL PROCESSLIST; to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.

The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.

Do be aware that this will probably slow everything on the server down considerably, though, since it adds an extra INSERT on top of every single query.


Edit: another alternative is the General Query Log, but having it written to a flat file removes a lot of flexibility in how you display it, especially in real time. If you just want a simple, easy-to-implement way to see what's going on, though, enabling the GQL and then running tail -f on the logfile will do the trick.

rogerdpack
Chad Birch
  • This may sound silly, but how exactly can I enable the GQL? I have added log_output=file, general_log=1, and general_log_file=/pathtofile, tailed the log file, hit the site and got nothing. What am I doing wrong? – barfoon Feb 20 '09 at 07:35
  • I can't be sure of anything, but make sure that you restarted the server, and also that the file you chose is one that mysql would have write access to. – Chad Birch Feb 20 '09 at 15:22
  • Chad – the file/directory is accessible by the mysql user, and I've restarted the server several times. Still nothing. Any other ideas? – barfoon Feb 20 '09 at 21:13
  • Hmm, you did use "tail -f", right? Is the file appearing, or not being written at all? Another possibility: make sure that you're using the correct settings for your MySQL version; that page of the documentation explains several different ways, which only work on the right version. – Chad Birch Feb 20 '09 at 21:37
  • I have figured it out – all I needed in my.cnf was log=/path/to/log. Then I just did the tail on that and it displays all the queries. – barfoon Feb 20 '09 at 21:53
  • As far as I can tell, there is no way to *trigger* anything on a SELECT statement. Triggers only apply to INSERT, UPDATE, DELETE... or am I misinformed? – gabe. Dec 08 '11 at 21:14
  • Note that if you use 'SHOW FULL PROCESSLIST' it won't truncate the Info field to 100 bytes (which is where your query is). Without FULL you'll only see the first part of your query unless it is a very short query. – Peter Aug 25 '14 at 17:16
  • I was in the same spot earlier. Currently I am using [Monyog](https://www.webyog.com/product/monyog), which does all of this with very little overhead, so nothing slows down. – May 02 '17 at 09:57
  • If you're reading this, scroll down to [this answer](https://stackoverflow.com/a/24883130/735926), which has the best solution. – bfontaine Mar 23 '21 at 15:52

Even though an answer has already been accepted, I would like to present what might even be the simplest option:

$ mysqladmin -u bob -p -i 1 processlist

This will print the current queries on your screen every second.

  • -u The MySQL user you want to execute the command as
  • -p Prompt for your password (so you don't have to save it in a file or have it appear in your command history)
  • -i The interval in seconds
  • Use the --verbose flag to show the full process list, displaying the entire query for each process. (Thanks, nmat)

There is a possible downside: fast queries might not show up if they complete between samples. For example, with the interval set at one second, a query that takes 0.02 seconds and runs between two samples won't be seen.

Use this option when you want to quickly check on running queries without having to set up a listener or anything else.

halfpastfour.am

Run this convenient SQL query to see running MySQL queries. It can be run from any environment you like, whenever you like, without any code changes or overhead. It may require some MySQL permissions configuration, but for me it just runs without any special setup.

SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';

The only catch is that you often miss queries which execute very quickly, so it is most useful for longer-running queries or when the MySQL server has queries which are backing up - in my experience this is exactly the time when I want to view "live" queries.

You can also add conditions to make it more specific, just as with any SQL query.

e.g. Show all queries running for 5 seconds or more:

SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND TIME >= 5;

e.g. Show all running UPDATEs:

SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND INFO LIKE '%UPDATE %';

For full details see: http://dev.mysql.com/doc/refman/5.1/en/processlist-table.html
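If you would rather watch this from a shell than from a MySQL prompt, the query can be wrapped in a small polling function. A sketch; `MYSQL_CMD` is a placeholder for your real client invocation and credentials:

```shell
# Poll INFORMATION_SCHEMA.PROCESSLIST for non-idle connections.
# MYSQL_CMD is a placeholder; point it at your real client invocation,
# e.g. MYSQL_CMD="mysql -u root -p -N -B"
MYSQL_CMD=${MYSQL_CMD:-"mysql -u root -p -N -B"}

poll_processlist() {
  $MYSQL_CMD -e "SELECT ID, TIME, INFO FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';"
}

# Watch live (Ctrl-C to stop):
# while true; do poll_processlist; sleep 1; done
```

The same sampling caveat applies as above: queries faster than the polling interval may never show up.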

pythonjsgeo

strace

The quickest way to see live MySQL/MariaDB queries is to use a debugger. On Linux you can use strace, for example:

sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1

Since there are a lot of escaped characters, you may format strace's output by piping it (just add | between these two one-liners) into the following command:

grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"

You should then see fairly clean SQL queries in no time, without touching configuration files.

Obviously this won't replace the standard way of enabling logs, which is described below (and which involves reloading the SQL server).

dtrace

Use MySQL probes to view the live MySQL queries without touching the server. Example script:

#!/usr/sbin/dtrace -q
pid$target::*mysql_parse*:entry /* This probe is fired when the execution enters mysql_parse */
{
     printf("Query: %s\n", copyinstr(arg1));
}

Save above script to a file (like watch.d), and run:

pfexec dtrace -s watch.d -p $(pgrep -x mysqld)

Learn more: Getting started with DTracing MySQL

Gibbs MySQL Spyglass

See this answer.

Logs

Here are the steps useful for development purposes.

Add these lines into your ~/.my.cnf or global my.cnf:

[mysqld]
general_log=1
general_log_file=/tmp/mysqld.log

Paths: /var/log/mysqld.log or /usr/local/var/log/mysqld.log may also work depending on your file permissions.

then restart your MySQL/MariaDB (prefix with sudo if necessary):

killall -HUP mysqld

Then check your logs:

tail -f /tmp/mysqld.log

When finished, set general_log back to 0 (so you can use it again in the future), then remove the file and restart the SQL server again: killall -HUP mysqld.

kenorb
  • No need to kill the server if you set the `general_log` from a MySQL query. It will start to write to the file that `general_log_file` is pointing to. – Robert Brisita Nov 08 '18 at 17:36

I'm in a particular situation where I do not have permissions to turn logging on, and wouldn't have permissions to see the logs if they were turned on. I could not add a trigger, but I did have permissions to call show processlist. So, I gave it a best effort and came up with this:

Create a bash script called "showsqlprocesslist":

#!/bin/bash

while true
do
    mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL"
done

Execute the script:

./showsqlprocesslist > showsqlprocesslist.out &

Tail the output:

tail -f showsqlprocesslist.out

Bingo bango. Even though it's not throttled, it only took up 2-4% CPU on the boxes I ran it on. I hope maybe this helps someone.
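A hedged variant of the script above: the same filter, with the connection details isolated in one variable and a short pause per iteration (the trade-off the comments discuss: a pause lowers the client load, but sub-second queries that run between iterations can be missed). `MYSQL` here is a placeholder for your real client invocation:

```shell
# MYSQL is a placeholder for your real client command and credentials.
MYSQL=${MYSQL:-"mysql --host=localhost --user=root -p"}

snapshot_queries() {
  $MYSQL -e "show processlist\G" \
    | grep Info | grep -v processlist | grep -v "Info: NULL"
}

# Loop with a pause (Ctrl-C to stop):
# while true; do snapshot_queries; sleep 0.5; done
```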

Michael Krauklis
  • Ha! Delicious. Love it. – Darth Egregious Apr 24 '14 at 15:08
  • It needs some delay to avoid too much verbose output. See my Edit please. – Slyx May 21 '14 at 04:33
  • @Slyx thanks for the suggestion of putting a sleep in the loop. However, if you're looking for short lived queries that live for less time than the sleep you'll potentially miss what you're looking for. If you're really just looking for a snapshot in time this shouldn't be run in a loop. It should also be noted that this could still potentially miss very short lived queries. – Michael Krauklis Aug 27 '14 at 18:01

From a command line you could run:

watch --interval=[your-interval-in-seconds] "mysqladmin -u root -p[your-root-pw] processlist | grep [your-db-name]"

Replace the values [x] with your values.

Or even better:

 mysqladmin -u root -p -i 1 processlist;
recurse

This is the easiest setup on a Linux Ubuntu machine I have come across. Crazy to see all the queries live.

Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication”

#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.

log = /var/log/mysql/mysql.log

Just uncomment the “log” variable to turn on logging. Restart MySQL with this command:

sudo /etc/init.d/mysql restart

Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary.

tail -f /var/log/mysql/mysql.log

Now run your application. You’ll see the database queries start flying by in your terminal window. (make sure you have scrolling and history enabled on the terminal)

From http://www.howtogeek.com/howto/database/monitor-all-sql-queries-in-mysql/

Wil

Check out mtop.

Chris KL
  • Yes, but good luck installing it on Debian or Ubuntu: https://bugs.launchpad.net/ubuntu/+source/mtop/+bug/77980 – mlissner Aug 27 '11 at 22:01
  • Managed to get it running on Debian, but it's kind of worthless since it misses a lot of queries. I can see the query counter going up constantly, but it rarely displays any queries. It looks like it only displays queries that take longer than approx. 1 second. – Cobra_Fast Jan 17 '12 at 11:07
  • @Cobra_Fast That's clearly stated on the mtop SourceForge page: ```mtop (MySQL top) monitors a MySQL server showing the queries which are taking the most amount of time to complete.``` http://mtop.sourceforge.net/ It's quite useful sometimes. – Ian Lewis Dec 16 '14 at 09:16

I've been looking to do the same, and have cobbled together a solution from various posts, plus created a small console app to output the live query text as it's written to the log file. This was important in my case as I'm using Entity Framework with MySQL and I need to be able to inspect the generated SQL.

Steps to create the log file (some duplication of other posts, all here for simplicity):

  1. Edit the file located at:

    C:\Program Files (x86)\MySQL\MySQL Server 5.5\my.ini
    

    Add "log=development.log" to the bottom of the file. (Note saving this file required me to run my text editor as an admin).

  2. Use MySql workbench to open a command line, enter the password.

    Run the following to turn on general logging, which will record all queries run:

    SET GLOBAL general_log = 'ON';
    
    To turn off:
    
    SET GLOBAL general_log = 'OFF';
    

    This will cause running queries to be written to a text file at the following location.

    C:\ProgramData\MySQL\MySQL Server 5.5\data\development.log
    
  3. Create / Run a console app that will output the log information in real time:

    Source available to download here

    Source:

    using System;
    using System.Configuration;
    using System.IO;
    using System.Threading;
    
    namespace LiveLogs.ConsoleApp
    {
      class Program
      {
        static void Main(string[] args)
        {
            // Console sizing can cause exceptions if you are using a 
            // small monitor. Change as required.
    
            Console.SetWindowSize(152, 58);
            Console.BufferHeight = 1500;
    
            string filePath = ConfigurationManager.AppSettings["MonitoredTextFilePath"];
    
            Console.Title = string.Format("Live Logs {0}", filePath);
    
            var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);
    
            // Move to the end of the stream so we do not read in existing
            // log text, only watch for new text.
    
            fileStream.Position = fileStream.Length;
    
            StreamReader streamReader;
    
            // Commented lines are for duplicating the log output as it's written to 
            // allow verification via a diff that the contents are the same and all 
            // is being output.
    
            // var fsWrite = new FileStream(@"C:\DuplicateFile.txt", FileMode.Create);
            // var sw = new StreamWriter(fsWrite);
    
            int rowNum = 0;
    
            while (true)
            {
                streamReader = new StreamReader(fileStream);
    
                string line;
                string rowStr;
    
                while (streamReader.Peek() != -1)
                {
                    rowNum++;
    
                    line = streamReader.ReadLine();
                    rowStr = rowNum.ToString();
    
                    string output = String.Format("{0} {1}:\t{2}", rowStr.PadLeft(6, '0'), DateTime.Now.ToLongTimeString(), line);
    
                    Console.WriteLine(output);
    
                    // sw.WriteLine(output);
                }
    
                // sw.Flush();
    
                Thread.Sleep(500);
            }
        }
      }
    }
    
Janus Troelsen
gbro3n
  • This looks really cool and I am definitely going to take a look at it; it'd be great to take this on as an OSS project and create a profiling tool! – Rippo Feb 22 '12 at 16:49
  • I think that's a good idea. I've put an SVN repo on google code. Probably the smallest OS project ever, but this has been very useful so far. I'll probably extend it, be interested to see if anyone else takes it further. http://code.google.com/p/livelogs/ – gbro3n Feb 22 '12 at 21:43
  • OP needs it to work on his Linux machine. It looks like your answer is meant for a Windows machine. Although this answer reflects creativity, it may not be helpful to others. – halfpastfour.am Jul 22 '14 at 09:09

In addition to the previous answers describing how to enable general logging, I had to modify one additional variable in my vanilla MySQL 5.6 installation before any SQL was written to the log:

SET GLOBAL log_output = 'FILE';

The default setting was 'NONE'.

David B

Gibbs MySQL Spyglass

AgilData recently launched the Gibbs MySQL Scalability Advisor (a free self-service tool), which allows users to capture a live stream of queries to be uploaded to Gibbs. Spyglass (which is open source) watches interactions between your MySQL servers and client applications. No reconfiguration or restart of the MySQL database server is needed (for either client or app).

GitHub: AgilData/gibbs-mysql-spyglass

Learn more: Packet Capturing MySQL with Rust

Install command:

curl -s https://raw.githubusercontent.com/AgilData/gibbs-mysql-spyglass/master/install.sh | bash
kenorb
  • Spyglass seems to require an API key from a server which is down, and the last commit was 3 years ago. Not sure this product is still supported/working. – Dan Tenenbaum May 30 '19 at 18:28

If you want monitoring and statistics, there is a good open-source tool: Percona Monitoring and Management.

But it is a server-based system, and it is not trivial to set up.

It also has a live demo system for testing.

PRIHLOP