446

I know that you can insert multiple rows at once, is there a way to update multiple rows at once (as in, in one query) in MySQL?

Edit: For example I have the following

Name   id  Col1  Col2
Row1   1    6     1
Row2   2    2     3
Row3   3    9     5
Row4   4    16    8

I want to combine all the following Updates into one query

UPDATE table SET Col1 = 1 WHERE id = 1;
UPDATE table SET Col1 = 2 WHERE id = 2;
UPDATE table SET Col2 = 3 WHERE id = 3;
UPDATE table SET Col1 = 10 WHERE id = 4;
UPDATE table SET Col2 = 12 WHERE id = 4;
Taryn
Teifion

19 Answers

721

Yes, that's possible - you can use INSERT ... ON DUPLICATE KEY UPDATE.

Using your example:

INSERT INTO table (id,Col1,Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12)
ON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2);
Michiel de Mare
  • 34
    If there are no duplicates then I don't want that row to be inserted. What should I do? I am fetching information from another site which maintains tables with ids, and I am inserting values with respect to those ids. If the site has new records then I will end up inserting only the ids and count, without all the other information. If and only if there is an entry for the id should it update; otherwise it should skip. What shall I do? – Jayapal Chandran Aug 30 '10 at 13:30
  • so if it is like insert into skip null or skip empty on duplicate update ... then that would be nice. – Jayapal Chandran Aug 30 '10 at 13:34
  • 39
    Note: this answer also assumes ID is the primary key – JM4 Dec 12 '12 at 18:01
  • 1
    So you have to select values first? Doesn't sound like an optimized update. Second answer seems to be more robust. – strannik Sep 04 '13 at 12:59
  • 15
    @JayapalChandran you should use INSERT IGNORE together with ON DUPLICATE KEY UPDATE. http://dev.mysql.com/doc/refman/5.5/en/insert.html – Haralan Dobrev Sep 16 '13 at 14:15
  • 23
    @HaralanDobrev Using INSERT IGNORE still inserts the non duplicated records. which Jayapal wanted to avoid. INSERT IGNORE just turns any errors into warning :( http://stackoverflow.com/questions/548541/insert-ignore-vs-insert-on-duplicate-key-update/548570#comment361469_548575 – Takehiro Adachi Oct 24 '13 at 17:32
  • 1
    @JM4: this answer assumes ID is the primary key OR a unique key (but, of course, most probably the primary key) – Walter Tross Feb 07 '14 at 22:32
  • 2
    This answer assumes the ID is a unique key (can be primary as others said) but more importantly it assumes there are no other unique keys. If there are any, it can throw a spanner in the works. – Steve Horvath Dec 03 '14 at 05:43
  • 3
    You should also watch your auto-increment (if one exists): it increases when you do an INSERT ... ON DUPLICATE KEY UPDATE even if no new record was inserted! If you do many updates this way, your auto-increment will overflow very soon! – user2602807 Mar 05 '15 at 11:51
  • My solution to the update-only-but-not-insert-new-records problem is to have a field in the insert part that is absent in the update part, and after I delete all records that have values in that field. example: insert into tbl (id, title, workaround) values ('realId','realTitle','This row must die!') on duplicate key update id=values(id), title=values(title); delete from tbl where workaround = 'This row must die!' – santiago arizti Aug 08 '16 at 19:13
  • @user2602807 Just tested and this does not seem to be the case – Dakusan Oct 03 '16 at 11:06
  • 2
    Another caveat is that any additional columns declared `NOT NULL` without a default value will have to be included in the insert. They can be ignored in the update though, so you can use dummy values. – shmosel Jan 12 '17 at 10:24
  • I am getting a '1064 syntax error'. Is the format for `DUPLICATE KEY UPDATE Col1=VALUES(Col1)` colName1=VALUES(colName1)? i.e. where ` would surround colName1 – ballade4op52 Jan 27 '18 at 20:35
  • Note that I've also tried without ` surrounding colName1... presently testing to verify it isn't a particular row's data causing the error – ballade4op52 Jan 28 '18 at 17:37
  • I posted a question on the issue: https://stackoverflow.com/questions/48490630/what-is-syntactically-wrong-with-this-on-duplicate-key-update-query – ballade4op52 Jan 28 '18 at 20:58
  • That's not perfect solution. Here is much better https://stackoverflow.com/questions/25674737/mysql-update-multiple-rows-with-different-values-in-one-query/25674827 – Serhii Smirnov Jul 03 '19 at 09:50
  • user2602807 @Dakusan you are both right, depending on 2 things: 1. myisam does not increase the auto increment. 2. innodb increases by default, you can prevent it by changing the auto increment mode to classic. – Ömer An May 15 '20 at 09:03
  • @Ömer An - Yes. I'd updated my answer below with that info a while ago. Unfortunately, stack overflow will not let me edit my above comment. – Dakusan May 16 '20 at 12:05
148

Since you have dynamic values, you need to use an IF or CASE for the columns to be updated. It gets kinda ugly, but it should work.

Using your example, you could do it like:

UPDATE table SET Col1 = CASE id 
                          WHEN 1 THEN 1 
                          WHEN 2 THEN 2 
                          WHEN 4 THEN 10 
                          ELSE Col1 
                        END, 
                 Col2 = CASE id 
                          WHEN 3 THEN 3 
                          WHEN 4 THEN 12 
                          ELSE Col2 
                        END
             WHERE id IN (1, 2, 3, 4);
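As a runnable sanity check of this CASE pattern, here is a sketch using Python's bundled sqlite3 module (the table name `t` is made up; the data and target values come from the question, and standard SQL CASE behaves the same way in MySQL):

```python
import sqlite3

# In-memory table seeded with the question's rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, Col1 INTEGER, Col2 INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?,?)",
                [(1, 6, 1), (2, 2, 3), (3, 9, 5), (4, 16, 8)])

# One UPDATE touching several rows; the ELSE branch leaves
# columns that have no new value untouched.
con.execute("""
    UPDATE t
    SET Col1 = CASE id WHEN 1 THEN 1 WHEN 2 THEN 2 WHEN 4 THEN 10 ELSE Col1 END,
        Col2 = CASE id WHEN 3 THEN 3 WHEN 4 THEN 12 ELSE Col2 END
    WHERE id IN (1, 2, 3, 4)
""")
print(con.execute("SELECT id, Col1, Col2 FROM t ORDER BY id").fetchall())
# [(1, 1, 1), (2, 2, 3), (3, 9, 3), (4, 10, 12)]
```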
Harrison Fisk
  • maybe not so pretty to write for dynamic updating but interesting look at the functionality of casing... – me_ Feb 07 '17 at 21:55
  • 1
    @user2536953 , it can be nice for dynamic updating too. For example, I used that solution in loop in php: `$commandTxt = 'UPDATE operations SET chunk_finished = CASE id '; foreach ($blockOperationChecked as $operationID => $operationChecked) $commandTxt .= " WHEN $operationID THEN $operationChecked "; $commandTxt .= 'ELSE id END WHERE id IN ('.implode(', ', array_keys(blockOperationChecked )).');';` – Boolean_Type Feb 15 '17 at 08:43
97

The question is old, yet I'd like to extend the topic with another answer.

My point is, the easiest way to achieve this is just to wrap multiple queries in a transaction. The accepted answer's INSERT ... ON DUPLICATE KEY UPDATE is a nice hack, but one should be aware of its drawbacks and limitations:

  • As noted, if you run the query with rows whose primary keys don't exist in the table, the query inserts new "half-baked" records, which is probably not what you want.
  • If you have a table with a NOT NULL field that has no default value and don't want to touch that field in the query, you'll get a "Field 'fieldname' doesn't have a default value" MySQL warning even if you don't insert a single row. This will get you into trouble if you decide to be strict and turn MySQL warnings into runtime exceptions in your app.

I made some performance tests for three of suggested variants, including the INSERT ... ON DUPLICATE KEY UPDATE variant, a variant with "case / when / then" clause and a naive approach with transaction. You may get the python code and results here. The overall conclusion is that the variant with case statement turns out to be twice as fast as two other variants, but it's quite hard to write correct and injection-safe code for it, so I personally stick to the simplest approach: using transactions.

Edit: Dakusan's findings show that my performance estimates are not quite valid. Please see this answer for other, more elaborate research.
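A minimal sketch of the transactional approach, illustrated with Python's sqlite3 so it is self-contained (the table `t` and the updates are the question's example; with MySQL you would issue START TRANSACTION ... COMMIT, or use your driver's transaction API):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, Col1 INTEGER, Col2 INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?,?)",
                [(1, 6, 1), (2, 2, 3), (3, 9, 5), (4, 16, 8)])

# The individual updates from the question: (column, new value, row id).
updates = [("Col1", 1, 1), ("Col1", 2, 2), ("Col2", 3, 3),
           ("Col1", 10, 4), ("Col2", 12, 4)]

with con:  # opens a transaction; commits on success, rolls back on error
    for col, value, row_id in updates:
        # Column names can't be bound parameters; here they come from a
        # hard-coded list above, never from user input.
        con.execute(f"UPDATE t SET {col} = ? WHERE id = ?", (value, row_id))

print(con.execute("SELECT id, Col1, Col2 FROM t ORDER BY id").fetchall())
# [(1, 1, 1), (2, 2, 3), (3, 9, 3), (4, 10, 12)]
```

All rows either update together or not at all, and nothing is inserted for unknown ids.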

Roman Imankulov
87

Not sure why another useful option is not yet mentioned:

UPDATE my_table m
JOIN (
    SELECT 1 as id, 10 as _col1, 20 as _col2
    UNION ALL
    SELECT 2, 5, 10
    UNION ALL
    SELECT 3, 15, 30
) vals ON m.id = vals.id
SET col1 = _col1, col2 = _col2;
leemes
newtover
  • 6
    This is the best. Especially if you are pulling the values for update from another SQL query as i was doing. – v010dya Dec 30 '14 at 08:55
  • 1
    This was great for an update on a table with a massive amount of columns. I will probably be using this query a lot in the future. Thanks! – Casper Wilkes Jan 09 '15 at 20:18
  • I have tried this type of query. But when the records reached the 30k boundary, the server stopped. Is there any other solution? – Bhavin Chauhan Sep 27 '16 at 06:27
  • This looks great. I will try combining this with a WHERE clause where the primary keys are not updated, but used to identify the columns to change. – nl-x Mar 02 '20 at 13:30
  • @BhavinChauhan Have you tried using a temporary table instead of the join-select to circumvent the issue? – nl-x Mar 02 '20 at 13:32
  • I discovered the same approach when I was trying to avoid [explicit] temporary tables. @BhavinChauhan, you should always avoid making query strings too long. Batch them instead. In my (limited) measurements, 2500 rows was the ideal batch size. – Timo May 26 '20 at 09:48
54

All of the following applies to InnoDB.

I feel knowing the speeds of the 3 different methods is important.

There are 3 methods:

  1. INSERT: INSERT with ON DUPLICATE KEY UPDATE
  2. TRANSACTION: Where you do an update for each record within a transaction
  3. CASE: In which you use a CASE/WHEN for each different record within an UPDATE

I just tested this, and the INSERT method was 6.7x faster for me than the TRANSACTION method. I tried on a set of both 3,000 and 30,000 rows.

The TRANSACTION method still has to run each query individually, which takes time, though it batches the results in memory, or something, while executing. The TRANSACTION method is also pretty expensive in both the replication and query logs.

Even worse, the CASE method was 41.1x slower than the INSERT method w/ 30,000 records (6.1x slower than TRANSACTION). And 75x slower in MyISAM. INSERT and CASE methods broke even at ~1,000 records. Even at 100 records, the CASE method is BARELY faster.

So in general, I feel the INSERT method is both best and easiest to use. The queries are smaller and easier to read, and it all happens in a single query. This applies to both InnoDB and MyISAM.

Bonus stuff:

The solution for the INSERT non-default-field problem is to temporarily turn off the relevant SQL modes: SET SESSION sql_mode=REPLACE(REPLACE(@@SESSION.sql_mode,"STRICT_TRANS_TABLES",""),"STRICT_ALL_TABLES",""). Make sure to save the sql_mode first if you plan on reverting it.

As for other comments I've seen that say the auto_increment goes up using the INSERT method, this does seem to be the case in InnoDB, but not MyISAM.

Code to run the tests is below. It also outputs .SQL files, to remove PHP interpreter overhead:

<?php
//Variables
$NumRows=30000;

//These 2 functions need to be filled in
function InitSQL()
{

}
function RunSQLQuery($Q)
{

}

//Run the 3 tests
InitSQL();
for($i=0;$i<3;$i++)
    RunTest($i, $NumRows);

function RunTest($TestNum, $NumRows)
{
    $TheQueries=Array();
    $DoQuery=function($Query) use (&$TheQueries)
    {
        RunSQLQuery($Query);
        $TheQueries[]=$Query;
    };

    $TableName='Test';
    $DoQuery('DROP TABLE IF EXISTS '.$TableName);
    $DoQuery('CREATE TABLE '.$TableName.' (i1 int NOT NULL AUTO_INCREMENT, i2 int NOT NULL, primary key (i1)) ENGINE=InnoDB');
    $DoQuery('INSERT INTO '.$TableName.' (i2) VALUES ('.implode('), (', range(2, $NumRows+1)).')');

    if($TestNum==0)
    {
        $TestName='Transaction';
        $Start=microtime(true);
        $DoQuery('START TRANSACTION');
        for($i=1;$i<=$NumRows;$i++)
            $DoQuery('UPDATE '.$TableName.' SET i2='.(($i+5)*1000).' WHERE i1='.$i);
        $DoQuery('COMMIT');
    }
    
    if($TestNum==1)
    {
        $TestName='Insert';
        $Query=Array();
        for($i=1;$i<=$NumRows;$i++)
            $Query[]=sprintf("(%d,%d)", $i, (($i+5)*1000));
        $Start=microtime(true);
        $DoQuery('INSERT INTO '.$TableName.' VALUES '.implode(', ', $Query).' ON DUPLICATE KEY UPDATE i2=VALUES(i2)');
    }
    
    if($TestNum==2)
    {
        $TestName='Case';
        $Query=Array();
        for($i=1;$i<=$NumRows;$i++)
            $Query[]=sprintf('WHEN %d THEN %d', $i, (($i+5)*1000));
        $Start=microtime(true);
        $DoQuery("UPDATE $TableName SET i2=CASE i1\n".implode("\n", $Query)."\nEND\nWHERE i1 IN (".implode(',', range(1, $NumRows)).')');
    }
    
    print "$TestName: ".(microtime(true)-$Start)."<br>\n";

    file_put_contents("./$TestName.sql", implode(";\n", $TheQueries).';');
}
miken32
Dakusan
  • 5
    You're doing the LORD's work here ;) Much appreciated. – chili Jan 29 '18 at 12:14
  • Testing some performance between GoLang and PHP, using 40k rows on MariaDB, I was getting 2 secs on PHP and more than 6 secs on golang .... Well, I always been told that GoLang would run faster than PHP !!! SO, I start to wonder how to improve the performance ... Using the INSERT ... ON DUPLICATE KEY UPDATE ... I got 0.74 secs on Golang and 0.86 secs on PHP !!!! – Diego Favero Apr 02 '19 at 18:13
  • 2
    The point of my code is to limit the timing results to strictly the SQL statements, and not the code for the language or libraries. GoLang and PHP are 2 completely separate languages meant for completely different things. PHP is meant for a single run scripting environment on a single thread with mostly limited and passive garbage collection. GoLang is meant for long running compiled applications with aggressive garbage collection and multithreading as one of the primary language features. They could barely be more different in terms of language functionality and reason. [Continued] – Dakusan Apr 03 '19 at 03:35
  • So when running your tests, make sure to limit the speed measurements to strictly the "Query" function calls for the SQL statement. Comparing and optimizing the other parts of the source code that are not strictly the query call is like comparing apples and oranges. If you limit your results to this (having the strings precompiled and ready to go) then the results should be very similar. Any differences at that point are the fault of the language's SQL library, and not necessarily the language itself. In my opinion, the INSERT ON DUPLICATE solution was and will always be the best option.[Cont] – Dakusan Apr 03 '19 at 03:41
  • As for your comment on GoLang being faster, that is an incredibly broad statement that does not take into account any of the multitude of caveats or nuances of these languages and their designs. Java is a interpreted language, but I found out 15 years ago it actually can almost match (and maybe even sometimes beat) C in speed in certain scenarios. And C is a compiled language, and the most common of the lowest level system languages, besides assembler. I really love what GoLang is doing and it definitely has the power and fluidity to become one of the most common systems and optimized [Cont] – Dakusan Apr 03 '19 at 03:46
  • application languages. It is compiled with much much faster compile times when compared to C/C++ . It has strict syntax, which helps remove code formatting wars (I personally don't like that though). Its garbage collection is revolutionary, with common GC runs that take MICROSECONDS. And the ease of its IPC channel format is almost unparalleled. It is rather revolutionary, for what it is built for. For an example, running an empty for-loop and comparing speeds between languages doesn't really tell you much. OK, I've already typed way too much and am not gonna let this run into another comment. – Dakusan Apr 03 '19 at 03:50
10
UPDATE table1, table2 SET table1.col1='value', table2.col1='value' WHERE table1.col3='567' AND table2.col6='567'

This should work for ya.

There is a reference in the MySQL manual for multiple tables.

Bill the Lizard
UnkwnTech
9

Use a temporary table

// Reorder items
function update_items_tempdb(&$items)
{
    shuffle($items);
    $table_name = uniqid('tmp_test_');
    $sql = "CREATE TEMPORARY TABLE `$table_name` ("
        ."  `id` int(10) unsigned NOT NULL AUTO_INCREMENT"
        .", `position` int(10) unsigned NOT NULL"
        .", PRIMARY KEY (`id`)"
        .") ENGINE = MEMORY";
    query($sql);
    $i = 0;
    $sql = '';
    foreach ($items as &$item)
    {
        $item->position = $i++;
        $sql .= ($sql ? ', ' : '')."({$item->id}, {$item->position})";
    }
    if ($sql)
    {
        query("INSERT INTO `$table_name` (id, position) VALUES $sql");
        $sql = "UPDATE `test`, `$table_name` SET `test`.position = `$table_name`.position"
            ." WHERE `$table_name`.id = `test`.id";
        query($sql);
    }
    query("DROP TABLE `$table_name`");
}
miken32
Laymain
6

Why does no one mention multiple statements in one query?

In PHP, you use the multi_query method of a mysqli instance.

From the php manual

MySQL optionally allows having multiple statements in one statement string. Sending multiple statements at once reduces client-server round trips but requires special handling.

Here is the result compared to the other 3 methods, updating 30,000 rows. The code can be found here, based on the answer from @Dakusan:

Transaction: 5.5194580554962
Insert: 0.20669293403625
Case: 16.474853992462
Multi: 0.0412278175354

As you can see, the multiple-statement query is more efficient than the highest-voted answer.

If you get an error message like this:

PHP Warning:  Error while sending SET_OPTION packet

You may need to increase max_allowed_packet in the MySQL config file, which on my machine is /etc/mysql/my.cnf, and then restart mysqld.
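PHP's mysqli::multi_query needs a live MySQL server, so as a rough, runnable analogue of the same pattern, Python's sqlite3 executescript also submits several statements in one call (the table `t` and its data are made up for this demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, Col1 INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?)", [(1, 6), (2, 2), (4, 16)])

# Several UPDATE statements sent to the engine in a single call,
# analogous to batching them into one multi_query string.
con.executescript("""
    UPDATE t SET Col1 = 1  WHERE id = 1;
    UPDATE t SET Col1 = 2  WHERE id = 2;
    UPDATE t SET Col1 = 10 WHERE id = 4;
""")
print(con.execute("SELECT id, Col1 FROM t ORDER BY id").fetchall())
# [(1, 1), (2, 2), (4, 10)]
```

Note that, as the comments below point out, the statements are still processed as separate queries server-side; the saving is in round trips.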

mononoke
  • All below comparisons are ran against the INSERT test. I just ran the test in the same conditions and, without transactions, it was *145x* slower on 300 rows and *753x* slower for 3000 rows. I had originally started with the 30,000 rows, but I went to make myself lunch and came back and it was still going. This makes sense as running individual queries and flushing each to the database individually would be ridiculously expensive. Especially with replication. Turning on transactions though makes a big difference. At 3,000 rows it took *1.5x* more and at 30,000 rows *2.34x*. [continued] – Dakusan Nov 26 '17 at 22:04
  • But you were right about it being fast (with transactions). At both 3,000 and 30,000 rows it was faster than everything but the INSERT method. There is absolutely no way you are going to get better results from running 1 query than 30,000 queries, even if they are batched in a special MySQL API call. Running only 300 rows, it was MUCH faster than all other methods (to my surprise), which follows about the same graph curve as the CASE method. This being faster can be explained by 2 ways. The first being that the INSERT method essentially always inserts 2 rows due to the "ON DUPLICATE KEY [cont] – Dakusan Nov 26 '17 at 22:18
  • UPDATE" causing both an "INSERT" And "UPDATE". The other being that it's less work in the SQL processor for only editing 1 row at a time due to index lookups. I'm not sure how you got different results than I, but your additional test looks solid. I'm actually not even sure how replication would handle this call. This would also only work for doing UPDATE calls. Insert calls will ALWAYS be fastest with the single INSERT query. – Dakusan Nov 26 '17 at 22:21
  • I was doing 300 UPDATEs at a time on a table to revise an error within a for loop which took 41 seconds. Putting the same UPDATE queries into one `$mysqli->multi_query($sql)` took "0" seconds. However, subsequent queries failed, causing me to make this a separate "program". – Krista K Jan 11 '18 at 18:30
  • Thanks. Was able to update about 5k rows (didn't test more) in a minute using multi queries. If someone is looking for a PDO solution: https://stackoverflow.com/questions/6346674/pdo-support-for-multiple-queries-pdo-mysql-pdo-mysqlnd – Scofield Feb 20 '19 at 19:20
3

You can alias the same table to give you the ids you want to update by (if you are doing a row-by-row update):

UPDATE table1 tab1, table1 tab2 -- alias references the same table
SET 
col1 = 1
,col2 = 2
. . . 
WHERE 
tab1.id = tab2.id;

Additionally, it should be obvious that you can also update from other tables. In this case, the update doubles as a "SELECT" statement, giving you the data from the table you are specifying. You explicitly state the update values in your query, so the second table is unaffected.

eggmatters
3

There is a setting you can alter called 'multi statement' that disables MySQL's 'safety mechanism', implemented to prevent injection via more than one command. Typical of MySQL's 'brilliant' implementation, it also prevents the user from doing efficient queries.

Here (http://dev.mysql.com/doc/refman/5.1/en/mysql-set-server-option.html) is some info on the C implementation of the setting.

If you're using PHP, you can use mysqli to do multi statements (I think php has shipped with mysqli for a while now)

$con = new mysqli('localhost','user1','password','my_database');
$query = "Update MyTable SET col1='some value' WHERE id=1 LIMIT 1;";
$query .= "UPDATE MyTable SET col1='other value' WHERE id=2 LIMIT 1;";
//etc
$con->multi_query($query);
$con->close();

Hope that helps.

Brooks
  • 6
    This is the same as sending the queries separately. The only difference is that you send it all in one network packet, but the UPDATEs will be still processed as separate queries. Better is to wrap them in one transaction, then the changes will be commited to the table at once. – Marki555 Apr 12 '14 at 15:01
  • 3
    How to wrap them in one transaction? Show us, please. – TomeeNS Sep 22 '14 at 18:20
  • @TomeeNS Use `mysqli::begin_transaction(..)` before sending the query and `mysql::commit(..)` after. Or use `START TRANSACTION` as first and `COMMIT` as last statement in the query itself. – Juha Palomäki Nov 15 '16 at 23:55
3

No-one has yet mentioned what would, for me, be a much easier way to do this: use a SQL editor that allows you to execute multiple individual queries. This screenshot is from Sequel Ace; I'd assume that Sequel Pro and probably other editors have similar functionality. (This of course assumes you only need to run this as a one-off rather than as an integrated part of your app/site.)

A screenshot showing the ability to Run All Queries in Sequel Ace's Query Editor

James Smith
2

You may also be interested in using joins on updates, which is possible as well.

UPDATE someTable s INNER JOIN anotherTable a ON s.id = a.id SET s.someValue = 4 WHERE a.id = 4
-- Only updates someValue in someTable rows that have a foreign key into anotherTable with a value of 4.

Edit: If the values you are updating aren't coming from somewhere else in the database, you'll need to issue multiple update queries.

Shawn
1

And now the easy way

update my_table m, -- let create a temp table with populated values
    (select 1 as id, 20 as value union -- this part will be generated
     select 2 as id, 30 as value union -- using a backend code
     -- for loop 
     select N as id, X as value
        ) t
set m.value = t.value where t.id=m.id -- now update by join - quick
Stan Sokolov
0

Yes, it is possible using an INSERT ... ON DUPLICATE KEY UPDATE statement. Syntax:

INSERT INTO table_name (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE a=VALUES(a), b=VALUES(b), c=VALUES(c);

sara191186
0

use

REPLACE INTO `table` (`id`,`col1`,`col2`) VALUES
(1,6,1),(2,2,3),(3,9,5),(4,16,8);

Please note:

  • id has to be a primary key or a unique key
  • if you use foreign keys to reference the table, REPLACE deletes then inserts, so this might cause an error
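That delete-then-insert behavior is easy to demonstrate; sqlite's REPLACE works the same way, so here is a runnable sketch (a hypothetical table with an extra `note` column that the REPLACE does not supply):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, Col1 INTEGER, note TEXT)")
con.execute("INSERT INTO t VALUES (1, 6, 'keep me?')")

# REPLACE deletes the conflicting row, then inserts a fresh one,
# so any column you don't supply is reset to its default (here NULL).
con.execute("REPLACE INTO t (id, Col1) VALUES (1, 99)")
print(con.execute("SELECT id, Col1, note FROM t").fetchall())
# [(1, 99, None)]  -- 'note' was wiped: the old row was deleted, not updated
```

This is also why ON DELETE triggers and cascading foreign keys fire on a REPLACE.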
Justin Levene
0

I took the answer from @newtover and extended it using the new json_table function in MySql 8. This allows you to create a stored procedure to handle the workload rather than building your own SQL text in code:

drop table if exists `test`;
create table `test` (
  `Id` int,
  `Number` int,
  PRIMARY KEY (`Id`)
);
insert into test (Id, Number) values (1, 1), (2, 2);

DROP procedure IF EXISTS `Test`;
DELIMITER $$
CREATE PROCEDURE `Test`(
    p_json json
)
BEGIN
    update test s
        join json_table(p_json, '$[*]' columns(`id` int path '$.id', `number` int path '$.number')) v 
        on s.Id=v.id set s.Number=v.number;
END$$
DELIMITER ;

call `Test`('[{"id": 1, "number": 10}, {"id": 2, "number": 20}]');
select * from test;

drop table if exists `test`;

It's a few ms slower than pure SQL, but I'm happy to take the hit rather than generate the SQL text in code. I'm not sure how performant it is with huge recordsets (the JSON object has a max size of 1 GB), but I use it all the time when updating 10k rows at a time.

Liam
-2

The following will update all rows in one table

Update Table Set
Column1 = 'New Value'

The next one will update all rows where the value of Column2 is more than 5

Update Table Set
Column1 = 'New Value'
Where
Column2 > 5

Here is Unkwntech's example of updating more than one table:

UPDATE table1, table2 SET
table1.col1 = 'value',
table2.col1 = 'value'
WHERE
table1.col3 = '567'
AND table2.col6='567'
GateKiller
-4
UPDATE tableName SET col1='000' WHERE id='3' OR id='5'

This should achieve what you're looking for. Just add more ids. I have tested it.

Bill the Lizard
UnkwnTech
-6
UPDATE `your_table` SET 

`something` = IF(`id`="1","new_value1",`something`), `smth2` = IF(`id`="1", "nv1",`smth2`),
`something` = IF(`id`="2","new_value2",`something`), `smth2` = IF(`id`="2", "nv2",`smth2`),
`something` = IF(`id`="4","new_value3",`something`), `smth2` = IF(`id`="4", "nv3",`smth2`),
`something` = IF(`id`="6","new_value4",`something`), `smth2` = IF(`id`="6", "nv4",`smth2`),
`something` = IF(`id`="3","new_value5",`something`), `smth2` = IF(`id`="3", "nv5",`smth2`),
`something` = IF(`id`="5","new_value6",`something`), `smth2` = IF(`id`="5", "nv6",`smth2`) 

// You just build it in PHP like this:

$q = 'UPDATE `your_table` SET ';

foreach($data as $dat){

  $q .= '

       `something` = IF(`id`="'.$dat->id.'","'.$dat->value.'",`something`), 
       `smth2` = IF(`id`="'.$dat->id.'", "'.$dat->value2.'",`smth2`),';

}

$q = substr($q,0,-1);

So you can update the whole table with one query.

Satish Sharma
  • I didn't downvote, but i think the objection is to doing the set, when it's not needed (and you are still doing it, when you are setting `something` to `something`) – v010dya Dec 30 '14 at 09:04