This is more of a theory question.
If I'm running 50,000 queries that insert new rows, and 50,000 queries that update those rows, which will take less time?
Insert would be faster because with update you need to first search for the record that you are going to update and then perform the update.
Though this hardly seems like a valid comparison, as you never really have a choice between inserting and updating — the two operations fill completely different needs.
EDIT: I should add too that this is with the assumption that there are no insert triggers or other situations that could cause potential bottlenecks.
Insert operation: Create -> Store
Update operation: Retrieve -> Modify -> Store
The insert operation is faster because it skips the retrieve step.
With inserts into the same table, you can often insert many rows with a single query, which is much faster than inserting them one by one. With updates you can sometimes update several rows at a time, but that doesn't apply to every situation; often you have to run one update query per row (e.g. when updating by a specific id), and on a big table, finding the row and then updating it each time is slow. In my experience it remains slower even when the table is indexed.
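To make the contrast concrete, here is a small sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration): the inserts go in as one multi-row INSERT statement, while the updates run one statement per id.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# One multi-row INSERT: all 100 rows land in a single statement.
rows = [(i, f"user{i}") for i in range(100)]
placeholders = ",".join(["(?, ?)"] * len(rows))
flat_params = [value for row in rows for value in row]
conn.execute(f"INSERT INTO users (id, name) VALUES {placeholders}", flat_params)

# Updates targeting specific ids typically need one statement per row.
for i in range(100):
    conn.execute("UPDATE users SET name = ? WHERE id = ?", (f"renamed{i}", i))

conn.commit()
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])
```

The same single-statement-vs-per-row shape applies in MySQL and other engines, though syntax limits (e.g. maximum bound parameters) vary.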
As an aside here, don't forget that by doing loads more inserts than updates, you have more rows when you come to select, so you'll slow down the read operation.
So the real question then becomes: what do you care about more, a quick insert or a speedy read? Again, this is dependent on certain factors — particularly (and not yet mentioned) the storage engine, such as InnoDB (which, incidentally, is now the default in MySQL).
I agree with everyone else though - there's too much to consider on a case-by-case basis and therefore you really need to run your own tests and assess the situation from there based on your needs.
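In the spirit of running your own tests, a minimal timing harness might look like the following (a sketch using Python's sqlite3 as a stand-in; your real engine, schema, and row counts will behave differently, so treat the numbers as illustrative only):

```python
import sqlite3
import time

def bench(n=50_000):
    """Time n single-row inserts, then n single-row updates, on one table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO t (id, val) VALUES (?, ?)", (i, "a"))
    conn.commit()
    insert_seconds = time.perf_counter() - start

    start = time.perf_counter()
    for i in range(n):
        conn.execute("UPDATE t SET val = ? WHERE id = ?", ("b", i))
    conn.commit()
    update_seconds = time.perf_counter() - start

    return insert_seconds, update_seconds

ins, upd = bench()
print(f"inserts: {ins:.3f}s, updates: {upd:.3f}s")
```

Which side wins depends on indexes, caching, and the engine — which is exactly why measuring your own workload beats theorizing.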
There's a lot of non-practical answers here. Yes, theoretically updates are slower because they have to do the extra step of looking up the row. But this is not at all the full picture if you're working with a database made after 1992.
Short answer: they're the same speed. (Don't pick one operation over the other for the sake of speed, just pick the right operation).
Long answer: When updating, you're writing to memory pages and marking them as dirty. Any modern database will detect this and keep these pages in cache longer (as opposed to a normal select statement, which doesn't set this flag). The cache is also smart enough to hold on to pages that are accessed frequently (see LRU-K). So subsequent updates to the same rows will be pretty much instant, no lookups needed. This assumes you're updating based on indexed columns such as IDs (more on that in a second).
Compare this to a rapid stream of inserts: new pages need to be created and loaded into the cache. Sure, you can put multiple new rows on the same page, but as you continue to insert, each page fills up and is evicted, never to be used again — so you don't get the benefit of re-using pages in the cache. (As a note, loading a missing page into the cache is known as a "page fault", which is the #1 slowdown in database performance in most environments; MongoDB's documentation is fond of making this point.)
If you're updating based on a column that isn't indexed: yes, that is WAY slower than inserting. This should be made infrequent in any app. But mind you, if you DO have indexes on a table, they will speed up your updates but also slow down your inserts, because newly inserted rows have to write new index entries as well (whereas updates re-use existing index entries instead of generating new ones). The MySQL documentation covers how it handles this in more detail.
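You can see the indexed-vs-unindexed lookup difference directly by asking the engine for its query plan. A sketch with SQLite (the table and index names here are invented): before the index exists, the UPDATE's WHERE clause forces a full scan; after, it becomes an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, "x") for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); join the detail text.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Without an index on id, the update must scan the whole table.
before = plan("UPDATE t SET val = 'y' WHERE id = 42")

conn.execute("CREATE INDEX idx_t_id ON t (id)")

# With the index, the row is located by an index search instead.
after = plan("UPDATE t SET val = 'y' WHERE id = 42")

print(before)  # e.g. SCAN t
print(after)   # e.g. SEARCH t USING INDEX idx_t_id (id=?)
```

Most engines offer an equivalent (MySQL's `EXPLAIN`, PostgreSQL's `EXPLAIN ANALYZE`), and checking the plan is usually more informative than guessing.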
Finally, multi-threaded/multi-processing environments can also turn this idea on its head — which I'm not going to get into; that's a whole 'nother can of worms. You can research your type of database and storage engine, and gauge your app's use of a concurrent environment... or you can ignore all that and just use the most intuitive operation.