
Is there any way to output multiple table rows if a certain field in the table is greater than 1?

Here's my example:

I'm building an auction website, where we sell tickets for a raffle. The tickets are stored in a table like so:

id, order_id, product_id, qty, price

When the time comes to print the tickets, I want to dump all of it into a CSV. So far, I'm doing this query (simplifying, omitting INNER JOIN):

SELECT id, order_id, product_id, qty, price FROM order_details

And then running something like the following loop on it:

foreach($rows as $row) {
   for($i = 0; $i < $row['qty']; $i++) {
      $tickets[] = $row;
   }
}

so that I get a separate entry for each unit of qty (so that people get the correct number of entries...).

Is there any way to accomplish this in SQL itself, so that each row is multiplied x times, where x is a certain field in the table (qty in this example)?

John Alt
  • excellent question would like to see the answer. – Johan May 08 '11 at 03:40
  • Are you saying that you want to have a single row be repeated X times in the SQL results where X is the value stored in your qty field? I am also confused by your description of "so that I get a separate entry for each qty". Can you clarify your question so you can get an appropriate response. – Zach Dennis May 08 '11 at 03:44

6 Answers


You can accomplish this purely in MySQL using a blackhole table and a trigger.

Set up tables
First create the blackhole table you're going to insert into and the memory (or temporary) table the blackhole will reroute to.

CREATE TABLE blackhole1 LIKE order_details;
ALTER TABLE blackhole1 ENGINE = BLACKHOLE;

CREATE TABLE temp_order_results LIKE order_details;
ALTER TABLE temp_order_results ENGINE = MEMORY;

Set up trigger
Now create a trigger on the blackhole table that reroutes each insert to the memory table, writing one row per unit of qty.

DELIMITER $$

CREATE TRIGGER ai_blackhole1_each AFTER INSERT ON blackhole1 FOR EACH ROW
BEGIN
  DECLARE ACount INTEGER;
  SET ACount = NEW.qty;
  -- Write one row (with qty = 1) per unit of the original qty.
  WHILE ACount > 0 DO
    INSERT INTO temp_order_results
    VALUES (NEW.id, NEW.order_id, NEW.product_id, 1, NEW.price);
    SET ACount = ACount - 1;
  END WHILE;
END $$

DELIMITER ;

Statements to do the query
Now do an INSERT ... SELECT into the blackhole table.

INSERT INTO blackhole1 
  SELECT id, order_id, product_id, qty, price FROM order_details;

And a select on temp_order_results.

SELECT id, order_id, product_id, qty, price FROM temp_order_results;
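
Two housekeeping notes (assumptions on my part, not part of the answer above): CREATE TABLE ... LIKE also copies indexes, so if order_details declares a primary key on id the duplicated rows will collide unless that key is dropped from the memory table first; and the MEMORY table keeps its rows between runs, so clear it before re-populating. A sketch:

-- Only needed if order_details has a PRIMARY KEY on id; run before the INSERT ... SELECT.
ALTER TABLE temp_order_results DROP PRIMARY KEY;

-- Clear previous results before re-running the export.
TRUNCATE TABLE temp_order_results;

-- Optional teardown once the CSV has been generated.
DROP TRIGGER IF EXISTS ai_blackhole1_each;
DROP TABLE IF EXISTS blackhole1, temp_order_results;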
Johan
  • Nifty, never heard of blackhole tables before, certainly never would have thought of a use for it like this. Very hacky, but it works! – eykanal May 08 '11 at 04:15

To expand on @zdennis' answer, you could do this in MySQL:

    SELECT order_details.*
      FROM order_details
INNER JOIN kilo
           ON kilo.i < order_details.qty;

where the "kilo" relation has the integers 0 - 999, a contrivance adapted from a post by xaprb:

CREATE TABLE deca (i integer not null);
INSERT INTO deca (i) VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9);
CREATE VIEW kilo (i) AS
      SELECT iii.i * 100 + ii.i * 10 + i.i
        FROM deca iii
  CROSS JOIN deca ii
  CROSS JOIN deca i;
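
If each output row should represent a single ticket, a small variant (a sketch, not part of the original answer; the column list is assumed from the question) pins qty to 1 so the CSV gets one entry per ticket:

    SELECT od.id, od.order_id, od.product_id, 1 AS qty, od.price
      FROM order_details od
    INNER JOIN kilo
            ON kilo.i < od.qty;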
pilcrow

There's not really a performance reason to do it in SQL. MySQL has a couple of strong suits: sorting, indexing, searching, storing, and so on. You might as well do this in PHP.

Nick ODell
  • Thanks for your speedy reply. I will do it in PHP then. However, it seems from your language that it IS possible to do (just that there's no gain in it). My interest is piqued: how would you do it? – John Alt May 08 '11 at 03:37
  • @Nick, strongly disagree with you: writing a CSV file and parsing it are not trivial operations and can tax a PHP server quite a bit. If your MySQL server is not taxed, it is much more efficient to do it there. – Johan May 08 '11 at 03:42
  • And how might one go about doing it there? – John Alt May 08 '11 at 03:43
  • I actually don't know the best way to do it. However, repeating an item is not particularly expensive. If you profile your application and notice that repeating is costing you a lot of cpu time, then by all means optimize it. But I seriously doubt it will use more than 1 cent of cpu time. – Nick ODell May 08 '11 at 03:46
  • Actually, I have another reason I want to do it in SQL... I'm using CodeIgniter, and they have a special library to just dump a query into a CSV, so that I don't have to do it manually. Any help is highly appreciated!! – John Alt May 08 '11 at 03:49
  • Here's the deal on scaling - in most web sites, the database server is already the most powerful, the most expensive to upgrade, already the hardest worked, and the hardest to scale out by just adding servers. By contrast, it's relatively easy to just add a load balancer in front of your PHP servers as you scale. Thus, anything that pushes load from your database to your web tier will help you scale. Of course, this only applies to sites big enough to need several servers. Until you reach that point you need to **profile** to know where your bottleneck is. – Joel Coehoorn May 08 '11 at 04:14

I think this might be possible in SQL Server or Oracle by using a recursive common table expression (CTE) that joins the original table to itself and includes Qty-1 as an expression in place of Qty in the select list of the CTE. Sadly, last I heard MySQL doesn't support CTEs yet.

Another option is to build a simple sequence table that just holds a numeric column, with rows starting at 1 and ending at the largest number you'll realistically have in the Qty column of your original table. You can join this to your orders table with a WHERE clause limiting the sequence values to less than or equal to the Qty field, and duplicate the rows this way. To quickly build the sequence table, create a digits table with records for 0 through 9 and cross join it to itself once for each power of 10.
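
A sketch of that idea (the table and column names here are illustrative, and the join assumes the order_details layout from the question):

-- Digits 0-9, cross joined once to cover 1..100.
CREATE TABLE digits (d INT NOT NULL);
INSERT INTO digits (d) VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9);

CREATE TABLE seq (n INT NOT NULL);
INSERT INTO seq (n)
SELECT tens.d * 10 + ones.d + 1
  FROM digits tens
 CROSS JOIN digits ones;

-- One output row per ticket: sequence values 1..qty.
SELECT od.*
  FROM order_details od
  JOIN seq ON seq.n <= od.qty;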

Joel Coehoorn

The appropriate approach is likely Oracle's DUAL CONNECT BY LEVEL trick. See this question for related information: How can I return multiple identical rows based on a quantity field in the row itself?
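
A sketch of what that can look like in Oracle (not taken from the linked question; the column names are assumed from this question's order_details table, and 100 is an assumed upper bound on qty):

-- Generate rows 1..100 from dual, then join so each order row repeats qty times.
SELECT od.id, od.order_id, od.product_id, 1 AS qty, od.price
  FROM order_details od
  JOIN (SELECT LEVEL AS n FROM dual CONNECT BY LEVEL <= 100) seq
    ON seq.n <= od.qty;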

Although this doesn't work in MySQL, see: How do I make a row generator in MySQL?

If you're using MySQL you'll need to be content with doing it in PHP or doing something gross (like the trigger that Johan posted). I'd vote to simply do it in PHP if that was the case.

Zach Dennis

I needed to do the same thing in order to avoid a cursor. My solution is for SQL Server and is really simple because, in my case, qty is never greater than 99. Here is a sample using temporary tables:

create table #t (
     id int
    ,qty int
)

insert into #t values (1,2)
insert into #t values (2,3)

create table #n (
     id int
)

insert into #n values (1)
insert into #n values (2)
insert into #n values (3)
insert into #n values (4)
insert into #n values (5)

select t.*
from #t t
inner join #n n on
    n.id <= t.qty

You just need to populate #n up to the maximum qty you expect (in my case, 99).
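
If hand-writing those inserts gets tedious, here is a sketch for filling #n with 1 through 99 in one statement (SQL Server 2008+ syntax, reusing the #n table from above):

-- Build 0..99 from two digit lists, keep 1..99.
INSERT INTO #n (id)
SELECT tens.d * 10 + ones.d
  FROM (VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)) AS tens(d)
 CROSS JOIN (VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)) AS ones(d)
 WHERE tens.d * 10 + ones.d BETWEEN 1 AND 99;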

Auresco82