When working with regular SQL databases, indexes are useful for fetching a few rows, but not so useful when you are fetching a large amount of data from a table. For example, imagine you have a table with stock valuations of 10 stocks over time:
+------+--------+-------+
| time | stock  | value |
+------+--------+-------+
| ...  | stock1 | ...   |
| ...  | stock2 | ...   |
| ...  | ...    | ...   |
+------+--------+-------+
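For concreteness, this is roughly the Postgres schema I have in mind (table and column names are just placeholders):

CREATE TABLE stock_values (
    time  timestamptz NOT NULL,
    stock text        NOT NULL,  -- could just as well be an enum/int/foreign key
    value numeric     NOT NULL
);

-- the index whose usefulness I'm questioning below
CREATE INDEX stock_values_stock_idx ON stock_values (stock);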
As far as I can tell, indexing by stock (even as an enum/int/foreign key) is usually not very useful in a database like Postgres when you want data over a long time range. The index ends up covering a large fraction of the table, and at that point it is faster for the database to do a sequential scan instead; for example, to get the average value of each stock over the whole dataset:
SELECT stock, avg(value) FROM stock_values GROUP BY stock
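My (possibly naive) way of checking this was to ask the planner directly; with the schema sketched above, I'd expect:

EXPLAIN ANALYZE
SELECT stock, avg(value) FROM stock_values GROUP BY stock;

to report a Seq Scan rather than an Index Scan on stock_values_stock_idx once the table gets large.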
Given that QuestDB is column-oriented, I would guess that having a separate column for each stock would perform better.
So, what schema is recommended in QuestDB for a situation like this? One column per stock, or would a single symbol column holding the stock symbol be just as good (or good enough), even with millions of rows per symbol?
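To make the two options concrete, here is a sketch of the schemas I'm comparing (names and types are placeholders; I'm assuming QuestDB's SYMBOL type and day partitioning, which may not be the right choices):

-- Option A: a single table with a SYMBOL column
CREATE TABLE stock_values (
    ts    TIMESTAMP,
    stock SYMBOL,
    value DOUBLE
) TIMESTAMP(ts) PARTITION BY DAY;

-- Option B: one column per stock
CREATE TABLE stock_values_wide (
    ts     TIMESTAMP,
    stock1 DOUBLE,
    stock2 DOUBLE
    -- ..., one DOUBLE column per stock
) TIMESTAMP(ts) PARTITION BY DAY;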