So, I have two tables: tableA and tableB, and I am doing a simple inner join of them:
SELECT *
FROM tableA
JOIN tableB
ON tableA.columnA = tableB.id
Now, tableA contains 29000+ rows, whereas tableB contains just 11000+ rows. tableB.id is a primary key, hence clustered, and there exists a non-clustered index on columnA.
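For reference, the setup looks roughly like this (the column types and index name are my guesses; only tableA, tableB, columnA and id are the real names):

-- Illustrative schema only; types and index name are assumptions
CREATE TABLE tableB (
    id INT NOT NULL PRIMARY KEY CLUSTERED   -- ~11000+ rows
);

CREATE TABLE tableA (
    columnA INT NOT NULL                     -- ~29000+ rows, joins to tableB.id
);

CREATE NONCLUSTERED INDEX IX_tableA_columnA ON tableA (columnA);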
According to my thinking, the query optimizer should treat tableB as the inner table while performing the join, because it has fewer rows, and treat tableA as the outer table, since many rows from tableA need to be filtered based on the value of the tableB.id column.
But the exact opposite actually happens: for some reason, the query optimizer treats tableA as the inner table and tableB as the outer table.
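For anyone wanting to reproduce what I am seeing, the join order shows up in the plan; something like this works if this is SQL Server (the graphical estimated plan in SSMS shows the same thing):

SET SHOWPLAN_TEXT ON;
GO
SELECT *
FROM tableA
JOIN tableB
ON tableA.columnA = tableB.id;
GO
SET SHOWPLAN_TEXT OFF;
GO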
Can someone please explain why that happens and what error I am making in my thought process? Also, is there a way to override the query optimizer's decision and force it to treat tableB as the inner table? I am just curious to see how the two different executions of the same query compare to each other. Thanks.
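In case it helps to clarify what I mean by overriding the optimizer, this is the kind of hint I was thinking of trying (SQL Server syntax; I haven't verified that it actually swaps the roles here):

-- Pin the written join order (tableA listed first, so tableB becomes the inner input)
-- and request a nested loops join:
SELECT *
FROM tableA
INNER LOOP JOIN tableB
ON tableA.columnA = tableB.id
OPTION (FORCE ORDER);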