There's probably no direct way to do what you're asking with a csv file (unless I've misunderstood you). The problem is that there's no meaningful sense in which a file has "columns" unless it was specifically designed with fixed-width rows, and CSV files generally aren't. On disk, they're nothing more than one giant string:
>>> import csv
>>> with open('foo.csv', 'wb') as f:
...     writer = csv.writer(f)
...     for i in range(0, 100, 10):
...         writer.writerow(range(i, i + 10))
...
>>> with open('foo.csv', 'r') as f:
...     f.read()
...
'0,1,2,3,4,5,6,7,8,9\r\n10,11,12,13,14,15,16,17,18,19\r\n20..(output truncated)..
As you can see, the column fields don't line up predictably: the second column starts at character index 2 in the first row, but the fields in the next row are one character wider, which throws off the alignment -- and it only gets worse when the values vary in length. The upshot is that to get at any particular column, the csv reader has to read (and parse) the entire file, discarding the data you don't need. If you don't mind that, then that's your answer: read the whole file line by line and keep only the columns you want, as in the sketch below.
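A minimal sketch of that approach might look like this (read_column and its arguments are just illustrative names, not anything from your code):

import csv

def read_column(path, col_index):
    # The whole file still gets read and parsed; we simply keep
    # only the one field we care about from each row.
    values = []
    with open(path, 'r') as f:
        for row in csv.reader(f):
            if len(row) > col_index:   # skip blank or short rows
                values.append(row[col_index])
    return values

# For the file written above, read_column('foo.csv', 2)
# would return ['2', '12', '22', ..., '92'].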
If you don't mind wasting some space and know that none of your data will be longer than some fixed width, you could create a file with fixed-width fields, and then you could seek through it using offsets. But then, once you're doing that, you might as well start using a real database. PyTables seems to be the favorite choice of many for storing numpy arrays.
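For illustration, here is a minimal sketch of the fixed-width idea. FIELD_WIDTH, write_fixed and read_cell are made-up names, and the layout assumes plain ASCII values no wider than FIELD_WIDTH; it's a sketch of the technique, not a ready-made format:

# Every field is padded to FIELD_WIDTH bytes, so every row has the same
# byte length and the offset of row r, column c is just arithmetic.
FIELD_WIDTH = 10
N_COLS = 10
ROW_BYTES = FIELD_WIDTH * N_COLS + 1   # +1 for the trailing '\n'

def write_fixed(path, rows):
    with open(path, 'wb') as f:        # binary mode keeps offsets exact
        for row in rows:
            line = ''.join(str(v).ljust(FIELD_WIDTH) for v in row) + '\n'
            f.write(line.encode('ascii'))

def read_cell(path, r, c):
    with open(path, 'rb') as f:
        f.seek(r * ROW_BYTES + c * FIELD_WIDTH)   # jump straight to the field
        return f.read(FIELD_WIDTH).decode().strip()

write_fixed('fixed.txt', (range(i, i + 10) for i in range(0, 100, 10)))
read_cell('fixed.txt', 5, 2)   # '52', without reading any other row

A real on-disk format like HDF5 (which PyTables sits on top of) does this kind of offset bookkeeping for you, plus chunking and compression, which is why it's usually the better choice once you find yourself at this point.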