I am a beginner Python user. I used to work with MATLAB intensively, and now I am switching to Python. I have a question about the dimensions of an array.
I import NumPy.
I first create an array X, then apply some built-in function, like sum, to it. When I eventually check the dimensions of X, X.shape outputs (81,). The number 81 is what I expected, but I also expected the second dimension to be 1 rather than omitted entirely. This makes me uncomfortable, even though when I type X directly the output looks correct, i.e., one column with all the values I expect.
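Here is a minimal sketch of what I mean; the actual contents of my array don't matter, so I'm using placeholder values:

    import numpy as np

    # A hypothetical 2-D array, e.g. 81 rows of data with 3 columns
    A = np.ones((81, 3))

    # Summing along axis 1 collapses that axis completely
    X = A.sum(axis=1)

    print(X.shape)   # (81,)  -- one dimension, not (81, 1)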
Then I use another array Y, whose shape really is (81, 1). When I type X*Y, I expected an array of shape (81, 1), but instead I got an array of shape (81, 81).
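A minimal reproduction of that multiplication, again with placeholder values rather than my real data:

    import numpy as np

    X = np.arange(81)                  # shape (81,)
    Y = np.arange(81).reshape(81, 1)   # shape (81, 1)

    Z = X * Y
    print(Z.shape)                     # (81, 81), not (81, 1)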
I don't know what underlying mechanism produces these results.
The way I solve this problem feels very clumsy. I first create a new array C = zeros((81,1)), so that C really has shape (81, 1), and then assign X to it by typing C[:,0] = X; after that, C.shape is (81, 1). Note that if I instead type C = X, then C.shape is (81,), which brings me back to my original problem. So I can work around the issue, but I am sure there is a better method, and I also don't understand why Python produces something like (81,), with the second dimension omitted.
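The workaround spelled out, once more with placeholder values:

    import numpy as np

    X = np.arange(81)        # shape (81,)

    # My current workaround: pre-allocate a column and copy X into it
    C = np.zeros((81, 1))
    C[:, 0] = X
    print(C.shape)           # (81, 1)

    # Whereas a plain assignment just rebinds the name, so the shape stays (81,)
    C = X
    print(C.shape)           # (81,)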