If you don't care about which permutation gets which index, an O(n) solution becomes possible, provided we treat arithmetic operations on arbitrarily large integers as O(1).
For example, see the paper "Ranking and unranking permutations in linear time" by Wendy Myrvold and Frank Ruskey.
In short, there are two ideas.
(1) Consider the Fisher-Yates shuffle for generating a random permutation (Python version below):
import random

def random_permutation(n):
    p = list(range(n))            # p = [0, 1, ..., n-1]
    for i in range(n):
        j = random.randint(0, i)  # uniform random integer in [0, i]
        p[i], p[j] = p[j], p[i]   # exchange p[i] and p[j]
    return p
This transform is injective: if we feed it two different sequences of integers, it is guaranteed to produce two different permutations. So we can replace the random integers with deterministic ones: the first one is always 0, the second one is 0 or 1, ..., and the last one can be any integer from 0 to n-1.
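Here is a quick brute-force check of that claim for a small n (the variable names are mine, not anything from the paper):

from itertools import product
from math import factorial

# Every admissible digit sequence (d[0] = 0, d[1] in {0, 1}, ...,
# d[n-1] in {0, ..., n-1}) yields a different permutation, so all n!
# permutations of 0..n-1 are produced exactly once.
n = 4
seen = set()
for digits in product(*(range(i + 1) for i in range(n))):
    p = list(range(n))
    for i, j in enumerate(digits):
        p[i], p[j] = p[j], p[i]   # the same swaps as above, with fixed j's
    seen.add(tuple(p))
assert len(seen) == factorial(n)  # 24 distinct permutations from 24 sequences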
(2) There are n! permutations of n elements. What we want to do now is to write an integer from 0 to n!-1 in the factorial number system: the last digit is always 0, the one before it is 0 or 1, ..., and the first digit has n possibilities, from 0 to n-1. This gives us a unique sequence of digits to feed into the loop above in place of the random integers.
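Putting the two ideas together gives the unranking direction; the inverse (ranking) direction undoes the swaps from the last one backwards, and keeping the inverse permutation around makes each step O(1), which is essentially the bookkeeping trick used in the paper. A minimal Python sketch (the function names are mine, and the particular index-to-permutation correspondence need not coincide with the paper's, only the structure does):

def permutation_by_index(r, n):
    # Write r (0 <= r < n!) in the factorial number system and feed the
    # digits into the shuffle loop in place of the random integers.
    p = list(range(n))
    for i in range(1, n):        # i = 0 would always get digit 0, so skip it
        r, j = divmod(r, i + 1)  # next factorial-system digit, 0 <= j <= i
        p[i], p[j] = p[j], p[i]
    return p

def index_of_permutation(p):
    # Inverse direction: undo the swaps from the last one backwards and
    # reassemble the index, Horner-style. q is the inverse permutation,
    # so locating the value i takes O(1) at each step.
    p = list(p)
    n = len(p)
    q = [0] * n
    for i, v in enumerate(p):
        q[v] = i                 # q[v] = current position of value v
    r = 0
    for i in range(n - 1, 0, -1):
        j = q[i]                 # the digit that was used at step i
        p[i], p[j] = p[j], p[i]  # undo that swap
        q[p[i]], q[p[j]] = i, j
        r = r * (i + 1) + j
    return r

As a sanity check, index_of_permutation(permutation_by_index(r, n)) == r for every r in range(factorial(n)), and the n! values of permutation_by_index(r, n) are pairwise distinct.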
Now, if we consider dividing our number by an integer between 1 and n to be an O(1) operation, converting the number to the factorial system takes O(n) such divisions. Strictly speaking, this is not true: for large n, the number n! has on the order of n log n binary digits, and the cost of each division is proportional to that length.
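To see how quickly that becomes relevant, the standard library is enough to check the size of n!:

import math

# The bit length of n! grows roughly like n log n; 21! already needs 66 bits,
# i.e. more than a 64-bit machine word.
for n in (10, 21, 100, 1000):
    print(n, math.factorial(n).bit_length())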
In practice, for small n, O(n^2) or O(n log n) methods to rank or unrank a permutation, as well as methods that use O(2^n) or O(n!) memory for precomputed tables, may be faster than an O(n) method that relies on integer division, which is a relatively slow operation on modern processors.
For n large enough that n! no longer fits into a machine word, the "O(n), provided that operations on integers of order n! are O(1)" argument stops working anyway. So, for both small and large n, you may be better off not insisting on a method that is theoretically O(n).