If you don't know the structure of your JSON file, there is little you can do other than use a faster JSON decoder (e.g. ijson, which can stream the input, or ujson).
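For example, if the top level of the file happens to be an array, ijson lets you iterate over its elements one at a time instead of loading everything at once. A minimal sketch (the filename and the `process` callback are placeholders):

```python
import ijson

# Stream elements of a top-level JSON array one at a time;
# "item" is ijson's prefix for array elements at the root.
with open("big.json", "rb") as f:
    for record in ijson.items(f, "item"):
        process(record)  # replace with your own per-record handling
```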
It may also be that, if you need to hold all the data in Python memory at the same time, the speed is limited by swapping due to not having enough physical RAM - in which case adding more RAM may help (obvious as it is, I think it is worth mentioning).
If you don't need a generic solution, check the structure of the file yourself and see how you can split it. E.g. if it is a top-level array, it is probably easy to separate the array elements, however complex each of them might be, and split them into chunks of whatever size you like.
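As a rough sketch of that idea, again assuming a top-level array and using ijson to stream it, you could write every N elements out to its own smaller file (filenames and chunk size are arbitrary):

```python
import json
from itertools import islice

import ijson

def split_array(path, chunk_size=10000):
    # Stream the top-level array and write each group of chunk_size
    # elements into its own smaller JSON file: part0.json, part1.json, ...
    with open(path, "rb") as f:
        items = ijson.items(f, "item")
        for i, chunk in enumerate(iter(lambda: list(islice(items, chunk_size)), [])):
            with open(f"part{i}.json", "w") as out:
                json.dump(chunk, out)
```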
P.S. You can always test the lower bound by just reading the 30 GB file as binary data and discarding it - if you are reading from the network, network speed may be the bottleneck; if you need to keep all that data in memory, just create sample data of the same size, and it may take the same 5 hours due to swapping etc.
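Something along these lines is enough for that test - read the file in blocks, throw the data away, and time it, so you see how long pure I/O takes with no JSON parsing at all:

```python
import time

def raw_read_time(path, block_size=1 << 20):
    # Read the file in 1 MiB blocks, discarding the data,
    # to measure raw disk/network throughput.
    start = time.monotonic()
    total = 0
    with open(path, "rb") as f:
        while block := f.read(block_size):
            total += len(block)
    elapsed = time.monotonic() - start
    print(f"read {total / 1e9:.1f} GB in {elapsed:.0f} s")
```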