TL;DR: there is a good reason for not providing a `read_usize` function: its result would not be consistent across different CPU architectures.
Reading a native-sized integer here is a bad idea. Normally you have some kind of protocol you are trying to deserialize. That format should be independent of the CPU architecture, so you can't just read a `usize`, because its width is CPU-dependent.
Let's assume you have a simple protocol where you first have the size of an array, followed by `n` elements.
+------+---------+
| size | ....... |
+------+---------+
Let's suppose the protocol says that your size field is 4 bytes long. Now you do the thing Shepmaster suggested and read a `usize` whose width depends on your architecture.
On an x86_64 system you will read 8 bytes and therefore swallow the first element of your array.
On an ATmega8 your `usize` is only 2 bytes, so you only take the first 2 bytes of the size field (which might be zero if there are fewer than 65,536 elements and the byte order is big-endian).
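To make the inconsistency concrete, here is a rough sketch of such a platform-dependent read; the helper name and the manual big-endian decoding are my own illustration, not an existing API:

```rust
use std::mem::size_of;

/// Hypothetical "read a native usize" helper: consumes size_of::<usize>()
/// bytes from the input and interprets them as a big-endian integer.
fn read_native_usize(input: &[u8]) -> Option<(usize, &[u8])> {
    let n = size_of::<usize>(); // 8 on x86_64, 2 on an ATmega8
    let head = input.get(..n)?; // eats a different number of bytes per target
    let rest = input.get(n..)?;
    let mut value = 0usize;
    for &b in head {
        value = (value << 8) | b as usize;
    }
    Some((value, rest))
}
```

Feeding the same wire data to this function on two different targets yields different sizes and leaves the cursor at different positions, which is exactly the inconsistency described above.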
This is the reason why there is no `read_usize` function, and it is correct. You need to decide how long your size field is, read exactly that many bytes from your slice, and then use `as` to convert the result into a `usize`.
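For completeness, here is a minimal sketch of that fixed-width approach using only the standard library; the parser name, the 4-byte big-endian size, and the one-byte elements are assumptions taken from the example layout above:

```rust
use std::convert::TryInto;

/// Parse the layout sketched above: a 4-byte big-endian size prefix,
/// followed by that many one-byte elements.
fn parse_message(input: &[u8]) -> Option<(usize, &[u8])> {
    // Read exactly 4 bytes for the size, regardless of the target's usize width.
    let size_bytes: [u8; 4] = input.get(..4)?.try_into().ok()?;
    // `as` matches the text above; usize::try_from would catch truncation on 16-bit targets.
    let size = u32::from_be_bytes(size_bytes) as usize;
    // The next `size` bytes are the elements.
    let elements = input.get(4..4 + size)?;
    Some((size, elements))
}

fn main() {
    // Size prefix 0x00000003, followed by the three elements 1, 2, 3.
    let wire = [0x00, 0x00, 0x00, 0x03, 1, 2, 3];
    assert_eq!(parse_message(&wire), Some((3, &[1u8, 2, 3][..])));
}
```

This parses the same way on every architecture, because the number of bytes consumed for the size is fixed by the protocol rather than by the platform.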