I'm reasoning on a big-endian system (most significant byte stored at the lowest address).
char a[10]={0,1,0,1,0,1,0,1};
In binary, your array initially looks like this in memory:
0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0000
A char takes exactly one byte in memory.
That means, reading from left to right, the first 0000.0000 is a[0], the following 0000.0001 is a[1], and so on.
unsigned short *p;
p=(unsigned short *)&a[0];
*p=1024;
You assigned the address of the array's first element to p. Then you dereferenced p and stored the unsigned short value 1024 at that address. In binary, 1024 looks like:
0000.0100.0000.0000
An unsigned short takes two bytes in memory.
So, this is what your array becomes after the modification:
0000.0100 0000.0000 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0000
==> What happened is: since you treated p as a pointer to an unsigned short, you overwrote the first two bytes instead of just the first one (as a plain char assignment would). When you later access the data through the char array, it is read back byte by byte. On this big-endian system, 0000.0100 (4 in decimal) is a[0], and 0000.0000 (0 in decimal) is a[1].
We can deduce that your system is little endian, since you got the opposite: 4 for a[1] and 0 for a[0]. On a little-endian machine the least significant byte (0000.0000) is stored at the lowest address, a[0], and the most significant byte (0000.0100) at a[1]. You can easily find more detailed explanations of endianness online.