
I need to extract various fields from a byte buffer. I came up with this solution:

func (fs *FileSystem) readSB() {
    // fs.f is a *os.File
    buf := make([]byte, 1024)
    fs.f.ReadAt(buf, 1024)

    // Offset: type
    var p *bytes.Buffer

    // 0: uint32
    p = bytes.NewBuffer(buf[0:])
    binary.Read(p, binary.LittleEndian, &fs.sb.inodeCount)
    // 4: uint32
    p = bytes.NewBuffer(buf[4:])
    binary.Read(p, binary.LittleEndian, &fs.sb.blockCount)
    // 20: uint32
    p = bytes.NewBuffer(buf[20:])
    binary.Read(p, binary.LittleEndian, &fs.sb.firstDataBlock)
    // 24: uint32
    p = bytes.NewBuffer(buf[24:])
    binary.Read(p, binary.LittleEndian, &fs.sb.blockSize)
    fs.sb.blockSize = 1024 << fs.sb.blockSize
    // 32: uint32
    p = bytes.NewBuffer(buf[32:])
    binary.Read(p, binary.LittleEndian, &fs.sb.blockPerGroup)
    // 40: uint32
    p = bytes.NewBuffer(buf[40:])
    binary.Read(p, binary.LittleEndian, &fs.sb.inodePerBlock)
}

Is there a better/more idiomatic/straightforward way of doing this?

  • I want to keep offsets explicit
  • I want to read from the byte buffer rather than seek and read from the file, when possible.
knarf
  • Have you looked at encoding/gob? It wouldn't work with your goal of explicit offsets, but if your goal is actually just to serialize/deserialize then it is much easier to use. – Running Wild Sep 10 '12 at 19:47
  • I'm parsing an existing format (ext2fs). – knarf Sep 10 '12 at 20:35
  • What you have is pretty idiomatic. You could get fancy if you wanted with a for loop and a slice of pointers, but that probably wouldn't read as clearly as what you have here. – Jeremy Wall Sep 10 '12 at 20:58
  • Having to create a bytes.Buffer each time seems wasteful. – knarf Sep 10 '12 at 21:24

2 Answers


You could avoid creating a new buffer every time by using .Next() to skip the bytes you don't want to read:

{
    // Offset: type
    p := bytes.NewBuffer(buf)

    // 0: uint32
    binary.Read(p, binary.LittleEndian, &fs.sb.inodeCount)

    // 4: uint32
    binary.Read(p, binary.LittleEndian, &fs.sb.blockCount)

    // Skip [8:20)
    p.Next(12)

    // 20: uint32
    binary.Read(p, binary.LittleEndian, &fs.sb.firstDataBlock)

    // 24: uint32
    binary.Read(p, binary.LittleEndian, &fs.sb.blockSize)
    fs.sb.blockSize = 1024 << fs.sb.blockSize

    // Skip [28:32)
    p.Next(4)

    // 32: uint32
    binary.Read(p, binary.LittleEndian, &fs.sb.blockPerGroup)

    // Skip [36:40)
    p.Next(4)

    // 40: uint32
    binary.Read(p, binary.LittleEndian, &fs.sb.inodePerBlock)
}

Or you could avoid reading chunk by chunk entirely and define a header struct that you fill with a single binary.Read call:

type Head struct {
    InodeCount      uint32  //  0:4
    BlockCount      uint32  //  4:8
    Unknown1        uint32  //  8:12
    Unknown2        uint32  // 12:16
    Unknown3        uint32  // 16:20
    FirstBlock      uint32  // 20:24
    BlockSize       uint32  // 24:28
    Unknown4        uint32  // 28:32
    BlocksPerGroup  uint32  // 32:36
    Unknown5        uint32  // 36:40
    InodesPerBlock  uint32  // 40:44
}

func main() {
    // file is assumed to be an opened *os.File positioned at the superblock.
    var header Head

    err := binary.Read(file, binary.LittleEndian, &header)
    if err != nil {
        log.Fatal(err)
    }

    log.Printf("%#v\n", header)
}
nemo

I wrote a package, binpacker, to handle these situations.

Example data:

buffer := new(bytes.Buffer)
packer := binpacker.NewPacker(buffer)
unpacker := binpacker.NewUnpacker(buffer)
packer.PushByte(0x01)
packer.PushUint16(math.MaxUint16)

Unpack it:

var val1 byte
var val2 uint16
var err error
val1, err = unpacker.ShiftByte()
val2, err = unpacker.ShiftUint16()

Or

var val1 byte
var val2 uint16
var err error
unpacker.FetchByte(&val1).FetchUint16(&val2)
unpacker.Error() // Make sure error is nil
Sirui Zhuang