
I have a class which acts as a wrapper for a byte array:

public class LegacyWrapper {
    private byte[] data;

    public void setName(String name){
        // This util not only puts the data inside the byte array,
        // it also converts it to a specific representation.
        MarshalUtils.addName(data, 15, 30, name, "ENCODING");
    }

    public String getName(){
        return MarshalUtils.getNameAsString(data, 15, 30, "ENCODING");
    }

    ...

    public void setBytes(byte[] bytes){
        this.data = bytes;
    }

    public byte[] getBytes(){
        return data;
    }
}

In my code there are multiple places where I do something like this for A LOT of values (about 50-70):

legacyWrapper1.setName(legacyWrapper2.getName()); //marshalling/unmarshalling, OMG!
legacyWrapper1.setP1(legacyWrapper2.getP1());
legacyWrapper1.setP2(legacyWrapper2.getP2());
//... a lot of assignments ...

Now, in reality I need the byte wrapper for only one of these instances, so I thought of using a normal POJO for the rest. My idea for reusing the code was to create another wrapper shaped like this one:

public class PojoWrapper extends LegacyWrapper {
    private String name;

    @Override
    public String getName(){
        return name;
    }

    @Override
    public void setName(String name){
        this.name = name;
    }

    //...the same for every property p1, p2, etc.
}

so that:

LegacyWrapper legacyWrapper1 = new LegacyWrapper(); //this one is a wrapper
legacyWrapper1.setBytes(bytes); //initializing it with byte data

LegacyWrapper legacyWrapper2 = new PojoWrapper(); //this is a POJO
legacyWrapper2.setP1("aProperty"); //initializing it with normal data

//...doing stuff....
legacyWrapper1.setName(legacyWrapper2.getName()); //only marshalling
legacyWrapper1.setP1(legacyWrapper2.getP1());
legacyWrapper1.setP2(legacyWrapper2.getP2());
//... a lot of assignments ...

I thought that, with this new wrapper, the execution times would be better... guess what? They got worse!

How is that possible if the same code as before now only does marshalling on one side? Maybe it has something to do with dynamic binding, because the JVM takes time to figure out which method to use between the overriding one and the original one?
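For reference, here is roughly how the copy could be timed with a warm-up phase first (just a sketch with stand-in classes of my own, not the real wrappers, and a harness like JMH would be the proper tool):

```java
// Crude warmed-up timing sketch: the classes below are stand-ins for the
// real LegacyWrapper/PojoWrapper, kept minimal so the example is runnable.
public class DispatchTiming {
    static class Base {
        private String name;
        public String getName() { return name; }
        public void setName(String n) { name = n; }
    }

    static class Sub extends Base {
        private String name;
        @Override public String getName() { return name; }
        @Override public void setName(String n) { this.name = n; }
    }

    // Copies the name from one instance to the other 'reps' times
    // and returns the elapsed time in nanoseconds.
    static long time(Base from, Base to, int reps) {
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            to.setName(from.getName());
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Base plain = new Base();
        Base pojo = new Sub();
        plain.setName("x");

        // Warm up so the JIT can compile the hot path before measuring.
        for (int i = 0; i < 20; i++) {
            time(plain, pojo, 100_000);
            time(pojo, plain, 100_000);
        }

        long elapsed = time(pojo, plain, 1_000_000);
        System.out.println("copied: " + plain.getName() + ", ns: " + elapsed);
    }
}
```

Without the warm-up loop, a one-shot measurement mostly times the interpreter rather than the compiled code.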

Phate
    Why don't you benchmark it? – Shiv Dec 18 '16 at 11:55
  • I did, and the result was that the second option led to worse performance, but I cannot understand why... – Phate Dec 18 '16 at 11:57
  • How did you measure this? I ask because there are [many pitfalls](http://stackoverflow.com/questions/504103/how-do-i-write-a-correct-micro-benchmark-in-java) when micro benchmarking Java code. – meriton Dec 18 '16 at 11:58
  • I simply ran multiple junit tests – Phate Dec 18 '16 at 12:25
  •
    Then you stumbled into about half of the pitfalls listed in the question I linked to. Your results have little to do with the performance of dynamic binding. For instance, in a hot code path, the JVM would just-in-time compile that code, but since you call it only once, you're measuring the interpreter instead. That matters because the compiler can often inline method calls even if they are late binding, but the interpreter will never do so. Also, if you run the code to test just once, on certain operating systems, timer inaccuracy is over a million times greater than the cost of late binding. – meriton Dec 18 '16 at 21:24

0 Answers