
I am looking for advice. I have developed my own encryption algorithms because I enjoy it and I can. Now, I am looking to try a new idea.

My idea involves consolidating a number of my algorithms into a larger one. For instance, you call X.Encrypt(), which in turn calls A.Encrypt(), B.Encrypt(), C.Encrypt(), and so on. When you perform this kind of operation one byte per A, B, C method call, the method-call overhead becomes a killer: runtime goes from a few ms to several minutes. So, any questions?
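
To make the shape concrete, here is a stripped-down sketch of the consolidation idea (the IEncryptor interface and class names are placeholders, not my actual code):

public interface IEncryptor
{
    void Encrypt(byte[] buffer, int offset, int length);
}

// X: the consolidated algorithm simply chains its inner modules (A, B, C, ...)
// over the same buffer.
public class CompositeEncryptor : IEncryptor
{
    private readonly IEncryptor[] modules;

    public CompositeEncryptor(params IEncryptor[] modules)
    {
        this.modules = modules;
    }

    public void Encrypt(byte[] buffer, int offset, int length)
    {
        foreach (var module in modules)
            module.Encrypt(buffer, offset, length);
    }
}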

I am merely looking for code-design tips and tricks that might lessen the issue.

Thanks ahead of time.

Update

Code example of the issue:

// fast: each module transforms the whole buffer in a single call
moduleA.Transform(true, buffer, 0, buffer.Length);
moduleB.Transform(true, buffer, 0, buffer.Length);

// slow: the modules alternate one byte at a time
// (assumes buffer.Length is even, or moduleB would run past the end)
for (int i = 0; i < buffer.Length; )
{
    moduleA.Transform(true, buffer, i++, 1); // even offsets
    moduleB.Transform(true, buffer, i++, 1); // odd offsets
}

I know this problem is inherent to how the methods are being called; my goal is to change how I am calling them. I also know there is room for improvement inside the Transform methods themselves. The fast version runs in about 24s while the slow one takes many minutes. Clearly it is overhead from the method calls, no profiler needed :)

I do have an idea I am going to try. I am thinking about using "run-modes": instead of looping outside the Transform methods, I change how each method runs internally to fit my needs. For example, I could perform an every-other-byte encryption inside the Transform methods, as a single batch. I believe this would eliminate the overhead I am getting.
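
As a rough sketch of what I mean (illustrative only; ExampleModule and TransformByte are stand-ins, not my real code):

public class ExampleModule
{
    // An interval of 1 means every byte, 2 means every other byte, and so on,
    // all handled inside a single batch call.
    public int IncrementInterval { get; set; } = 1;

    public void Transform(bool encrypt, byte[] buffer, int offset, int length)
    {
        int end = offset + length;
        if (end > buffer.Length) end = buffer.Length; // clamp: offset 1 + full length can overrun

        for (int i = offset; i < end; i += IncrementInterval)
        {
            buffer[i] = TransformByte(encrypt, buffer[i]);
        }
    }

    private byte TransformByte(bool encrypt, byte value)
    {
        // Placeholder for the real per-byte work; a trivial XOR here,
        // which is its own inverse, so the encrypt flag goes unused.
        return (byte)(value ^ 0x5A);
    }
}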

FINAL UPDATE (Solved my own issue, still open to ideas!)

Increasing the loop increment inside the Transform methods has worked!

What I've done is the following and it seems to work well:

// Subspace28 handles the even offsets and Ataxia the odd ones; with
// IncrementInterval = 2, each module skips every other byte, so the
// whole buffer is covered in just two batch calls.
ITransformationModule moduleA = TransformationFactory.GetModuleInstance("Subspace28");
ITransformationModule moduleB = TransformationFactory.GetModuleInstance("Ataxia");
moduleA.IncrementInterval = 2;
moduleB.IncrementInterval = 2;
moduleA.Transform(true, buffer, 0, buffer.Length);
moduleB.Transform(true, buffer, 1, buffer.Length);

This runs at about 12s for 100MB on my work VM. Thank you to all who contributed! It was a combination of responses that led me to try it this way. I appreciate you all greatly!

This is just proof of concept at the moment. It is building towards greater things! :)

WaffleTop
  • Have you profiled? Are the method calls the problem? – H H May 13 '11 at 18:31
  • Obligatory remark: Doing your own encryption for _fun & education_ is fine, but don't even think about using it for real anywhere. – H H May 13 '11 at 18:32
  • Any reason in particular you shouldn't use your own encryption algorithms? – FlyingStreudel May 13 '11 at 18:38
  • @Henk Holterman Until I got it validated and tested :) – WaffleTop May 13 '11 at 18:48
  • @Flying because it's almost guaranteed that your algorithm is a) insecure, b) slow, or c) both. There's a reason that new algorithms are rare, and that standards organizations hold multi-year competitions between multiple teams of experts before selecting them for use. – dlev May 13 '11 at 18:50
  • RE: Update. It seems to me like those two samples you provided would have drastically different results. One encrypts the entire buffer using moduleA and then re-encrypts it using moduleB. The second alternates by byte. The output of the first option should not look like the output of the second using the same buffer for both, correct? – FlyingStreudel May 13 '11 at 19:24
  • Right! I know. I thought I added that to my update but must have forgotten it! I just wanted to give an example. Being able to efficiently alternate is the current mini-goal. As I said, I believe allowing myself to set the interval at which things are processed inside the Transform methods will fix the issue. I'm guessing it'll run at < 24s when I'm done. – WaffleTop May 13 '11 at 19:54

5 Answers


Are you encrypting the data by calling methods on a byte-by-byte basis? Why not call the method on a chunk of data and loop within that method? Also, while it is definitely fun to try out your own encryption methods, you should pretty much always use a known, tested, and secure algorithm if security is at all a concern.
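
For example, the calling side might look something like this (sketch only; the Transform signature is borrowed from the question, and the 4 KB chunk size is arbitrary):

// Loop over chunks instead of single bytes, so each call does enough
// work to amortize the method-call overhead.
const int chunkSize = 4096;
for (int offset = 0; offset < buffer.Length; offset += chunkSize)
{
    int count = Math.Min(chunkSize, buffer.Length - offset);
    moduleA.Transform(true, buffer, offset, count);
    moduleB.Transform(true, buffer, offset, count);
}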

dlev
  • Why is it that someone always comments that way when someone says "their own encryption"? Did I say I wanted to sell it to you? Make you use it? Offer it to anyone? No. I've spent years studying encryption algorithms and the concepts and ideas behind them. I do it because I enjoy the challenge of creating something better than what someone else has. If the people who developed those well-known algorithms had followed your advice, theirs wouldn't be here. – WaffleTop May 13 '11 at 18:44
  • That's perfectly fair, and like I said, I fully understand the fun. But 99.99999% of the people creating their own algorithms for use in a "production"-style environment don't know what they are doing, and end up with something that can be compromised. When new algorithms are created, it is usually by a team of people, all of whom are experts, who are competing with numerous other teams of experts. That being said, I did not intend to be discouraging. – dlev May 13 '11 at 18:46
  • Yep! I know. I'd love to sit and pick their brains. It's cool! I know it is easy to take things differently here on the net. Many people tell me to only use known algorithms and that I am wasting my time. My only response is: what if I create something game-changing? That would be worth the years of failure. One kind of cool thing: with AES's substitution boxes I discovered an oddity that if you substitute repeatedly, it goes up to 27 times before it repeats a single sequence (bar two numbers). I use that concept in an algorithm of mine called Subspace28. I'd be happy to share if anyone is interested. – WaffleTop May 13 '11 at 19:13

You could try to implement your algorithm such that your code makes chunky calls rather than chatty calls. That is, instead of calling functions hundreds of times, you could have fewer function calls, each of which does more work. This is just one piece of advice; you might also have to make your algorithm itself more efficient so that it's not processor intensive. Hope this helps.

FIre Panda
  • @abdul-muqtadir Well, currently I am making my calls like this: `A.encrypt(buffer, offset, length)`, which runs pretty fast with the length equal to the buffer size. When I limit the length to 1 and loop through it, it slows down. I know that this kind of issue is inherent to how things work. I am not really sure what you mean by chunky/chatty calls. I did get an idea to change how I run within each method. For instance, process every other piece of data inside of the encrypt method call. In effect it would be the same as doing a for loop on the outside that skips, but without the overhead. – WaffleTop May 13 '11 at 18:27
  • Call fewer functions, such that each function has more work to do; these are called chunky calls. You could run a profiler on your code to find out which portions are causing the bottleneck. – FIre Panda May 13 '11 at 18:30
  • I am currently doing chunky calls. See updated question for better information on what I mean. – WaffleTop May 13 '11 at 19:16

You want to have class X call methods from class A, B, C, D, E, F, G, etc...without the method call overhead. At first, that seems absurd. You might be able to find a way to do it using System.Reflection.Emit. That is, dynamically create a method that does A+B+C+D+E+F+G, then call that.
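
A rough sketch of that idea with DynamicMethod might look like the following (untested; the ITransformationModule shape is borrowed from the question's final update, and the parameter names are guesses):

using System;
using System.Reflection.Emit;

// Interface shape borrowed from the question's final update.
public interface ITransformationModule
{
    void Transform(bool encrypt, byte[] buffer, int offset, int length);
}

public static class FusedTransform
{
    // Emit one dynamic method that calls each module's Transform in sequence,
    // so X makes a single delegate call per buffer instead of one call per module.
    public static Action<bool, byte[], int, int> Build(params ITransformationModule[] modules)
    {
        var dm = new DynamicMethod(
            "FusedTransform",
            typeof(void),
            new[] { typeof(ITransformationModule[]), typeof(bool), typeof(byte[]), typeof(int), typeof(int) },
            typeof(FusedTransform).Module);

        var il = dm.GetILGenerator();
        var transform = typeof(ITransformationModule).GetMethod("Transform");

        for (int i = 0; i < modules.Length; i++)
        {
            il.Emit(OpCodes.Ldarg_0);          // the modules array
            il.Emit(OpCodes.Ldc_I4, i);        // index of this module
            il.Emit(OpCodes.Ldelem_Ref);       // modules[i]
            il.Emit(OpCodes.Ldarg_1);          // encrypt flag
            il.Emit(OpCodes.Ldarg_2);          // buffer
            il.Emit(OpCodes.Ldarg_3);          // offset
            il.Emit(OpCodes.Ldarg_S, (byte)4); // length
            il.Emit(OpCodes.Callvirt, transform);
        }
        il.Emit(OpCodes.Ret);

        var inner = (Action<ITransformationModule[], bool, byte[], int, int>)dm.CreateDelegate(
            typeof(Action<ITransformationModule[], bool, byte[], int, int>));
        return (b, buffer, offset, length) => inner(modules, b, buffer, offset, length);
    }
}

The caller would then invoke the returned delegate once per buffer, e.g. fused(true, buffer, 0, buffer.Length). Note that this only collapses the outer chain of calls; it does not change what happens inside each Transform.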

Jay Sullivan

First, profile your code so you know where to focus your effort, then ask again :)

BlackBear

Would something like this work? Of course you would have to modify it to fit your encryption arguments and return types....

static class Encryptor
{
    delegate void Transform(bool b, byte[] buffer, int index, int length);
    static Transform[] transformers = new Transform[3];

    static Encryptor()
    {
        transformers[0] = (b, buffer, index, length) => { /*Method A*/ };
        transformers[1] = (b, buffer, index, length) => { /*Method B*/ };
        transformers[2] = (b, buffer, index, length) => { /*Method C*/ };
    }

    public static void Encrypt(bool b, byte[] buffer)
    {
        int length = buffer.Length;
        int nTransforms = transformers.Length;
        for (int i = 0; i < length;)
        {
            // Rotate through the transforms, one byte each; the extra i < length
            // check stops us from running past the buffer when its length is not
            // a multiple of nTransforms.
            for (int j = 0; j < nTransforms && i < length; j++)
            {
                transformers[i % nTransforms](b, buffer, i++, 1);
            }
        }
    }
}

Edit: So this would do the second example:

Encryptor.Encrypt(yourBoolean, yourBuffer);

I don't know the specifics of your implementation, but this shouldn't have overhead issues.

FlyingStreudel
  • Ok ok, thanks for your contribution! Yeah, I am trying to get the second one to work fast. I think perhaps having transformation settings like an increment interval (setting one to 1, another to 2, etc.) and calling the methods in different orders will accomplish my overall goal. – WaffleTop May 13 '11 at 19:46
  • Oh never mind, you are just applying a different transform to each byte lol. One sec, I'll change that. – FlyingStreudel May 13 '11 at 19:58