
AssemblyA.dll refers to AssemblyB.dll

AssemblyB was rebuilt with new code, but AssemblyA was not. We therefore no longer know for sure whether AssemblyA is still compatible; it might crash at runtime because some method or property was removed.

Theoretically speaking, is it possible to validate whether AssemblyA is compatible with AssemblyB without having to actually rebuild it?

1201ProgramAlarm
  • By the time .NET was developed, we had to deal with what you describe (affectionately called DLL Hell: https://en.wikipedia.org/wiki/DLL_Hell). AFAIK, .NET has design properties to avoid this. | For unambiguous references, the version and certificate are included. – Christopher Nov 21 '19 at 16:04
  • It's not a simple thing. Take a look at https://stackoverflow.com/questions/199823/best-practices-for-assembly-naming-and-versioning for some discussion and links. One thing to consider is to manage assembly versioning (which will completely break compatibility) differently from file versioning (where following standard semantic versioning rules makes more sense). The trick is to look at your use cases (how often you change things, who consumes your assemblies, etc.), read up on various possible systems and then come up with something that matches your requirements - and then stick to it. – Flydog57 Nov 21 '19 at 16:48
  • Static MSIL code analysis might help, based on libraries such as Mono.Cecil, but again, rebuilding is the quickest way. – Lex Li Nov 21 '19 at 16:48
  • I'm considering Mono.Cecil. I'm just not sure it's easy to make my validation exhaustive. I can easily validate all the method signatures. Are there other things to consider? – Utilitaire CCV Nov 21 '19 at 18:44
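Following up on the Mono.Cecil suggestion in the comments above, here is a minimal sketch of such a validator. It is not an exhaustive compatibility check, and the paths, output directory and the name "AssemblyB" are placeholders for your own build layout. The idea is to read the already-compiled AssemblyA and verify that every type and member it references in AssemblyB still resolves against the rebuilt AssemblyB.dll:

```csharp
// Minimal sketch using Mono.Cecil (NuGet package "Mono.Cecil").
// Paths and the assembly name "AssemblyB" are placeholders.
using System;
using System.Linq;
using Mono.Cecil;

class ReferenceChecker
{
    // Resolve() returns null when the referenced member no longer exists in the
    // rebuilt assembly (removed, renamed, or its signature changed).
    static bool Resolves(MemberReference member)
    {
        switch (member)
        {
            case MethodReference method: return method.Resolve() != null;
            case FieldReference field:   return field.Resolve() != null;
            default:                     return true; // other reference kinds are not checked here
        }
    }

    static void Main()
    {
        // Make sure Cecil resolves references against the *new* AssemblyB.dll.
        var resolver = new DefaultAssemblyResolver();
        resolver.AddSearchDirectory(@"c:\build\output");

        var readerParameters = new ReaderParameters { AssemblyResolver = resolver };
        var assemblyA = AssemblyDefinition.ReadAssembly(@"c:\build\output\AssemblyA.dll", readerParameters);
        var module = assemblyA.MainModule;

        // Type references into AssemblyB that no longer resolve.
        var missingTypes = module.GetTypeReferences()
            .Where(t => t.Scope.Name == "AssemblyB" && t.Resolve() == null);

        // Method/field references into AssemblyB that no longer resolve.
        var missingMembers = module.GetMemberReferences()
            .Where(m => m.DeclaringType != null && m.DeclaringType.Scope.Name == "AssemblyB")
            .Where(m => !Resolves(m));

        foreach (var type in missingTypes)
            Console.WriteLine($"Missing type:   {type.FullName}");
        foreach (var member in missingMembers)
            Console.WriteLine($"Missing member: {member.FullName}");
    }
}
```

A check like this only flags structural breaks (removed types and members, changed signatures); behavioural changes, constants inlined at compile time, and serialization surprises still need tests. Resolution will also throw if AssemblyB.dll cannot be found in the search directory at all.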

1 Answer


The scenario you describe is called DLL Hell, which is just the Windows-specific subset of Dependency Hell. Prior to .NET (and outside of it) it is dang common. It comes from identifying a DLL by nothing more than its name and path.

The .NET developers knew of it and tried their damndest to avoid it. .NET will not identify a referenced DLL by its name alone. It will use at least the name, the version and the certificate (strong-name signature).

Two DLLs can have the same name. As long as their versions are different, .NET will have no issue keeping them apart. .NET does not even have issues keeping them both in memory at the same time. You do not just build against "System.dll". You build against "System.dll, version Y, certificate X".
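To make that identity visible, here is a small sketch using plain reflection; the commented output line is only an example of what you would typically see on .NET Framework:

```csharp
using System;
using System.Reflection;

class ShowAssemblyIdentity
{
    static void Main()
    {
        // The full identity of a loaded assembly: name, version, culture and public key token.
        Console.WriteLine(typeof(object).Assembly.FullName);
        // e.g. "mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"

        // The identities this assembly recorded for its references at build time,
        // read without loading the referenced DLLs themselves.
        foreach (AssemblyName reference in Assembly.GetExecutingAssembly().GetReferencedAssemblies())
            Console.WriteLine(reference.FullName);
    }
}
```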

Christopher
  • My problem is that I use "Minimum version, inclusive" versioning to avoid having to rebuild all my projects because a single dll has changed. That saves me a lot of compilation time. However, I have no way of making sure the code actually works. I think I could code a validator that would use reflection in order to achieve this, but I was wondering if there was a tool somewhere that could do that for me. Analysing a dll would be way faster than rebuilding. – Utilitaire CCV Nov 21 '19 at 16:15
  • @UtilitaireCCV: The versioning means that the wrong (new) DLL will not be loaded or used by accident. The right (old) DLL being around at runtime is a separate problem altogether. When in doubt, the .NET Runtime will just tell you "DLL Y, Version X is missing. Please fix that, I cannot run this program like that." – Christopher Nov 21 '19 at 16:21
  • I'm using "Minimum version, inclusive" precisely because I need the most recent dll to be used all the time. Therefore, under no possible circumstances will I face this "version X is missing", because the code accepts any version; that's the whole point, to always automatically accept the new version of any dll. I understand that this is also the source of my other problem. – Utilitaire CCV Nov 21 '19 at 19:44
  • I understand your point. Does that mean it is absolutely unrealistic to use "Minimum version, inclusive" at all, or any version range other than "exact match"? From the perspective that we cannot trust our developers to avoid introducing breaking changes by accident, I don't see what else I have as a solution. – Utilitaire CCV Nov 22 '19 at 15:19
  • @UtilitaireCCV: Approximately 90% of every higher-level language is there for the sole purpose of you being unable to trust your fellow programmers not to introduce errors: private/protected, readonly, type safety and generics, out variables, non-nullable references. All of those are only there so somebody else using your code does not mess up. | You can probably count the keywords that have a performance or program-flow effect on two hands. – Christopher Nov 26 '19 at 02:50
  • To give a little more context, we started from a monolithic library that we split into dozens of libraries, which we immediately reference as packages. Each library has hundreds of public methods/properties and we lose track of what really needs to be public. I guess the first step would be to make internal everything that does not need to be public (see the sketch below these comments). I'm sure we could do that without making too many errors, like forgetting that deep down somewhere a method was used in another library. – Utilitaire CCV Nov 26 '19 at 13:37
  • It's just that I feel I need something more than that, because I'm dealing with legacy code with barely any unit tests, and also changing something from private to public can in rare cases have dramatic effects since it can mess with the serialization. – Utilitaire CCV Nov 26 '19 at 13:39
  • @UtilitaireCCV From some point of view, that old monolithic library was like a static class: a single, big fixture. Your rework is about getting away from static functions, towards static variables that have an instance assigned to them. | So that is about the severity of the change pains you will have to deal with. | An option might be to do the transition type-wise first. Type aliases can be used to get a single point in code where you change the type you are using, similar to how a static variable that can be (re)assigned at runtime will make a transition easier. – Christopher Nov 27 '19 at 15:10
  • Thank you for the suggestion. However, it would take a lot of time to implement, I believe. The monolith has around a million lines of code, approximately 30 MB. So far, we have spent months just splitting the code with superficial refactoring. The real rewriting of the code will probably take 10 years, if we end up doing it at all. I understand that what you are suggesting does not imply a full rewrite of the code, but I can't imagine us getting through this if we start doing more than very basic refactoring. – Utilitaire CCV Nov 27 '19 at 16:43
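Regarding the idea above of making everything internal that does not need to be public: a minimal sketch of how internal combined with InternalsVisibleTo keeps a member out of the public surface while still letting a specific sibling library use it. The assembly names and the method are hypothetical placeholders:

```csharp
// In the hypothetical AssemblyB: keep the helper out of the public API,
// but grant the hypothetical AssemblyA access to internals.
using System.Runtime.CompilerServices;

// Note: for strong-named assemblies the full public key must be appended to this string.
[assembly: InternalsVisibleTo("AssemblyA")]

namespace AssemblyB
{
    public class OrderService                        // stays part of the public surface
    {
        internal decimal ComputeTax(decimal amount)  // no longer public, still callable from AssemblyA
        {
            return amount * 0.2m;                    // placeholder logic
        }
    }
}
```

This does not answer the binary-compatibility question by itself, but it shrinks the public surface that a Cecil-style check like the one sketched above has to guard.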