
This question is probably too broad but...

I have 300 projects (say project1_v1.sln, project2_v1.sln...) all dependent on one class library (say classLib_v1.sln). Every 3 months we need to make a small amendment to the class library (interfaces and implementation), which will break some of the projects; you then fix these projects and deploy....nasty....

BUT....

rather than amend the existing projects we would rather regenerate them as a new "branched" version....

so...clone classLib1.sln to make classLib2.sln, make the amendment, and then regenerate project1_v1.sln as project1_v2.sln, which references classLib2 rather than classLib1.....compile....and fix.

I.e. have two versions of every project.
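The "clone and repoint the reference" step described above can be automated with a small script. Here is a minimal sketch (in Python rather than a .NET tool, and the file names, function name, and plain-text replacement are all illustrative; real tooling would parse the .csproj XML rather than do a string swap):

```python
from pathlib import Path

def branch_project(src: Path, dest: Path, old_ref: str, new_ref: str) -> None:
    """Copy a project file, pointing its library reference at the new version."""
    text = src.read_text()
    # A plain textual swap covers both <Reference Include="..."> and
    # <HintPath> entries in a .csproj; robust tooling would use an XML parser.
    dest.write_text(text.replace(old_ref, new_ref))

# Hypothetical usage: regenerate project1_v1 as project1_v2 against classLib2.
# branch_project(Path("project1_v1.csproj"), Path("project1_v2.csproj"),
#                "classLib1", "classLib2")
```

Run over all 300 projects, this produces the v2 family in one pass; the subsequent compile of each v2 project is what reports where the breaking change actually bites.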

feels like some sort of automated "production line", of products.

We do use TFS, but our version control is very linear, so I don't especially know how to deal with something like this without causing a mess.

thoughts?


There have been several useful responses, but just to be clear: this isn't about avoiding the breaking change, the code WILL break. It's about automating a change (of a reference), and then using that automation to tell us where the code has broken.

Dependency injection has been touted as a solution, and it can be used, but you still need to automate the change to the configuration, and the downside is that configurable DI isn't type safe, so the compiler can no longer tell you where you've got a problem.
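That loss of type safety is easy to demonstrate. A minimal sketch (Python standing in for C#, with hypothetical class names): the implementation is chosen by a configuration string, so a renamed method surfaces only at runtime rather than at compile time:

```python
class CalculatorV1:
    def bar(self) -> int:
        return 42

class CalculatorV2:  # the breaking change: bar() is gone, wibble() replaces it
    def wibble(self) -> int:
        return 42

# The "container": the implementation is picked by a configuration string,
# exactly as an XML-configured DI container would pick it.
REGISTRY = {"v1": CalculatorV1, "v2": CalculatorV2}

def client(impl_name: str) -> int:
    calc = REGISTRY[impl_name]()  # resolved from config, invisible to the type system
    return calc.bar()             # breaks only at runtime when impl_name == "v2"
```

`client("v1")` returns 42, while `client("v2")` raises an error only when it runs; no build step flags the 300 call sites that need fixing, which is exactly the objection above.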

MrD at KookerellaLtd
    *"how to deal with something like this without causing a mess"* > Overall it sounds to me like you already got a hell of a mess there... – bassfader Jun 12 '17 at 12:06
  • imagine 300 apps that talk to an API, and the API is versioned...(i.e. has breaking changes) – MrD at KookerellaLtd Jun 12 '17 at 12:26
  • probably quite a common scenario – MrD at KookerellaLtd Jun 12 '17 at 12:27
  • This question is way too broad to answer well in the Stack Overflow format. You have a massive architectural issue with your application that you need to address. – Daniel Mann Jun 12 '17 at 16:30
  • It's not an architectural issue....it's an inherent part of software development: coupling + non-conformant changes => breaking clients....it's life – MrD at KookerellaLtd Jun 13 '17 at 08:29
  • *"it's an inherent part of software development"* -> Well... no, that is just wrong... In Software development you really try to avoid such situations as much as possible, for example using abstraction layers. That way you don't need to change 300+ projects, instead you only need to update the abstraction layer. To me this all sounds like many basic principles of software development have been violated in this case. I understand that changing this now is probably going to be tough, but trying to keep this design is most likely only going to introduce more and more problems later on... – bassfader Jun 13 '17 at 11:03
  • you say "no" and then you say you "try" to avoid it? But if the answer is no then it means you can always avoid it, because it isn't inherent? This API IS an abstraction layer; for what you say to be true implies that you can define such a layer that is "future proof" and can always be extended.....which is clearly false (I simply remove some behaviour from the abstraction and it's broken) – MrD at KookerellaLtd Jun 13 '17 at 17:09

3 Answers


If updating interfaces forces rebuilds, and the goal is to avoid rebuilding existing consumers when new changes land, then one should not make changes to the existing interfaces in the first place.

If new functionality is needed, create a new interface, which can inherit either from a common base interface or even from the previous interface; that way the new functionality is exposed only to the new consumers.
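The interface-versioning idea can be sketched as follows (Python ABCs standing in for C# interfaces; all names are hypothetical):

```python
from abc import ABC, abstractmethod

class IOrders(ABC):  # the original interface, left untouched
    @abstractmethod
    def place(self, item: str) -> str: ...

class IOrdersV2(IOrders):  # the new interface inherits the previous one
    @abstractmethod
    def cancel(self, order_id: str) -> str: ...

class OrderService(IOrdersV2):  # one implementation satisfies both versions
    def place(self, item: str) -> str:
        return f"placed:{item}"

    def cancel(self, order_id: str) -> str:
        return f"cancelled:{order_id}"
```

Existing consumers stay typed against `IOrders` and keep compiling unchanged; only new consumers take a dependency on `IOrdersV2` and see `cancel()`.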

ΩmegaMan
  • breaking changes are inevitable, and the shape of those changes is not known before they are made...so there is no known "common interface". – MrD at KookerellaLtd Jun 12 '17 at 12:58
  • the goal is to "industrialise" the process of handling those changes – MrD at KookerellaLtd Jun 12 '17 at 12:59
  • @user2088029 Via a factory pattern such breaking changes should not affect consumers unless an interface changes. Can the software become that compartmentalized? If that is not possible, as you seem to suggest, then this is software 101 and any new changes have to be recompiled/built throughout all affected systems. I understand you *want* to genericize/industrialize the process, but I wonder whether the cure proposed by that versioning would be almost as onerous as the issue it fixes? GL – ΩmegaMan Jun 12 '17 at 13:22
  • "we need to do a small amendment the class library (interfaces and implementation)" – MrD at KookerellaLtd Jun 12 '17 at 13:25
  • so the interfaces change...the code WILL break...we want it to break....because we want the automated build and test system to tell us what we need to fix....we don't want to genericise it; that implies we can model the changes in polymorphic parameterisation, and we can't, breaking changes will happen. – MrD at KookerellaLtd Jun 12 '17 at 13:27
  • So the alternative?....is branching the code...I don't really like branching (I don't understand it!)....but OK, let's say we do that; I'd still like to automate the process of branching and then amending 300 projects and then building them through TFS – MrD at KookerellaLtd Jun 12 '17 at 13:29
  • @user2088029 You know your system better than I do, but from my vantage point I believe you can mitigate the effects (though not remove all) of change by identifying what might change and what does not, and planning accordingly. The design implemented may have too much code debt, so to speak, to make such an architectural change; in that case one has to live with the trade-off selected in the initial design. – ΩmegaMan Jun 12 '17 at 13:31
  • the changes are driven by an external API that is presented to us. But let's be clear, breaking changes are part of the software development process; people don't talk about them because they are ugly and messy (the software-product-lines people DO talk about it) – MrD at KookerellaLtd Jun 12 '17 at 13:35
  • @user2088029 Sounds like an interesting (to me, a third-party observer) problem which maybe should be modeled outside of the current project. In other words, if you had your druthers, what would you come up with that is different, to placate the incessant needs of an external API? By creating such a *test* system, what could be learned about the current system? If you could present us with such an example, SO might be able to give you a more palatable answer. – ΩmegaMan Jun 12 '17 at 13:39
  • druthers? what's that? – MrD at KookerellaLtd Jun 12 '17 at 13:45
  • it's a classic production line/FOP scenario....you want to industrialise a small change to a system, and detect those places where the automated solution doesn't work....the automated solution is changing a reference...the detection is static typing and automated unit tests – MrD at KookerellaLtd Jun 12 '17 at 13:47

This can be done using dependency injection: instead of recompiling the code all the time, you rely on an XML configuration, so you don't need to recompile every time a dependency changes. But be aware that this only works while you add new features or methods; it breaks down when you remove or rename the ones you use.
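A minimal sketch of that configuration-driven resolution (in Python rather than an XML-configured .NET container; the dictionary stands in for the XML file, and the service name is hypothetical):

```python
import importlib

# Stand-in for the XML configuration: service name -> dotted path of the
# implementing class. Swapping the implementation means editing this entry,
# not recompiling any consumer.
CONFIG = {"decoder": "json.JSONDecoder"}

def resolve(service_name: str):
    """Instantiate the class named in the configuration."""
    module_name, _, class_name = CONFIG[service_name].rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls()
```

Note the caveat above still applies: the lookup is by name, so if the configured class drops a method the consumers use, nothing fails until the call happens at runtime.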

Below is a question with the same problem as yours.

Dependency Injection - Choose DLL and class implementation at runtime through configuration file

Now, there is another thing. TFS has a Continuous Integration option that you can take advantage of; for that you need to be an administrator of the TFS. You can enable the Gated Check-in option to protect yourself from breakage when you change your code. Here is the documentation: Gated Check-in

You can also set up a trigger in TFS for all your projects that monitors a specific folder: if someone checks in a change to the dependent DLL or project stored there, TFS starts an automated build to see whether each project compiles correctly against the new dependency.

Zinov
  • ok, we do indeed use continuous integration and gated checkins. – MrD at KookerellaLtd Jun 12 '17 at 13:21
  • now the question is whether they are configured that way with the triggers; check for that. Did you also look at the dependency injection approach? – Zinov Jun 12 '17 at 13:23
  • and we also monitor the underlying folder.....but this ISNT what we want to do...we want to version our projects not amend them. – MrD at KookerellaLtd Jun 12 '17 at 13:23
  • dependency injection is about injecting implementations with consistent interfaces. These are breaking changes; we WANT to compile them, because we WANT the compiler (or the tests) to tell us which projects need to be amended – MrD at KookerellaLtd Jun 12 '17 at 13:24
  • I'm happy with our CI configuration, you're trying to answer a different question though which is about handling non breaking changes. I'm really talking about how to handle production lines of software in the "real world" – MrD at KookerellaLtd Jun 12 '17 at 13:34
  • You have multi-tenant applications, as I can see; maybe one client introduces a change that another doesn't use. In that case you need to create, as @OmegaMan said, a new interface to introduce your changes, and rewrite the part of the code where you need to handle that as client1 or as client2 needing the new feature. In that case you will always have a breaking change and you should compile the entire code to deploy it to all your clients; after doing that you maintain the same code for all, but the logic differs depending on which feature they use, based on your config – Zinov Jun 12 '17 at 13:39
  • DI via Interfaces. +1 – ΩmegaMan Jun 12 '17 at 13:40
  • if I have "interface Foo { int Bar(); }", and a client that receives a Foo and invokes Bar....and someone changes that interface to remove Bar() and add a Wibble() function, how can DI help? – MrD at KookerellaLtd Jun 12 '17 at 13:49
  • this is about automating, and detecting breaking changes to a large family of related applications, it isn't about software architecture...or at least in the sense, it isn't about being able to handle breaking changes....breaking changes by definition break your code... – MrD at KookerellaLtd Jun 12 '17 at 13:51
  • I think there is a perception that the changes are driven by the clients, and thus we know where to make the changes. That isn't the case: the changes are driven by the external API being planned to change, so the challenge is to detect where to make those changes in the simplest possible way....we can simply manually amend 300 solutions to point to a new reference (we could put it in config, but then the compiler won't know about it), compile...and run the tests...300 times. We want to automate this process whilst branching/versioning the code base. – MrD at KookerellaLtd Jun 12 '17 at 13:59

Ultimately I think this is a question about CI and source control (not software architecture, DI or any other such things); sometimes, inevitably, you have to apply a change that breaks something (unless you don't couple your code to any abstractions....and use Ctrl+C to share behaviour!).

So the answer is...research how to use source control and CI to stage development, UAT and production versions of a family of applications.

MrD at KookerellaLtd
  • Basically thinking about it...THIS is the problem, so I'll repost it in a more focussed way, how to handle production vs development releases of a family of products in general, and probably specifically with git + tfs – MrD at KookerellaLtd Jun 13 '17 at 08:31