We already knew, mathematically, that verification of an arbitrary program is impossible in finite time in the most general case, due to the halting problem. So this kind of problem is not new.

In practice, good design can provide enough decoupling that the number of intersecting features is far less than 2^N, though it certainly seems to stay above N even in well-designed systems.
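
Just to give a feel for the scale of that difference: if every subset of N features could interact, you face 2^N combinations, whereas a design that (purely as an assumption, for illustration) limits interactions to pairs of features only leaves N(N-1)/2 of them to worry about. A quick sketch:

```python
from math import comb

# Illustrative only: worst-case interaction counts when every subset of
# features can interact, versus a decoupled design where (by assumption)
# only pairs of features interact.
for n in (10, 20, 40):
    all_subsets = 2 ** n   # every combination of features could matter
    pairwise = comb(n, 2)  # only pairwise interactions matter
    print(f"N={n}: 2^N = {all_subsets:,}   pairs = {pairwise:,}")
```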

As for sources, it seems to me that almost every book or blog about software design is effectively trying to reduce that 2^N as much as possible, though I don't know of any that casts the problem in the same terms as you do.

As an example of how design might help with this: in the article mentioned, some of the feature intersection happened because replication and indexing were both triggered off the eTag. If there had been a separate communication channel to signal the need for each of those independently, then possibly they could have controlled the order of events more easily and had fewer issues.
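
To make that concrete, here is a minimal sketch of what I mean by a separate communication channel. None of the names below come from RavenDB - the event bus, the topics, and on_document_written are all made up for illustration. The only point is that replication and indexing each react to their own signal, raised explicitly by the write path, rather than both watching the eTag:

```python
from collections import defaultdict

class EventBus:
    """Tiny publish/subscribe channel so features don't share one trigger."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._handlers[topic]:
            handler(payload)

bus = EventBus()

# Each feature listens on its own topic instead of both being triggered
# off a single eTag change.
bus.subscribe("document.index", lambda doc: print(f"indexing {doc}"))
bus.subscribe("document.replicate", lambda doc: print(f"replicating {doc}"))

def on_document_written(doc_id):
    # The write path decides explicitly, and in a controlled order,
    # which signals to raise.
    bus.publish("document.index", doc_id)
    bus.publish("document.replicate", doc_id)

on_document_written("users/42")
```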

Or, maybe not. I don't know anything about RavenDB. Architecture can't prevent feature intersection issues if the features really are inextricably intertwined, and we can never know in advance that we won't want a feature that really does hit the worst case of 2^N intersections. But architecture can at least limit the intersections that are due only to implementation issues.

Even if I'm wrong about RavenDB and eTags (I'm just using them for the sake of argument - they're smart people and probably got it right), it should be clear how architecture can help. Most patterns people talk about are designed explicitly to reduce the number of code changes required by new or changing features. This goes way back - for example, the introduction to "Design Patterns: Elements of Reusable Object-Oriented Software" states that "Each design pattern lets some aspect of the architecture vary independently of other aspects, thereby making a system more robust to a particular kind of change".

My point is, one can get some sense of the big O of feature intersections in practice by, well, looking at what happens in practice. In researching this answer, I found that most analyses of function points versus development effort (i.e., productivity) found either less-than-linear growth of project effort per function point, or growth only very slightly above linear, which I found a bit surprising. This had a pretty readable example.

This (and similar studies, some of which use function points instead of lines of code) doesn't prove feature intersection doesn't occur and cause problems, but it seems like reasonable evidence that it's not devastating in practice.
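
As a rough sanity check on what "very slightly above linear" would imply, here's a back-of-the-envelope calculation. The power-law form and the exponent 1.1 are assumptions chosen just to illustrate the scale, not numbers taken from any particular study:

```python
# Purely illustrative: effort growing as a power law with an exponent
# slightly above 1 (the 1.1 here is an assumption, not a measured value).
a, b = 1.0, 1.1
for n in (100, 200, 400):
    effort = a * n ** b
    print(f"{n} function points -> relative effort {effort:,.0f}")

# Doubling the function points multiplies effort by 2**1.1, roughly 2.14 -
# a far cry from 2^N, which would double the effort for every single
# feature added.
```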
