When I have a reactive stream of some composite data object, I'm never able to reap the benefits of reactive programming below the highest level of the composite. Here's an example to illustrate the issue:
Say we are building an edit form for data of type `Product`:

```
Product: {
    name: string
    description: string
    comments: Array text
}
```
We also have a `FancyControl` for user input, which could contain arbitrarily complex features, e.g. autocomplete dropdowns and spell checking. Those features also rely on reactive programming ("everything is a stream"), so we have something like this:

```
FancyControl: (inText: Stream string, ...other input streams) => {
    asHtml: Stream HTML
    outText: Stream string
    ...other output streams
}
```

Note that output streams may be wired back into some input streams. For instance, `outText` is likely to be joined with other streams and wired back into `inText`.
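To make that feedback loop concrete, here is a minimal sketch of such a control, assuming an RxJS-style library; the factory, its internals, and the `$`-suffixed names are all illustrative stand-ins, not the real control:

```typescript
import { Observable, Subject, merge, map } from 'rxjs';

// Hypothetical FancyControl factory; the internals stand in for whatever
// complex features (autocomplete, spell checking, ...) the real control has.
function FancyControl(inText$: Observable<string>) {
  const userEdits$ = new Subject<string>();      // fired by the rendered control
  const outText$ = merge(inText$, userEdits$);   // programmatic input + user input
  const asHtml$ = outText$.pipe(map(text => `<input value="${text}">`));
  return { asHtml$, outText$, userEdits$ };
}
```

The owner can then join `outText$` with whatever other streams it cares about and push the result back into the `inText$` it originally handed in.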
Given `products: Stream Product` (a stream emitting single Products), we want a `Stream HTML` that visually renders the product to HTML, using our `FancyControls` as applicable. (Please refrain from objecting to this example goal; the reality is more performant and secure.)

If we have a function `combine: (Array Stream T1, Array T1 -> T2) -> Stream T2`, we can set up a stream to render `name` and `description` before we actually have a `Product` in our hand:

```
nameControl = FancyControl(products.map(p => p.name), ...);
descriptionControl = FancyControl(products.map(p => p.description), ...);
htmlStream = combine([nameControl.asHtml, descriptionControl.asHtml],
    (nameHtml, descHtml) => 'tr' + ...,
);
```
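In RxJS terms (where `combineLatest` plays the role of `combine`), and reusing the hypothetical `FancyControl` sketch above, that looks roughly like:

```typescript
import { Observable, combineLatest, map } from 'rxjs';

interface Product { name: string; description: string; comments: string[]; }
declare const products$: Observable<Product>;   // the `products` stream

const nameControl = FancyControl(products$.pipe(map(p => p.name)));
const descriptionControl = FancyControl(products$.pipe(map(p => p.description)));

// combineLatest emits the latest values of all inputs whenever any one of them emits.
const html$ = combineLatest([nameControl.asHtml$, descriptionControl.asHtml$]).pipe(
  map(([nameHtml, descHtml]) => `<tr><td>${nameHtml}</td><td>${descHtml}</td></tr>`)
);
```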
But I cannot think of any good way to set up the streams to render `comments`.
Attempt 1: We can try the same trick if we know (or have a good guess) the maximum number of comments any product will have:
```
range(0, maxComments).map(i =>
    FancyControl(products.map(p => p.comments[i]))
)
```

But it's unrealistic to always know that maximum; we have to guard against `null` and out-of-bounds indexes; and most of our streams sit idle most of the time. We also get bad combinatorial explosions if our arrays contain arrays of arrays, and it doesn't work for other structures (e.g. dictionaries with unknown keys).
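Fleshed out in the same RxJS-flavoured sketch (continuing with the `products$` and hypothetical `FancyControl` above), the fixed-slot approach ends up looking something like this, with guards doing most of the work:

```typescript
import { combineLatest, map } from 'rxjs';

const MAX_COMMENTS = 50;  // a guess we hope no product ever exceeds

// One permanently allocated control per possible comment slot, idle or not.
const commentControls = Array.from({ length: MAX_COMMENTS }, (_, i) =>
  FancyControl(products$.pipe(map(p => p.comments[i] ?? '')))  // out-of-bounds guard
);

const commentsHtml$ = combineLatest(commentControls.map(c => c.asHtml$)).pipe(
  map(htmlArray => htmlArray.join(''))  // still includes all the empty slots
);
```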
Attempt 2: Lift and flatten
If we have a function `lift: value -> Stream value` we can do:

```
commentControls = products.map(p =>
    p.comments.map(text => FancyControl(lift(text), ...))
)
```
We get a `Stream Array FancyControl`, and if we have a function `(Stream Stream T).flatten: () -> Stream T` we can do:

```
commentControls
    .map(ctrlArr => combine(
        ctrlArr.map(ctrl => ctrl.asHtml),              // Array Stream HTML
        htmlArray => htmlArray.map(html => 'tr' ...)
    ))                                                 // Stream Stream HTML
    .flatten()
```
This gives us the desired `Stream HTML`. It avoids all the disadvantages of Attempt 1, but we are now creating many single-use streams every time any event happens.
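In RxJS the `lift` is `of`, and the `map` + `flatten` pair collapses into `switchMap`; a sketch, with the same caveat that every `products` emission rebuilds every control:

```typescript
import { of, combineLatest, switchMap, map } from 'rxjs';

const commentsHtml$ = products$.pipe(
  switchMap(p => {
    // Rebuilt from scratch on every Product emission, i.e. on every keystroke.
    const controls = p.comments.map(text => FancyControl(of(text)));
    return controls.length
      ? combineLatest(controls.map(c => c.asHtml$)).pipe(
          map(htmls => htmls.map(html => `<tr><td>${html}</td></tr>`).join(''))
        )
      : of('');  // sidestep the empty-array edge case
  })
);
```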
Streams should emit immutable values, but the user is modifying the product on this page. Therefore every modification causes `products` to emit a new value. `FancyControl` may be complex to wire up, and we are now doing that an arbitrary number of times inside of `products.map`. Most of the streams we create only ever emit one value and then complete. Below the highest level of the aggregate, reactive programming is (poorly) emulating synchronous value passing.
In reality, the data structures we're dealing with are much more complicated than a single `Product`. What if we are editing multiple products, with the ability to add and delete? We now have a `Stream Array Product` that, because it must emit immutable values, is creating and emitting a whole new array every time the user types a key. This is necessary; we can't short-circuit it without limiting our functionality: what if the autocomplete options offered for the product on line 17 depend on something the user entered earlier on line 5?
I find myself writing complicated stream-manager classes that diff aggregates to figure out when to open and close various streams, but this is hacky and feels like overkill for the relatively simple things I'm trying to do.
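For what it's worth, the shape those stream managers take is roughly the following: keep the children keyed, diff on every parent emission, and only create or complete child streams when keys appear or disappear. A sketch, again assuming RxJS and reusing the hypothetical `FancyControl` and `products$` from above (keying by index, since the `Product` above has no stable comment ids, which is part of why it feels hacky):

```typescript
import { BehaviorSubject, combineLatest, of, map, switchMap } from 'rxjs';

interface Child { in$: BehaviorSubject<string>; control: ReturnType<typeof FancyControl>; }
const children = new Map<number, Child>();   // persists across products emissions

const commentsHtml$ = products$.pipe(
  switchMap(p => {
    // Create children for new indexes, push the latest text into surviving ones...
    p.comments.forEach((text, i) => {
      const existing = children.get(i);
      if (existing) existing.in$.next(text);
      else {
        const in$ = new BehaviorSubject<string>(text);
        children.set(i, { in$, control: FancyControl(in$) });
      }
    });
    // ...and complete/drop the ones whose index disappeared.
    for (const i of [...children.keys()]) {
      if (i >= p.comments.length) { children.get(i)!.in$.complete(); children.delete(i); }
    }
    const alive = [...children.keys()].sort((a, b) => a - b)
      .map(i => children.get(i)!.control.asHtml$);
    return alive.length ? combineLatest(alive).pipe(map(h => h.join(''))) : of('');
  })
);
```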
So my very general question is: How can I keep the nice simplicity of reactive programming below the highest level of an aggregate data structure?
Ideally, the solution would be recursively applicable. That is, if I have a nice reactive way to solve `Stream BigComposite SmallComposite -> Stream Result`, then I can use reactive concepts to solve `Stream HugeComposite BigComposite SmallComposite -> Stream Result`.
Clarifications
I am talking about Reactive Programming "in the small". This is a programming paradigm closely related to Functional Programming: the world where concepts like `Stream.of(3)` (creates a stream that emits the single value 3, then completes) and `Stream.never()` (creates a stream that never emits a value) are useful, similar to how `Sequence.empty()` is useful. I am not talking about what could be called "Reactive Systems Architecture", where we are worried about sharded databases and elastic server allocations, where it makes sense for each actor to keep its own copy of all the data it needs to do its job, and where we wouldn't expect the concept to recursively apply to smaller and smaller problems.
This question is probing the limits of the Reactive Programming paradigm. Mature programming paradigms don't have artificial floors and ceilings below and above which they stop providing their benefits. If we heard "Object-Oriented programming no longer works as a concept if your inheritance tree is deeper than 2 levels" we would rightly scoff, just as if we heard "Functional programming doesn't work if you're trying to use it inside another function". Concepts from those paradigms apply recursively: I have the full power of functional programming even inside functions inside functions.
Yet, in reactive programming I'm finding that I'm losing the benefit of the entire paradigm when some seemingly simple constraints apply:
1. I don't know how many child streams to wire up until after I've seen a value from a parent stream.
2. Some child streams can emit signals that will logically cause the parent stream to emit a new value.
If your answer is "this is not what Reactive Programming is for", then I don't see how the paradigm is salvageable if it's not recursively applicable. If I encounter a problem and solve it using streams (this is what `FancyControl` is supposed to be), I want to be able to use my solution if I encounter that same problem again "inside of" another stream. Saying I can't do that is like saying I can't use functional programming concepts inside another function.
If you're getting hung up on `FancyControl`, please consider the more concrete example of `Dropdown`:

```
Dropdown: (currentValue: Stream id, possibleValues: Stream Collection (id, model)) => {
    newValue: Stream id,
    htmlNodes: Stream virtualHtmlNode
}
```
`Dropdown` is not subscribing to anything; it's just a way to construct a pair of related output streams from a pair of related input streams. When the `htmlNodes` are eventually rendered to the screen, they provide a way for the user to cause `newValue` to emit a value. If I need to wire up `n` dropdowns, where I don't know `n` until I see a composite pass through some parent stream (condition 1), and some `possibleValues` streams may depend on this new value and other parts of the composite (condition 2), I appear to hit a floor below which reactive programming concepts no longer provide any benefit.
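To show both conditions in one place, here is a sketch of that wiring, assuming RxJS; `Dropdown`'s internals, `FormState`, and `optionsFor` are all hypothetical stand-ins:

```typescript
import { Observable, Subject, combineLatest, of, map, switchMap, scan, startWith } from 'rxjs';

// Hypothetical Dropdown factory matching the signature above.
function Dropdown(currentValue$: Observable<string>,
                  possibleValues$: Observable<Array<[string, string]>>) {
  const newValue$ = new Subject<string>();  // fired by the rendered node
  const htmlNodes$ = combineLatest([currentValue$, possibleValues$]).pipe(
    map(([current, options]) =>
      `<select>${options.map(([id, label]) =>
        `<option${id === current ? ' selected' : ''}>${label}</option>`).join('')}</select>`)
  );
  return { newValue$, htmlNodes$ };
}

interface FormState { fields: string[] }   // hypothetical composite; length unknown in advance
declare function optionsFor(state: FormState, index: number): Observable<Array<[string, string]>>;

// Condition 2: selections feed back into the composite the dropdowns were built from.
const edits$ = new Subject<{ index: number; value: string }>();
const initial: FormState = { fields: ['a', 'b'] };
const state$ = edits$.pipe(
  scan((s, e) => ({ fields: s.fields.map((v, i) => (i === e.index ? e.value : v)) }), initial),
  startWith(initial),
);

const html$ = state$.pipe(
  switchMap(state => {
    // Condition 1: only now do we know how many dropdowns to build.
    const dropdowns = state.fields.map((value, i) => {
      const dd = Dropdown(of(value), optionsFor(state, i));  // options depend on the whole composite
      dd.newValue$.subscribe(v => edits$.next({ index: i, value: v }));  // feedback into the parent
      return dd;
    });
    return combineLatest(dropdowns.map(d => d.htmlNodes$)).pipe(map(nodes => nodes.join('')));
  })
);
```

Even in this toy version, the dropdowns rebuilt on every emission and the manually managed `newValue$` subscriptions (which real code would have to track and dispose) are exactly the bookkeeping I am complaining about.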