I am currently working on a project for a production facility. The machines in production save so-called 'telemetry data', measurements like water pressure, plastic volume, or temperature, into a database; in this case we are using an Elasticsearch cluster. My task is to develop a web application that gives the technical staff an interface for analysis in case a machine has an error, which can sometimes be read off the telemetry data. The simplest visualization they requested is a line chart displaying all the data over time. I know that tools like Kibana exist, but the software needs to fulfill additional tasks that cannot be realized with Kibana.
My goal is to build a flexible dashboard for the analysis part using JSF and PrimeFaces. Where I am struggling in the modeling process is the fact that there are 3 types of machines, and the telemetry data they save to the database all look different. For example, one of the machines just stamps metal parts and only emits a pressure measurement, but there is also one used for molding plastic that records the plastic volume and the current temperature.
So I started implementing it this way:
enum MachineType {
    METALLSTANCE, PLASTICMOLDER;
}

class Machine {
    private MachineType type;
}

class TelemetryDataPoint<T extends Enum<T>> {
}
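Fleshed out, the idea is that the type parameter carries the machine-specific measurement kind. A minimal sketch of how I intend to use it (the measurement enums and field names are illustrative, not my real model):

```java
// Illustrative per-machine measurement kinds; one enum per machine type.
enum StampingMeasurement { PRESSURE }
enum MoldingMeasurement { PLASTIC_VOLUME, TEMPERATURE }

// Sketch of the data point: which measurement, its value, and when it
// was recorded.
class TelemetryDataPoint<T extends Enum<T>> {
    private final T measurement;
    private final double value;
    private final long timestampMillis;

    TelemetryDataPoint(T measurement, double value, long timestampMillis) {
        this.measurement = measurement;
        this.value = value;
        this.timestampMillis = timestampMillis;
    }

    T getMeasurement() { return measurement; }
    double getValue() { return value; }
    long getTimestampMillis() { return timestampMillis; }
}

// e.g. a pressure reading from the metal stamping machine:
// new TelemetryDataPoint<>(StampingMeasurement.PRESSURE, 42.5, System.currentTimeMillis());
```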
And there is a locally available API on the network which lets me send search requests to the Elasticsearch cluster and returns the raw response string:
{
  "took" : 63,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : null,
    "hits" : [ {
      "_index" : "telemetry",
      "_type" : "_doc",
      "_id" : "1",
      "_score" : null,
      "_source" : {<DATA>}
    } ]
  }
}
I built a RESTRouter class which sends those GET requests and returns the response. The RESTRouter is used internally by the statistics repository to retrieve the desired data over a certain time period. The data points are then used in the line chart. Now my question is how to perform the parsing of the Elasticsearch response back into a TelemetryDataPoint. I am currently using Jackson for that, which works just fine, but it feels kind of unnatural, because I am already using the Elasticsearch API for building the queries, and I feel like there must be a way for that API to take the JSON and turn it into objects.
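For reference, my current Jackson-based parsing looks roughly like this (the `_source` field names "timestamp" and "pressure" are simplified placeholders, not my real schema):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.ArrayList;
import java.util.List;

// Roughly what the statistics repository does today: walk the raw
// Elasticsearch response tree and pull each hit's _source into a
// (timestamp, value) pair for the line chart.
class ResponseParser {
    private final ObjectMapper mapper = new ObjectMapper();

    List<double[]> parse(String rawResponse) throws Exception {
        JsonNode hits = mapper.readTree(rawResponse).path("hits").path("hits");
        List<double[]> points = new ArrayList<>();
        for (JsonNode hit : hits) {
            JsonNode source = hit.path("_source");
            points.add(new double[] {
                source.path("timestamp").asDouble(),
                source.path("pressure").asDouble()
            });
        }
        return points;
    }
}
```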
The tricky part started today, when one of the people in charge of the project walked in and asked me to add some aggregated data to the dashboard, for example 'What volume of plastic has been used over the last 4 hours?' or 'How often has the pressure gone above a critical threshold within the last 4 hours?'
And this is where I cannot wrap my head around how to do it. I do not know how to design the object holding the aggregated data, since there are so many types of aggregates they could request, e.g. counts, sums, or averages, and they could all come from different machines, which means I would have to implement
COUNT(MACHINE_TYPES) X COUNT(AGGREGATE_TYPES)
classes and the corresponding Jackson parsers for the responses. Is this the only way to go about this, or am I blind?
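To make the explosion concrete, this is the kind of class zoo I am afraid of ending up with (all names hypothetical); with M machine types and N aggregate types it grows to M x N result classes plus M x N parsers:

```java
// Hypothetical per-machine, per-aggregate result classes, one for
// every combination of machine type and aggregate type.
class StamperPressureCount { long count; }
class StamperPressureSum { double sum; }
class StamperPressureAverage { double average; }
class MolderVolumeCount { long count; }
class MolderVolumeSum { double sum; }
class MolderVolumeAverage { double average; }
// ... and another row of these for every new machine or aggregate type.
```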