Two questions we often hear from our clients are: “How can we use our domain know-how to make the model smarter?” and “How are we going to get all the data that deep learning needs?”
Probabilistic (Bayesian) modeling kills two birds with one stone: (1) it can incorporate domain knowledge, and therefore (2) it needs less data. Another advantage of Bayesian models is that they quantify uncertainty: when we get an answer, we also know how much we can trust it.
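To make this concrete, here is a minimal sketch of both points with a Beta-Binomial model of a machine's failure rate. All the numbers (the prior pseudo-counts and the observed data) are hypothetical illustration values, not results from a real project:

```python
def beta_summary(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

# Domain knowledge: "this machine fails roughly 5% of the time",
# encoded as a Beta(2, 38) prior (mean 0.05, deliberately vague).
prior_a, prior_b = 2, 38

# A small dataset: 1 failure observed in 10 runs.
failures, runs = 1, 10

# Conjugate update: posterior is Beta(prior_a + failures,
# prior_b + non-failures) -- no sampling needed in this simple case.
post_a, post_b = prior_a + failures, prior_b + (runs - failures)
mean, std = beta_summary(post_a, post_b)
print(f"with expert prior: {mean:.3f} +/- {std:.3f}")

# Same data with a flat Beta(1, 1) prior, i.e. no domain knowledge.
flat_mean, flat_std = beta_summary(1 + failures, 1 + runs - failures)
print(f"with flat prior:   {flat_mean:.3f} +/- {flat_std:.3f}")
```

With the expert prior, ten runs already give a stable estimate near 5%, while the flat prior is pulled to roughly 17% by the single observed failure and is about three times as uncertain. The `+/-` term is exactly the uncertainty quantification mentioned above: the answer comes with a measure of how much to trust it.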
We can apply probabilistic modeling to most machine-learning setups: classification or prediction, image or time-series data, and we can even combine it with deep learning. We can also use it for causal inference, where we search for what caused a result: “Which of all the potential factors was responsible for the machine’s damage?”
At LeanBI we always strive to use the field experts’ know-how as much as possible in our models. Probabilistic modeling is often the right tool for this: for example, combining physical models with probability distributions to explain indoor air quality, or quantifying the probability of each potential failure in predictive maintenance.
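The “probability of each potential failure” idea can be sketched with plain Bayes’ rule over a discrete set of causes. The causes, prior probabilities, and symptom likelihoods below are hypothetical illustration numbers, not values from an actual maintenance project:

```python
# Prior probability of each failure cause, e.g. elicited from
# maintenance records or field experts (hypothetical numbers).
prior = {"worn bearing": 0.10, "motor fault": 0.05, "sensor drift": 0.85}

# Likelihood of the observed symptom (strong vibration) under each cause.
likelihood = {"worn bearing": 0.90, "motor fault": 0.40, "sensor drift": 0.05}

# Bayes' rule: posterior is proportional to prior * likelihood,
# then normalized so the probabilities sum to one.
unnorm = {cause: prior[cause] * likelihood[cause] for cause in prior}
total = sum(unnorm.values())
posterior = {cause: p / total for cause, p in unnorm.items()}

for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {p:.2f}")
```

Even though sensor drift is by far the most common issue a priori, the vibration evidence makes a worn bearing the most probable diagnosis, and every candidate cause gets an explicit probability rather than a single hard verdict.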