A vision towards bringing machine learning closer to humans
Is there a way to re-interpret machine learning in a constructivist way? And more importantly, why should we do it?
The answers to both questions are quite straightforward. Yes, we can do it, and the motivation is that it may address one of the crucial shortcomings of modern machine learning: its distance from a human interpretation of reality.
The key component of cognitive functionality is a model.
Humans are able to build very complex models thanks to the way our minds work. Functionalist psychology has shown that the mind continuously builds mental models as hypothetical constructs to predict the environment, and continuously revises them.
We can rely on the General Model Theory of Stachowiak (1973) to try to understand what reality and cognition are.
Learning consists of constructing, deconstructing and reconstructing models. Constructivist theory states that there is no human-independent reality and that knowledge is acquired individually. If you look at machine learning from this perspective, it largely falls outside any human-oriented interpretation of reality.
However, you could map:
- Construction (create, imagine) as unsupervised learning
- Reconstruction (reproduce, imitate) as supervised learning
- Deconstruction (reflect, doubt), instead, has no clear counterpart. It partially maps to online learning, but not entirely, because deconstruction also requires looking into models: abstracting, comparing and differentiating them, and selecting or discarding the invalid ones.
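The mapping above can be made concrete with a toy sketch. All names here (`Model`, `construct`, `reconstruct`, `deconstruct`) are hypothetical illustrations of the three operations, not an established API: construction invents labels from unlabelled data, reconstruction imitates given labels, and deconstruction drops hypotheses contradicted by new evidence.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """A toy 'model': a set of hypotheses mapping an observation to a label."""
    hypotheses: dict = field(default_factory=dict)

def construct(observations):
    """Construction (create, imagine) ~ unsupervised learning:
    group unlabelled observations under invented cluster labels."""
    model = Model()
    for i, obs in enumerate(sorted(set(observations))):
        model.hypotheses[obs] = f"cluster_{i}"
    return model

def reconstruct(examples):
    """Reconstruction (reproduce, imitate) ~ supervised learning:
    imitate the given (observation, label) pairs."""
    model = Model()
    for obs, label in examples:
        model.hypotheses[obs] = label
    return model

def deconstruct(model, feedback):
    """Deconstruction (reflect, doubt): look into the model and discard
    hypotheses that new evidence contradicts."""
    for obs, label in feedback:
        if obs in model.hypotheses and model.hypotheses[obs] != label:
            del model.hypotheses[obs]  # doubt and drop the invalid hypothesis
    return model
```

For instance, a model that learned `red -> stop` and then observes `red -> go` does not silently overwrite the hypothesis, as online learning would; deconstruction explicitly marks it invalid and removes it, leaving room for a new construction step.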
It is actually possible to implement some constructivist approach in machine learning algorithms in practice. For instance, here is a simple scheme that compares two classification models over time (T), subject (Sigma) and purpose (Z). The table below the two models shows the cases that arise when one of the three aspects differs, and what each implies in terms of the constructivist process.
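A minimal sketch of such a comparison scheme follows. The decision rules are my own illustrative assumptions (the original table is not reproduced here): the idea is only that comparing two models on the three pragmatic aspects T, Sigma and Z suggests which constructivist process to apply.

```python
from typing import NamedTuple

class ModelMeta(NamedTuple):
    """Stachowiak's three pragmatic aspects of a model."""
    time: str      # T: when the model was built or is valid
    subject: str   # Sigma: who builds or interprets the model
    purpose: str   # Z: what the model is for

def constructivist_process(a: ModelMeta, b: ModelMeta) -> str:
    """Compare two models aspect by aspect and suggest a process.
    The rules below are hypothetical, for illustration only."""
    if a.purpose != b.purpose:
        return "construct"    # different goal: a new model must be created
    if a.subject != b.subject:
        return "reconstruct"  # same goal, new subject: reproduce and adapt
    if a.time != b.time:
        return "deconstruct"  # same goal and subject, newer data: revise
    return "keep"             # no aspect differs: retain the model
```

Used on two models that share subject and purpose but differ in time, the scheme suggests deconstruction: the older model should be doubted and revised rather than blindly reused.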