What is Intelligence?

Besides the methods mentioned above, some research towards AGI rests on significantly different foundations. Some of these frameworks are derived from rigorous mathematical theories, some are inspired by neuronal circuits and some are based on psychological models. What most of them have in common is a focus on aspects in which their more popular counterparts fail. The frameworks I want to highlight are HTM, AIXI, ACT-R and SOAR.

Let’s start with Hierarchical Temporal Memory (HTM). It was originally based on ideas inspired by the circuitry of the neocortex. Keep in mind, though, that these circuits aren’t yet well understood, so HTM may serve only as a rough approximation of them.

At the core of HTM theory lies a particularly important concept: the Sparse Distributed Representation, or SDR. In practice, an SDR is just a bit array, usually a few thousand elements long, constructed so that semantically related inputs map to SDRs with many overlapping bits. Conceptually this is similar to the vector representations learned by neural networks, but sparsity and excess capacity are the key differences. These ideas are particularly relevant since one of the key assumptions behind DNN convergence proofs is overparametrization of the network.

Example of SDR overlapping in the presence of noise. Source: numenta.com
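
A minimal numpy sketch (my own illustration, not Numenta's implementation; the 2048-bit size and 40 active bits are just typical-looking values) of how SDR overlap survives noise:

```python
import numpy as np

def random_sdr(size=2048, active_bits=40, rng=None):
    """A mostly-zero bit array with a few randomly chosen active bits."""
    if rng is None:
        rng = np.random.default_rng()
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[rng.choice(size, active_bits, replace=False)] = 1
    return sdr

def overlap(a, b):
    """Count of bits active in both SDRs: the basic similarity measure."""
    return int(np.sum(a & b))

rng = np.random.default_rng(0)
a, b = random_sdr(rng=rng), random_sdr(rng=rng)
print(overlap(a, b))      # two unrelated SDRs share almost no bits

noisy = a.copy()
active = np.flatnonzero(noisy)
noisy[rng.choice(active, len(active) // 4, replace=False)] = 0  # drop 25% of active bits
print(overlap(a, noisy))  # the noisy copy still overlaps in 30 of 40 bits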

Other ideas from HTM theory aren’t as interesting, in my view. Inhibition resembles batch normalization and other regularization techniques; boosting is a relatively old concept in ML; the hierarchical structure seems too rigid, given that the neocortex has much more complicated connectivity patterns; topology looks like a synonym for the usual NN architecture; the theory in general puts a lot of weight on objects and too little on the relationships between them; and even SDRs could be constructed with ordinary ANNs by using a lot of neurons and penalizing activations. Altogether, HTM still requires too many tweaks to achieve performance comparable to its ML competitors. Still, I believe Numenta (the company behind HTM) deserves credit for simple and intuitive explanations of these ideas.

My next “guest”, AIXI, lacks such simplicity but has much more solid mathematical foundations. It does, however, have a significant drawback: it’s uncomputable. In fairness, many ML algorithms are impossible to compute exactly and we have to settle for approximations, which quite often perform well in practice. AIXI can famously be described in one line.
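
In Hutter’s standard notation (a reconstruction based on the caption below), it reads:

$$a_k := \arg\max_{a_k}\sum_{o_k r_k} \ldots \max_{a_m}\sum_{o_m r_m} \big[r_k + \ldots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-l(q)}$$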

The model has an agent and an environment that interact using actions (outputs), observations (inputs) and rewards (which can be described as a specific part of the input). The agent sends out an action a, and the environment responds with an observation o and a reward r; the term l(q) denotes the complexity of the environment q. This process repeats for each time step from k to m. Source: lesswrong.com

AIXI has been proven optimal in several formal senses and is, in my view, the best mathematical description of what AGI might look like that we have today. It is a general-purpose reinforcement learning agent and in many ways similar to the Gödel Machine developed by Schmidhuber. However, both of them serve as descriptive models of AGI rather than recipes for its creation. Still, they are great sources of inspiration for AI researchers.

In contrast, ACT-R, or Adaptive Control of Thought-Rational, is not just a theory but also a software framework written in LISP. Its development has been going on for decades, with multiple spin-offs in other languages and modified versions of the original model.

Source: teachthought.com

ACT-R is mostly focused on different types of memory and less on the transformations of the data held in them. It was developed as a computational model of the human mind and was successful to some extent: it has been applied to predict fMRI imaging results as well as the outcomes of some psychological experiments on memory. However, it consistently fell short in practical applications and remains mainly a tool for researchers. SOAR has similar roots and underlying hypotheses, but focuses more on achieving AGI than on modelling human cognition.
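
To make the symbolic flavour concrete, here is a toy production system in Python. It mimics the match-fire cycle at the heart of ACT-R and SOAR, but it is not their actual API, and the facts and rules are made up for illustration:

```python
# Declarative memory holds facts; production rules fire when their conditions match.
facts = {("goal", "greet"), ("visible", "person")}

# Each rule: (name, set of conditions, fact to add when fired)
rules = [
    ("say-hello", {("goal", "greet"), ("visible", "person")}, ("action", "say-hi")),
    ("wave",      {("action", "say-hi")},                     ("action", "wave-hand")),
]

# Repeatedly fire any rule whose conditions are all in memory,
# adding its action as a new fact, until nothing new can fire.
changed = True
while changed:
    changed = False
    for name, conditions, action in rules:
        if conditions <= facts and action not in facts:
            print(f"firing {name} -> {action}")
            facts.add(action)
            changed = True
```

Real architectures add conflict resolution, activation decay and subsymbolic tuning on top of this loop, which is where most of the configuration burden mentioned below comes from.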

ACT-R and SOAR are classic representatives of the symbolic approach to AI, and both are steadily losing popularity relative to connectionist approaches. They played an important role in the development of the cognitive sciences, but their applications require much more configuration and prior knowledge than modern connectionist ML counterparts. Furthermore, neuroimaging and other tools for studying the mind are becoming more detailed and accurate, while ACT-R and SOAR are lagging behind and, in some sense, are too rigid to stay relevant.

In my view, however, the future of AI must be symbolic at least to the extent that AI agents can understand and follow our laws, which are composed of human-friendly symbols.

AI agents in the wild

So far I’ve mostly described the algorithms that define the policies of today’s AI agents. But each agent also has some kind of body (computers, robots or servers) and an environment in which it operates, usually defined by the internet services it connects to.

Most PCs, smartphones and other gadgets have very similar hardware. Their policies are defined by operating systems, and they “learn” by downloading additional software. While early computers relied entirely on interactions with humans to learn, nowadays most receive updates over the internet.

The role of server agents is growing as more and more data moves to the cloud. These agents handle the most computationally intensive tasks and are somewhat similar to a central nervous system. Consumer-facing gadgets, in contrast, are improving their input/output capabilities, becoming somewhat similar to peripheral nerves.

Source: researchgate.net

An extreme case of this is known as the Internet of Things, where dozens of tiny, highly specialized devices each perform only one or a few functions, while a central cloud-based “brain” orchestrates them all to control houses, factories and even whole areas.

In contrast, robotics usually focuses on much more autonomous agents. These robots have to deal with complicated real-world input/output channels themselves, in real time. Self-driving vehicles are probably the most famous examples:

This is only a simplified picture: real systems often have over a hundred sensors producing constant streams of input, and their outputs can make the difference between life and death. Engineering such agents is one of the hardest areas of AI research today.

Moreover, consumer-oriented robots are only a small fraction of the total and a relatively new trend; the majority are designed for industrial and military needs. With this in mind, the misbehaviour of a self-driving taxi looks like a minor accident compared to a fault in an armed drone or a nuclear plant controller. Programming policies for such systems can’t rely on black-box learning algorithms; it usually involves strict mathematical specifications for every aspect of their operation.

Altogether, AI agents come in all shapes and colours, but the trend is that peripheral gadgets are getting smaller while data centres are growing.

Quantum world

While this section might appear disconnected from the topic of intelligence, I believe that physics, and quantum physics in particular, deserves specific attention for a number of reasons.

Source: physics.stackexchange.com

First of all, QM is the common ground for all artificial and biological agents. The workings of both semiconductors and biochemical machinery are fundamentally based on quantum effects. And while it doesn’t make much sense to talk about intelligence at the atomic or sub-atomic level, it is entirely possible to build universal computers from all kinds of materials.

Second, the mathematical tools developed over 300 years ago to calculate planetary motion became the foundation for backpropagation and gradient descent. Moreover, probability theory, statistical mechanics and matrix mechanics are fundamental to QM and are close relatives of modern AI. Deep Learning today is often compared to alchemy, but I believe physics can help us understand it much better than we do now.
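
As a tiny illustration of that lineage, here is gradient descent applied to a one-variable function; the function and learning rate are arbitrary choices for the sketch:

```python
# Minimal gradient descent on f(x) = (x - 3)^2, whose derivative is 2*(x - 3).
# The same 17th-century calculus behind planetary motion drives this update rule.
x = 0.0
learning_rate = 0.1
for step in range(50):
    gradient = 2 * (x - 3)         # df/dx at the current point
    x -= learning_rate * gradient  # step against the gradient
print(x)  # converges to ~3.0, the minimum of f
```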

Third, the rise of quantum computing. While quantum computers are still in their infancy, current experiments already show significant potential speed-ups for certain kinds of optimization problems. For example, the Boltzmann Machine is a kind of ANN that is intractable in most practical scenarios, so practitioners came up with a restricted variation, the Restricted Boltzmann Machine, which became a building block of some of the first deep neural nets. Perhaps quantum computers will let us harness the full power of BMs, along with many other probabilistic models.
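
For the curious, here is a rough numpy sketch of a single contrastive-divergence (CD-1) update for a Restricted Boltzmann Machine; biases are omitted and the sizes are arbitrary, so treat it as a cartoon of the algorithm rather than a production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny RBM: 6 visible units, 3 hidden units, random initial weights.
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))

v0 = rng.integers(0, 2, size=n_visible).astype(float)  # a binary "data" vector

# One step of contrastive divergence (CD-1): up, down, up again.
h0_prob = sigmoid(v0 @ W)                    # hidden activation from data
h0 = (rng.random(n_hidden) < h0_prob) * 1.0  # sample binary hidden states
v1_prob = sigmoid(h0 @ W.T)                  # reconstruct the visible units
h1_prob = sigmoid(v1_prob @ W)               # hidden activation from reconstruction

# Weight update: data-driven correlations minus model-driven correlations.
lr = 0.1
W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))
```

The "restricted" part is visible in the shapes: connections run only between the visible and hidden layers, never within a layer, which is what makes the up/down passes cheap.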

Present and future of quantum computing.

Lastly, QM is much harder to understand than anything else described above. Probability amplitudes, the violation of classical probabilistic logic and the vague picture of everything that happens at the sub-atomic level are just the tip of the iceberg. Ironically, while many people criticize artificial neural networks for poor interpretability, even humans fail to describe quantum physics in intuitive terms.

Biological Agents

In contrast to AI agents, which have existed for only about 100 years, biological ones have been around for about 3 billion. There are millions of species on Earth, and all of them have something in common: DNA.

Why is DNA so important? In a sense, it is the “central nervous system” of the cell. It is widely accepted that DNA-based life was preceded by RNA-based organisms, but functionally and structurally the two are very similar.

Source: Wikipedia

Most DNA, about 98% of it in humans, doesn’t encode proteins and was long considered useless. However, a considerable chunk of it plays a crucial role in controlling which parts of the coding DNA should be active depending on the environment. Parts of the DNA itself can also be deactivated by methylation, which is reversible and may happen multiple times throughout the life cycle.

All of this allows the genome to react to different combinations of inputs in different ways, deciding which role the host cell should specialize in and how active it should be. DNA doesn’t actually require a host cell to exist, either: extracellular DNA degrades, but smaller pieces can survive for many years.
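
As a cartoon of that input-dependent switching (a drastic simplification of real biochemistry, with made-up gene names), one can think of expression as a boolean function of methylation state and environmental signals:

```python
# A gene in this toy model is expressed only if its promoter is unmethylated
# AND the environmental signal it responds to is present.
gene_requires = {"heat_shock_protein": "heat", "lactase": "lactose"}
methylated = {"heat_shock_protein": False, "lactase": True}

def expressed(gene, methylated, signals):
    return (not methylated[gene]) and (gene_requires[gene] in signals)

print(expressed("heat_shock_protein", methylated, {"heat"}))  # True
print(expressed("lactase", methylated, {"lactose"}))          # False: silenced by methylation
```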

By the way, modern biotech allows us to synthesize and edit DNA as we wish, so at this point the distinction between artificial and biological agents basically disappears.

Cells

Elementary functional cells are known as protocells:

Source: xabier.barandiaran.net

They represent what the first living organisms might have looked like. Models of the Earth’s environment 3 to 4 billion years ago suggest that bubbles of lipids could have trapped enough nucleotides inside to create the first genomes by chance, and these in turn could have started replicating by catching nutrients from their surroundings. After accumulating a critical amount of genes and other chemicals, those bubbles divided under the force of internal pressure.

Another simple example is a virus. The main difference between the two is that viruses don’t maintain an inner metabolism and need to exploit other biological agents to replicate. Their genomes are usually very short and may encode as few as one or two proteins. However, viruses can “communicate” with their hosts through DNA exchange in a process known as horizontal gene transfer. Many single-cell organisms are capable of it too, and it plays an important role in evolution overall.

Bacteria, in contrast, can have multiple sensors for different chemicals, light, pressure, temperature and more. Many of them have mechanisms for movement that resemble combustion engines at a molecular scale.

The bacterium E. coli. Source: gfycat.com

They also have rather advanced communication and can group together in swarms. Their outputs are no longer just waste. Their genome, and the assorted proteins around it, allows them to digest a wide range of nutrients and perform rather complicated behaviour. In general, though, their structure is very similar to that of protocells and archaea.

Eukaryotic cells, on the other hand, have quite a lot of organelles. Some of them, like mitochondria and chloroplasts, have their own chunks of DNA and may have been separate organisms in the past. Mitochondria also play a crucial role in the Krebs cycle, which is fundamentally important for metabolism.

Source: biochemanics.wordpress.com

Typical eukaryotic cells have much more complicated chemical machinery inside but lack the ability to move on their own. Animal cells also lack chloroplasts and cell walls, which compromises their autonomy even further. In general, moving left to right along the evolutionary tree depicted above, cells gradually lose their ability to survive on their own while gaining more complicated “social” policies and specialized functions.

One of the quickest ways cells can react to changes in their environment is through action potentials. When sensors detect chemicals, pressure or other stimuli, they can cause a rapid change in the electrical potential across the cell membrane, which in turn may trigger cascades of chemical reactions leading to all kinds of outcomes.

Venus Flytrap plant. Source: giphy.com
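
The classic cartoon of this dynamic is the leaky integrate-and-fire model. The sketch below uses textbook-style constants and glosses over the real voltage-gated ion-channel machinery:

```python
# A leaky integrate-and-fire neuron: membrane potential leaks toward rest,
# integrates input, and fires (then resets) when it crosses a threshold.
dt, tau = 1.0, 20.0                              # time step (ms), membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # potentials in mV
v = v_rest
for t in range(200):
    current = 2.0 if 50 <= t < 150 else 0.0  # input stimulus window
    v += dt / tau * (v_rest - v) + current   # leak toward rest + input drive
    if v >= v_thresh:                        # threshold crossed:
        print(f"spike at t={t} ms")          # fire an action potential
        v = v_reset                          # and reset the membrane
```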

However, an action potential is limited to the cell of origin and to cells with direct membrane-to-membrane connections to it. The signal can be passed to other cells through signalling molecules, but that process is significantly slower. To avoid this bottleneck, most animals have specialized cells: neurons.

Schematic view of a neuron. Source: Wikipedia

They come in different shapes and can grow new synapses or remove old ones throughout their lifespan. Peripheral neurons usually have just a few hundred connections, while intermediate ones can have more than 10,000. All of this machinery allows them to move signals around quickly and transform them by adjusting synaptic strengths. In vertebrates, many axons also have a myelin sheath, which lets electric potentials travel even faster while activating fewer membrane channels and saving energy.
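
That transformation can be sketched in two lines (a deliberately crude abstraction; the weights are made-up values standing in for synaptic strengths):

```python
# A neuron's basic computation: weighted sum of inputs against a threshold.
# Learning corresponds to adjusting the weights (synaptic strengths) over time.
inputs  = [1.0, 0.0, 1.0]  # spikes arriving on three synapses
weights = [0.4, 0.9, 0.3]  # synaptic strengths
potential = sum(i * w for i, w in zip(inputs, weights))
print("fires" if potential > 0.5 else "silent")
```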

However, neurons form highly interconnected systems, and to understand what they are doing at the macro scale you need to consider the whole connectome. One of the best-studied nervous systems so far is that of the worm C. elegans:

Overview of the C. elegans nervous system. The majority of neurons are located in several ganglia near the nerve ring. Source: rstb.royalsocietypublishing.org

It has been studied for more than 50 years, and we already know the detailed structure of all of its 302 neurons and their more than 5,000 synapses:

Partial circuit diagram of the C. elegans somatic nervous system and musculature. Sensory neurons are represented by triangles, interneurons are represented by hexagons, motor neurons by circles and muscles by diamonds. Arrows represent connections via chemical synapses, which may be excitatory or inhibitory. Dashed lines represent connections by electrical synapses. VNC, ventral nerve cord. Source: rstb.royalsocietypublishing.org

As you can see, even 302 neurons pose a real challenge for understanding what each of them is doing. It gets even more complicated because they are “learning” and their functions can change in real time. Now try to imagine what happens with the billions of cells in the human brain.
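
To get a feel for connectome-style analysis, here is a toy directed graph built with networkx. The three-neuron mini-circuit is a hypothetical miniature in the spirit of the diagram above, not real C. elegans wiring data:

```python
import networkx as nx

# Hypothetical mini-circuit: sensory -> interneuron -> motor,
# the basic pattern the circuit diagram above is built from.
G = nx.DiGraph()
G.add_edge("ASH (sensory)", "AVA (interneuron)", type="chemical")
G.add_edge("AVA (interneuron)", "VA (motor)", type="chemical")
G.add_edge("AVA (interneuron)", "AVD (interneuron)", type="electrical")
G.add_edge("AVD (interneuron)", "VA (motor)", type="chemical")

# Even toy analyses, like enumerating sensor-to-muscle paths, explode
# combinatorially as the graph grows toward 302 neurons and 5,000+ synapses.
for path in nx.all_simple_paths(G, "ASH (sensory)", "VA (motor)"):
    print(" -> ".join(path))
```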

Given all this complexity, most research in neuroscience focuses on specific regions, pathways or cell types. The evolutionarily oldest structures are responsible for respiration, heartbeat, sleep/wake cycles, hunger and other vitally important functions. The cerebral cortex, however, receives more attention than anything else.

Structurally, the cortex is a folded, layered sheet about 2–3 millimetres thick, with an area about that of a dinner napkin, that surrounds other parts of the brain.

Cross section of the cortex. Source: etc.usf.edu

It is involved in everything we consider higher cognitive function: language, consciousness, planning and so on. In humans, about 90% of the cortex is neocortex, one of the brain’s most recent evolutionary inventions.

Another well-studied region is the hippocampus:

All vertebrates have a similar structure called the pallium, but only mammals have the more evolved version depicted above. It plays a crucial role in spatial and episodic memory; simply put, it functions as a cognitive spatiotemporal map. With this map, the brain can store complex memories in other parts specialized in visual, auditory and other types of representations.

The first studies of the brain focused on injuries and lesions. However, the correlations between absent brain regions and absent cognitive functions turned out to be relatively weak for the cortex. It turned out that memories are distributed across the cortex, and even after the surgical removal of some part, neighbouring neurons may re-learn the missing functions. In addition, it’s usually hard to specify the boundaries of an injury precisely. These studies provided maps like this:

Source: pinterest.fr

The main problem with these maps is the lack of precision at both ends, practical and theoretical. In an experimental setting you can stimulate small parts of the brain and watch the response, but except for the primary sensory and motor areas this usually yields rather cloudy results. Alternatively, you can use functional magnetic resonance imaging to track which parts of the brain are active while subjects perform some task, but since areas aren’t specialized in just a few tasks, the results are usually blurry. Also, fMRI actually measures oxygen supply levels, so it can’t capture activity at the level of individual neurons like this:

Spike propagation in a hippocampal neuron. Source: nature.com

One of the most promising current directions in neuroscience is optogenetics. It allows us to control the activity of individual neurons with much higher precision, using genes that equip neurons with light sensors. However, it requires genetic manipulation, which rules out experiments on humans.

Another interesting feature of brain activity is that it comes in waves:

High-level interpretation of EEG recordings (cps = cycles per second). Source: dickinson.edu

All these studies help us understand and treat neurological diseases, but they are far from describing human behaviour, beyond correlating activity in certain regions with rather vague descriptions of what a person is doing or thinking about. Still, this bottom-up approach to studying the mind has led to many important discoveries, such as the possibility of predicting someone’s choice from their neural activity, and the fact that there is no single “central” part of the brain.

On the other hand, behavioural studies from the psychological perspective are heavily influenced by genetic, cultural and environmental factors. One of the best-known results of this research is the Intelligence Quotient, along with tests to measure it. There are also many theories that attempt to explain intelligence, such as the theory of multiple intelligences and the triarchic theory of intelligence, but none of them has been widely accepted so far.

The main problem with psychological theories is their descriptive nature, which does not provide a way to test them quantitatively. The number of neuron-level processes underlying even simple acts like walking or saying “hi” is enormous, and once you factor in the complexity of DNA and the other bio-machinery inside every cell, the psychological interpretation of neuroscientific research is often even more complicated than the experiments themselves. However, some models of human cognition do draw solid connections between behaviour and neural activity.

The most interesting one, in my view, is Integrated Information Theory (IIT), which is built on these axioms:

Axioms and postulates of IIT. Source: wikipedia.org

Other theories cover reinforcement learning and how it is implemented in the brain, along with numerous models of memory, vision, hearing, language and more. In my view, however, IIT proposes the most general theoretical framework among them.

While the models mentioned above mostly focus on the behaviour of individuals, “social psychology” is crucial for most living organisms. From the colonies of bacteria in your gut all the way up to fish, ants, bees, birds and humans, societies arise from social interactions. We already know quite a lot about the chemical language of ants and how bees communicate through “dancing”, but understanding human emotions poses a huge challenge. Things get even more complicated with all the languages, laws and religions that we have.

So, what is Intelligence?

There are plenty of answers, but we don’t yet have a widely accepted unified theory of biological and artificial intelligence. However, I believe that a hybrid of AIXI and IIT might get us closer to one. Combining them would require a physical notion of reward/utility, perhaps derived from medicine and economics, that is applicable to every artificial and biological agent, and that is a huge problem in its own right.

Almost all current measurements of intelligence are based on performance on specific tasks, which poses a problem in a real world where the environment is continuously changing, along with the tasks an agent may stumble upon. On the other hand, the definition of consciousness as “any possible experience”, the related IIT framework, and the intelligence framework behind AIXI may together provide a broader picture of cognitive performance.

From the inside perspective, the workings of any agent can be described as the wave function of a quantum system, but in almost all cases this would be computationally intractable. Also, interpreting learned intermediate representations poses a huge challenge for both biological and artificial agents.

Most importantly, I believe that no single algorithm or mechanism is ultimately responsible for intelligence; rather, intelligence is a property of how an agent interacts with its environment.

What’s next?

While advances in AI and deeper understanding of human intelligence have a lot of upsides and tons of practical applications, they also reveal a number of challenges that we need to deal with, and most of them fall into one of these categories:

  • Privacy. Your data used to belong to you and, to some extent, to the government, with strict laws regulating its flow. Now it passes through hundreds of tracking services, social networks and other corporations with little-to-zero disclosure about how it is used.
  • Bias. Except for artificially curated ones, every training dataset has its biases, and they tend to amplify in closed-loop systems like recommendation engines (see the sketch after this list).
  • Alignment. Most AI training is based on maximizing utility or minimizing errors, and those objective functions don’t represent all human values and morals.
  • Displacement. Technologies have been replacing humans in many tasks for a while already, but human evolution is much slower than that of AI. Just a few decades ago computers were rare tools for professionals, but now it’s hard to stay relevant without using them every day.
  • Cyberattacks. Cyberattacks used to require a lot of preparation to target a single person, but modern AI can gather information, guess passwords, generate phishing content and impersonate someone else much faster than humans, while improving itself in the process.
  • Psycho-engineering. Numerous psychological experiments and lessons from history show that even people without any prior violent tendencies can do real harm when properly manipulated. Facebook, Google and other large corporations probably have enough information about us to target, screen and coerce us into doing almost anything.
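
Here is the bias sketch promised above: a toy closed-loop recommender (an assumed setup, not any real engine) in which exposure drives future exposure, so an initial logging skew never washes out:

```python
import random

random.seed(0)
true_preference = {"A": 0.5, "B": 0.5}  # the user actually likes both items equally
shown_counts = {"A": 6, "B": 4}         # a tiny initial skew in the logs

for _ in range(1000):
    total = sum(shown_counts.values())
    # Recommend proportionally to past exposure: this is the feedback loop.
    item = "A" if random.random() < shown_counts["A"] / total else "B"
    if random.random() < true_preference[item]:  # user clicks half the time
        shown_counts[item] += 1

# The exposure gap compounds instead of converging to the true 50/50 split.
print(shown_counts)
```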

How can we reliably solve the bias and alignment problems for trading bots that control major parts of the global economy? Who is responsible for the faults of AI agents in scenarios for which they haven’t been trained well enough? How can we build fault-tolerant brain-computer interfaces that can’t take control of our minds? Notably, most of these problems apply to humans just as much as to AI.

Where will we end up in 5, 10 or 20 years? I don’t know, and I encourage you to be skeptical about any forecasts regarding AI. History shows that most predictions, even from leading AI researchers, turned out to be wrong, sometimes by a huge margin. However, I believe that a symbiosis of artificial and biological intelligence is inevitable and could be very beneficial for us if we acknowledge the related problems and deal with them.

Resources

As well as coursera.org, edx.org and many other open education platforms. When I started studying all of this I hadn’t planned to publish anything, so I didn’t collect a list of references, and I apologize if your work is described above and is not on the list (feel free to reach me here, through twitter @eDezhic or by email [email protected]).
