Perception, Prediction, and Antifragility

I’ve recently picked up a copy of Nassim Taleb’s latest book, Antifragile, and have been very excited about what I’ve read so far. Taleb introduces his concept of antifragility as the opposite of fragility, something he insists ‘robustness’ or ‘resilience’ doesn’t quite capture. Fragile objects hate disturbance (the glass smashes on the floor when dropped), robust objects buffer themselves from disturbance (the thermostat senses the temperature rise and turns on the air conditioning to compensate), but antifragile objects thrive on disturbance (bones grow denser when we lift heavy weights, but waste away in zero-gravity conditions).

Taleb’s work focuses on decision making in uncertain environments where disturbances are difficult, if not impossible, to predict. Sometimes really big, unpredicted events take place: ‘black swan’ events (events with no precedent in the historical data). An antifragile entity benefits from such conditions. It does this not by making predictions (which are fragile to disturbances), but by using heuristics (know-how) to put itself in a position to benefit from the random/unexpected/unknown. Might perceptual systems display antifragile properties, or are they merely robust?

The focus on heuristics over prediction bears a striking resemblance to work in embodied cognition. Thinkers such as Maturana, Varela, Gibson, and many others have emphasized the role of know-how in the process of perception, while deemphasizing the role of internal (neural) models generated for predicting and interacting with the world. Such thinkers have recognized the fragility of prediction and the robustness of heuristics, but what of antifragility in the perception-action process?

Rather than having an internal model ‘of the world’, perhaps it would make more sense to consider the heuristics themselves as the models for goal-directed action. When an agent wants to accomplish something, they engage in heuristic X, which they expect to have outcome Y. The model (the heuristic) is thus selected based on the prediction (the expected outcome). One might then ask: how are the heuristics generated?
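To make this concrete, here’s a toy sketch in Python. Everything in it, the names, the outcomes, the comparison function, is my own illustration, not anyone’s actual model of perception:

```python
# Toy sketch (all names hypothetical): heuristics as the models
# for goal-directed action. The agent picks the heuristic whose
# expected outcome best matches its current goal.

def similarity(outcome, goal):
    # Stand-in for a real comparison: 1.0 on exact match, else 0.0.
    return 1.0 if outcome == goal else 0.0

def select_heuristic(heuristics, goal):
    """Select the heuristic predicted to produce the goal outcome."""
    return max(heuristics, key=lambda h: similarity(h["expected_outcome"], goal))

heuristics = [
    {"name": "reach", "expected_outcome": "object_in_hand"},
    {"name": "walk", "expected_outcome": "new_location"},
]

print(select_heuristic(heuristics, "object_in_hand")["name"])  # reach
```

The point of the sketch is only that the thing being selected is the heuristic itself, with prediction serving as the selection criterion rather than as a model of the world.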

Perhaps this is where the notion of antifragility can help. Taleb points out that for a system to be globally antifragile, it must be locally fragile. That is, in order to generate variety for evolutionary selection, a system must also weed out what does not work. There must be a distributed trial-and-error process, where errors come at a low cost globally. So we need a system that can generate heuristics, ideally ones that don’t cost much when they don’t work out.
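Here’s a rough sketch of that generate-and-weed process in Python; representing a heuristic as a single number, and the payoff function, are inventions of mine purely for illustration:

```python
import random

# Toy sketch of globally antifragile / locally fragile selection:
# cheap variant heuristics are generated, most fail and are discarded
# (local fragility), and the population as a whole improves.

def mutate(h):
    return h + random.gauss(0, 0.1)   # cheap, small variation

def payoff(h, environment):
    return -abs(h - environment)      # closer to the environment is better

environment = 0.7
population = [random.uniform(0, 1) for _ in range(20)]

for _ in range(50):
    variants = population + [mutate(h) for h in population]
    # Weed out what does not work: keep only the best 20 of 40.
    population = sorted(variants, key=lambda h: payoff(h, environment))[-20:]

print(round(sum(population) / len(population), 2))  # converges near 0.7
```

Each individual trial is allowed to fail cheaply; the system only improves because most variants are discarded.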

Despite much evidence that people use heuristics, not predictive logic, to solve all sorts of interesting problems (see the book Gut Feelings by Gerd Gigerenzer), predictive coding has recently become a topic of considerable interest in many neuroscience labs. This interest is related to recent developments in Bayesian models of perception, which allow for error-driven updating of predictive models. One thing that seems to be missing from such accounts is the purpose of predicting. Are we really just predicting for prediction’s sake? I think anyone would agree the answer is no. For prediction to be of any value, it must be embedded in some context of decision making. In the case of perception, the most obvious decision is the selection of an action with a particular desired outcome. We predict that engaging in heuristic X will give us outcome Y; when it doesn’t, we need to revise our model.

How do these revisions happen? One possibility is a piecemeal process, where poor predictions prompt slight adjustments to a single model. Another is a more evolutionary paradigm, where a diversity of heuristics is generated and selected for or against over time. Most likely there is a mixture of both piecemeal and evolutionary development, along with mechanisms no one has dreamt up yet.
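For illustration, here’s a toy contrast of the two revision styles in Python; the numbers and update rules are made up by me and aren’t drawn from the predictive coding literature:

```python
import random

# Piecemeal: one model, nudged slightly after each prediction error.
def piecemeal_update(estimate, observation, rate=0.2):
    error = observation - estimate   # prediction error
    return estimate + rate * error   # small correction to a single model

estimate = 0.0
observations = [1.0, 1.0, 0.8, 1.2]
for obs in observations:
    estimate = piecemeal_update(estimate, obs)
print(round(estimate, 2))            # drifts toward ~1.0

# Evolutionary: many candidate models; poor predictors are weeded
# out each round and the survivors spawn variants.
candidates = [random.uniform(-2, 2) for _ in range(30)]
for obs in observations:
    candidates.sort(key=lambda m: abs(obs - m))   # rank by prediction error
    survivors = candidates[:15]
    candidates = survivors + [m + random.gauss(0, 0.1) for m in survivors]
print(round(sum(candidates) / len(candidates), 2))  # clusters near ~1.0
```

Both processes end up tracking the same regularity; they differ in whether error corrects one model or culls a population of them.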

Although I’ve raised more questions than I’ve answered, I’ll ask three more: what role could ‘black swan’ events play in perception? Is it possible the structure of our nervous system could benefit from such rare and profoundly unpredictable events? Or are perceptual systems merely robust with respect to disorder, dealing with, but not thriving on, random and unpredictable events?

Here are two relevant videos. The first is Taleb giving a talk at Google about his ideas:

The second is Andy Clark speaking on predictive coding, an interesting choice given that his usual focus, embodied cognition, downplays the role of prediction (talk starts at 11m52s).
