Martyrdom as an Antifragile Narrative

In Dan Carlin’s recent Common Sense podcast ‘Arab Spring Fever’ (which I recommend very highly, especially if you are interested in how the US might restructure itself internally to continue evolving and avoid stagnation and decay), he mentions that one of the features uniting most, if not all, of the major religions is some concept of martyrdom. I use the term here in its loosest sense, to mean a kind of ‘divine’ suffering: seeing hardship as a blessing in disguise, if you will.


One might ask why this is such a common feature of organized religion. Perhaps it contains some truth, and is therefore invariant over many specific instances of genuine spirituality. What struck me, however, was the consequence Carlin drew from such a belief structure. Essentially, if the religious narrative of an individual or a people includes being downtrodden as verification of the narrative itself, then becoming downtrodden can only strengthen the narrative. It follows that if followers of such a narrative are not exposed to persecution, suffering, and the like, the staying power of the narrative weakens through a lack of empirical verification.

Systems that gain from exposure to disorder, stress, and the unpredictable, and that are harmed by a lack of such volatility, have been called ‘antifragile’ by author and decision-making expert Nassim Nicholas Taleb. It strikes me that martyrdom-centric religious narratives display this feature of antifragility. When true believers become downtrodden, the narrative is strengthened; when no outside stress is applied to true believers, the narrative may dissipate.

If this analysis is correct, it is perhaps no surprise that most of the major religions share this feature. Real systems in the real world are inevitably exposed to volatility. Those that benefit from it prosper; those that do not are here and gone again in the blink of an eye.

An interesting twist on this framing is the fact that Taleb promotes ‘non-narrative’ modes of thinking and living. For the human animal, however, it seems a narrative can itself serve as an antifragile heuristic. Or, perhaps more precisely, collectives (e.g. religions) may employ antifragile heuristics by leveraging narratives at the level of human psychology. Either way, it seems the reason the world religions share this martyrdom narrative could be precisely the strengthening effect volatility has on it, whereas narratives without martyrdom enjoy no such benefit from the random.


Structure and Process

A structure constrains a process; a process entails a structure.

Structure is the relation among processes. Process is the ongoing dynamics over a structured manifold. Process leads to the alteration of structure, which in turn modifies the ongoing dynamics.

Creativity happens when processes can intermix and intermingle in novel ways via the given structure: in ways not determined by the structure, but constrained and loosely organized by it.

So far as I can tell, one does not reduce to the other; they come as a pair.



Perception, Prediction, and Antifragility

I’ve recently picked up a copy of Nassim Taleb’s latest book Antifragile, and have been very excited about what I’ve read so far. Taleb introduces his concept of antifragility as the opposite of fragility, something he insists ‘robustness’ or ‘resilience’ doesn’t quite capture. Fragile objects hate disturbance (the glass smashes on the floor when dropped), robust objects buffer themselves from disturbance (the thermostat senses the temperature rise and turns on the air conditioning to compensate), but antifragile objects thrive on disturbance (bones grow denser when we lift heavy weights, but waste away when exposed to zero-gravity conditions).
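One standard way Taleb cashes out this three-way distinction is in terms of convexity: a fragile thing responds concavely to disturbance (large shocks hurt far more than they help), while an antifragile thing responds convexly (large shocks help far more than they hurt). Here is a minimal sketch of that asymmetry; the payoff functions are chosen purely for illustration, not taken from Taleb:

```python
import random

random.seed(0)

def fragile(x):
    # Concave response: deviations from calm always cost something,
    # and big deviations cost disproportionately more.
    return -x * x

def antifragile(x):
    # Convex response: deviations from calm always gain something,
    # and big deviations gain disproportionately more.
    return x * x

# Symmetric, zero-mean disturbances: volatility with no net bias.
shocks = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

def mean_payoff(f, xs):
    return sum(f(x) for x in xs) / len(xs)

# Compare average payoff under volatility with payoff in perfect calm (x = 0).
print(mean_payoff(fragile, shocks))      # negative: volatility hurts on average
print(mean_payoff(antifragile, shocks))  # positive: volatility helps on average
```

By Jensen’s inequality, the concave responder is worse off on average under volatility than in calm, and the convex responder is better off, even though the shocks themselves average out to nothing. That asymmetry is the whole point of the concept.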

Taleb’s work is focused on decision-making in uncertain environments where disturbances are difficult, if not impossible, to predict. Sometimes really big unpredicted events take place: ‘black swan’ events, those with no precedent in the historical data. An antifragile entity benefits from such conditions. It does this not by making predictions (which are fragile to disturbances), but by using heuristics (know-how) to put itself in a position to benefit from the random, the unexpected, and the unknown. Might perceptual systems display antifragile properties, or are they merely robust?

The focus on heuristics over prediction bears a striking resemblance to work in embodied cognition. Thinkers such as Maturana, Varela, Gibson, and many others have emphasized the role of know-how in the process of perception, and simultaneously de-emphasized the role of internal (neural) models generated for predicting and interacting with the world. Such thinkers have recognized the fragility of prediction and the robustness of heuristics, but what of antifragility in the perception-action process?

Rather than having an internal model ‘of the world’, perhaps it would make more sense to consider the heuristics themselves as the models for goal-directed action. When an agent wants to accomplish something, it engages in heuristic X, which it expects to have outcome Y. The model (the heuristic) is then selected based on the prediction (the expected outcome). One might then ask: how are the heuristics generated?
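That selection step can be made concrete in a few lines. The heuristic names and expected-outcome numbers below are invented for the example, not drawn from any cognitive model; the point is only that the prediction attaches to the heuristic, not to a separate world model:

```python
# Hypothetical sketch: heuristics as the agent's models.
# Each heuristic carries an expected outcome (here, expected success
# for the goal "hold the object"); selection just picks the heuristic
# whose expected outcome is best.
heuristics = {
    "reach_and_grasp": 0.9,
    "ask_for_help": 0.6,
    "wait_and_see": 0.1,
}

def select_heuristic(expectations):
    # The prediction (expected outcome Y) drives which heuristic X is engaged.
    return max(expectations, key=expectations.get)

print(select_heuristic(heuristics))  # → reach_and_grasp
```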

Perhaps this is where the notion of antifragility can help. Taleb points out that for a system to be globally antifragile, it must be locally fragile. That is, in order to generate variety for evolutionary selection, a system must also weed out what does not work. There must be a distributed trial-and-error process, where errors come at a low cost globally. So we need a system that can generate heuristics, and ideally ones that don’t cost much when they don’t work out.
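A toy version of that distributed trial-and-error process can be sketched as an evolutionary loop: many cheap candidate heuristics are tried, most fail locally and are discarded, and the survivors seed the next round. The task here (tuning a decision threshold toward an unknown target) is entirely made up for illustration:

```python
import random

random.seed(1)

# Invented toy task: find a good threshold for a yes/no decision.
# The unknown "right" threshold is 0.7; each trial is a cheap local experiment.
TRUE_THRESHOLD = 0.7

def score(threshold, trials=200):
    # Fraction of random cases this heuristic classifies the same way
    # as the true threshold would. Noisy, like real-world feedback.
    hits = 0
    for _ in range(trials):
        x = random.random()
        if (x > threshold) == (x > TRUE_THRESHOLD):
            hits += 1
    return hits / trials

# Distributed trial and error: generate many candidates cheaply,
# weed out the local failures, and mutate the survivors.
population = [random.random() for _ in range(20)]
for generation in range(30):
    population.sort(key=score, reverse=True)
    survivors = population[:5]  # local fragility: most candidates die
    population = survivors + [
        min(1.0, max(0.0, t + random.gauss(0, 0.05)))
        for t in survivors
        for _ in range(3)
    ]

best = max(population, key=score)
print(round(best, 2))  # ends up near the unknown target
```

No single candidate predicts anything about the environment; the population as a whole converges because failures are cheap and plentiful, which is exactly the global-gain-from-local-fragility structure described above.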

Despite much evidence that people use heuristics, not predictive logic, to solve all sorts of interesting problems (see the book Gut Feelings by Gerd Gigerenzer), ‘predictive’ coding has recently become a topic of considerable interest in many neuroscience labs. This interest is related to recent developments in Bayesian network models of perception, which allow for error-driven updating of predictive models.

One thing that seems to be missing from such accounts is the purpose of predicting. Are we really just predicting for prediction’s sake? I would think anyone would agree the answer is no. For prediction to be of any value, it must be embedded in some context of decision making. In the case of perception, the most obvious decision would be the selection of an action with a particular desired outcome. We predict that engaging in heuristic X will give us outcome Y. When it doesn’t, we need to revise our model. How do these revisions happen? One possibility is piecemeal revision, where slight adjustments to a single model result from poor predictions. There is also the possibility of a more evolutionary paradigm, where a diversity of heuristics is generated and selected for or against over time. Likely there is a mixture of both piecemeal and evolutionary development, along with some other stuff that hasn’t been dreamt up yet.

Although I’ve raised more questions than I’ve answered, I’ll ask three more: what role could ‘black swan’ events play in perception? Is it possible the structure of our nervous system could benefit from such rare and profoundly unpredictable events? Or are perceptual systems merely robust with respect to disorder, dealing with, but not thriving on, random and unpredictable events?

Here are two relevant videos. The first is Taleb giving a talk at Google about his ideas:

The second is Andy Clark speaking on predictive coding, which is interesting given his usual focus on embodied cognition, a tradition that downplays the role of prediction (talk starts at 11m52s).


Reductionism, Emergence, and Consciousness

Famed neuroscientist and self-proclaimed ‘romantic-reductionist’ Christof Koch has publicly endorsed a ‘panpsychist’ metaphysic in his latest book (see a short discussion here on my friend Matt’s ‘footnotes2plato’ blog). In other words, Koch believes ‘mind’ to be a fundamental aspect of reality. I have not (yet) read the book, but having read some of his (relatively narrowly focused) neuroscientific studies, I find this insight into his larger philosophical perspective interesting for a couple of reasons.

First, it is encouraging to see a well-known and well-respected neuroscientist take something other than an eliminative materialist approach to understanding what the mind is and what the brain does. To entertain the notion that what we call ‘mind’ might permeate reality in a deeper way than current scientific methods allow us to appreciate is important, I think. If for no other reason, it is important because it is an admission of our ignorance about the nature of reality. Science is meant to probe reality, but its value lies in its self-awareness as an ongoing, necessarily incomplete project. This incompleteness means that we cannot say with any certainty what is fundamental to reality; we can only refine our current picture. And if history is any indication, what we currently consider obvious and self-evident will be turned upside down in a future revolution.

Second, I think it is telling of the reductionistic perspective in general. Reductionism is the assumption that all phenomena are understandable through deconstruction into their components (and the components can be broken into their components, and so on). To maintain a reductionistic standpoint and simultaneously take consciousness seriously, one is almost forced into the position Koch is apparently taking. In other words, if a) consciousness is real, and b) reductionism holds, then consciousness must go ‘all the way down’.

In contrast, the systems notion of ‘emergence’ allows that novel aspects of the universe can come into being without contradicting any preexisting law, while nonetheless having a causal power all their own. When an emergence event happens, the universe shifts not only quantitatively, but qualitatively as well. Of course, as of yet, emergence deals with the behavior of systems, in other words the ‘external’, materially extended aspect of a system. What is so fascinating about consciousness is that it is an ‘internal’ phenomenon (or, possibly, a relation between internal and external). But while current models of emergence can only deal with the external, behavioral aspects of a system, the spirit of the idea includes the possibility of new, and real, aspects of the universe.

My intuition leads me to see both ‘internal’ and ‘external’ as co-emerging and co-evolving aspects of reality. From this perspective, I’m not sure how useful it is to claim consciousness goes ‘all the way down’. If we were to accept Koch’s proposal that it does, and in a way amenable to reductionistic analysis, we would still be left with the work of explaining how ‘little consciousnesses’ come together in an organized fashion to build ‘big consciousnesses’. Presumably, at certain thresholds, consciousness will change in character dramatically, as in a phase transition. Can we understand these events as nothing more than the sum of the ‘little consciousnesses’? If not, we still need to preserve the notion of emergence, and reductionism will still be insufficient to grapple with these issues.

Koch’s proposed measure ‘phi’ aims, I think, to address some of these issues. I will have to read the book to see where we agree, and where we don’t. First on the list, though, is his close colleague Giulio Tononi’s recently published book ‘Phi’, which just arrived in the mail. I imagine it will cover similar ground, and I hope to gain insight into their perspective.
