Our world has become increasingly complex and stratified, and our models of it are not always compatible with one another. We require the means to translate across these perspectival dislocations and grasp hold of points of orientation which allow us to steer a vast array of systems. But how are we to do this when the various models we have of the world seem so alienated from one another?
[O]nly internal variety within the system itself can force down the variety due to the system’s environment: the system’s codes must be as highly differentiated as the (potentially system-relevant) variety obtaining in the environment, if the system is to perceive this variety at all, let alone make full sense of it and be able to steer it. One cannot perceive something one cannot ‘place’. Information that is overly complex relative to the degree of differentiation of the individual’s codes – influenced by his educational level, I.Q., level of emotional development, previous experiences, etc. – goes in one ear and out the other, without registering.
— R. Felix Geyer
What I am concerned with is the transfer between models, in particular between our best model of human cognition and our other best models. It is therefore part of a pragmatic enterprise, and not a foundational one. This is why I focused on the analogy of the “interface”, the notion that we already represent the world according to a particular “algorithm” or “design”. In a sense what I intend is a kind of allegorical mode of representation, one not built on a metaphysics but on the translation between models – a sort of User Interface design – relating principles of human perception to abstract models of external phenomena.
Scott Aaronson — Why Philosophers Should Care About Computational Complexity
Edwin Hutchins — Cognition in the Wild
Fredric Jameson — Cognitive Mapping
James Ladyman, James Lambert, Karoline Wiesner — What is a Complex System?
William C. Wimsatt — Reductionism and its heuristics: Making methodological reductionism honest