Assimilating Hyperphysical Affordances: Intuition and Embodiment at the Frontiers of Phenomena

The depictive capabilities of spatial computing offer novel opportunities and challenges to the field of interaction design. I aim to articulate a framing of spatial computing that highlights unorthodox aspects of it, with the hope that such a framing might inform incoming designers, augmenting their preconceptions of how to approach this medium.

expectations from prior physics

The physics system that we find ourselves in at birth determines the nature of our bodies, the phenomena we encounter, and the affordances available in the environment. We become familiar with and develop intuitions about the ‘interactional grammars’ that we repeatedly come into contact with. Or, as Sutherland (1965) puts it, “We live in a physical world whose properties we have come to know well through long familiarity. We sense an involvement with this physical world which gives us the ability to predict its properties well.” This is the default state that designers have operated within since antiquity.

With the advent of computer-rendered dynamic media, it became possible to represent phenomena that diverge from those driven by the physical laws classically confining designed artifacts. This larger space of possible physical dynamics, of which the physics of our universe is but a subset, I refer to as hyperphysics. Since these phenomena are observed and interacted with by users who developed in ordinary physics, users arrive attuned to the nuances of familiar phenomena (they do “not enter devoid of expectations that come from their previous experience” (Blom, 2007)) and may be immediately aware of similarities, recognizing that “content that is familiar to the user from the real world will be initially and automatically considered the same as a real object” (Blom, 2010). Or, as Golonka & Wilson (2018) state: “When we encounter a novel object or event, it will likely project at least some familiar information variables (e.g., whether it is moveable, alive, etc), giving us a basis for functional action in a novel context”. The challenge is how to communicate “hyperphysical” affordances that do not have exact analogues in ordinary physics.

For example, many objects in rendered environments (such as those depicted in “virtual reality” fields of view or superimposed on the outside world in “augmented reality” fields of view) can be grasped and moved around regardless of their apparent mass, extent, smoothness, etc., even non-locally. Yet Gibson’s (1979) conception of what is graspable (granted, conceived prior to the availability of spatial computing) requires “an object [to] have opposite surfaces separated by a distance less than the span of the hand”. This requirement can now be seen to apply only within ordinary physics, but should designers of spatial user interfaces (SUIs) abandon it completely? Surely it is useful to leverage the already-developed familiarity with ordinary physics’ interactional grammars, but at what expense? How tightly should SUIs be coupled to ordinary physics? What is conserved in intuitiveness is lost in the full exploration of the hyperphysics capable of being simulated, as “there is no reason why the objects displayed by a computer have to follow the ordinary rules of physical reality with which we are familiar” (Sutherland, 1965).
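
The contrast can be made concrete in a few lines of code. Below is a minimal sketch (all names are hypothetical, not any particular engine’s API) placing the two grasp criteria side by side: Gibson’s physical requirement, and a hyperphysical grab that ignores extent, mass, and distance entirely.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    position: tuple   # (x, y, z) in meters
    extent: float     # widest span of the object, in meters
    mass: float       # apparent mass, in kilograms

def ordinary_graspable(obj: VirtualObject, hand_span: float) -> bool:
    """Gibson's (1979) criterion: opposite surfaces separated by a
    distance less than the span of the hand."""
    return obj.extent < hand_span

def hyperphysical_grasp(obj: VirtualObject, pinch_detected: bool) -> bool:
    """In a rendered environment, a pinch gesture can grasp any object,
    regardless of extent, mass, or distance from the hand."""
    return pinch_detected

boulder = VirtualObject(position=(0.0, 0.0, 12.0), extent=3.0, mass=4000.0)
print(ordinary_graspable(boulder, hand_span=0.2))        # False
print(hyperphysical_grasp(boulder, pinch_detected=True)) # True
```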

coherence and coordination of phenomena

Of course, the reason ordinary physics is intuitive is that we develop and spend our whole lives fully immersed in it. When the environment offers a consistent set of phenomena and consistent responses to input, the brain becomes accustomed to the perceived patterns and builds a set of intuitions about the phenomena. Piaget (1952) notes that “adaptation does not exist if the new reality has imposed motor or mental attitudes contrary to those which were adopted on contact with other earlier given data: adaptation only exists if there is coherence, hence assimilation.” This consistency comes from the fact that ordinary physics does not change over time or location, and the perception of the unity of events arises from multiple senses receiving coordinated impulses. In Gibsonian (1979) parlance, “when a number of stimuli are completely covariant, when they always go together, they constitute a single ‘stimulus’”. Piaget (1952), in noting that “the manual schemata only assimilate the visual realm to the extent that the hand conserves and reproduces what the eyes see of it”, communicates the unification of tactile and visual sensory input, observing that “the act of looking at the hand seems to augment the hand's activity or on the contrary to limit its displacements to the interior of the visual field.”

Usefully, since our bodies are themselves physical, we can directly impact the environment and observe the effects in realtime, becoming recursively engaged with the phenomena in question. Chemero (2009) describes this recursive engagement thus:

Notice too that to perceive the book by dynamic touch, you have to heft it; that is, you have to intentionally move it around, actively exploring the way it exerts forces on the muscles of your hands, wrists, and arms. As you move the book, the forces it exerts on your body change, which changes the way you experience the book and the affordances for continued active exploration of the book.

This is assisted by the fact that our senses are not located exclusively in the head. “…in perception by dynamic touch, the information for perception is centered on the location of the action that is to be undertaken” (Chemero, 2009). Thus we can correlate the visual feedback of where, for example, the hand is amidst the environment, the proprioceptive feedback of the hand’s orientation relative to the body, and the tactile and inertial feedback provided by the environment upon the hand. 

Because these parallel input sources are all confined to the laws of ordinary physics, they agree, providing a consistent “image” of the environment. The fewer the senses available, the less well-defined the final percept, and partial disagreement between senses can override “anomalous” sense-inputs. This can lead to perceptual illusions: at a stoplight, when a large bus in the adjacent lane begins to move forward and occupies an adequately large section of the visual field, the sensation of moving backwards is induced even though there is no vestibular agreement with the optical flow. Thus, to provide as rich and internally coherent an experience as possible, spatial computing systems need to provide many parallel sources of sensory input that agree, forming a unified sensory field. Sutherland (1965) agrees that “if the task of the display is to serve as a looking-glass into the mathematical wonderland constructed in computer memory, it should serve as many senses as possible.”

Two difficulties arise. The physical behavior of rendered environments depicted in spatial computing need not align with ordinary physics (full alignment in fact being a difficult if not impossible feat), and the rendered environments need not be internally consistent either (especially given that 1. simulated physics can change in realtime at the whim of the designer {something that ordinary physics is by definition incapable of} and 2. independent rendered-environment designers can make available environments that have vastly different physics and thus different interactional “grammars”). Thus the lived experience of the user, navigating between ordinary physics and the variant and likely inconsistent physics of rendered environments, involves shifting between mutually inconsistent interactional grammars. Will this have a negative effect on the brain? Will expertise with unorthodox physics developed in a simulated environment have a zero-sum relationship with the embedded expertise of navigating ordinary physics? Is the brain plastic enough to contain and continue developing facility in an ever-increasing number of interactional grammars?
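
The first difficulty, physics mutable in realtime, is easy to illustrate. Below is a minimal sketch (hypothetical structure, no particular engine assumed) in which the “laws” of a rendered environment are merely parameters the designer can rewrite mid-session, something ordinary physics by definition cannot do.

```python
# The "laws" of a rendered environment as plain, mutable parameters.
physics = {"gravity": -9.81, "friction": 0.4, "collisions_enabled": True}

def step(state, physics, dt=1/90):
    """Advance a falling object one frame under the *current* ruleset."""
    height, velocity = state
    velocity += physics["gravity"] * dt
    height += velocity * dt
    return (height, velocity)

state = (2.0, 0.0)            # height (m), velocity (m/s)
state = step(state, physics)  # an ordinary-seeming fall
physics["gravity"] = +3.0     # the designer inverts gravity mid-session
state = step(state, physics)  # the same object now accelerates upward
```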

engagement with hyperphysics

The opportunities afforded by bodily engagement with hyperphysical simulated systems, however, are numerous. The usefulness of the environment is a function of its physical capacities, and thus the expanded set of hyperphysics within simulated systems supports, in principle, a proportionally-expanded usefulness: 

Concepts which never before had any visual representation can be shown, for example the "constraints" in Sketchpad. By working with such displays of mathematical phenomena we can learn to know them as well as we know our own natural world. (Sutherland, 1965)

We lack corresponding familiarity with the forces on charged particles, forces in non-uniform fields, the effects of nonprojective geometric transformations, and high-inertia, low friction motion. A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world. (Sutherland, 1965)

It is fundamentally an accident of birth to have been born into ordinary physics, but the mind is in principle capable of becoming fluent in many other physics:

Our perceptions are but what they are, amidst all those which could possibly be conceived. Euclidean space which is linked to our organs is only one of the kinds of space which are adapted to physical experience. In contrast, the deductive and organizing activity of the mind is unlimited and leads, in the realm of space, precisely to generalizations which surpass intuition. (Piaget, 1952) 

A key constraint then becomes the ability of designers to envision and then manifest novel physics, as

computers are so versatile in crafting interactive environments that we are more limited by our theoretical notions of learning and our imaginations. We can go far beyond the constraints of conventional materials… (diSessa, 1988)

affordances

Hyperphysics supports novel behaviors that have no necessary analogue in ordinary physics. Thus the entire structural, visual, and dynamic “language” of ordinary affordances is inadequate to cover all the transformations and behaviors that hyperphysics makes possible. Even fundamental material behaviors like collision are not in principle guaranteed. Dourish (2004) describes how collision can be an essential property for certain useful arrangements:

Tangible-computing designers have sought to create artifacts whose form leads users naturally to the functionality that they embody while steering them away from inconsistent uses by exploiting physical constraints. As a simple example, two objects cannot be in the same place at the same time, so a "mutual exclusion" constraint can be embodied directly in the mapping of data objects onto physical ones; or objects can be designed so that they fit together only in certain ways, making it impossible for users to connect them in ways that might make sense physically, but not computationally.

However, the greater space of possible physical behaviors offers opportunities to create new affordances with new interactional grammars that can take advantage of the specificity of computing power and the precise motion tracking of the body.

embodiment; homuncular flexibility

The body’s relationship to tools is often quite fluid: prolonged use allows tools to be mentally fused with the body, and engagement with the world is perceived at the tool’s interface with the world rather than the body’s interface with the tool. Blind people can build a relationship with their cane such that “the cane is … incorporated into [their] body schema and is experienced as a transparent extension of [their] motor system” (Heersmink, 2014). The opportunities for spatial computing are even more potent here, where the medium’s capacity for tracking the body’s motion allows a far tighter mapping between the rendered environment’s behavior and the user’s motion than ordinary dynamic media confined to two-dimensional screens and rudimentary inputs.

The ability to depict the body in novel and hyperphysical ways, while still mapping the depicted body’s movement to the base movements of the user, enables startlingly compelling computer interfaces such as increasing the number of limbs,

Participants could hit more targets using an avatar with three upper limbs, which allowed greater reach with less physical movement. This was true even though motions mapped from the participants’ tracked movements were rendered in a different modality (rotation of the wrist moved the avatar’s third limb in arcs corresponding to pitch and yaw). Use of more intuitive mappings might enable even faster adaptation and greater success. (Won et al., 2015)

or changing the physical form of the hands to better interface with a task, as explored by Leithinger et al. (2014): “…we can also morph into other tools that are optimal for the task, while controlled by the user. Examples include grippers, bowls, ramps, and claws — tools with specific properties that facilitate or constrain the interactions”. The question then becomes how many familiar aspects to include so as to conserve intuition, framed by Won et al. (2015) as “…what affordances are required for people to use a novel body to effectively interact with the environment?”, especially when “such realism may reinforce the user’s desire to move as he or she would in the physical world.” Though, critically, the brain’s plasticity allows novel environments to eventually become quite literally second nature, as in the classic Heideggerian example of the hammer, articulated by Heersmink (2014): “When I first start using a hammer, my skills are underdeveloped and the hammer is not yet transparent. But gradually my hammer-using skills develop and the artifact becomes transparent which will then alter my stance towards the world.”
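
The three-limb mapping Won et al. describe can be sketched concretely. The following is a hypothetical reconstruction, not their implementation: tracked wrist rotation (pitch and yaw) is repurposed to sweep the tip of a rendered third limb through arcs, so a small physical rotation yields a large extension of reach.

```python
import math

def third_limb_tip(wrist_pitch, wrist_yaw, limb_length=0.6,
                   shoulder=(0.0, 1.4, 0.0)):
    """Map tracked wrist rotation (radians) onto the arc swept by a third,
    simulated limb: pitch tilts the limb up and down, yaw swings it left
    and right, both about an assumed third-shoulder anchor point."""
    sx, sy, sz = shoulder
    x = sx + limb_length * math.cos(wrist_pitch) * math.sin(wrist_yaw)
    y = sy + limb_length * math.sin(wrist_pitch)
    z = sz + limb_length * math.cos(wrist_pitch) * math.cos(wrist_yaw)
    return (x, y, z)

# A small wrist rotation sweeps the third limb's tip through a large arc,
# extending reach with little physical movement.
print(third_limb_tip(wrist_pitch=0.3, wrist_yaw=-0.5))
```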

tools for thought

Ideally, increased adoption of and bodily engagement with hyperphysics will provide us with new tools to understand and represent not only the world around us at scales heretofore inaccessible (as Sutherland (1965) envisions for particles in an electric field: “With such a display, a computer model of particles in an electric field could combine manual control of the position of a moving charge, replete with the sensation of forces on the charge, with visual presentation of the charge's position”), but also purer forms of knowledge such as mathematical relationships, and will lift our minds to new heights as previous notations for thought have already done. Gooding (2001) articulates it well:

Computer-based simulation methods may turn out to be a similar representational turning point for the sciences. An important point about these developments is that they are not merely ways of describing. Unlike sense-extending devices such as microscopes, telescopes or cosmic ray detectors, each enabled a new way of thinking about a particular domain.

The sciences frequently run up against the limitations of a way of representing aspects of the world — from material objects such as fundamental particles to abstract entities such as numbers or space and time. One of the most profound changes in our ability to describe aspects of experience has involved developing new conceptions of what it is possible to represent.

As the scale and complexity of problems experienced by humanity grows, it is critical to augment our problem-solving ability, a large part of which involves the creation of new forms of representation, ideally giving us a better grasp on the most fundamental questions. Gooding (2001), again, articulates it well:

But the environment is increasingly populated by artefacts which function as records and as guides for reasoning procedures that are too complex to conduct solely with internal or mental representations. In this way we are continually enhancing the capacity of our environment for creative thought, by adding new cognitive technologies.

These tools are still in their infancy, and only through an open exploration of the frontiers of their possibility-space will we find the most powerful means to augment our intellect.

references

Blom, K. J. (2007). On Affordances and Agency as Explanatory Factors of Presence. Extended Abstract Proceedings of the 2007 Peach Summer School. Peach.

Blom, K. J. (2010). Virtual Affordances: Pliable User Expectations. PIVE 2010, 19.

Chemero, A. (2009). Radical Embodied Cognitive Science. MIT Press.

diSessa, A. A. (1988). Knowledge in Pieces.

Dourish, P. (2004). Where the Action Is: The Foundations of Embodied Interaction. MIT press.

Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Psychology Press.

Golonka, S., & Wilson, A. D. (2018). Ecological Representations. bioRxiv, 058925.

Gooding, D. C. (2001). Experiment as an Instrument of Innovation: Experience and Embodied Thought. In Cognitive Technology: Instruments of Mind (pp. 130-140). Springer, Berlin, Heidelberg.

Heersmink, J. R. (2014). The Varieties of Situated Cognitive Systems: Embodied Agents, Cognitive Artifacts, and Scientific Practice.

Leithinger, D., Follmer, S., Olwal, A., & Ishii, H. (2014, October). Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration. In Proceedings of the 27th Annual ACM Symposium on User interface Software and Technology (pp. 461-470). ACM.

Piaget, J., & Cook, M. (1952). The Origins of Intelligence in Children (Vol. 8, No. 5, p. 18). New York: International Universities Press.

Sutherland, I. E. (1965). The Ultimate Display. Multimedia: From Wagner to Virtual Reality, 506-508.

Won, A. S., Bailenson, J., Lee, J., & Lanier, J. (2015). Homuncular Flexibility in Virtual Reality. Journal of Computer-Mediated Communication, 20(3), 241-259.

On Notation as Physics, and the Human Capacity to Learn Environments

I aim to frame simulated environments (such as those found in virtual reality, etc.) as comparable to [or on the same spectrum as] notations (like mathematical notation) in that they are “environments” with rulesets that can be internalized. This is a framing I haven’t explored completely, and I aim to use this essay (in its etymological basis) as an attempt to assay this framing’s consonance with other areas of interest within my overall thesis, namely the behavior of paraphysical (transphysical? superphysical?) affordances and the opportunities for embodiment with simulated objects.

I’m interested in the thought of notation (mathematical, musical, lingual, interfacial, etc) being a sort of environment / world / ruleset, that has a certain set of behaviors that can be encountered, internalized, and come to be known intuitively. 

I see the manipulation of elements on the page (as with algebraic notation), screen (as with the interactional “grammar” of a certain software), or with material (as with the pattern of operation/manipulation of beads on a soroban/abacus) as being the manipulation of what the brain treats as an internally-coherent environment whose rules and parameter space can be explored and learned.

To take algebraic notation as an example, its spatial, modular structure of coefficients, variables, and operators carries specific rules the user/mathematician must follow when rearranging elements to maintain equality and mathematical truth. This spatial operativity engages the user/mathematician in a way decidedly unavailable to prior notations, namely the propositional, paragraphic descriptions of geometric relations that nevertheless express the same algebraic relationships as the more modern notation. The paragraphic notation, while accurately articulating the math, is unavailable to spatial rearrangement; as a tool for thought it does not afford the manual manipulation of elements through which algebraic notation lets the user/mathematician explore the system. It is this manual operativity that I see as a quality of explorable environments.
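
That ruleset is concrete enough to be executed by a machine. As a small illustration (a sketch using the sympy library, offered only to make the “environment” tangible), algebraic notation can be asked to rearrange an equation while preserving equality:

```python
from sympy import symbols, Eq, solve

x, a, b = symbols("x a b")

# An equation as a spatial, modular arrangement of coefficients,
# variables, and operators...
equation = Eq(a * x + b, 0)

# ...whose rearrangement rules (do the same thing to both sides,
# preserving equality) can be applied mechanically to isolate x.
print(solve(equation, x))  # [-b/a]
```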

These notations are, in a sense, internally-coherent environments created by humans, able to be partially inhabited (through the affordances of their supporting medium, classically though perhaps too often paper) by the body and thus the mind. The most powerful thing about some notations is that their ruleset is internalizable: their mode of operation can become purely mental, no longer requiring the initially-supporting medium. Their internalization scaffolds a mental model / simulation of that “environment’s” ruleset/laws in the same way that we develop a mental model (or models) of our classical environment’s laws, our brains able to simulate hypotheses and, without even desiring so, pursue causal chains so ingrained in our set of expectations that it doesn’t even feel like thinking or analysis, but something far more direct.

I now wonder if these schemata, these mentally-internalized models of experienced environments (be they classically spatial or more notational), form a sort of Gibsonian ecology in our own minds that, via repeated engagement, arranges itself into alignment with our external circumstances, whatever they may be (this is where I see simulated, virtual environments’ superphysics entering into relevance). Such is the development of expectation/preparedness/familiarity? I’ve wondered how Gibson treats prediction, as that does seem to require a sort of internal model/representation independent of current sense data (though at basis directly dependent on prior sense impressions).


Our bodies have aspects that afford certain approaches to the world. By default these are determined by our physiology, and then circumstance edits that phenotype, augmenting our bodies with environmental objects that can be embodied. We are provided at birth with a body, itself an environmental object bound to classical physics, that we gain facility in maneuvering, that we feel identified and embodied with.

Objects in the world can be found or fashioned and be incorporated into the body and that now-changed body encounters the environment in different ways. Critically, as the environment is encountered repeatedly, the (perhaps newfound) capacities of the body collide with and interface with the environment, simultaneously giving the user/owner opportunities to internalize the dynamics of their body and the dynamics of the environment (particularly useful in the ways the environment is newly-accessible or perceivable specifically from the body’s new capacities through embodied object augmentation).

This power of the brain, to plastically incorporate objects into itself when given enough time to wield them (as it learns to wield the genetically-provided object of the body), becomes especially powerful when the objects to wield and embody have a range of behaviors beyond what classical physics allows, as is the case with the computer-depicted-and-simulated objects interactable in VR, etc. This connects back to my framing of notations as alternate “environments”, with the key difference that the rules of (for example, paper-based) notation are maintained/forwarded by human mastery of that ruleset, and failures of “accurate depiction”, whether because the rules are forgotten or a single operation is made incorrectly, break the environment, whereas the computer ostensibly is rigidly locked into self-accuracy, not to mention capable of orders of magnitude greater depth of ruleset simulation.

This greater range of possible behaviors boggles the mind, which makes the job of the designer difficult. It will likely be a cultural, perhaps generational, project to explore the parameter space of possible “universes” of behavior rulesets to find the most useful (and embodiable) simulated objects/phenomena.

A role of many designers has involved tool design within the classical physics of our lived environment. As computers became ascendant, their simulating ability allowed the design of phenomena (UI) that could behave in ways other than classical physics, specifically allowing novel tools for thought and thus novel ways of situating/scaffolding the mind. However, the depictive media (e.g. screens) available to represent computed phenomena were too often exclusively two-dimensional, with only two-dimensional input, failing to leverage the nuanced spatial facility of the body. Now there exist computing systems capable of tracking the motion of the head (and thus orientation within possible optical arrays) and any prehensile limb, capable of simulating three-dimensional phenomena and providing a coherent and interpretable optical array as if the user were themselves present amidst the simulated phenomena.

Critically, a role of the designer no longer purely involves the design of phenomena within physics, but has come to also encompass the design of the physics themselves, exploring how different system-parameters can sustain different phenomena, different notations, and thus new modes of behavior, productivity, and thought.

Internalizing Simulated Systems: Manipulation within Virtual Environments

Spatial computing and virtual environments like VR are more powerful media than screen-based ones in that they leverage our bodily fluency with spatial and physical interactions. Further, as computers can simulate and depict arbitrary [e.g. any/unspecified parameter set] physical systems, the environments depicted can behave according to laws other than the laws of physics that constrain material artifacts.

By permitting such “exotic” systems, virtual environments widen the design space, allowing the development of more nuanced tools and representations.

Manipulability

The main way that humans interface with their environment is through their hands, protrusions capable of orienting and manipulating in three dimensions. Humans further gain an understanding of their environment through senses distributed around their bodies and heavily localized in the head. Modern spatial computing systems track the position and orientation of the head and hands, rendering a scene from the viewpoint of the user’s eyes in perfect synchrony with their motion, giving the illusion of presence within it. Tracking the position and actions (like grabbing) of the hands allows the user to manipulate objects within the rendered scene.
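
A skeleton of this loop (illustrative names and structures only, not any particular runtime’s API) makes the division of labor plain: the tracked head pose sets the rendering viewpoint each frame, while the tracked hand pose drives manipulation.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    position: tuple = (0.0, 0.0, 0.0)           # meters, world space
    orientation: tuple = (1.0, 0.0, 0.0, 0.0)   # quaternion (w, x, y, z)

@dataclass
class SceneObject:
    name: str
    pose: Pose = field(default_factory=Pose)
    grabbed: bool = False

def tick(head: Pose, hand: Pose, grab_active: bool, scene: list) -> Pose:
    """One frame: grabbed objects follow the tracked hand, and the scene
    is rendered from the tracked head pose, keeping the optical array in
    synchrony with the user's motion."""
    for obj in scene:
        if obj.grabbed and grab_active:
            obj.pose = hand   # direct manipulation: object follows the hand
    return head               # the rendering viewpoint is the head pose

scene = [SceneObject("cube", grabbed=True)]
viewpoint = tick(Pose((0.0, 1.7, 0.0)), Pose((0.3, 1.2, -0.4)), True, scene)
print(scene[0].pose.position)  # (0.3, 1.2, -0.4): the cube is in the hand
```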

From infancy we develop spatial intuitions and a fluent sense of body through continuous interaction with the material world. However, these nuanced abilities have been underutilized by screen-based dynamic media, trapping interactions on two-dimensional touchscreens or constrained, indirect input surfaces such as mice and keyboards.

Spatial computing combines the flexibility of dynamic depictions with interactions approaching the spatiality and manipulability of material environments and objects.

Manipulations are powerful not only because they transform the interacted entity and thus the perception of the system, but, critically, because they allow the body and mind to internalize the dynamics of the system interacted with (Hutchins, 1995, p140).

Internalization

The soroban is a prime example of manipulability’s importance. However, to assess it accurately, the traditional electronic calculator must be invoked.

When using an electronic calculator, the only actions the user participates in are the setup of the mathematical statement: inputting digits and algebraic operations. Once the equals key is pressed, all of the mathematical operations involved in solving the problem occur outside of the user’s perception, invisibly within the calculator. When the user receives the answer without themselves going through the steps, their perception of the mathematical relationships suffers, and their arithmetical abilities atrophy.

The soroban, on the other hand, involves intimate user manipulation to enact every step. It represents digits via the placement of beads on decimal-place rods, and the user moves the beads up and down in correspondence with the shifting placement of values during mathematical operations. Since the soroban requires explicit user manipulation to advance the mathematical operation, the user is a direct participant in every step. Such intimate involvement in the operations allows the body and mind to internalize the soroban’s structure. The user develops not only a muscle memory for the location and dynamics of the beads, but over time builds an internalized, mental representation of the system (Hatano, 1988, p64). This is evidenced in relative novices but occurs to an even greater extent in seasoned users. With enough practice, soroban users do not even need a physical soroban present in order to perform calculations. They have internalized the soroban’s structure so completely that they can calculate massive problems using a purely imagined construct, perhaps rapidly waving their fingers in the air in correspondence with the physical manipulations they have so fully internalized. “Sensorimotor operation on physical representation of abacus beads comes to be interiorized as mental operation on a mental representation of an abacus. By this, the speed of the operation is no more limited by the speed of muscle movement” (Hatano, 1988, p64).
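
The encoding the user internalizes is simple enough to state in code. A minimal sketch, using the standard soroban semantics (one heaven bead worth five, four earth beads worth one each, one rod per decimal place):

```python
def rod_value(heaven_bead_set: bool, earth_beads_set: int) -> int:
    """One soroban rod encodes a decimal digit: the single heaven bead
    counts 5 when pushed toward the beam, and each of the four earth
    beads counts 1 when pushed toward the beam."""
    assert 0 <= earth_beads_set <= 4
    return (5 if heaven_bead_set else 0) + earth_beads_set

def read(rods) -> int:
    """Read the whole instrument: each rod is one decimal place,
    most significant rod first."""
    total = 0
    for heaven, earth in rods:
        total = total * 10 + rod_value(heaven, earth)
    return total

# 7 on the hundreds rod, 0 on the tens rod, 3 on the ones rod -> 703
print(read([(True, 2), (False, 0), (False, 3)]))
```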

Such an example demonstrates the power of bodily manipulation. Given enough time interacting with a system, the body and mind can internalize its dynamics and structure, building, at least in part, a robust mental representation. Since virtual environments supporting hand-presence empower users to manipulate their surroundings, users are that much more able to internalize the dynamics of the interacted systems, developing stronger mental models, growing more fluent at operating within the system, and, perhaps, developing mental representations usable outside of the virtual environment (Hutchins, 1995, p171).

Internalization need not be an exclusively intellectual phenomenon. Somatic internalization occurs when one develops the ability to balance a stick on a finger, as the body’s perception of force, pressure, and proprioception is correlated with visual feedback of stick angle (Heersmink, 2014, p58). The behavior of the overall system is initially alien but over time is explored and eventually becomes second nature. Such is also the case for learning to drive a vehicle, painting, tying shoes, etc. Any repeated collision with a manipulable system with a bounded possibility-space [...as an unbounded space would produce infinite novelty and thus make long-term correlations difficult or impossible] will eventually produce some level of internalization.
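
What the body learns in the stick-balancing case is, in control-theoretic terms, a feedback loop. A minimal sketch follows (an idealized inverted stick with the finger’s acceleration as the control input; the gains and dynamics are illustrative assumptions, not a model of human motor control):

```python
import math

def simulate(kp, kd, theta0=0.2, dt=0.01, steps=300, g=9.81, length=1.0):
    """An inverted stick on a moving finger: the finger's acceleration is
    chosen in proportion to the observed lean angle (kp) and its rate of
    change (kd), the same visual-proprioceptive loop a human closes."""
    theta, omega = theta0, 0.0          # lean angle (rad), angular velocity
    for _ in range(steps):
        finger_accel = kp * theta + kd * omega   # corrective push
        alpha = (g * math.sin(theta) - finger_accel * math.cos(theta)) / length
        omega += alpha * dt
        theta += omega * dt
    return theta

print(abs(simulate(kp=40.0, kd=10.0)) < 0.05)  # True: stabilized upright
```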

Certain objects can be internalized in such a way that they come to be treated by the body as an extension of itself. The perceived locus of interface when using a pencil is at its tip and the paper surface, even though the body terminates at the end of its fingers and the edge of the pencil (Heersmink, 2014, p59). It is as if the pencil has been incorporated into the body schema of the user (Heersmink, 2014, p59). Similarly, 

For the blind man, the cane is not an external object with which he interacts, but he interacts with the environment through the cane. The focus is on the cane-environment interface, rather than on the agent-cane interface. The cane is furthermore incorporated into his body schema and is experienced as a transparent extension of his motor system. (Heersmink, 2014, p59)

Dynamic media that in some way hinder manipulation consequently hinder their own internalization by the body and mind. More restrictive control surfaces such as mice and keyboards constrain possible manipulations to a small subset of what the body is capable of, and purely visual feedback (on a screen, indirect and away from the control surface) limits the depth of internalization.

Depiction

The second critical feature of virtual, spatial systems is that they can depict objects, scenes, and transformations that are materially impossible (Biocca, 2001). While screens have classically been able to depict arbitrary visual arrangements, including “impossible” or exotic arrangements, virtual environments offer the added benefit of robust spatial manipulability. Humans have traditionally been capable of designing spatially-manipulable systems and tools only within the constraints of material physics. With VR that veil has been lifted, opening the interaction-design space to novel tools and interactions previously impossible not only to manifest but perhaps even to conceive.

Combining our tendency to internalize systems we repeatedly interact with and the capacity to represent previously unrepresentable systems inaugurates a new relationship with theory. Previously, if we developed a theory or model of phenomena too large or small to be within the bounds of our physical interaction, we could only interface with abstracted versions of it, perhaps only through written or drawn notation. Now we have the capability of simulating such systems in ways that make them manipulable, allowing us to develop spatial intuitions from repeated interactions, possibly internalizing aspects that would otherwise have remained invisible in less-manifested representations or notations.

Ryan Brucks’ parameter-space value-finder is a powerful example of the sorts of systems that dynamic media can support (Brucks, 2017). Seeing it in motion communicates its dynamics better than a text description, so the link to the original Twitter post is included in the references. Brucks arranged a two-dimensional grid of eyeballs, freely rotatable in their spots, all attempting to aim at the location of his cursor. Critically, each eyeball has a different value of two parameters (speed of alignment to the cursor and amount of spring damping) set up as the axes of the grid. As Brucks moves the cursor over the grid, each eye reacts slightly differently, its dynamics and behavior made visible and unique in comparison with its neighbors. Admittedly a surreal (and relatively simple) system, it serves to demonstrate what options exist for surveying a parameter space by combining spatial manipulability and arbitrary physics. One imaginable application is using a similar setup to survey possible behaviors of a paintbrush/manipulator/tool in VR, directly plucking out the toolhead with the intended parameters as a sort of reactive, surveyable toolbar.
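
The underlying pattern is easy to sketch. Below is a hypothetical reconstruction (not Brucks’ implementation): each cell of a grid runs the same spring-damper follower chasing a target, with stiffness varying along one axis and damping along the other, so inspecting each cell’s state after a fixed time surveys the parameter space at a glance.

```python
def settle(stiffness, damping, target=1.0, dt=0.01, steps=100):
    """One 'eye': a spring-damper follower chasing a target angle,
    integrated with semi-implicit Euler; returns its angle after 1 s."""
    angle, velocity = 0.0, 0.0
    for _ in range(steps):
        accel = stiffness * (target - angle) - damping * velocity
        velocity += accel * dt
        angle += velocity * dt
    return angle

# One grid axis varies damping, the other stiffness; each printed row is
# a row of the grid, each number one cell's response to the same target.
for damping in (1.0, 4.0, 16.0):
    row = [settle(stiffness, damping) for stiffness in (10.0, 40.0, 160.0)]
    print([f"{a:.2f}" for a in row])
```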

Representation

There are essentially infinite spatial arrangements of objects, only a subset of which are possible in physical space. This has historically placed severe constraints on what types of tools could be created, designed, or even conceived of. Humanity needs powerful, manipulable representations and cognitive tools.

“The sciences frequently run up against the limitations of a way of representing aspects of the world” (Gooding, 2001, p131). Early algebra was written rhetorically, in long-form paragraphs, and because that format was so unmanipulable, algebra advanced slowly. It wasn’t until modern symbolic notation developed, culminating in Descartes, that algebra’s representation allowed a modular manipulability, and mathematical advancement accelerated dramatically.

There are many complex systems that have previously been unrepresentable, or our confusion about them has stemmed from improperly constrained representations.

Conclusion

This new age of spatial arrangements allows for novel representations and explorable systems, giving us a better understanding of the most important systems around us. Spatial computing and VR are still infant media, and such an unconstrained design space is daunting, but they are likely the best tools yet in our attempt to understand the universe.

References

Biocca, F. (2001). The space of cognitive technology: The design medium and cognitive properties of virtual space. In Cognitive Technology: Instruments of Mind (pp. 55-56). Springer, Berlin, Heidelberg.

Brucks, R. [@ShaderBits] (2017). "Fun way to find ideal values in 2d parameter space. Damping decreases left to right, speed decreases front to back.” https://twitter.com/shaderbits/status/939302802098188292

Gooding, D. C. (2001). Experiment as an instrument of innovation: Experience and embodied thought. In Cognitive Technology: Instruments of Mind (pp. 130-140). Springer, Berlin, Heidelberg.

Hatano, G. (1988). Social and motivational bases for mathematical understanding. New directions for child and adolescent development, 1988(41), 55-70.

Heersmink, J. R. (2014). The varieties of situated cognitive systems: embodied agents, cognitive artifacts, and scientific practice.

Hutchins, E. (1995). Cognition in the Wild. MIT press.

‘Life’ and Distinctions Among Matter

Schrödinger asks, in the chapter “Living Matter Evades the Decay to Equilibrium” of What is Life? The Physical Aspect of the Living Cell, “What is the characteristic feature of life? When is a piece of matter said to be alive?” Few would claim that a single atom is alive, or even a single molecule. It is necessary, then, when observing collections of molecules, to discern what structure and activity is present to warrant the claim of life.

The systems that we call alive are ordered in a way that is distinct from systems that we do not call alive. A periodic crystal is highly ordered, a sometimes perfect tessellation of its structural pattern, but it does not act in a way that we would associate with life. A rock does not act in such a fashion either.

We look at lions and we see motion, we see intent to devour. They can be an active threat to us. We look at cows and come to understand that they eat and can be eaten, providing sustenance. Though their motion is achingly slow, we observe similar traits in plants—predation, the possibility of sustenance, etc. Even the ‘intent’ to grow towards the light. No such action do we observe in things we would not call alive, such as a rock.

It makes sense why this distinction (life versus non-life) would be useful to us. A rock on the ground need not pose a threat in the way a prowling lion certainly would, and it cannot provide sustenance in the way a juicy flank or orange can. We observe a whole class of entities that seem to share many aspects of action, and we begin to delineate them based on structure and operation, giving us categories of animal, plant, fungus, etc. As our awareness and knowledge of our environment expands, we uncover more entities, often with novel aspects, yet we still fit them within our categories with relative ease.

So it is that we come to a set of ‘requirements’ for what life is: self-replication, growth, adaptation, etc. Schrödinger poses the question: “How does the living organism avoid decay? The obvious answer is: By eating, drinking, breathing and (in the case of plants) assimilating. The technical term is metabolism.” If we were to find an entity that possesses only some of these aspects, many people would claim that such an entity was “not alive” because it did not take part in every aspect of our definition. But how is this intellectually honest? We observe many entities, and from the gamut of our experience, we create a delineation informed by the aspects that we observe in the entities. If we come to a new entity that does not share in all of those aspects, we say it is not alive. We anticipate nature when we create a definition and then apply it to nature, rather than letting nature inform the definition. The virus, for example, is an entity that does not take part in every aspect of our definition of life. It does not metabolize. Yet it reproduces and adapts. These emergent aspects, however, are not where we should look. To gain a sense for the relatedness of entities, emergent operation should not be the rubric; rather, the structure, and how it produces the observed actions, should be the rubric for relatedness.

We possess a general understanding of the presence of nucleic acids in ‘living’ entities and how their operation culminates in the larger action of the ‘living’ body. We say that many aspects of the entity result from the action of its nucleic acids, which in turn result from the structure of the nucleic acids themselves. 

We look at viruses and come to understand that their action is a result of the presence of nucleic acids with a genome that specifically determines their cycle of action. The whole species of nucleic acids, spanning plants, animals, microorganisms, and viruses, acts similarly because its members are of a like structure. We came up with a definition for life that was not informed by all available entities, and when we observe an entity that falls outside the definition, we discard its plausible inclusion into the hallowed ranks of life. This is not an honest way of defining. What even is defining, with regard to Nature? Are some delineations more valid than others? Surely this is so: we can observe the properties of a gram of mercury versus a gram of argon and delineate them readily. The definition of life, on the other hand, is more suspect. For millennia, every time we found a new animal, it appeared and acted similarly enough to animals already held as alive to be itself considered alive. Such is the case with plants and fungi. These macroscopic entities shared a like nucleic structure, which in turn determined their like macroscopic structure and action, which was the basis of their being lumped into the same category of ‘living things’.

Once we became able to observe scales previously invisible to us, we found entities (microorganisms: bacteria, protists, etc.) that shared like action with macroscopic entities. This enabled their quick inclusion into the ranks of living things. However, once we came to viruses, their unlike action (in some respects, namely their lack of growth and metabolism) had us claiming that they could not be alive because they did not share in all the characteristics seen elsewhere. Yet when we looked, we came to know that they possessed similar nucleic acids, and their nucleic operation was nigh identical to that of macroscopic entities. We came to understand that every entity we called alive shared the same foundation of operation, the nucleic acid. The fact that the operation of nucleic acids can produce an entity that does not need to metabolize is grounds to dissolve the definition of life.

What do we want a definition for, to begin with? We wish to delineate thing from thing, to find the basis of similarity and difference among what we observe. We can now look at all of ‘life’ around us as the product of the long-term evolution of nucleic acids. It’s astonishing how variable this species of molecule is. Its successful self-replication for billions of years has produced structures from viruses to aspens, multi-celled organisms from which a single cell can be separated and grown in isolation, massive colonies of insects that function cohesively, patterns of intelligence emerging from the summation of simple parts.

It’s possible that in the future we could find things that would externally appear “alive” as we recognize it today, whose structure does not utilize nucleic acids. Whatever the structure of their functioning, if it were locally anti-entropic and capable of self-replication, we would still be able to distinguish such things from our earthly ‘living things’ because their structure is distinguishable.

By relinquishing our definition for life that was created prior to our larger knowledge of the operation of ‘living’ entities, our understanding of the world around us can become fully a result of what we observe, rather than our applying ideas past their plausible relevance.

The distinction between rock and giraffe is a real one, and we now exist in a world where we are acutely aware of what makes them different. Gone are the days when the only observations we had were macroscopic. Understanding the physicochemical makeup of our objects of interest directly informs a more exhaustive understanding of their macroscopic action. The distinction lies in their makeup, not in their large-scale action. Convergent evolution produces similar large-scale actions, but the nucleus of similarity is derived from the hereditary chain, and the hereditary chain lies in the evolution of nucleic acids.