I aim to frame simulated environments (such as those found in virtual reality) as comparable to [or on the same spectrum as] notations (like mathematical notation), in that both are “environments” with rulesets that can be internalized. This is a framing I haven’t explored completely, and I aim to use this essay (in its etymological sense) as an attempt to assay this framing’s consonance with other areas of interest within my overall thesis, namely the behavior of paraphysical (transphysical? superphysical?) affordances and the opportunities for embodiment with simulated objects.
I’m interested in the idea of notation (mathematical, musical, lingual, interfacial, etc.) as a sort of environment / world / ruleset: a set of behaviors that can be encountered, internalized, and come to be known intuitively.
I see the manipulation of elements on the page (as with algebraic notation), screen (as with the interactional “grammar” of a certain software), or with material (as with the pattern of operation/manipulation of beads on a soroban/abacus) as being the manipulation of what the brain treats as an internally-coherent environment whose rules and parameter space can be explored and learned.
To take algebraic notation as an example, its spatial, modular structure of coefficients, variables, and operators carries specific rules the user/mathematician must follow when rearranging elements in order to maintain equality and mathematical truth. This spatial operativity engages the user/mathematician in a way decidedly unavailable in earlier notations, namely the propositional, paragraphic descriptions of geometric relations that nevertheless express the same algebraic relationships as modern algebraic notation. Paragraphic notation, while accurately articulating the mathematics, is closed to spatial rearrangement in the way algebraic notation engages the spatial intuition: as a tool for thought, it does not afford the manual manipulation of elements through which algebraic notation lets the user/mathematician explore the system. It is this manual operativity that I see as a quality of explorable environments.
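To make the spatial-rearrangement point concrete, here is a minimal worked example of my own (not drawn from any historical source): each step is a legal “move” within the notation’s ruleset, a symbol relocated while equality is preserved.

```latex
\begin{align*}
  3x + 5 &= 17                 && \text{original statement} \\
  3x     &= 17 - 5 = 12        && \text{subtract 5 from both sides} \\
  x      &= \tfrac{12}{3} = 4  && \text{divide both sides by 3}
\end{align*}
```

The same relationship stated paragraphically (“a quantity which, when tripled and increased by five, yields seventeen”) admits no such manual rearrangement.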
These notations are, in a sense, internally-coherent environments created by humans, able to be partially inhabited (through the affordances of their supporting medium, classically though perhaps too often paper) by the body and thus the mind. The most powerful thing about some notations is that their ruleset is internalizable: their mode of operation can become purely mental, no longer requiring the initially-supporting medium. Their internalization scaffolds a mental model / simulation of that “environment’s” ruleset/laws in the same way that we develop a mental model (or models) of our classical environment’s laws, our brains able to simulate hypotheses and, without our even intending to, pursue causal chains so ingrained in our set of expectations that it doesn’t feel like thinking or analysis, but something far more direct.
I now wonder if these schemata, these mentally-internalized models of experienced environments (be they classically spatial or more notational), form a sort of Gibsonian ecology in our own minds that, via repeated engagement, arranges itself into alignment with our external circumstances, whatever they may be (this is where I see the superphysics of simulated, virtual environments entering into relevance). Is this the development of expectation/preparedness/familiarity? I’ve wondered how Gibson treats prediction, as that does seem to require a sort of internal model/representation independent of current sense data (though at basis directly dependent on prior sense impressions).
Our bodies have aspects that afford certain approaches to the world. By default these are determined by our physiology, and then circumstance edits that phenotype, augmenting our bodies with environmental objects that can be embodied. We are provided at birth with a body, itself an environmental object bound to classical physics, that we gain facility in maneuvering and come to feel identified and embodied with.
Objects in the world can be found or fashioned and incorporated into the body, and that now-changed body encounters the environment in different ways. Critically, as the environment is encountered repeatedly, the (perhaps newfound) capacities of the body collide and interface with the environment, giving the user/owner simultaneous opportunities to internalize the dynamics of their body and the dynamics of the environment (particularly in the ways the environment is newly accessible or perceivable specifically through the body’s new capacities via embodied object augmentation).
This power of the brain, to plastically incorporate objects into itself when given enough time to wield them, just as it learns to wield the genetically-provided object of the body, becomes especially potent when the objects to wield and embody have a range of behaviors beyond what classical physics allows, as is the case with computer-depicted-and-simulated objects such as those interactable in VR. This connects back to my framing of notations as alternate “environments”, with a key difference: the rules of (for example, paper-based) notation are maintained/forwarded by human mastery of the ruleset, and the environment breaks if the rules are forgotten or a single operation is performed incorrectly, whereas the computer is ostensibly rigidly locked into self-accuracy, not to mention capable of orders of magnitude greater depth of ruleset simulation.
This greater range of possible behaviors boggles the mind, which makes the job of the designer difficult; it will likely be a cultural, perhaps generational, project to explore the parameter space of possible “universes” of behavioral rulesets and find the most useful (and embodiable) simulated objects/phenomena.
A role of many designers has involved tool design within the classical physics of our lived environment. As computers became ascendant, their simulating ability allowed the design of phenomena (UI) that could behave in ways other than classical physics, specifically allowing novel tools for thought and thus novel ways of situating/scaffolding the mind. However, the depictive media (e.g. screens) available to represent computed phenomena were too often exclusively two-dimensional, with only two-dimensional input, failing to leverage the nuanced spatial facility of the body. Now there exist computing systems capable of tracking the motion of the head (and thus orientation within possible optical arrays) and of any prehensile limb, capable of simulating three-dimensional phenomena and providing a coherent and interpretable optical array as if the user were themself present amidst the simulated phenomena.
Critically, a role of the designer no longer purely involves the design of phenomena within physics, but has come to also encompass the design of the physics themselves, exploring how different system-parameters can sustain different phenomena, different notations, and thus new modes of behavior, productivity, and thought.
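As a minimal sketch of what “designing the physics themselves” might mean computationally, consider a toy simulation where the ruleset is an explicit, designer-chosen parameter rather than a fixed given. All names here are hypothetical illustrations of mine, not any real engine’s API: the same object, stepped under two different rulesets, exhibits entirely different behavior.

```python
from dataclasses import dataclass

@dataclass
class Physics:
    """A designer-chosen ruleset: the 'laws' are just parameters."""
    gravity: float = -9.8   # classical default: things fall
    drag: float = 0.0       # 0.0 means no damping

def step(pos: float, vel: float, physics: Physics, dt: float = 0.1):
    """Advance one simulation step of a 1-D object under the given ruleset."""
    vel = vel + physics.gravity * dt
    vel = vel * (1.0 - physics.drag)
    return pos + vel * dt, vel

classical = Physics()                         # ordinary falling
dreamlike = Physics(gravity=+2.0, drag=0.3)   # objects drift upward, damped

pos, vel = 0.0, 0.0
for _ in range(3):
    pos, vel = step(pos, vel, classical)
print(pos < 0.0)   # True: the object falls under classical gravity

pos, vel = 0.0, 0.0
for _ in range(3):
    pos, vel = step(pos, vel, dreamlike)
print(pos > 0.0)   # True: the object rises under the designed ruleset
```

The design question shifts from “what object do I place in this world?” to “what values of `gravity`, `drag`, and their far richer real-world analogues sustain the phenomena, notations, and modes of thought I want?”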