Spring 2019 content to be expanded upon.

 

Master's Thesis
Ongoing Notes

Fall 2018+

Spatial computing (VR, etc.) reveals an expansive and underexplored possibility-space of interactions wherein the physics subtending affordances and phenomena can itself be designed, rewarding novel approaches to interaction design.

Through reviewing literature and prototyping spatial interactions, I am exploring how previously-unencountered physical dynamics affect the development of intuition with systems, and am identifying significant representations of external objects and the body itself, with an eye towards the larger goal of transformative tools for thought.

While identifying promising avenues from the convergence of literary sources, I researched and surveyed applications of interactional dynamics by designing prototypes of spatial interactions given different materialities and computed physics, gaining insight through direct engagement with novel spatial phenomena. These VR prototypes illustrate design considerations for now-accessible interactional and material unorthodoxies, recognizing consequences and applications for embodiment and sensory-coordination.

I expound on these ideas in my thoughts section.

My cumulative essay on Assimilating Hyperphysical Affordances, covering these prototypes in greater detail, is available on Medium.

 
 

Literary Research ・ Prior Theory

Expectations from Prior Physics

Sutherland, The Ultimate Display
Blom, Virtual Affordances: Pliable User Expectations
Golonka & Wilson, Ecological Representations
Gibson, The Ecological Approach to Visual Perception

People build familiarity with ordinary materials and objects, the “interactional grammar” of physical affordances. This becomes a challenge when computed environments diverge from that familiarity: users expect certain behaviors from the start, which confines the designer’s hand (and mind) to providing only what aligns with expectation. On the other hand, leveraging these expectations while selectively breaking them with confined novel behaviors offers opportunities to slowly wean users away from their ossifications.

Coherence and Coordination of Phenomena

Gibson, The Ecological Approach to Visual Perception
Piaget, The Origins of Intelligence in Children
Chemero, Radical Embodied Cognitive Science
Sutherland, The Ultimate Display

This familiarity is built up via repeated exposure to consistent observed physical behavior, where covariance of stimuli unifies the parallel streams of input into singular percepts. Relevantly, this incentivizes designers to provide multiple sensory responses for a given phenomenon or user action, fleshing out the validity of the subjective experience. A difficulty, however, is that without coordination between designers across experiences, the preponderance of divergent interactional grammars and hypermaterial depictions might inhibit users from developing overarching familiarities.

Engagement with New Physics

Sutherland, The Ultimate Display
Piaget, The Origins of Intelligence in Children
diSessa, Knowledge in Pieces

The usefulness of an environment is a function of its physical capacities, and thus the expanded set of hyperphysics within simulated systems supports, in principle, a proportionally-expanded usefulness. Direct bodily engagement is possible not only with simulations of micro- and macroscopic phenomena, but even more esoteric and unorthodox phenomena not directly realizable within our universe’s laws. This vastly expands the space of interaction design, and rewards open and explorative mindsets and design approaches. Our neuroplasticity enables us to attune ourselves to the nuances of whatever our senses happen to provide, and this expanded space of computer-mediated experience supports untold applications of that plasticity.

Affordances

Gibson, The Ecological Approach to Visual Perception
Dourish, Where the Action Is: The Foundations of Embodied Interaction

Hyperphysics supports novel behaviors that have no necessary analogue in ordinary physics. Thus the entire structural, visual, and dynamic “language” of ordinary affordances is inadequate to fully cover all possible transformations and behaviors that hyperphysics supports. Even fundamental material behaviors are not in principle guaranteed. However, the greater space of possible physical behaviors offers opportunities to create new affordances with new interactional grammars that can take advantage of the specificity of computing power and the precise motion tracking of the body.

Embodiment; Homuncular Flexibility

Heersmink, The Varieties of Situated Cognitive Systems: Embodied Agents, Cognitive Artifacts, and Scientific Practice
Won et al., Homuncular Flexibility in Virtual Reality
Leithinger et al, Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

The body’s relationship to tools is often quite fluid: prolonged use allows tools to be mentally fused with the body, and engagement with the world is perceived at the tool’s interface with the world rather than the body’s interface with the tool. The ability to depict the body in novel and hyperphysical ways, while still mapping the depicted body’s movement to the base movements of the user, enables startlingly compelling interfaces, such as adding extra limbs or changing the physical form of the hands to better fit a given task.

Tools for Thought

Sutherland, The Ultimate Display
Gooding, Experiment as an Instrument of Innovation: Experience and Embodied Thought

Ideally, increased adoption of and bodily engagement with hyperphysics will provide us with new tools to understand and represent the world around us at scales heretofore inaccessible. As the scale and complexity of the problems humanity faces grow, it is critical to augment our problem-solving ability, a large part of which involves the creation of new forms of representation, ideally giving us a better grasp on the most fundamental questions.

 

 
 

BigGAN as Explorable Yet Non-Physical Space

I had previously considered only the interactional+intuitional consequences of unorthodox computed laws of physics, but traversing BigGAN’s high-dimensional space on ganbreeder.app revealed ways that my spatial interaction thinking was limited by its focus on physics simulations.

BigGAN’s latent space is massive, and yet through hours of stepwise exploration and interpolation I noticed that I was building intuitions about its structure and tendencies, learning to avoid being “trapped” in attractors of increasing visual artifacting.
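For concreteness, here is a minimal sketch of the latent-vector interpolation underlying that kind of stepwise exploration. It is purely illustrative: ganbreeder runs BigGAN server-side, so the generator here is just an abstract callback, and every name is a placeholder.

```csharp
using System;

// Minimal sketch of stepwise latent-space interpolation (the kind of path
// ganbreeder traverses). Purely illustrative: "generate" is an abstract
// callback standing in for the generator, not BigGAN itself.
static class LatentWalk
{
    static float[] Lerp(float[] a, float[] b, float t)
    {
        var z = new float[a.Length];
        for (int i = 0; i < a.Length; i++)
            z[i] = a[i] + (b[i] - a[i]) * t;
        return z;
    }

    public static void Walk(float[] start, float[] end, int steps, Action<float[]> generate)
    {
        for (int s = 0; s <= steps; s++)
            generate(Lerp(start, end, (float)s / steps)); // hand each intermediate latent to the generator
    }
}
```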

This space has interactional dynamics that don’t involve computing a physics system but are nevertheless available for intuition to develop around. This in some ways reminds me of mathematical notation, which I write about here.

Image: gan1.gif

Rapid Fluency with Over-mapped Hand Input

Experiments with intentionally esoteric mappings of hand motion to vector field behavior.

I controlled the scale, intensity, and drag of a turbulent vector field affecting one million particles via the three rotational axes of my hands. Initially quite difficult to control, I quickly found pockets of orientation in parameter space that produced engaging particle behavior, and learned to return precisely to those pockets via muscle memory. Once there, exploring the adjacent parameter space was easy, and I could shepherd the particles with high precision.
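The sketch below shows one way such a mapping could be wired up in Unity; it is an approximation of the approach, not the prototype’s actual code. The hand’s three rotational axes drive three field parameters, with a built-in particle-noise module standing in for the custom turbulent vector field, and all ranges are placeholders.

```csharp
using UnityEngine;

// Sketch of the over-mapped control scheme described above: the three rotational
// axes of one hand drive three parameters of a turbulence field. Assumes a Unity
// scene with a tracked hand transform and a ParticleSystem whose noise module
// stands in for the custom vector field; parameter ranges are placeholders.
public class HandFieldMapping : MonoBehaviour
{
    public Transform hand;            // tracked palm transform (e.g. from Leap Motion)
    public ParticleSystem particles;  // the particle system being shepherded

    void Update()
    {
        // Read the hand's local Euler angles and normalize each axis to 0..1.
        Vector3 e = hand.localEulerAngles;
        float pitch = Mathf.InverseLerp(0f, 360f, e.x);
        float yaw   = Mathf.InverseLerp(0f, 360f, e.y);
        float roll  = Mathf.InverseLerp(0f, 360f, e.z);

        // Map each rotational axis to one field parameter (ranges are arbitrary).
        var noise = particles.noise;
        noise.frequency = Mathf.Lerp(0.1f, 4f, pitch);   // "scale" of the turbulence
        noise.strength  = Mathf.Lerp(0f, 10f, yaw);      // "intensity"
        var limit = particles.limitVelocityOverLifetime;
        limit.dampen    = Mathf.Lerp(0f, 1f, roll);      // "drag"
    }
}
```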

I’m interested in the rapidity of neuroplastic adaptation even when faced with a relatively arbitrary mapping. Maximizing the number of simultaneously tracked and mapped input dimensions allows for high expressiveness, at the cost of a learning curve.

I wonder how designers might come to trust users more to devote time to building facility with novel spatial phenomena+tools. I fear that the UI trend toward immediate intuitiveness may hamper the development of virtuosity, and I’m interested in exploring the space of novel spatial interactions that afford skill development.


Images: Grab and fuse ・ Right Hand menu ・ Left Hand Interface ・ Right Hand

Raymarched SDF Hands
Explorations in Materiality

Raymarching signed distance fields (SDFs) is a method of rendering 3D shapes without using polygons. Each object is defined as a geometric primitive contributing to a shared surrounding distance field, and an isosurface is rendered at a given radius away from the primitives, visually fusing any objects that come within 2r of each other. This property produces very organic forms, where any collision smoothly joins the objects into a melted, singular mass.
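One common formulation of this fusing (which may or may not match what the prototype’s shader actually does) is Inigo Quilez’s polynomial smooth minimum over per-primitive signed distances. The sketch below evaluates it on the CPU purely to show the math; the real rendering happens in a raymarching shader.

```csharp
using UnityEngine;

// Sketch of the distance-field math behind the fusing behavior: each primitive
// contributes a signed distance, and a smooth minimum blends nearby primitives
// into one surface. Illustrative only; the prototype renders this in a shader.
public static class SdfBlend
{
    // Signed distance from point p to a sphere of radius r centered at c.
    public static float Sphere(Vector3 p, Vector3 c, float r) =>
        Vector3.Distance(p, c) - r;

    // Polynomial smooth minimum: k controls how far apart two surfaces can be
    // and still visibly melt together.
    public static float SmoothMin(float a, float b, float k)
    {
        float h = Mathf.Clamp01(0.5f + 0.5f * (b - a) / k);
        return Mathf.Lerp(b, a, h) - k * h * (1f - h);
    }

    // Scene distance for two fingertip spheres: wherever this evaluates to zero
    // is the rendered isosurface.
    public static float TwoFingertips(Vector3 p, Vector3 thumbTip, Vector3 indexTip, float r, float k) =>
        SmoothMin(Sphere(p, thumbTip, r), Sphere(p, indexTip, r), k);
}
```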

I had seen this technique used for external objects, but never for rendering the hands themselves, and I suspected it might be quite compelling. Initially I added SDF spheres to the tips of my thumb and index finger within the Attachment Hands section of the hierarchy, parented to my thumb- and index-fingertips, but found that the raymarching script needed them all to be at the same level in the hierarchy. Instead I wrote a simple script to have each SDF sphere inherit the global transform coordinates of a specific body part, allowing me to place the spheres anywhere in the object hierarchy.
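A minimal sketch of what such a follower script can look like (an illustration, not the original script):

```csharp
using UnityEngine;

// Sketch of the kind of follower script described above: each frame the SDF
// sphere copies the world position and rotation of a target body part, so it
// can live at whatever level of the hierarchy the raymarching script requires.
public class FollowBodyPart : MonoBehaviour
{
    public Transform target;   // e.g. the thumb-tip attachment transform

    void LateUpdate()
    {
        if (target == null) return;
        transform.position = target.position;   // inherit global position
        transform.rotation = target.rotation;   // inherit global rotation
    }
}
```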

After the two fingertips were created, I added the other eight, and populated the world with a sphere and a couple of cylinders to move up against and directly observe the SDF behaviors. This was immediately mesmerizing, and changing the effective isosurface radius shifted my hands from separate spheres that only overlapped within close proximity to a singular doughy mass in which the underlying proprioceptive motion remained intact, if slightly masked.

I added spheres for the rest of my finger joints and knuckles, and found that it felt slightly more dynamic to only include the joints that I could move separately. My knuckles weren’t adding to the prehensility and only added mass to the lumpiness, so I removed them.

Before starting, I envisioned that this rendering technique might allow hands where the SUI (spatial user interface) was somehow fused with the body, or emitted out of the body directly, or where the body could fuse with the external world. I imagined some UI element being stored in my palm, exiting only when my hand enters some state.

I initially added a disk-aspect-ratioed cylinder as my palm, with a sphere embedded at its center, to be drawn out when my palm rotates to face me. However, the blending between the solid disk and the sphere was too great, bulging too much at the center. I instead tried a torus as my palm, as it leaves a circular hole that the sphere could fit in. Secondarily, when the sphere floats above the palm, the torus offers negative space behind the sphere, which provides extra visual contrast and heightens the appearance of the floating UI. By rising above the palm, the sphere delineates itself from its previously-fused state, spatially and kinetically demonstrating its activeness and availability. I expect this UI (and thus the overall form holding it) to change from the placeholder sphere to something with more direct utility. However, this materiality-prototype serves as a chance to engage with the dynamics of these species of meldings without immediate application. The sphere is pokable and pinchable, perhaps the type of object that could be pulled away from its anchor and placed somewhere in space (expanding into a larger set of UI elements).
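The palm-facing trigger could be implemented roughly as below. This is a hedged sketch with assumed conventions (the palm normal taken as the negative of the palm transform’s up vector, an arbitrary facing threshold), not the prototype’s actual code.

```csharp
using UnityEngine;

// Sketch of one way the palm-activated rise could work: when the palm normal
// faces the head-mounted camera, the embedded sphere eases up out of the torus.
public class PalmRevealSphere : MonoBehaviour
{
    public Transform palm;            // palm transform; -palm.up assumed to be the palm normal
    public Transform sphere;          // the UI sphere embedded in the torus
    public Transform head;            // main camera / HMD transform
    public float riseHeight = 0.06f;  // meters above the palm when fully revealed
    public float speed = 8f;

    float reveal;                     // 0 = docked in the torus, 1 = floating above the palm

    void Update()
    {
        // How directly is the palm facing the viewer?
        Vector3 palmNormal = -palm.up;
        Vector3 toHead = (head.position - palm.position).normalized;
        bool facing = Vector3.Dot(palmNormal, toHead) > 0.6f;   // threshold is arbitrary

        // Ease the reveal value toward its target and position the sphere accordingly.
        reveal = Mathf.MoveTowards(reveal, facing ? 1f : 0f, speed * Time.deltaTime);
        sphere.position = palm.position + palmNormal * (riseHeight * reveal);
    }
}
```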

On my right hand, instead of a prehendable object, I wished to see how something closer to a flat UI panel might behave amidst the hand. To remain consistent, I again chose the torus as the palm, and embedded a thin disk in its center that, when the palm faces me, rises a few centimeters above the palm. While docked, the restrained real estate of the torus again gives the panel breathing room, such that the pair do not, in their fusing, expand to occupy a disproportionate volume. In its current implementation, the panel remains the same size throughout its spatial translation. I’d like to experiment with changing its size during translation such that in its active state it is much larger, and perhaps removable, able to exist apart from the hand as a separate panel.
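That size-during-translation experiment could reuse the same 0-to-1 reveal progress that drives the rise, interpolating the panel’s scale alongside its position. The sketch below is a hypothetical starting point with placeholder values, not an implemented feature.

```csharp
using UnityEngine;

// Sketch of the size-during-translation idea: tie the panel's scale to the same
// reveal progress that drives its rise, so it docks small and arrives large.
// All values are placeholders.
public class PanelRiseAndScale : MonoBehaviour
{
    public Transform panel;
    public Vector3 dockedScale = new Vector3(0.03f, 0.001f, 0.03f);
    public Vector3 activeScale = new Vector3(0.12f, 0.001f, 0.12f);
    public float riseHeight = 0.05f;

    // Called each frame with the current reveal progress (0 = docked, 1 = active),
    // e.g. from the same palm-facing logic sketched for the left hand.
    public void ApplyReveal(Transform palm, float reveal)
    {
        panel.position = palm.position + (-palm.up) * (riseHeight * reveal);
        panel.localScale = Vector3.Lerp(dockedScale, activeScale, reveal);
    }
}
```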

These experiments begin to touch on this novel materiality, and point at ways that UI might be stored within the body, perhaps reinforcing an eventual bodily identification with the UI itself. Further, the ways that grabbed objects fuse with the hand mirrors how the brain assimilates tools into its body schema, and begins to more directly blur the line between user and tool, body and environment, internal and external.

What are the implications of such phenomena? Could a future SUI system be based around body-embeddedness? What would distinguish its set of activities from surrounding objects? What body parts are most available to embeddedness? The arms are arguably the most prehensile part of the body, and most often within our visual fields, so their unique anchorability is easy to establish.

In future explorations of this rendering technique, I aim to expand on the direct mapping of user motion to behavior of objects in the visual field. How might the sphere behave as an icon of a tool that adheres itself to the fingertip directly, becoming the tool itself (rather than merely a button to enter that tool mode)? How might scaling of objects afford svelter embeddedness before scaling to useful external sizes?

I’m keen to explore more direct mappings of hand motions to the movement of rendered structures. Might elements of the SUI be mapped to the fingers in a way that the prehensility allows a novel menu-maneuvering? Is proprioception loose enough that one would feel identified with the hand-driven motion of non-hand-like structures?