Software Epigenetics and Architectures of Life – Architecture


An End User Undertaking (EUU) is an agreement proffered by software manufacturers and signed by the representatives of software consumers that codifies a specific category of being known as an “end user.” End users are characterized by their likelihood to behave irrationally. This class of subject was invented by software developers in the early 1980s to differentiate their own knowing expertise from the chaotic ignorance of their clients. It’s a problem not foreign to architects: how do you design for an audience that couldn’t care less? Or, more delicately, like architecture, software might also be, “a work of art the reception of which is consummated by a collectivity in a state of distraction.”

The end user, as a design problem, was first sketched a decade prior in Karl Popper’s “Of Clouds and Clocks,” a 1966 text that was excerpted for a special issue of Architectural Design in 1969. With the analogs of clocks and clouds, Popper described determinacy and indeterminacy in physical systems. His excerpt began at some indeterminate point in medias res by asking his audience to imagine a continuum stretched between the most unpredictable, disordered clouds on the left, and the most predictable, ordered clocks on the right. Toward the right, Popper arrayed the solar system, Cadillacs, and old dogs; while puppies, people, and swarms of gnats were lined up in states of increasing indeterminacy to the left. Popper used this continuum to suggest that in both the hard sciences, as well as those mushier ones like post-war architecture and urbanism, it was presumed that all clouds could be turned into clocks with additional knowledge. That is, a puppy is not inherently less deterministic than a Cadillac; it is simply more difficult to model. While intended as an interjection into ongoing debates between causal and quantum physicists, the inclusion of Popper’s continuum in Architectural Design suggested a particular configuration of the user equally useful in describing both the inhabitants of the post-war American city and the operators of the software instruments through which the city was increasingly designed: a configuration consisting of users as inherently irrational clouds, interfacing with the clock-like precision of cities or software. It’s no coincidence that a few pages later in the very same issue of AD, this model shared between two distinct users would be collapsed by an experimental urban design software called INTU-VAL.

Intuition and Evaluation, or INTU-VAL for short, was a software platform developed by Peter Kamnitzer and UCLA’s Urban Laboratory Project in 1968. INTU-VAL was intended to discipline the cloud-like intuition of an urban designer with the clock-like evaluative capacities of a digital processor. Kamnitzer developed INTU-VAL through a speculative scenario that asked a user to route a highway through a dense cityscape; a pertinent scenario, coming at the end of an era of urban renewal. Kamnitzer and his collaborators undoubtedly understood the racial and economic violence of post-war planning as emerging, at least in part, from the unevaluated intuition of designers and policy makers. Using a light pen for input, INTU-VAL prompted its user to exercise this potentially problematic intuition atop one of six urban maps depicting the city’s topography, land-use, geology, population, conservation areas, and sites of visual interest. Once planned, INTU-VAL would subdivide the designer’s intuitively derived route into analytical sectors, cross-checking them with the remaining five maps to discover unseen conflicts caught between competing cartographic representations of the city. This process of cross-referencing aesthetic design intuition with the unseen, yet computationally modelled realities of the city sought to provoke a corrective feedback loop, wherein the software’s evaluative capabilities would recursively discipline the designer’s prejudice in order to arrive at the least egregious outcome. Upon digitally correcting the crisis of the post-war American city, user and software would celebrate their success together through a digitally animated drive down their simulated highway.

While ancillary to the broader ambitions of their experiment, INTU-VAL’s graphically animated highway was an important technical accomplishment, as the first computational code to translate a dynamically generated environment into a screen-based spatial simulation. It marked the earliest civilian deployment of a subroutine developed by General Electric in 1964 for their LEM Spaceflight Visual Simulator. The subroutine’s original task was to generate a black and white representation of the moon’s surface in real-time on a cathode ray display within NASA’s Space City Lunar Landing Simulator. Like an urban planner second guessing their biased intuition, General Electric’s simulator was designed to help astronauts manage the vagaries of perception while merging at high speed with a clock-like solar system. But INTU-VAL and GE’s Lunar Simulator not only pictorialized the first digitally simulated spaces; they helped to inaugurate an ontology of the user as a cloud, capable of being disciplined through the clock-like feedback of environmental simulation. While this cloudy user was implicit in GE’s conceptualization of an untrained astronaut, for Kamnitzer, this disciplinary regime suggested an approaching epigenetic evolution of the human mind itself, arguing that digital disciplinary instruments such as INTU-VAL “will trigger the next creative leap in the human brain.”

Kamnitzer was correct in anticipating INTU-VAL’s influence on the future of design software, but entirely wrong in imagining that his “creative leap” was anything but a profoundly conservative endeavor. In considering software as a corrective instrument, Kamnitzer perpetuated a model of the user that still lingers in our contemporary encounters with design software. For example, take the collective compulsion to orbit; that reptilian instinct of the architectural unconscious to mobilize a simulated eyeball’s orbit around a simulated object through the dragging of a mouse across a screen. We orbit absent-mindedly while waiting for our own aesthetic intuition to keep pace with our processing power; we orbit in order to make complex things visible so that we might discipline our judgment. But beneath all those orbiting eyeballs lies an ontological line connecting the contemporary architectural imagination to an Apollo astronaut struggling to steer an orbital lander as it merges with the moon. To orbit is not only to model an object, but by implication, to model ourselves, to fashion our minds into indeterminate aggregates made more clock-like through the precision of software and the feedback of visual simulation. Today we design ourselves within software platforms made for a world in which “all clouds are clocks—even the most cloudy of clouds.”

Had Popper been given more space in the publication, as the earlier, full-length version of the piece shows, his text would have gone on to cast doubt over the entire cloud/clock binary with which it began, suggesting instead a different world for software, the city, and its users to eventually inhabit, arguing:

If determinism is true, then the whole world is a perfectly running clock, including all clouds, all organisms, all animals, all men. If, on the other hand, [Charles] Peirce’s or [Werner] Heisenberg’s or some other form of indeterminism is true, then sheer chance plays a major role in our physical world. But is chance really more satisfactory than determinism?

Like Kamnitzer, Popper would reach for an evolutionary epoch to escape being stuck between his clock and a hard place. But rather than merely disciplining one binary pole into its opposite, Popper argued for a paradigm of “plastic control.” This notion acknowledged the presumption underlying Kamnitzer’s experiment, that cultural constructs such as technologies, theories, or mediums are epigenetic tools through which we fabricate ourselves. But unlike Kamnitzer, Popper argued that we are neither determined by, nor determining of these external evolutionary mechanisms, but rather are enmeshed in subtle exchanges of agency between ourselves and the world around us. Rather than a corrective instrument, Popper’s plastic control implies a model of the digital as a fraught arena in which subjects and technologies co-evolve in order to render particular permutations more useful than others. But while UCLA’s Urban Laboratory Project fashioned an ontology of the user around the graphic feedback of a hypothetical highway, a few hours north on I-5, a far more durable account of the user was being engineered.

In 1968, Alan Kay began imagining Dynabook, a proto-tablet device he would go on to develop with his collaborators in the Learning Research Group at Xerox’s Palo Alto Research Center (PARC), and one that radically reimagined the computer as a functionally ambiguous device. Rather than a disciplinary instrument, Kay and his collaborators understood the potential of computing as a functionally non-specific environment for encouraging nonhierarchical interactions between the user and code. Where Kamnitzer disciplined creativity through representational feedback, Kay attempted to accelerate the user’s mind through evolving encounters with an indeterminate digital environment, later arguing that Dynabook’s functional non-specificity “would actually change the thought patterns of an entire civilization.” But while Kay’s digitally accelerated evolution foregrounded the computer as an indeterminate environment, it was ultimately an environment inhabited by a profoundly different ontology of the user.

Kay’s subject was salvaged from his encounters with the theories of cognitive psychologist Jerome Bruner in the early 1960s. Over three books, Bruner argued that cognitive development occurred through the mind’s active restructuring of its context. According to this model, the mind already functioned like an environmental simulator: perceiving its context, representing those perceptions back to itself, and then acting upon those representations. Bruner, following the thinking of psychologist Jean Piaget, identified three stages of cognitive representation that he believed defined early childhood learning: the Enactive Stage, representing knowledge through actions; the Iconic Stage, where knowledge is represented through mental image making; and the Symbolic Stage, where information is stored through codes and symbols in the form of language. Importantly, where Piaget saw these stages of representation as sequential periods in the first seven years of cognitive development, Bruner and Kay understood them as permanent structural characteristics in the mind of the archetypal user, claiming that “[o]ur mentalium seems to be made up of multiple separate mentalities with very different characteristics. They reason differently, have different skills, and often are in conflict.” In a creative application of Bruner’s theories, Kay not only re-imagines the mind as a “mentalium” stacked from discrete mentalities, but torques this cognitive stack into a mirror of the personal computer itself. In a diagram later published in 1989, Kay conflates the hand/eye interactivity of the mouse as an interface with the Enactive Mentalis, the graphic spatiality of the desktop as an interface with the configurative Iconic Mentalis, and the object orientation of computational code as an interface with the most abstract, Symbolic Mentalis. It was a newly minted stacked mind mirroring the computational stack of screens atop code atop circuitry.

In the most superficial sense, architecture’s recent post-digital turn could be understood as a shift in attention from Kamnitzer’s disciplinary instruments to Kay’s techno-cultural arenas for interaction. However, this shift from useful tools to cultural terrains obscures an alternative ontology of the user reflected back. To imagine users as being fundamentally like computers is to imagine life itself as a computable phenomenon. The economies of accelerating human-machine interaction, intimated at PARC, rely as much on the ubiquity of smart technological arenas as they do on the discretization of the mind into an aggregate of class-based programmable faculties. Becoming digital thus entails an epigenetic evolutionary process in which we are all increasingly discretized into evermore computable components. For example: western intelligence agencies now identify anonymous Tor browser users by archiving their idiosyncratic mouse movements as gestural surfing signatures; micro-labor platforms such as Amazon’s Mechanical Turk now disentangle employable attention spans from bodies deserving human rights like healthcare; and crypto-libertarian tech-gurus will soon approach eternal life by transfusing themselves with the stem-cell-rich blood of millennials. Our relationship to the world is now defined by our status as plastic datasets; our abilities to circulate are determined by the usefulness of our ontic aggregates. Being has become the unending obligation to subdivide ourselves into ever more useful mentalia.

What comes next is perhaps already too easy to imagine: something like our cotton candy haired protagonist sheltering from the acid rain of New Tokyo beneath a hologram of Gary Busey’s oversized smile demoing the latest dermal mods. Today, our becoming digital seems strangely coincident with the impossibility of imagining any future other than a well-rehearsed noir pendulum swing from the early optimisms of digital pioneers like Kamnitzer or Kay. In a moment in which architecture’s aging instrumental understanding of digital technology is being upset by an awareness of the broader forms of violence underlying our contemporary digital platforms, perhaps this critical awareness should also be accompanied by a realignment of architecture’s specific forms of imagination. A realignment that might allow architecture’s nascent post-digital turn to sidestep an opposition between computational novelty and cyberpunk noir; two sides of an imagination that persists in seeing technology only as something other than ourselves. This realignment would call for another narrative altogether, one in which computation—as a geographically distributed arena for scattering ourselves across vast networks—allows us to imagine being as a spatial practice. A story in which becoming digital meant becoming architectural; life itself as an architectural act. If such an imagination is possible, I would suggest that it originated somewhere amidst the dead-links and forgotten wikis orbiting around a cryptic software experiment which came to be called Groupware.

This half-lost alternative imagination began in 1971 when Murray Turoff, a physicist working at the US Office of Emergency Preparedness, launched the Emergency Management Informational System and Reference Index (EMISARI) on a small network of UNIVAC multiprocessors. EMISARI was a communications network, intended to collate the knowledge of distant experts in order to assist the US government’s emergency response capabilities. EMISARI allowed these spatially disparate researchers to “log-in” to the national network using Texas Instruments teletype machines connected to long-distance telephone lines and exchange locally gathered information on topics ranging from regional economic disruptions to local commodity shortages. Turoff’s EMISARI was a proto-internet for data wonks, strung together on sophisticated calculators. In its initial implementation, Turoff’s early internet featured an ancillary function called Party Line. Like INTU-VAL’s spatial simulation, Party Line was considered by Turoff to be “[a] minor accomplishment compared to what else we were doing.” In fact, Party Line was something like the first digital chat room. Designed to obviate awkward conference calls, features such as the ability to see other participants in a network or to toggle their speaking privileges originated in Party Line, and relied on a sophisticated series of auditory signals indicating the status of other participants in the electric room.

EMISARI’s core functionality maintained a niche user base until 1986, but along the way, a peculiar thing began to take place in Turoff’s “minor accomplishment.” While originally intended to address provincial concerns such as avoiding interruptions or tracking the contribution of individual participants, Turoff became convinced that Party Line’s interfacing of distant minds generated forms of cognitive friction between participants that rapidly co-evolved the creative intelligence of the group. Like Kamnitzer and Kay a decade previously, Turoff and his partner Starr Roxanne Hiltz quickly imagined this artificially accelerated cognition scaling from an electric room to civilization itself. The two would go on to found the Electronic Information Exchange System (EIES, pronounced “eyes”) in 1978 at the New Jersey Institute of Technology. Part asynchronous communications network, part interstate collaborative, part electric new world government in waiting, EIES was premised on a radical understanding of software’s potential as an interface for collectively engineering our own epigenetic evolution by processing users’ cognition as spatially redistributable content. The roughly two-thousand members of EIES, which included figures like Stewart Brand and Alvin Toffler, began referring to this projective model of software as “Groupware.” Between 1978 and the mid-1980s, EIES members collectively co-engineered their subjectivities as alternative artistic, political, and spiritual aggregates, creating everything from crowd-sourced soap operas to some of the earliest treatises on online aesthetics.

Like Popper’s plastic control, Groupware’s groups were defined by a cybernetic ontology of the user later characterized by Andrew Pickering as “nonmodern.” This nonmodern ontology refused a dualism between cognition and the world (or clouds and clocks) to transform the user from a fixed being into an emergent ecology. Early EIES members and Groupware theorists Peter and Trudy Johnson-Lenz later described these nonmodern assemblies of processors and “biological hardware” as “part computer software and part ‘imaginal software’ in the hearts and minds of those using it.” But unlike the discretization of the mentalium that now strings a causal chain from digital utopians like Kay through noir cyberpunks of the early internet age to contemporary cognitive capitalists, EIES’s Groupware insisted on the spatiality of this post-human ontology. While EIES’s diverse activities comprised some of the earliest forms of digital mass-culture, decades before the public popularity of the internet, their shared structures of thought emerged from a digital that was insistently spatial.

In a report by an EIES-affiliated techno-spiritualist organization called The Awakening, a taxonomy of groups is outlined that reads like an architect’s catalog of spatial types. The creative acceleration of group dynamics observed by Turoff in the first Party Line experiments of the early 1970s is attributed to classes of spatial dynamics such as boundaries, containment qualities, thresholds, and forms, which describe Groupware operating procedures such as user access, editing hierarchies, or session timeouts. For EIES, the architectural qualities of the chat room were not merely analogues for domesticating an unprecedented form of communication, but means of constructing another imagination for networked computation that foregrounded processing as a spatial proposition. These spaces drew out their users into expansive aggregates, congealing hardware and cognition into vast networked assemblies capable of undermining established spatial politics.

EIES’s proposition of the user as an aggregate architecture was as radical as it was inconsequential, and was made all the more irrelevant over the succeeding decades as communication protocols like the world wide web replaced software as the primary technical avatars of the digital. Users, in turn, defaulted to the disembodied brains of cognitive capitalist mentaliums, or to End User Undertakings prescribing the cloud-like ignorance of so many orbiting eyeballs. However, it is precisely Groupware’s status as a footnote in the history of the digital that makes remembering it so important. In foregrounding the political agency of a renewed spatial imagination in our considerations of planetary computing, Groupware suggests an architectural model of life itself as an alternative to both the cul-de-sac of clock-like feedback and the sci-fi pessimism of a post-Snowden internet.

Like Turoff’s earliest experiment, this alternative digital imagination would ask architecture to see platforms for design and construction as ad hoc planetary rooms within which silicon hardware and atomized users organize themselves into spatial aggregates. It would allow us to look beyond simulated objects endlessly orbiting around screens and toward software as the fabrication of ourselves. It would insist on seeing boundaries in Revit edit permissions, containment in the indebted migrations of international construction workers, or thresholds in the convoluted models of authorship that underwrite contemporary online outsourcing economies. Most importantly however, this reframing of computation after decades of screen-based precision might offer us the strange and hopeful realization that, from Alan Kay’s spatial cognition to the contemporary design of ourselves as vast online aggregates, the digital has been architecture all along.
