Inoculate against Innocuousness

Recent developments in cosmology, quantum sciences, complex systems, process physics and ecosystem studies reveal that the world is not only stranger than we imagine, but possibly stranger than we can imagine. However, the surprises, serendipities and surrealisms that make the physical world such an exciting place to live in are cut out, removed from consideration, left unimplemented by the visionaries of the new social and other spaces. There will be no surprising social slips, appearances of magic or highwire trapeze tricks if the world acts how it appears, if it simply is what it appears to be. So it would seem that some dressing up is needed. But if the actual machinery of the world is merely covered over, it becomes a monster clothed in cartoon cloth; it leaves the users locked in the front world with their arses open to the back-door hacker ideology converts, to use a rather distasteful phrase. We need something where there is no limitation to investigation, to perception. There cannot be a layer beyond which the users cannot descend. No, emergence of higher-level properties from the lower levels, turtles upon turtles, is the only really feasible technique to generate enveloping, convincingly detailed worlds.

Our starting point moves, but repeatedly cycles back to the fundamentals of physical space populated with sweaty human bodies. Overcoming the vaguenesses of abstraction to move to the hard physical, to cheat the users' perceptions by using that with which they are most comfortable. Though cheating is perhaps the wrong word; perhaps it is cooption, perhaps it is a compatibility-maximising process. Perhaps it is just easiness. Perhaps it is the observation that the only reasonable way to deal with the back pains of the desktop is to get out to the workshop, cut, join and otherwise modify the physical surroundings, and get to a point where something physical and present stands before me with the infinite detail of a physical object: scratches, folds, sap lines and weld marks.

A reactive space, filled with buttons, levers, trackers of motion and gesture, effectors into the space along all axes (audio, visual, haptic, aromatic and otherwise), is but the cleanroom of a chip production plant, the main room of a slaughterhouse built to EU sanitary standards, unless inoculated against innocuousness by the addition of autonomy. An autonomous entity, a proto-perceptual system, a pool of interacting agents, a flexible script unfolding along its own axes. Reacting, ignoring, ever busy, computationally flamboyant, carrying on a breadth of action and interaction with its own self-defined and ever-changing agenda of action, reaction, miscommunication and provisional perception; breed, modify, move, build, communicate, recycle, grind into powder and start all over again. Such a flow, a pool, a swamp must carry on regardless, yet must impinge upon and be impinged upon by the physicality of the space.
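The pool of agents carrying on regardless, yet perturbable by the space, can be caricatured in a few lines. This is only a hedged sketch under assumed simplifications (a discrete tick loop, a random agenda); the names here (Agent, AGENDA, the footstep event) are hypothetical illustrations, not an implementation from any actual system.

```python
import random

# Hypothetical agenda of actions, taken from the list in the text.
AGENDA = ["breed", "modify", "move", "build", "communicate", "recycle", "grind"]

class Agent:
    """A minimal autonomous agent: busy on its own agenda, sometimes reactive."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.state = self.rng.choice(AGENDA)

    def tick(self, sensor_event=None):
        if sensor_event and self.rng.random() < 0.5:
            self.state = "react:" + sensor_event   # impinged upon by the space
        else:
            self.state = self.rng.choice(AGENDA)   # carries on regardless
        return self.state

pool = [Agent(seed) for seed in range(8)]
for step in range(3):
    event = "footstep" if step == 1 else None      # someone crosses the space
    states = [agent.tick(event) for agent in pool]
```

The point of the sketch is only the asymmetry: the agents never wait for input, but input can deflect them.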

The addition of autonomy to systems that we are attempting to present to an interested and interesting public is a rather nonlinear, easily perturbed and somewhat hazy process. The experience must be designed to be suitably satisfying, without us becoming the one-eyed screaming director who needs everything to be exactly as she envisaged; yet without falling back upon arguments about the unpredictability of emergent behaviour in artificial life environments, or some such pseudo/proto-scientific nonsense, as the whole situation descends along lines of least resistance to the equivalent of grey goo: a formless, drab and messy environment with all the interest of a bucket of slops.

Of course not every emergent world is fun to live in; the questions about the inevitability of life given basic chemistry, or of intelligence or culture given the appropriate background, remain speculative. What is needed is the development of some kind of husbandry: taking hold of the reins of development, learning not only to surf the waves of change in a complex system but to plan strategies for trimming them to a suitable shape (in whatever sense “suitable” makes sense) over time. Tom Ray asked whether, given soil chemistry, the spectrum of the sun and DNA transmission, one could foretell something like rice, at least in a wild form. Then again, every living ecosystem with a minimum level of complexity, open to energetic input but otherwise closed, falls into some dynamic equilibrium over time, a balance of oxygen usage and creation, energy storage and liberation. How do we live with such things, how do we learn to live with them yet let them follow their own, murkily defined, goals?
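The settling into dynamic equilibrium can be illustrated with the simplest possible toy: logistic growth, where a population under a fixed resource ceiling converges to its carrying capacity. This is a caricature of an ecosystem, not a model of one, and the parameter values are arbitrary illustrative choices.

```python
def logistic_steps(x0=0.05, r=0.4, capacity=1.0, steps=200):
    """Iterate discrete logistic growth and return the final population level."""
    x = x0
    for _ in range(steps):
        x += r * x * (1 - x / capacity)   # growth slows as resources tighten
    return x

final = logistic_steps()
assert abs(final - 1.0) < 1e-6   # the dynamics settle at the carrying capacity
```

With a larger growth rate `r` the same map famously stops settling and oscillates or goes chaotic, which is exactly the husbandry problem: the equilibrium is a property of the parameters, not a guarantee.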

Is a balance possible here? Can we implement an artificial environment that is both autonomous enough to develop along lines of interest without constant prodding and control from interfering humans, yet can be manipulated enough so as to be worked in directions leading to humanly interesting results?

Nature might abhor a vacuum, but her technique of filling it, rampant overproduction in all fields, is possibly not the most efficient, nor the most replicable. Attempting to run some kind of system with a suitably interesting level of complexity, whether a simulation or a totally autonomously defined system of exchange and modification, requires not only enough grunt to get done the things that one can perceive, but also the grunt required to build up the complexity level by level, aspect by aspect, from the micromotions and dumb physics to the sharing of data and management of information flows. Moore's Law might help us here for a while, or perhaps the solution lies in locking together swarms of redundant, obsolete machines with lightweight operating systems to churn the system through its motions, local annihilations coming with CPU overheating, an ecosystem both virtual and fundamentally affected by the physics of the world in which the hardware runs itself into the ground.

Is a balance possible here? Can we implement an artificial environment that is both autonomous enough to be interesting and generate some facets of novelty, yet computationally feasible, not dependent upon the massive parallelism of physical processes?

Complexity is one of those buzzwords that hangs on; Stephen Hawking's comment about the 21st century being the century of complexity achieves maximal repetition, perhaps becoming meaningless through ubiquity. Complexity is applied to management theory, global terrorist networks, internet connectivity and extremophile lifeforms. Stephen Wolfram's ego manages another outburst in his new book, claiming to have discovered a new kind of science, complexity being reduced to the interrelations and interferences of many small systems, in his case one-dimensional cellular automata (CA). Neither the academy, the business world nor anything else has managed to diminish his need/desire for self-promotion. Kurzweil's review of this book seems to hit the nail on the head: without the adaptive nature of a learning / adapting / evolving / whatever system, the preprogrammed complexity of a CA's development is as sterile as the well-educated discussion of angels dancing on pinheads. The data is all set up in advance; although the exact results can only be obtained by running the program, those results are completely predictable, sterile, controlled, essentially unsurprising. Kurzweil is perhaps being unfair here; adaptation is possible in any programmed system, and as the CA of which Wolfram speaks have (apparently) been shown to be computationally universal, the implementation of algorithms that use adaptive techniques is possible. Kurzweil's main problem seems to lie in the noninteractive aspect, in the closure of the world of action (the CA state space) from the external world of interaction. But possibly there is a further problem. Wegner and Goldin (amongst others) lay out the claim that the standard of computability (i.e. computational universality), the universal Turing machine, is fundamentally less computationally powerful than the processes available to an interactive machine.
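The sterility Kurzweil points at is easy to see in code. Below is a minimal sketch of the kind of one-dimensional CA Wolfram studies, using Rule 110 (the rule later shown to be computationally universal); the width and generation counts are arbitrary illustrative choices. Once the initial row is fixed, every run unfolds identically.

```python
def step(cells, rule=110):
    """Advance one generation; each cell looks at itself and its two neighbours (wrapping)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Pack the three-cell neighbourhood into a number 0..7, then look up
        # the corresponding bit of the rule number.
        neighbourhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighbourhood) & 1)
    return out

def run(width=31, generations=5, rule=110):
    cells = [0] * width
    cells[width // 2] = 1          # a single live cell in the middle
    history = [cells]
    for _ in range(generations):
        cells = step(cells, rule)
        history.append(cells)
    return history

# Kurzweil's point in one line: run it twice, get bit-identical histories.
assert run() == run()
```

Nothing outside the state space ever touches the system; the "world of action" is closed the moment `run` begins.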
Perhaps the argument can be reduced to the observation that a good lawyer gets more out of the interactive cross-examination of a witness in a courtroom because each question can be made dependent upon the preceding one; it is not necessary to obtain all answers to all questions in advance, only the answers to those questions that are most necessary given the current state of the process. Such theoretical developments play into our hands as we work with these ideas and problems of setting up interesting, interactive, adaptive experimental situations; perhaps the computational power we need is implicit in the interaction, and by accepting the problem as a whole we bypass the problems of computational flamboyance and wastage and leap straight to the resulting highly adaptive, reactive system. Adaptation, interaction, physicality, space: lines of investigation that wind around an axis of development, information flow between an environment and a system, increasing complexity based upon learning and adaptation to the given situation. We hope that the integration of very physical interfaces and interactions, coupled with adaptive protocognitive systems, will lead to an interesting and worthwhile complexification. An interesting situation for the participants, public individuals interacting and moving through a space filled with others and the other.
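The lawyer's advantage can be sketched as the gap between committing to all questions in advance and choosing each question from the previous answer. Interactively, a yes/no interrogation over a hundred candidates needs at most seven questions (a binary search), where a questioner who must fix every question beforehand needs one per candidate. The function names and ranges below are purely illustrative.

```python
import math

def adaptive_interrogation(answer_oracle, lo, hi):
    """Pin down a secret in [lo, hi]; each question depends on the last answer."""
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if answer_oracle(mid):   # "is the secret greater than mid?"
            lo = mid + 1
        else:
            hi = mid
    return lo, questions

secret = 37                      # the witness's hidden answer
found, asked = adaptive_interrogation(lambda m: secret > m, 0, 99)
assert found == secret
assert asked <= math.ceil(math.log2(100))   # at most 7 questions, not 100
```

The interaction does not add raw computing power to the questioner; it lets the process discard, at each step, every branch the previous answer has already closed off.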

– TimBo sep.02

inoculate_against_innocuousness.txt · Last modified: 2007/06/11 14:33 (external edit)