(brief and somewhat distracted musings concerning the nature of nodular transcriptronic pulse mechanic space inverters, amongst other things)
sutChwon = n[tp]M = subject to change without notice
[author's note: this piece was first published in the TimesUp newsletter after emerging from discussions before, during and after the closing the loop sessions in 02.2000eV. it deals with a protocol/system and its implementation in the present tense, even though it is still firmly in the realm of speculation, rather than active development. any comments, feedback, abuse, patent violations and|or code would be most appreciated [hermes.at.phl.cx]]
this article is essentially a digression into developments of various ideas that are either half baked, half hearted or half finished, but nonetheless worth pursuing. an attempt at describing a glue layer between indeterminate components. a framework for developing frameworks. another thing that can go wrong in a fragile, indeterminate system. it describes a system, or series of protocols, which may simply be described as facilitating collaboration between networked participants. it should operate in real time, delayed time or stuttering network time, enabling synchronisation but not depending upon it. it should be robust enough to maintain communication and exchange of data in unstable situations, learn how to interact with unknown or new systems, and run with a range of hardware, software and human configurations. it should be decentralised (peer-to-peer), adaptable, flexible. above all, it should focus on doing what computers do well (crunching data) and, as a result, enable the people who are using it to do what they do well (whatever that may be).
it is more or less a given that different groups and individuals work in different ways in collaborative situations. there are a range of modes of production, interaction and even (perhaps most importantly) modes of explaining “how?”, “what?” and “why?”. a filmmaker, for example, will approach a project quite differently from a musician, architect or programmer. it is not within the scope of this article to describe any of this in detail, apart from pointing out aspects of a system that should be flexible enough to work with such a range of approaches, and in fact enhance them.
most of us are using networks in some way or another for the projects discussed, presented or developed during Closing the Loop. since there are a range of approaches, methodologies and attitudes to this, it seems appropriate to think of methods which would facilitate connection between remote participants who may be uncertain of what data is being sent, how the data will be used, or why, in fact, there is any data at all (since it's just a game).
the networked aspect of these projects can frequently be described as a struggle of connections and configurations fuelled by the “ecstasy of communication at a distance”. often, it is the network itself which is the object of fascination: the performer, the stage, the invisible centre of attention focusing geographically dispersed centres of attention.
for the purposes of this text, i'll refer to the system as “sutChwon”, or alternatively “n[tp]M”. this is, of course, “subject to change without notice”, for want of a more apt title.
there are no standard setups in this area; most of us who are using networks for exchanging sound, video, text, sensor data or pong scores use a range of equipment, for a range of purposes. the difficulty in establishing useful connections between these disparate elements often increases rapidly with the number of people involved in the setup/network (nodes) and the differences in the setup at each node.
n[tp]M is envisioned as a glue (inter)layer, or alternatively as a set of tubing, to facilitate communication between a wide range of otherwise uncooperative software, hardware and people. to be useful (and/or useable) it should interface easily with other software and other mechanisms, and be adaptable to a wide range of situations, from telerobotic gardening to the automated generation of sound from logfiles on a server.
most networking technologies present an interface to a specific level of network protocols, eg. an ethernet packet sniffer, mail program or web browser. sutChwon attempts to provide network abstractions and/or communication on 4 distinct layers (which are, of course, interdependent and non-standard).
the network layer; in which the computers exchange, and manage the exchange of, packets of data.
the syntactic layer; in which quantifiable information about the data is exchanged, eg. what format the data is in, where it is (and where it's going), what protocols are being used, node response times, connection reliability, transmission times, etc. easily crunched numbers.
the interpretation layer; in which the information collected and exchanged on the syntactic layer is analysed and/or organised to be presented in a human readable form. user level interaction with the connected systems would most often occur on this level.
the conversation layer; in which humans talk about guinea pigs (amongst other things).
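the four layers above could be sketched, very loosely, as tagged envelopes moving between nodes. everything here is hypothetical (the class and field names are invented for illustration; sutChwon specifies none of them), but it shows how syntactic-layer facts might travel with a payload and be reduced to something readable on the interpretation layer:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Layer(Enum):
    """the four sutChwon layers, lowest first (names are hypothetical)."""
    NETWORK = auto()         # computers exchange, and manage the exchange of, packets
    SYNTACTIC = auto()       # quantifiable facts: format, routing, timings, reliability
    INTERPRETATION = auto()  # syntactic info analysed into human readable form
    CONVERSATION = auto()    # humans talk about guinea pigs (amongst other things)

@dataclass
class Envelope:
    """one unit of exchange, tagged with the layer it belongs to."""
    layer: Layer
    payload: bytes
    meta: dict = field(default_factory=dict)  # syntactic facts riding along

def summarise(env: Envelope) -> str:
    """an interpretation-layer view: crunch the numbers into one readable line."""
    return f"{env.layer.name.lower()}: {len(env.payload)} bytes, meta={sorted(env.meta)}"
```

user-level interaction would sit on top of `summarise`-like views, while the network and syntactic layers stay machine-to-machine.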
it is often useful, and in fact necessary, to be able to access several layers of abstraction above the basic tcp or udp layer, as well as several protocols. sutChwon provides connections on these usually distinct network layers, which may be automatic, or configured as required.
the most common setup i have seen for networked sound exchange in a performance context is an encoder/decoder (most often realencoder or an mpeg server, but more recently quicktime streaming server), a telnet client (for the inevitable troubleshooting and discussion during the performance) and of course the instruments themselves (which may or may not be on the same computer). other notable setups have involved ftp sites, web interfaces, CU-SeeMe, email and, unfortunately, MIDI. things become rapidly more involved with the inclusion of visual material, control data or any attempt at synchronisation.
in order to fit between this range of setups, sutChwon should utilise existing technologies, such as XML or MIME (or http headers based on MIME types) and be capable of implementing diverse protocols like Open Sound Control or telnet. since it will be used in a range of existing systems it must be able to interact with them with as little modification to those systems as possible, while building on their current strengths.
a substantial amount of information about network activity, data exchange and interconnection of parts can be obtained automatically and presented in a unified manner (hopefully at an appropriate level of detail). the system should be as automatable as possible, so it can be automated as required (which would depend on its context). for example, having a graphic display of who and what is connected to the network, what sources they are using/providing and how reliable and fast (slow!) particular connections are, should make such things easier to set up, and easier to use from both a technical and creative point of view. having the system respond to particular events, or changes in the network should make a variety of dynamic generative nodes possible.
since the system is flexible and modular, this information can be displayed and used in a variety of ways, and also for a variety of purposes. display and presentation of this collected data is itself an active hci/visualisation problem (again, something beyond the scope of this text). n[tp]M should collect network information in such a way that it can be presented in familiar ways to people with a diverse range of computer skills/uses. however, to be a useful tool there definitely should be useable, readable interfaces to this slippery layer of glue. ideally, such interfaces would enhance existing ones on their own terms (eg. a text based programming interface, or a timeline editing system). in this sense, it is automation to facilitate both transparency (disappearing into current systems) and a wider range of vision (extending the scope and interoperability of the system by increasing what is visible).
known data formats can be understood using a MIME-like mechanism, so they can be dealt with internally, or passed to relevant subsystems/instruments or programs. most media files or data which are exchanged can be described in fairly basic terms, so a common format description method could be developed (if it hasn't been done already) to describe unknown formats, or to extend n[tp]M. something along the lines of XML documents, describing things like headers and byte ordering along with some human understandable description, could be used.
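such a description might look something like the following. the schema, the type names and the sensor-packet format are all invented for illustration (sutChwon specifies no actual schema); the point is only that a node receiving this document, and nothing else, could start unpacking the bytes:

```python
import struct
import xml.etree.ElementTree as ET

# a hypothetical format description, the kind of thing one node might
# send another: headers, byte ordering, and a human understandable comment
DESCRIPTION = """
<format name="sensorpacket" byteorder="big">
  <field name="id"    type="uint16" comment="which sensor"/>
  <field name="value" type="int32"  comment="raw reading"/>
</format>
"""

# map the description's type names onto struct format codes
TYPECODES = {"uint16": "H", "int32": "i"}

def compile_format(xml_text):
    """turn a format description into a struct layout plus field names."""
    root = ET.fromstring(xml_text)
    order = ">" if root.get("byteorder") == "big" else "<"
    names = [f.get("name") for f in root.findall("field")]
    codes = "".join(TYPECODES[f.get("type")] for f in root.findall("field"))
    return names, order + codes

def decode(xml_text, data):
    """use a shared description to read otherwise unreadable bytes."""
    names, layout = compile_format(xml_text)
    return dict(zip(names, struct.unpack(layout, data)))
```

a node that can compile such documents never needs to have seen the format before; it only needs someone, somewhere on the network, to have described it.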
the main purpose of this format description is to enable otherwise unreadable data to be used. this makes it possible for participants in the network to exchange data without necessarily knowing if the other participants can understand it. a consequence of having an interconnected network is that if one node on the network can describe a data format, that description can be shared with any nodes requiring it.
explaining how protocols are implemented in a generalised manner, or in a manner in which they can be handled by another program, is a slightly trickier problem, but it is essentially dealt with in a similar way. a description of the protocol is passed to n[tp]M, which can choose to deal with it internally (if it has the capacity), send data to external programs, or a combination of the two. for example, a web based interface to a synth, with the form fields being assigned to controls on the synth. on another level, the html tags could be used to create sounds, or inversely, the synth output could be used to generate html data which is sent to a server by an external ftp program.
since a protocol is essentially a description of a state machine and the magic words to change it, or elicit a response from it, protocol descriptions could be implemented (at least skeletally) in some kind of finite state machine emulator which has assignable connections to other software.
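a minimal sketch of such an emulator, assuming protocol descriptions arrive as simple transition tables (the table shape and the toy hello/data/bye protocol here are invented, not anything sutChwon defines):

```python
class ProtocolMachine:
    """a protocol as a finite state machine: states, plus the magic
    words that change the state or elicit a response."""

    def __init__(self, start, transitions):
        # transitions: {(state, word): (next_state, response)}
        self.state = start
        self.transitions = transitions

    def feed(self, word):
        """apply one magic word; return the machine's response (or None)."""
        key = (self.state, word)
        if key not in self.transitions:
            return None  # unknown word in this state: ignore, don't crash
        self.state, response = self.transitions[key]
        return response

# a toy protocol expressed purely as a table; a description document
# could be compiled down to exactly this kind of data
TOY = {
    ("idle", "HELLO"): ("open", "OK"),
    ("open", "DATA"):  ("open", "ACK"),
    ("open", "BYE"):   ("idle", "OK"),
}
```

since the machine is just data plus one generic `feed` loop, its inputs and responses could be wired to other software (a synth, a logfile, a web form) with assignable connections.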
in order to be able to exchange data, either as a single meaningful lump or by using a specific protocol, both the sender and receiver need to be able to interpret this data, or dataflow. the traditional responses to data arriving in an unknown format usually fit into 2 categories: 1) output junk, 2) crash.
the n[tp]M model would facilitate the exchange of unknown, or arbitrary data with arbitrary protocols by establishing a protocol for the exchange of abstract descriptions of formats and protocols.
pnp is used when a n[tp]M node can't understand, interpret, or in fact do anything with a stream of data it is about to receive.
it seems there are 3 ways to handle data which is being sent in an unknown format, or using an unknown protocol (in order of difficulty):
- display it anyway, using another protocol
- exchange information about the format/protocol, then proceed
- try to learn the protocol (eg. challenge/response sequences)
of course there are the implicit 4th and 5th options: ignore it, and crash. neither of which i'm intending to implement (except possibly the 'ignore' option thru connecting the data source to a /dev/null abstraction).
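the options above could be dispatched along these lines. this is a sketch under assumptions: the strategy names, the shape of the registries and the ascii-filtering fallback are all invented, and a real node would do something richer than a lookup table:

```python
def handle(fmt, data, known, descriptions, ignore=False):
    """return (strategy, result) for data arriving in format fmt.

    known:        {format: handler} the node already understands
    descriptions: {format: handler} shareable descriptions from other nodes
    """
    if ignore:
        return ("devnull", None)               # the 4th option: /dev/null
    if fmt in known:
        return ("native", known[fmt](data))    # nothing unknown about it
    if fmt in descriptions:
        # exchange information about the format, then proceed:
        # a shared description lets us build a handler on the fly
        known[fmt] = descriptions[fmt]
        return ("learned", known[fmt](data))
    # display it anyway: strip anything illegible and show what survives
    printable = bytes(b for b in data if 32 <= b < 127)
    return ("raw", printable.decode("ascii"))
```

the crash option is, as promised, not implemented.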
in order to establish an exchange of data that will be as intelligible as possible to both the sender and receiver, the nodes go thru a series of query/response cycles in which they tell each other what types of data they can currently handle, what data is about to be sent and what protocols are mutually understandable. if one node can't currently handle the data or protocol, it will either request a description from another node and attempt to implement a mechanism for handling the requested data, or ask the sender to switch to another format (or protocol) if possible. if a reliable channel can't be established, each node will attempt to exchange data using the most compatible methods possible (which would have varying degrees of legibility) and possibly filter out known incompatibilities between the formats (control characters, for example).
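one hypothetical shape for such a cycle, reduced to a single function (the outcome names, the preference order and the fallback label are invented; a real negotiation would of course run over the network rather than over two lists):

```python
def negotiate(sender_formats, receiver_formats, fetch_description=None):
    """pick a format both ends handle, try to learn one, or fall back.

    fetch_description stands in for asking another node on the network;
    it returns a description document, or None if nobody has one.
    """
    # each node tells the other what it can currently handle
    common = [f for f in sender_formats if f in receiver_formats]
    if common:
        return ("agreed", common[0])
    # receiver can't handle anything on offer: request a description
    if fetch_description is not None:
        for f in sender_formats:
            if fetch_description(f) is not None:
                return ("learned", f)
    # no reliable channel possible: most compatible method, with known
    # incompatibilities (control characters etc.) filtered out
    return ("fallback", "filtered-raw")
```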
there should be a range of methods for enhancing the reliability of pnp. a simple method would involve registries of known data formats, protocols, interoperability matrices and known conflicts. a more complex method could use machine learning techniques to analyse exchanges (or read rfcs) and, with some human feedback, build templates or translators which would be made available to the network.
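the simple method is simple enough to sketch: given a registry of which formats each node understands, plus a list of known conflicts, a pairwise interoperability matrix falls out directly (node names and formats here are placeholders, and a real registry would be distributed rather than a single dict):

```python
def interop_matrix(registry, conflicts=()):
    """build {(a, b): shared usable formats} for every pair of nodes.

    registry:  {node: [formats it can handle]}
    conflicts: formats known to cause trouble, excluded from the result
    """
    conflicts = set(conflicts)
    nodes = sorted(registry)
    matrix = {}
    for a in nodes:
        for b in nodes:
            if a < b:  # each unordered pair once
                shared = (set(registry[a]) & set(registry[b])) - conflicts
                matrix[(a, b)] = sorted(shared)
    return matrix
```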
since a working version of sutChwon involves the development of several interdependent parts to function as described, it may take some time before it becomes particularly useful. however, one of its advantages is that its usefulness is enhanced by a diverse network of interconnected parts, and the more diverse the parts, the more useful it becomes.
active development will consist of four main parts:
- formalising a specification of the protocols and description formats
- writing a basic reference implementation
- incorporating feedback from potential users to determine the accuracy and scope of the specification and the implementation
- extensive testing in a wide range of existing systems
while each of these aspects is somewhat self contained, they are all absolutely necessary for n[tp]M to function in networks that are subject to change without notice.