
Richard Barbrook and ‘The book of imaginary media’

Eric Kluitenberg introduces his newly edited book on imaginary media (NAi Publishers) by asserting that we have systematically failed to communicate with the ‘other’ across cultural divides. He points to our collective fantasy that this miscommunication can be overcome by the miracle of a machine interface.

Imagine the power of the umpteenth gadget. Imagine that technology can go where no human has ever gone before, that technology can succeed where no human has succeeded – not only in space or in nature, but also in the interpersonal, specifically in communication with the other.

The book grew out of an earlier conference on ‘imaginary media’ and collects visionary media fantasies from the likes of Siegfried Zielinski, Bruce Sterling, Erkki Huhtamo and Timothy Druckrey.

The evening continues with a book talk by Richard Barbrook, who is writing on the future of artificial intelligence and is a professor at the Hypermedia Research Centre, University of Westminster.

these are running notes…

Barbrook looks at the 1939 World’s Fair in New York and its commuting, motor-car-loving, suburban fantasy. It was a future that big government and big business provided – the idea of a consumer paradise. It was also the first public presentation of artificial intelligence. Up until that point, artificial intelligence meant Frankenstein-like creatures that become possessed and turn on the world of their creators.

The ‘39 fair introduced the dominant trope of artificial intelligence as robots that would be friendly and servile.

By the ’64 fair, most people travelled by automobile – just as the motorized Fordist imaginary future of 1939 had envisioned. This gave people confidence that imaginary futures would come true, so they believed that space tourism would be viable by the ’90s. The ’64 fair also worked to distract people from the Cold War reality of the technology on display.

He says that when we think about artificial intelligence, we must understand why it was created. To understand the computers of the ’50s and ’60s, we must know that they were developed as weapons of genocide: the first IBM computers were sold either to the US government or to weapons manufacturers. The GUI, the light pen and network computing all came from the command-and-control functions of the military. In fact, the history of computing goes back at least to the Victorian age, when calculating machines were meant to assure naval dominance and act as the ‘machinery of government.’

The precursors to modern computers came out of Turing’s wartime work and were designed to break German encryption. Turing saw machines not as technical tools but as artificial brains – thinking machines. The idea that computers, given enough memory, could think goes back to the 1940s.

Norbert Wiener’s Cybernetics (control and communication in the animal and the machine) grew out of his work on anti-aircraft technology in the Second World War: machines can do what humans do. An interesting note about Wiener is that he was a socialist pacifist who refused to use his expertise in cybernetics for Cold War development. Instead John von Neumann, a Hungarian immigrant and ‘cold war warrior’, took the strategic lead. He theorized about robot warriors that could repair themselves in battle (‘The General and Logical Theory of Automata’).

Computing power was swiftly adopted by corporations because of its ability to automate clerical work. Machines increased productivity and also disciplined the people who used them.

Wiener saw that workers – and eventually the entire corporation – could be replaced by machines and become a body of artificial intelligence. The totalitarian fantasy of von Neumann’s artificial intelligence was the vision of a panopticon that would monitor people and eventually replace them.

By understanding the history of the future, we can work out ways of going beyond the dialectic of Wiener (distributed) and von Neumann (centralized).

Cybernetics and computerization get interesting when they go beyond Fordism into network computing – the model of open source and collaboration. Wiener inspired the radical idea of the bottom-up network that became the internet.

Futurism tends to fetishize technology and doesn’t focus nearly enough on the social conditions of its use. We need to envision a new type of future that puts people at the center of history. We make the machines and the future; our organizations and systems are mutable, and we can shape them in our own interest.
