* researcher in infrastructure futures and theory (University of Sheffield, UK)
* science fiction author and literary critic
* writer, theorist, critical futurist
* dishevelled mountebank


Sf and solutionism / QuantSelf and behaviourism

2 min read

Evidence, if such were needed, that C20th science fiction and the solutionist impulse are two prongs of the same fork:

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor [and populariser of pulp science fiction] Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

(I'm fairly sure there are still a few big names in sf whose approach to writing and life very much resembles Gernsback's Excludo-Helm(TM), if only metaphorically so.)

The above is excerpted from a pretty decent New Statesman joint that makes a clear and explicit comparison between the Quantified Self fad and B F Skinner's operant conditioning; shame they didn't reference any of the people who've been arguing that very point for the past five years or so, but hey, journalism amirites?


Lessons from infrastructural history: Angkor Wat edition

1 min read

Perhaps Ozymandias died of thirst?

Evans, however, now believes that environmental factors played a significant part [in the collapse of Angkor Wat]. “Looking at the sedimentary records, there is evidence of catastrophic flooding,” he says. “In the expansion of Angkor, they had devastated all of the forests in the watershed, and we have detected failures in the water system, revealing that various parts of the network simply broke down.” With the entire feudal hierarchy reliant on the successful management of water, a break in the chain could have been enough to prompt a gradual decline.

Optimisation is the enemy of resilience. And if you think that you don't live in a feudal hierarchy reliant on the successful management of water, I recommend that you look at capitalism from a slightly different angle.


Your humble servant: UI design, narrative point-of-view and the corporate voice

5 min read

I've been chuntering on about the application of narrative theory to design for long enough that I'm kind of embarrassed not to have thought of looking for it in something as everyday as the menu labels in UIs... but better late than never, eh?

This guy is interested in how the labels frame the user's experience:

By using “my” in an interface, it implies that the product is an extension of the user. It’s as if the product is labeling things on behalf of the user. “My” feels personal. It feels like you can customize and control it.

By that logic, “my” might be more appropriate when you want to emphasize privacy, personalization, or ownership.


By using “your” in an interface, it implies that the product is talking with you. It’s almost as if the product is your personal assistant, helping you get something done. “Here’s your music. Here are your orders.”

By that logic, “your” might be more appropriate when you want your product to sound conversational—like it’s walking you through some task. 

As well as personifying the device or app, the second-person POV (where the labels say "your") normalises the presence within the relationship of a narrator who is not the user: it's not just you and your files any more, but you and your files and the implied agency of the personified app. Much has been written already about the way in which the more advanced versions of these personae (Siri, Alexa and friends) have defaults that problematically frame that agency as female, but there's a broader implication as well, in that this personification encourages the conceptualisation of the app not as a tool (which you use to achieve a thing), but as a servant (which you command to achieve a thing on your behalf).

This fits well with the emergent program among tech companies to instrumentalise Clarke's Third Law as a marketing strategy: even a well-made tool lacks the gosh-wow magic of a silicon servant at one's verbal beck and call. And that's a subtly aspirational reframing, a gesture -- largely illusory, but still very powerful -- toward the same distinction to be found between having a well-appointed kitchen and having a chef on retainer, or between having one's own library and having one's own librarian.

By using “we,” “our,” or “us,” they’re actually adding a third participant into the mix — the people behind the product. It suggests that there are real human beings doing the work, not just some mindless machine.


On the other hand, if your product is an automated tool like Google’s search engine, “we” can feel misleading because there aren’t human beings processing your search. In fact, Google’s UI writing guidelines recommend not saying “we” for most things in their interface.

This is where things start getting a bit weird, because outside of hardcore postmodernist work, you don't often get this sort of corporate third-person narrator cropping up in literature. But we're in a weird period regarding corporate identities in general: in some legal and political senses, corporations really are people -- or at least they are acquiring suites of permissible agency that enable them to act and speak on the same level as people. But the corporate voice is inherently problematic: in its implication of unity (or at least consensus), and in its obfuscation of responsibility. The corporate voice isn't quite the passive voice -- y'know, our old friend "mistakes were made" -- but it gets close enough to do useful work of a similar nature.

By way of example, consider the ways in which some religious organisations narrate their culpability (or lack thereof) in abuse scandals: the refusal to name names or deal in specifics, the diffusion of responsibility, the insistence on the organisation's right to manage its internal affairs privately. The corporate voice is not necessarily duplicitous, but through its conflation of an unknown number of voices into a single authoritative narrator, it retains great scope for rhetorical trickery. That said, repeated and high-profile misuses appear to be encouraging a sort of cultural immunity response -- which, I'd argue, is one reason for the ongoing decline of trust in party political organisations, for whom the corporate voice has always been a crucial rhetorical device: who is this "we", exactly? And would that be the same "we" that lied the last time round? The corporate voice relies on a sense of continuity for its authority, but continuity in a networked world means an ever-growing snail-trail of screw-ups and deceits that are harder to hide away or gloss over; the corporate voice may be powerful, but it comes with risks.

As such, I find it noteworthy that Google's style guide seems to want to make a strict delineation between Google-the-org and Google-the-products. To use an industry-appropriate metaphor, that's a narrative firewall designed to prevent bad opinion of the products being reflected directly onto the org, a deniability mechanism: to criticise the algorithm is not to criticise the company.
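The grammatical distinction being drawn here is concrete enough to sketch in code. Below is a minimal, entirely hypothetical string table keyed by narrative voice -- the function name, type, and label strings are my own illustrations, not taken from any real product's copy guidelines:

```typescript
// The three narrative voices distinguished above, applied to the same
// hypothetical UI label. Each choice frames the user's relationship
// with the product differently.
type Voice = "first" | "second" | "corporate";

function libraryLabel(voice: Voice): string {
  switch (voice) {
    case "first":
      return "My Music"; // product as extension of the user: ownership, privacy
    case "second":
      return "Your Music"; // product as personified assistant addressing the user
    case "corporate":
      return "We saved your music"; // third participant: the people behind the product
  }
}
```

Note that the "corporate" string is the only one that cannot avoid implying human agency behind the interface, which is precisely why a style guide for an automated product might forbid it.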


In the golden era of British railways, the rail companies -- old masters of the corporate voice -- insisted on distinctive pseudo-military uniforms for their employees, who were never referred to as employees, but as servants. This distinction served largely to deflect responsibility for accidents away from the organisation and onto the individual or individuals directly involved: one could no more blame the board of directors for an accident caused by one of their shunters, so the argument went, than one could blame the lord of the manor for a murder committed by his groundskeeper.


The end of the codex and the death of Literature

2 min read

Interesting (and appropriately rambling) talk by Will Self, expanding on his recent thesis that a) the technology of the codex is on the way out, and thusly b) so is capital-L literature. I'm not sure I buy it completely, but his argument goes to lots of interesting places, and I recognise a lot in his description of the academy as a sort of care-home for obsolescing art-mediums such as the modernist novel.

(The audience, on the other hand, replete with writers and teachers of writing -- two categories that overlap a great deal, as Self points out -- rejects his description with such venom that it's hard not to characterise their response as classic denial. That said, these are anxious times in the academy, and particularly at the arts and humanities end of it, and being lectured about the demise of your field of expertise by a man still managing to make a living producing that which you study must be a bit galling; in essence, Self does to literary scholars here what Bruce Sterling repeatedly does to technologists and futures types. The difference appears to be that literary scholars know a Cassandra when they hear one.)

Also of interest is Self's characterisation of the difference between literary fiction and genre fiction, perhaps because it is both vaguely canonical and seemingly unexamined: that old tautologous chestnut about literary fiction not being a genre because it doesn't obsess over reader fulfilment and boundary-work. That may be true of literary writers, perhaps (though Barthes is giving me some side-eye for saying so), but it is to ignore the way the publishing industry deals with the category, which is almost entirely generic... and that's a curious oversight for someone who predicates their argument about literature's decline on explicitly technological dynamics. Nonetheless, well worth a watch/listen.


Narrative strategies in prose and cinema

4 min read

Some interesting and practical material in this interview with Alex Garland regarding the different narrative affordances of prose and cinema:

DBK: I can imagine a more robust form of that argument just being: A book can deal with ideas, a novel can deal with ideas, in a much more robust way than a film can, so express the ideas in a book.

AG: In its best medium.

DBK: In its best medium, right.

AG: And then I’d say, “Well, it probably depends on the idea. And it depends on the way you want to explore the idea.” If you want to explore it in a forensic way, then what you said is probably true, because just in terms of information, you can get much more information into a novel. Rather, you can get explicit information into a novel that allows you, in a concrete way, to see exactly what the sentence is at least attempting to say, within reason. In film, the ideas are more often alluded to. In the film I just worked on, which is an ideas movie, I would say some of the ideas are very explicitly put out there and literally discussed, and others of them are there by illustration or by inference, just maybe simply in the presentation of a thing. Of a robot that looks like a woman, but isn’t a woman, but maybe it is a woman. There’s an idea contained within that. There is, in fact, a brief discussion about it. But, broadly speaking, in a novel, you would be able to have much more full and forensic-type explanations or discussions.

Film relies much more on inference, but that’s its strength, too. I’ve often thought, as someone who has worked in books and film, about what you can do in a film by doing a close-up, or even a mid-shot, of a glance where somebody notices something, and how easy it is to pack massive amounts of information into that glance in terms of what the character has just seen, or what they haven’t seen. And in a book, how you can never quite throw the moment away, and yet contain as much within it as you can with film. The thing I like most about film is probably that thing. It has this terrific way of being able to load moments that it’s also throwing away, and that’s harder in a novel.

DBK: To be contrarian about that, for a second though . . .

AG: Cool. [Laughter]

DBK: In a book you can actually get inside someone’s head and just tell the reader what they’re thinking or inhabit their consciousness.

AG: Absolutely.

DBK: In a film, everything that the character is thinking has to be conveyed through their facial expression or body language.

AG: Or a bit of voiceover, yeah.

[Note how rare a technique the voiceover is in modern cinema. Note also, by comparing the original cinematic release of Blade Runner with the director's cut, the extent to which the addition or removal of a first-person voice-over completely changes the affect of a film.]

DBK: One thing that strikes me a lot about movies is that the character is deceiving other characters in the scene, but they have to be doing it in a way that’s obvious enough that the audience sees through them, whereas, why don’t the characters in the scene see through them?

AG: Well, it’s funny you should say that, because actually in Ex Machina the characters are often simultaneously deceiving the audience and the other characters. One of the conversations with the actors, prior to shooting, was about making sure that we didn’t telegraph in the way that film often does, in exactly the way you said, that you abandon that relationship. Now, that’s problematic in some ways, because it makes character motivation more ambiguous, but in other ways, that’s also a strength. That may be something I’m pulling from novels, I don’t know, but I didn’t think I was. I thought it was a more explicit version of show-don’t-tell. It was taking show-don’t-tell to a sort of extremist degree, or something like that. But interestingly, there are many, many times in Ex Machina where a lot of effort is made to not have a complicit understanding, or an implicit understanding, between the audience and a character.



Frank Cottrell Boyce:

Innovation doesn’t come from the profit motive.

Innovation comes from those who are happy to embark on a course of action without quite knowing where it will lead, without doing a feasibility study, without fear of failure or too much hope of reward. The engine of innovation is reckless generosity...

This. A thousand times, this.



The uses of story: narrative strategies for speculative critical designers

1 min read

On 5 July 2016, I spent the day at the London College of Communication as a guest lecturer for a summer school on speculative and critical design. Courtesy of course leaders Tobias Revell and Ben Stopher, here's a video of my lecture.


Robin Hanson's _The Age of Em_ | Books | The Guardian

Early on, Hanson cheerfully says: “This book mostly ignores humans.”

This human mostly ignores economists who believe that being aware of the existence of cognitive bias makes them magically immune from it. Go back to touching yourself with the invisible hand.


Story of cities: what will our growing megacities really look like? | Cities | The Guardian

The mainstreaming of urban design fictions continues apace.

For the moment, we remain largely wedded to superficial visual futures. The likelihood is that the prevailing chrome and chlorophyll vision of architects and urbanists will become as much an enticing, but outdated, fashion as the Raygun Gothic of The Jetsons or the cyberpunk of Blade Runner. Rather than a sudden leap into dazzling space age-style cityscapes, innovations will unfold in real-time – and so too will catastrophes. The very enormity of what cities face seems beyond the realms of believability, and encourages postponement and denial.


Terreform One’s ideas and designs might seem wildly visionary on first glance but looking closer, they go beyond speculative concepts into proposing functioning models. “What we do is create very detailed fictive scenarios that don’t promise the future will end up this way, but rather we think about what the inherent issues are and bring these to the foreground and talk in a logical way how cities might respond.”


All Problems Can Be Illuminated; Not All Problems Can Be Solved [Ursula Franklin]

While producing wonderful artifacts and mind-blowing techniques, prescriptive technologies create a world in which it’s normal to do what we’re told, and to do so without the ability to control and shape the process or the outcome. They also require a command and control structure. A class of experts—the architects, the planners—and others who follow the plans and execute the tasks. This structure creates a “culture of compliance . . . ever more conditioned to accept orthodoxy as normal and to accept that there is only one way of doing ‘it.’”8 A view through Franklin’s lens reveals that, as a “byproduct” of what we call progress, we have created societies easily ruled and monitored— and accustomed to following orders whose ends they don’t question.