
* researcher in infrastructure futures and theory (University of Sheffield, UK)
* science fiction author and literary critic
* writer, theorist, critical futurist
* dishevelled mountebank

velcro-city.co.uk

orcid.org/0000-0002-3555-843X

www.sheffield.ac.uk/usp/researchschool/students/paulraven

 

The difference between a writer and someone who dreams of being a writer is that the writer has finished.

2 min read

Kate Tempest on writing as a process:

... sometimes everything else disappears, and that happens very rarely. The rest of the time, it’s you writing when you don’t feel like writing, writing when you hate everything that’s coming out, forcing yourself to engage with the idea that it’s going to be shit no matter what you do, and trying to kind of break through that because of a deadline, or because you know that it’s very important to continue. This is what enables you to be a writer.

The difference between a writer and someone who dreams of being a writer is that the writer has finished. You’ve gone through the agony of taking an idea that is perfect – it’s soaring, it comes from this other place – then you’ve had to summon it down and process it through your shit brain. It’s coming out of your shit hands and you’ve ruined it completely. The finished thing is never going to be anywhere near as perfect as the idea, of course, because if it was, why would you ever do anything else? And then you have another idea. And then these finished things are like stepping stones towards being able to find your voice.

The thing is, everybody’s got an idea. Everybody wants to tell me about their ideas. Everybody is very quick to look down on your finished things, because of their great ideas. But until you finish something, I’ve got no time to have that discussion. Because living through that agony is what gives you the humility to understand what writing is about.

Exactly this.

 

In which I find Amitav Ghosh's missing monocle, and return it to him that he might see more clearly

5 min read

Poor old Amitav Ghosh is wondering where all the fiction about climate change might be... when in fact it's right under his nose, and he simply chooses to disregard it as being insufficiently deserving of the label "literature".

Right in the first paragraph, he answers his question and immediately discards the answer:

... it could even be said that fiction that deals with climate change is almost by definition not of the kind that is taken seriously: the mere mention of the subject is often enough to relegate a novel or a short story to the genre of science fiction. It is as though in the literary imagination climate change were somehow akin to extraterrestrials or interplanetary travel.

If for "literary imagination" we substitute "bourgeois imagination", that last sentence is no surprise at all -- because this is about genre, which is a proxy for class.

And when Ghosh surveys the few examples of supposedly literary fiction that have dealt with climate change, look what happens:

When I try to think of writers whose imaginative work has communicated a more specific sense of the accelerating changes in our environment, I find myself at a loss; of literary novelists writing in English only a handful of names come to mind: Margaret Atwood, Kurt Vonnegut Jr, Barbara Kingsolver, Doris Lessing, Cormac McCarthy, Ian McEwan and T Coraghessan Boyle.

Now, I'll concede that most of them have preferred generic labels other than science fiction for their works at one time or another, but it's very hard to make the case that Atwood, Vonnegut and Lessing haven't written works that slip very easily into the sf folksonomy, while McCarthy has written a very successful dystopia. So that's half of Ghosh's successes demonstrably working in the speculative fiction tradition... but they can't be speculative fiction, because they're too good for that trash. They've won awards and stuff -- awards that aren't rocket-shaped. Ipso facto, no?

To his credit, Ghosh gets pretty close to the technical distinction in narrative strategy that demarcates the dichotomy he's observing, via one of Moretti's more interesting theory-nuggets:

This is achieved through the insertion of what Franco Moretti, the literary theorist, calls “fillers”. According to Moretti, “fillers function very much like the good manners so important in Austen: they are both mechanisms designed to keep the ‘narrativity’ of life under control – to give a regularity, a ‘style’ to existence”. It is through this mechanism that worlds are conjured up, through everyday details, which function “as the opposite of narrative”.

It is thus that the novel takes its modern form, through “the relocation of the unheard-of toward the background ... while the everyday moves into the foreground”. As Moretti puts it, “fillers are an attempt at rationalising the novelistic universe: turning it into a world of few surprises, fewer adventures, and no miracles at all”.

I offer that the absence of Moretti's fillers -- often but not always replaced with anti-fillers designed to re-enchant the novelistic universe, and make of the universe a character in its own right -- is a way to describe one of the more fundamental strategies of speculative fictions, where it is preferable to have a world with more surprises, more adventures, and more than the occasional deus ex machina. Moretti's fillers are basically the opposite of worldbuilding; they remove complexity, rather than adding it.

And here we see the true root of the problem, the reason no one who identifies as a writer of "serious" "literary" fiction can handle climate change in their work -- look at Ghosh's language, here, and tell me he doesn't feel the class pressure of genre (my bold):

To introduce such happenings into a novel is in fact to court eviction from the mansion in which serious fiction has long been in residence; it is to risk banishment to the humbler dwellings that surround the manor house – those generic out-houses that were once known by names such as the gothic, the romance or the melodrama, and have now come to be called fantasy, horror and science fiction.

It's clearly not that "the novel" as a form can't handle climate change: science fiction novels routinely invert the obstacles set out in Ghosh's piece in order to do their work. It's that to upset those particular obstacles is to break the rules of Literature Club, to go slumming it with the plebes of genre fiction: literary fiction can't write about climate change, or about any other topic that requires an understanding of the storyworld as a dynamic and complex system, because -- as a self-consciously bourgeois genre in its own right -- it cannot commit the sin of portraying a world where the bourgeois certainties no longer pertain, wherein hazard and adventure and unexpected events are revealed to be not merely routine, but to be the New Normal.

Take it from a squatter in the generic out-houses, Amitav old son: there's only one way you'll ever get literary fiction that deals with climate change -- and that's by acknowledging, however grudgingly, that not only was science fiction capable of being literature all along, but that science fiction began by asking the question whose suppression is the truest trope of the literary: what if the world were more important than the actions of individuals?

 

Play as counterpoint to the infrastructural mediation of industrial spacetime

3 min read

Yeah, it's another Will Self talk, this time from Nesta's 2016 FutureFest -- he's pretty on-point with a lot of my interests these days, which makes me think I should probably make the effort to read more of his fiction*.

 

So this talk is ostensibly about fun and play, but Self being Self, it wanders off (see what I did there?) into psychogeography and other places. What really interested me in particular was his positioning of play as a counter to the constrictions of technologically mediated life: he talks of (and I paraphrase from memory and scribbled notes, here) the way in which smartphones have 'fused industrial time and space into our cerebellums', with the result that we are rarely (if ever) in that state of unplacedness and unproductivity which the dérive was designed to discover. Now, this is scarcely an original observation on Self's part (Gibson's Blue Ant trilogy is in some respects entirely about what one character refers to as the 'eversion of cyberspace'), but the positioning of play and the dérive against it is interesting to me because it opens the door on a way to experience infrastructure while receiving minimal or no support from it. The industrial conception of time was reified by the spread of the railways, and with them, the telegraph; meanwhile, the GPS network has seen a similar thing happen to the industrial conception of space, which, like its temporal cousin, is all about ownership and apportionment -- maps don't create or describe territories, but capture them, divide them up (all the better to be conquered).

Like Self, I don't see much likelihood of these systems rolling back any time soon, absent the sort of socioeconomic collapse in which the lack of GPS would be the last thing on anyone's mind. However, play and playful approaches to industrial spacetime -- per Debord and company, but perhaps minus their death-wish nihilism -- might nonetheless still offer escape from the invisible matrix, even if only temporarily.

(I also like his idea of walking to and from airports, though I suspect it wouldn't be viable for every journey, even assuming one had the free days required; I sure wouldn't want to try walking from Boston Logan to Harvard Square, f'rex.)

#

[* -- I remember during the late 90s a friend loaned me a copy of The Sweet Smell of Psychosis, right around the time that said friend and others were getting into the cocaine glamour of superclubbing...oh, the irony. I mostly took away from the book the timely (and subsequently justified) warning that cocaine's worst side-effect was the way in which it turned ordinary people into monumentally self-deluded and paranoiac arseholes, but perhaps the affect of the writing -- which is as seedy and unsettling as the descent into fuckedupness it describes -- put me off reading him again.]

 

Fear of a Blank Verse Planet

2 min read

I've long been an admirer of Adam Roberts, and that's at least as much for his critical writing as for his fiction output, if not perhaps a little more. I put this down (at least in part) to his stint as a 'columnist' when I was still running Futurismic as a regular webzine*, where I was first exposed to his Borgesian strategy of reviewing imaginary works; I'm sure he's not the only source of the notion I have that a review should in some manner stylistically reflect the text to which it is responding, but he's always my go-to example of someone who does it routinely, and does it well.

And here's an example, just published as part of this year's Strange Horizons funding drive. Because how else to appropriately respond to a Baen publication about anthropogenic climate change written entirely in blank verse, but in blank verse?

The fact remains this is a verse-novel;
And as such, frankly, it’s a curate’s egg:
In equal measures striking and inert.
No question it’s echt science fictional
A perfectly effective instance of
This kind of techno-thriller doomsday yarn
(Though it mutates into a stranger and
More satisfying kind of story by its end).
And Turner’s good on "door dilated" stuff
Those kinds of unobtrusive details that
Hallmark much trad SF...

The closing section is the key, though, in making clear that pastiche can and should have purpose beyond the simple joy of rummaging in the dress-up box.

 

Sf and solutionism / QuantSelf and behaviourism

2 min read

Evidence, if such were needed, that C20th science fiction and the solutionist impulse are two prongs of the same fork:

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor [and populariser of pulp science fiction] Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

(I'm fairly sure there are still a few big names in sf whose approach to writing and life very much resembles Gernsback's Excludo-Helm(TM), if only metaphorically so.)

The above is excerpted from a pretty decent New Statesman joint that makes a clear and explicit comparison between the Quantified Self fad and B F Skinner's operant conditioning; shame they didn't reference any of the people who've been arguing that very point for the past five years or so, but hey, journalism amirites?

 

Your humble servant: UI design, narrative point-of-view and the corporate voice

5 min read

I've been chuntering on about the application of narrative theory to design for long enough that I'm kind of embarrassed not to have thought of looking for it in something as everyday as the menu labels in UIs... but better late than never, eh?

This guy is interested in how the labels frame the user's experience:

By using “my” in an interface, it implies that the product is an extension of the user. It’s as if the product is labeling things on behalf of the user. “My” feels personal. It feels like you can customize and control it.

By that logic, “my” might be more appropriate when you want to emphasize privacy, personalization, or ownership.

[...]

By using “your” in an interface, it implies that the product is talking with you. It’s almost as if the product is your personal assistant, helping you get something done. “Here’s your music. Here are your orders.”

By that logic, “your” might be more appropriate when you want your product to sound conversational—like it’s walking you through some task. 

As well as personifying the device or app, the second-person POV (where the labels say "your") normalises the presence within the relationship of a narrator who is not the user: it's not just you and your files any more, but you and your files and the implied agency of the personified app. Much has been written already about the way in which the more advanced versions of these personae (Siri, Alexa and friends) have defaults that problematically frame that agency as female, but there's a broader implication as well, in that this personification encourages the conceptualisation of the app not as a tool (which you use to achieve a thing), but as a servant (which you command to achieve a thing on your behalf).

This fits well with the emergent program among tech companies to instrumentalise Clarke's Third Law as a marketing strategy: even a well-made tool lacks the gosh-wow magic of a silicon servant at one's verbal beck and call. And that's a subtly aspirational reframing, a gesture -- largely illusory, but still very powerful -- toward the same distinction to be found between having a well-appointed kitchen and having a chef on retainer, or between having one's own library and having one's own librarian.

By using “we,” “our,” or “us,” they’re actually adding a third participant into the mix — the people behind the product. It suggests that there are real human beings doing the work, not just some mindless machine.

[...]

On the other hand, if your product is an automated tool like Google’s search engine, “we” can feel misleading because there aren’t human beings processing your search. In fact, Google’s UI writing guidelines recommend not saying “we” for most things in their interface.

This is where things start getting a bit weird, because outside of hardcore postmodernist work, you don't often get this sort of corporate third-person narrator cropping up in literature. But we're in a weird period regarding corporate identities in general: in some legal and political senses, corporations really are people -- or at least they are acquiring suites of permissible agency that enable them to act and speak on the same level as people. But the corporate voice is inherently problematic: in its implication of unity (or at least consensus), and in its obfuscation of responsibility. The corporate voice isn't quite the passive voice -- y'know, our old friend "mistakes were made" -- but it gets close enough to do useful work of a similar nature.

By way of example, consider the ways in which some religious organisations narrate their culpability (or lack thereof) in abuse scandals: the refusal to name names or deal in specifics, the diffusion of responsibility, the insistence on the organisation's right to manage its internal affairs privately. The corporate voice is not necessarily duplicitous, but through its conflation of an unknown number of voices into a single authoritative narrator, it retains great scope for rhetorical trickery. That said, repeated and high-profile misuses appear to be encouraging a sort of cultural immunity response -- which, I'd argue, is one reason for the ongoing decline of trust in party political organisations, for whom the corporate voice has always been a crucial rhetorical device: who is this "we", exactly? And would that be the same "we" that lied the last time round? The corporate voice relies on a sense of continuity for its authority, but continuity in a networked world means an ever-growing snail-trail of screw-ups and deceits that are harder to hide away or gloss over; the corporate voice may be powerful, but it comes with risks.

As such, I find it noteworthy that Google's style guide seems to want to make a strict delineation between Google-the-org and Google-the-products. To use an industry-appropriate metaphor, that's a narrative firewall designed to prevent bad opinion of the products being reflected directly onto the org, a deniability mechanism: to criticise the algorithm is not to criticise the company.

#

In the golden era of British railways, the rail companies -- old masters of the corporate voice -- insisted on distinctive pseudo-military uniforms for their employees, who were never referred to as employees, but as servants. This distinction served largely to deflect responsibility for accidents away from the organisation and onto the individual or individuals directly involved: one could no more blame the board of directors for an accident caused by one of their shunters, so the argument went, than one could blame the lord of the manor for a murder committed by his groundskeeper.

 

The end of the codex and the death of Literature

2 min read

Interesting (and appropriately rambling) talk by Will Self, expanding on his recent thesis that a) the technology of the codex is on the way out, and thusly b) so is capital-L literature. I'm not sure I buy it completely, but his argument goes to lots of interesting places, and I recognise a lot in his description of the academy as a sort of care-home for obsolescing art-mediums such as the modernist novel.

(The audience, on the other hand, replete with writers and teachers of writing -- two categories that overlap a great deal, as Self points out -- fails to recognise his description with such venom that it's hard not to characterise their response as classic denial. That said, these are anxious times in the academy, and particularly at the arts and humanities end of it, and being lectured about the demise of your field of expertise by a man still managing to make a living producing that which you study must be a bit galling; in essence, Self does here to literary scholars what Bruce Sterling repeatedly does for technologists and futures types. The difference appears to be that literary scholars know a Cassandra when they hear one.)

Also of interest is Self's characterisation of the difference between literary fiction and genre fiction, perhaps because it is both vaguely canonical and seemingly unexamined: that old tautologous chestnut about literary fiction not being a genre because it doesn't obsess over reader fulfilment and boundary-work. That may be true of literary writers, perhaps (though Barthes is giving me some side-eye for saying so), but it is to ignore the way the publishing industry deals with the category, which is almost entirely generic... and that's a curious oversight for someone who predicates their argument about literature's decline on explicitly technological dynamics. Nonetheless, well worth a watch/listen.

 

Synthetic space(s)

3 min read

While I will probably always be gutted that someone else has beaten me to writing a history of EVE, I can at least take comfort in the fact that the person who's done it appears to get it -- the game itself is of little interest, it's the utopian economic space-for-action which the game provides that matters:

I met these two guys from the University of Ghent who created a computer model that shows what happens to economic prices in certain parts of EVE, depending on whether or not there are battles going on nearby.

In these areas where a lot of ships are being destroyed, you would expect to see the price of materials skyrocket, because everyone’s trying to build new ships and new fleets. But what they found was that, in areas where a lot of ships are being destroyed, the prices go through the floor, because everyone in that region of space starts liquidating everything. There’s an invading alliance coming, and they’re trying to get their stuff out the door as fast as possible, to make sure their stuff doesn’t get taken or conquered. They said this is similar to what you see in the real world. In pre-war Germany, the price of gold dropped through the floor because everyone was trying to liquidate their belongings and get out of the country. …

EVE is the most real place that we’ve ever created on the Internet. And that is borne out in these war stories. And it’s borne out because these people who—you find this over and over again—who don’t view this as fictional. They don’t view it as a game. They view it as a very real part of their lives, and a very real part of their accomplishments as people.

[...]

Something that I found formed very early on in EVE was the understanding among certain leaders that people will follow you, even if they don’t believe in what you believe in, simply because you’re giving them something to believe in. You’re giving them a reason to play this game. You’re giving them a narrative to unite behind, and that’s fun. It’s far more fun to crusade against the evil empire than it is to show up and shoot lasers at spaceships.

Now mulling over the possibilities of studying the role of infrastructure in virtual economies... anyone want to pitch in on a grant application?

 

Leading with an apology: some thoughts on innovation in communications

5 min read

Something I'm finding interesting about the New Newsletter Movement (which isn't really a movement, but is surely a definite phenomenon in a certain slice of the internets) is the normalisation of the Extended But Friendly Unsubscribe Disclaimer, wherein profuse preemptive apologies are made for the possible cluttering of inboxes, and the ease of avoiding such is highlighted. It's not surprising -- on the contrary, it serves to highlight that the move to newsletters was driven at least in part by a sense that there is an excess of push-notification demands on people's attention, and that we all know they're no fun any more (even if we're still occasionally unwilling to say so).

Email is a fairly pushy medium too, of course (which is why it's such a popular topic for those work/life balance articles), but it seems to me to have two main merits in the context of the current communications retrenchment: firstly, there are a lot more third-party tools and techniques for managing email as multiple flows and categories of comms (including, crucially, easy blocking and blacklisting); secondly, no one can envisage being able to give up email forever, so the inbox is both a comfortable and secure place in which to set up one's ultimate data redoubt. Hence newsletters: they're a one-to-many subscriber-based push medium, much like socnets, but -- crucially -- the interface through which both the sender and the receiver mediate and adjust their experience of communicating via newsletters, namely the inbox, does not belong to the company providing the transmission service. 

Sure, that interface may well belong to someone other than the end-user -- most likely G**gle or another webmail provider -- but the point is that the route between sender and receiver has a whole bunch of waypoints, seams between one system or platform and another where one or another of the communicants can step in and control their experience. With FarceBork or Twitter, that communicative channel -- the interface apps, the core protocol and its design principles -- is all in-house, all the time, a perfect vertical: it works this way, that's the only way it works, take it or leave it. (Note that it takes either network effects or addiction mechanisms, or possibly both, to build the sort of product where you can be so totalitarian about functionality; note further that network effects are easier to achieve in closed and/or monopoly networks.) So the newsletter is a point of compromise: a one-to-many-push model which retains plenty of control at both the author and reader ends.

And so we have a situation where one of the most common features of the use of a particular opt-in medium is a disclaimer about how easy it is to avoid further messages from the same source. I find this of some considerable interest -- not least because rather than being a technical innovation, it's actually a reversion to older technologies which have been rearticulated through a new set of social protocols and values.

That said, it's a little odd that we've jumped all the way back to email, skipping over the supposedly-failed utopia that was the Open Web (or whatever we're now calling it in hindsight): y'know, blogs, aggregators, pingbacks, RSS, all that jazz. I do hear some lamenting for the Open Web, but it tends to be couched in a way that suggests there's no going back, and that the socnets pushed all that out of the way for good. And while that may be true in commercial terms, it's not at all true in technical terms; I can't speak to the change in running overheads, especially for anyone running anything more than the website equivalent of a lemonade stand, but all that infrastructure is still there, still just as useable as it was when we got bored of it. Hosting is cheaper and more stable than it was a decade ago; protocols like RSS and pingbacks and webmentions only stop being useful when no one uses them.

So why didn't we go back to blogging? After all, the genres of writing in newsletters are very similar to those which were commonplace on blogs, it's a one-to-many-pull medium (so no accidental inbox invasions), and the pertinent protocols are just sat there, waiting to be written into software and used again.

But it's a lot more effort to run even a small blog than to run a newsletter (you effectively outsource all the work besides the writing to your newsletter provider, for whom it's less a matter of work and more a matter of maintaining automated capacity), and you still have to go "somewhere else" (whether directly to the site, or to an RSS aggregator) to catch up with the news from others. Newsletters are just easier, in other words -- sufficiently easy that the inherent deficiencies of the medium don't seem too much of a chore to manage, for sender or receiver.

Whether that remains the case for newsletter authors with very large audiences, I have no idea -- and how long it will remain the case is just as open a question, as is the question of where we'll move our discourse to next. However, it's pretty clear that the newsletter phenomenon thumbs its nose at the standard models of innovation, wherein we transition to new technologies on the basis of their novelty and/or technological advantages. This is good news, because it means that we're perfectly capable of rearticulating the technological base of the things we do in response to changing social meanings and values -- and perhaps it even suggests that those meanings and values are more influential than the supposed determinism of the technological stack itself.

We can but hope, I guess.

 

The End of Big Data

2 min read

James Bridle turns his hand to writing science fiction, and does a good enough job of it that I wonder why I still bother. Snip:

While I was out cold in my bunk last night, eyes in the sky were dowsing for covert data farms: telltale transmissions near the dew point. You can do a lot with fans, water mist, recirculation and chillers, but thermodynamics is pretty unforgiving. The energy of computation has to come out somewhere, and the combination of heat and rare earth traces is, ultimately, undeniable: a forensics of the machine. Between RITTER’s infrared and the EUROSUR air contaminants grid, we can usually triangulate any processor over 25 kW. A few months ago it took the ground crew almost a week to locate some Estonian ex-Salesforce analysts whose lock-up in Tallinn was cold as stone. Turns out they were piping their server exhaust a kilometer outside of town, but we got there in the end. This morning the sensors picked up suspicious heat sources in Poland and Slovenia. Could be generators, could be thermal dumps. I’ll get to them once my initial sweep is done.

Go read. I nearly cheered out loud at the ending.