
* researcher in infrastructure futures and theory (University of Sheffield, UK)
* science fiction author and literary critic
* writer, theorist, critical futurist
* dishevelled mountebank

velcro-city.co.uk

orcid.org/0000-0002-3555-843X

www.sheffield.ac.uk/usp/researchschool/students/paulraven

 

Your humble servant: UI design, narrative point-of-view and the corporate voice


I've been chuntering on about the application of narrative theory to design for long enough that I'm kind of embarrassed not to have thought of looking for it in something as everyday as the menu labels in UIs... but better late than never, eh?

This guy is interested in how the labels frame the user's experience:

By using “my” in an interface, it implies that the product is an extension of the user. It’s as if the product is labeling things on behalf of the user. “My” feels personal. It feels like you can customize and control it.

By that logic, “my” might be more appropriate when you want to emphasize privacy, personalization, or ownership.

[...]

By using “your” in an interface, it implies that the product is talking with you. It’s almost as if the product is your personal assistant, helping you get something done. “Here’s your music. Here are your orders.”

By that logic, “your” might be more appropriate when you want your product to sound conversational—like it’s walking you through some task. 

As well as personifying the device or app, the second-person POV (where the labels say "your") normalises, within the relationship, the presence of a narrator who is not the user: it's not just you and your files any more, but you and your files and the implied agency of the personified app. Much has been written already about the way in which the more advanced versions of these personae (Siri, Alexa and friends) have defaults that problematically frame that agency as female, but there's a broader implication as well: this personification encourages the conceptualisation of the app not as a tool (which you use to achieve a thing), but as a servant (which you command to achieve a thing on your behalf).
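To make the stakes of that choice concrete, you can imagine the point of view as a literal switch in the interface's string table. The sketch below is purely illustrative -- my own hypothetical TypeScript, not anyone's actual codebase:

```typescript
// A toy sketch (all names hypothetical, not from any real product)
// of how the point-of-view choice might surface in a UI string table:
// the narrative stance becomes a single configuration value.

type PointOfView = "first" | "second";

const labels: Record<PointOfView, Record<string, string>> = {
  // First person: the product labels things on the user's behalf.
  first: { files: "My Files", music: "My Music" },
  // Second person: the product addresses the user, implying a
  // narrator (and thus an agency) that is not the user.
  second: { files: "Your Files", music: "Your Music" },
};

const pov: PointOfView = "second";
console.log(labels[pov].files); // "Your Files"
```

Flip one value and the product shifts from labelling things on your behalf to speaking to you, as a butler might.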

That servant framing fits well with the emergent programme among tech companies to instrumentalise Clarke's Third Law as a marketing strategy: even a well-made tool lacks the gosh-wow magic of a silicon servant at one's verbal beck and call. And that's a subtly aspirational reframing, a gesture -- largely illusory, but still very powerful -- toward the same distinction to be found between having a well-appointed kitchen and having a chef on retainer, or between having one's own library and having one's own librarian.

By using “we,” “our,” or “us,” they’re actually adding a third participant into the mix — the people behind the product. It suggests that there are real human beings doing the work, not just some mindless machine.

[...]

On the other hand, if your product is an automated tool like Google’s search engine, “we” can feel misleading because there aren’t human beings processing your search. In fact, Google’s UI writing guidelines recommend not saying “we” for most things in their interface.

This is where things start getting a bit weird, because outside of hardcore postmodernist work, you don't often get this sort of corporate first-person-plural narrator cropping up in literature. But we're in a weird period regarding corporate identities in general: in some legal and political senses, corporations really are people -- or at least they are acquiring suites of permissible agency that enable them to act and speak on the same level as people. But the corporate voice is inherently problematic, both in its implication of unity (or at least consensus) and in its obfuscation of responsibility. The corporate voice isn't quite the passive voice -- y'know, our old friend "mistakes were made" -- but it gets close enough to do useful work of a similar nature.

By way of example, consider the ways in which some religious organisations narrate their culpability (or lack thereof) in abuse scandals: the refusal to name names or deal in specifics, the diffusion of responsibility, the insistence on the organisation's right to manage its internal affairs privately. The corporate voice is not necessarily duplicitous, but through its conflation of an unknown number of voices into a single authoritative narrator, it retains great scope for rhetorical trickery. That said, repeated and high-profile misuses appear to be encouraging a sort of cultural immunity response -- which, I'd argue, is one reason for the ongoing decline of trust in party political organisations, for whom the corporate voice has always been a crucial rhetorical device: who is this "we", exactly? And would that be the same "we" that lied the last time round? The corporate voice relies on a sense of continuity for its authority, but continuity in a networked world means an ever-growing snail-trail of screw-ups and deceits that are harder to hide away or gloss over; the corporate voice may be powerful, but it comes with risks.

As such, I find it noteworthy that Google's style guide seems to want to make a strict delineation between Google-the-org and Google-the-products. To use an industry-appropriate metaphor, that's a narrative firewall designed to prevent bad opinion of the products being reflected directly onto the org, a deniability mechanism: to criticise the algorithm is not to criticise the company.

#

In the golden era of British railways, the rail companies -- old masters of the corporate voice -- insisted on distinctive pseudo-military uniforms for their employees, who were never referred to as employees, but as servants. This distinction served largely to deflect responsibility for accidents away from the organisation and onto the individual or individuals directly involved: one could no more blame the board of directors for an accident caused by one of their shunters, so the argument went, than one could blame the lord of the manor for a murder committed by his groundskeeper.

 

Innovation dynamics in the metasystemic stack


Joi Ito expresses some misgivings (far milder than my own) about "the Bitcoin community", and along the way provides this gem of a case-study:

One of the key benefits of the Internet was that the open protocols allowed innovation and competition at EVERY layer with each layer properly sandwiched between standards developed by the community. This drove costs down and innovation up. By the time we got around to building the mobile web, we lost sight (or control) of our principles and let the mobile operators build the network. That's why on the fixed-line Internet you don't worry about data costs, but when you travel over a national border, a "normal" Internet experience on mobile will probably cost more than your rent. Mobile Internet "feels" like the Internet, but it's an ugly and distorted copy of it with monopoly-like systems at many layers. This is exactly what happens when we let the application layer drag the architecture along in a kludgy and unprincipled way.

Historically, the application layer of a network system pretty much always drags the architectural layer, because the application (or interface) layer is governed by commercial incentives to innovate; those commercial incentives may result in improved functionality, but they are just as likely (if not depressingly more so) to result in the appearance of improved functionality (which is a very different thing, and sometimes the exact opposite).
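To make Ito's layering point concrete, here's a toy TypeScript sketch -- mine, not his, and not any real network code -- of what "innovation at every layer, sandwiched between shared standards" looks like in miniature: each layer codes against an open interface, so implementations can compete and be swapped without dragging the rest of the stack along.

```typescript
// Illustrative only: two layers separated by an open interface.
// Anyone can ship a competing implementation of either layer,
// because the contract between them is a shared standard.

interface TransportLayer {
  send(bytes: Uint8Array): Promise<void>;
}

// One vendor's transport; a rival could ship another with the
// same interface and nothing upstream would need to change.
class LoggingTransport implements TransportLayer {
  async send(bytes: Uint8Array): Promise<void> {
    console.log(`sending ${bytes.length} bytes`);
  }
}

// The application layer codes against the standard, not the vendor.
class Messenger {
  constructor(private transport: TransportLayer) {}
  async publish(message: string): Promise<void> {
    await this.transport.send(new TextEncoder().encode(message));
  }
}

// Swapping transports is a constructor argument, not a rebuild:
const messenger = new Messenger(new LoggingTransport());
messenger.publish("hello, open stack").catch(console.error);
```

The kludgy mobile-web scenario Ito describes is what happens when that interface is proprietary: the constructor argument disappears, and the application layer and the network congeal into a single take-it-or-leave-it vertical.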

This isn't to say that the architectural (or infrastructural) layer has no influence in the other direction, of course, but infrastructure is by necessity a very slow game: big-ticket projects on the largest of geographical scales. The interface layer is inevitably more nimble, more able to iterate quickly; when the interface layer in question is pretty much pure software (as in the example of the blockchain), that is even more the case, because the opportunity cost of iteration and testing is so low, and the potential rewards so ridiculously high. (However, the infrastructural layer is far from innocent, as the battles over Net Neutrality indicated very clearly.)

As Ito indicates, and historical evidence supports, open protocols and shared standards between sociotechnical systems lower costs and open up the field for innovation to *all* players in the stack, not just to the interface developers.

That alone should tell you exactly why Silicon Valley dropped the Open Web.

 

Leading with an apology: some thoughts on innovation in communications


Something I'm finding interesting about the New Newsletter Movement (which isn't really a movement, but is surely a definite phenomenon in a certain slice of the internets) is the normalisation of the Extended But Friendly Unsubscribe Disclaimer, wherein profuse preemptive apologies are made for the possible cluttering of inboxes, and the ease of avoiding such is highlighted. It's not surprising -- on the contrary, it serves to highlight that the move to newsletters was driven at least in part by a sense that there is an excess of push-notification demands on people's attention, and that we all know they're no fun any more (even if we're still occasionally unwilling to say so).

Email is a fairly pushy medium too, of course (which is why it's such a popular topic for those work/life balance articles), but it seems to me to have two main merits in the context of the current communications retrenchment: firstly, there are a lot more third-party tools and techniques for managing email as multiple flows and categories of comms (including, crucially, easy blocking and blacklisting); secondly, no one can envisage being able to give up email forever, so the inbox is both a comfortable and secure place in which to set up one's ultimate data redoubt. Hence newsletters: they're a one-to-many subscriber-based push medium, much like socnets, but -- crucially -- the interface through which both the sender and the receiver mediate and adjust their experience of communicating via newsletters, namely the inbox, does not belong to the company providing the transmission service. 

Sure, that interface may well belong to someone other than the end-user -- most likely G**gle or another webmail provider -- but the point is that the route between sender and receiver has a whole bunch of waypoints, seams between one system or platform and another where one or another of the communicants can step in and control their experience. With FarceBork or Twitter, that communicative channel -- the interface apps, the core protocol and its design principles -- is all in-house, all the time, a perfect vertical: it works this way, that's the only way it works, take it or leave it. (Note that it takes either network effects or addiction mechanisms, or possibly both, to build the sort of product where you can be so totalitarian about functionality; note further that network effects are easier to achieve in closed and/or monopoly networks.) So the newsletter is a point of compromise: a one-to-many-push model which retains plenty of control at both the author and reader ends.

And so we have a situation where one of the most common features of the use of a particular opt-in medium is a disclaimer about how easy it is to avoid further messages from the same source. I find this of some considerable interest -- not least because rather than being a technical innovation, it's actually a reversion to older technologies which have been rearticulated through a new set of social protocols and values.

That said, it's a little odd that we've jumped all the way back to email, skipping over the supposedly-failed utopia that was the Open Web (or whatever we're now calling it in hindsight): y'know, blogs, aggregators, pingbacks, RSS, all that jazz. I do hear some lamenting for the Open Web, but it tends to be couched in a way that suggests there's no going back, and that the socnets pushed all that out of the way for good. And while that may be true in commercial terms, it's not at all true in technical terms; I can't speak to the change in running overheads, especially for anyone running anything more than the website equivalent of a lemonade stand, but all that infrastructure is still there, still just as usable as it was when we got bored of it. Hosting is cheaper and more stable than it was a decade ago; protocols like RSS and pingbacks and webmentions only stop being useful when no one uses them.
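Case in point: Webmention is a published W3C Recommendation, and sending one takes a couple of dozen lines. Here's a minimal sketch in TypeScript (assuming a modern runtime with a global fetch; note that it only checks the HTTP Link header for endpoint discovery, where a spec-complete client would also parse the target page's HTML for a rel="webmention" link):

```typescript
// A minimal Webmention sender, sketched per the W3C spec
// (https://www.w3.org/TR/webmention/). Simplified: discovery
// here looks only at the HTTP Link header.

async function discoverEndpoint(target: string): Promise<string | null> {
  const res = await fetch(target, { method: "HEAD" });
  const link = res.headers.get("Link");
  if (!link) return null;
  // Expecting something like: <https://example.com/webmention>; rel="webmention"
  const match = link.match(/<([^>]+)>;\s*rel="?webmention"?/);
  // Resolve a possibly-relative endpoint URL against the target.
  return match ? new URL(match[1], target).href : null;
}

async function sendWebmention(source: string, target: string): Promise<void> {
  const endpoint = await discoverEndpoint(target);
  if (!endpoint) throw new Error("No webmention endpoint found");
  // Per spec: POST the source and target as form-encoded parameters.
  const body = new URLSearchParams({ source, target });
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: body.toString(),
  });
  if (!res.ok) throw new Error(`Webmention rejected: ${res.status}`);
}

// Usage: notify a post you've linked to from your own blog, e.g.
// sendWebmention("https://my-blog.example/post", "https://their-blog.example/post");
```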

So why didn't we go back to blogging? After all, the genres of writing in newsletters are very similar to those which were commonplace on blogs, it's a one-to-many-pull medium (so no accidental inbox invasions), and the pertinent protocols are just sat there, waiting to be written into software and used again.

But it's a lot more effort to run even a small blog than to run a newsletter (you effectively outsource all the work besides the writing to your newsletter provider, for whom it's less a matter of work and more a matter of maintaining automated capacity), and you still have to go "somewhere else" (whether directly to the site, or to an RSS aggregator) to catch up with the news from others. Newsletters are just easier, in other words -- sufficiently easy that the inherent deficiencies of the medium don't seem too much of a chore to manage, for sender or receiver.

Whether that remains the case for newsletter authors with very large audiences, I have no idea -- and how long it will remain the case is just as open a question, as is the question of where we'll move our discourse to next. However, it's pretty clear that the newsletter phenomenon thumbs its nose at the standard models of innovation, wherein we transition to new technologies on the basis of their novelty and/or technological advantages. This is good news, because it means that we're perfectly capable of rearticulating the technological base of the things we do in response to changing social meanings and values -- and perhaps it even suggests that those meanings and values are more influential than the supposed determinism of the technological stack itself.

We can but hope, I guess.