The Great Deprival
After the latest update of my phone, Chrome started putting weird links to previous searches on my browser’s default page. This default page has only a single justification: making it easy for me to access the sites I want to visit regularly. Restoring the old behaviour that I was used to took me far longer than I had originally anticipated. I felt deprived of my autonomy: finding bug reports for Chrome is nearly impossible, and you have to rely on someone else bringing up your problem in some forum, wading through tons of useless answers along the lines of ‘destroy your smartphone to restore the old behaviour.’
Another example: Slack rolled out changes to the way conversations are sorted. Some of my workspaces received the change and started showing unread conversations first, deviating from the alphabetical order, while other workspaces kept the old behaviour. Fixing it required me to walk through a lot of configuration dialogues, because Slack still does not support a unified configuration option.
A last example: the app for booking my train tickets conveniently forgets all the travel cards in my name. I have to re-enter all the information manually every time I book. Contacting the developers is impossible, as there is no e-mail address, issue tracker, or anything of that sort.
Now, let me make this clear: all of these things are minor annoyances at worst and could probably be fixed more quickly with more effort on my part. My main point is that the accumulation of such issues deprives us of our autonomy. Think ‘death by 1,000 paper cuts.’ On any given day, I have too many other things to focus on; the more all of us rely on large software stacks, the larger our surface for issues or unwanted configuration changes becomes, and the more we run the risk of being deprived of our (computational) autonomy.
I have observed this in myself: I have accepted certain bugs as given, some apps as broken, and some configuration options as ephemeral, to be done away with by the next update. I am not the only one: many colleagues at work have accepted that some systems are the way they are and that they do not have the autonomy to change them. There is a kind of insidious Normalization of Deviance at work here. That cannot be good!
Now, of course, some of the examples given just fall into the category of ‘someone else dictates new default settings for a personal (!) device; grumpy old person complains.’ While I understand that software needs to move forward, I feel too young at heart to enter a ‘yelling at everything’ phase.
I do wonder whether this deprival of autonomy is good in the long run. What will the lesson be for users of this technology? To put it dramatically: will we just become used to everything that is pushed to our devices, a caste of docile Eloi force-fed the latest changes by some Morlocks? Is this because we are part of the product instead of being users? What happens if the machine learning / artificial intelligence revolution really takes off and software creation becomes (even more) automated?
I have no simple answers here, but I can offer the chilling observation that people adapt quickly, even while feeling a loss of autonomy. This results in a kind of unconscious resignation of the form ‘Well, there is nothing I can do anyway.’ Moreover, we are pretty good at accepting the new status quo.
One way out of this mire is to pursue a strict ‘human-centred software design’ approach. If we follow the lesson that software should be designed with human users in mind, we might end up with better software all around. This may be the time when HCI is the most relevant research field: setting bounds for the way we interact with our devices, and for the many ways our devices may interact with us, will be critical in the coming years. It is a puzzle to me why companies are not investing more in this, but maybe the new toys developed by OpenAI and others will show us how to create better interfaces and better software.
Until next time, stay independent!