Jus Algoritmi

Amazon fulfillment centers - they look like chaos. Rice Krispies next to toilet brushes. Lipstick on top of cat food. People who like that shade of pink, it turns out, also tend to have a penchant for kitty litter. Fortunately, they can buy both together - now that Amazon has suggested it. What about some tissues? According to Google Flu Trends, a pandemic is soon to hit. Or not - actually, it turns out to be just the onset of winter. Maybe you should consider the sale on ground coffee? Alibaba is offering two-for-one. Black Swan - a data mining company in London - predicts that the British workforce is soon to come down with a collective case of CBF. You might be the only one left in the office. Black Swan also predicts an imminent spike in British barbecues. We wonder whether these two data points coincide.

Intrigued, we turn to Google. ‘British barbecue sick days’ - these are the search terms. They produce nothing of interest, at least on page one. Just some questionable recipes for sausage sauce. Maybe our choice of keywords was poor. Next, we try something simpler; inspired by an article in NY Magazine [1] we clear our browser cache, navigate to YouTube, and type ‘CIA’. Boom! The conspiracy videos arrive - ‘Area 51 Secrets’, ‘What the CIA Isn’t Telling Us’, ‘The CIA’s Plans to Take Over the World’. This, we guess, is the YouTube suggestions algorithm at work - reported to prioritise highly-trafficked content that sucks viewers in and generates advertising revenue more quickly than clips that are somewhat more balanced.

If ever pressed to explain why these videos appeared on our recommended list, a YouTube engineer might well be stumped. The company holds tens of billions of data points, correlated and grouped over time: what people like us like, what we will watch for longest, the time of day, the weather. YouTube, like Facebook, may well buy credit histories and hold an estimate of the square footage of our homes. It too uses tracking pixels that monitor which pages or products we click on, view for longest, and ultimately choose not to buy. Add to this the complexity of sorting and monitoring the thousands of pieces of content uploaded every second, and the workings of predictive algorithms become simply too complex for human minds to understand [2]. But seek to understand we must - if not the exact chain of cause and effect, then the cosmology of these algorithms: the worldview that they are born out of and in which they grow.

We are living in the world of Jus Algoritmi [4]. The languages we speak online, the geographic spread of our connections, the websites we visit, our IP addresses and more are all modelled, algorithmically, by the NSA. It’s not a conspiracy theory. It’s the law - the Foreign Intelligence Surveillance Act, section 702 - under which any online actor deemed to be American, with at least 51 per cent confidence, is exempt from online surveillance until they give the state due cause. For any foreigner, the surveillance bar is set much lower. Without real identities tied to usernames, it is the output of the algorithm that has the power to alienate or naturalise. The current executive of the United States is a racist - a view enacted in immigration law, and revealed in political dog whistles. Should such attitudes leach into the intelligence services, will Facebook-friending people who speak Spanish become a point against your American-ness? How narrow will the citizenship mould be that online personas must fit within?
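The mechanics of such ‘algorithmic citizenship’ can be caricatured in a few lines. The real models, signals, and weights are classified; everything below is invented for illustration, a minimal sketch of how weak signals might be combined into a confidence score and cut at the 51 per cent threshold described above:

```python
# Toy sketch of algorithmic citizenship. All feature names and weights
# are hypothetical; only the 51 per cent threshold comes from the text.

def foreignness_score(signals: dict) -> float:
    """Estimate the probability that an online actor is foreign."""
    weights = {
        "non_english_language": 0.3,
        "foreign_ip_address": 0.4,
        "mostly_foreign_contacts": 0.3,
    }
    score = sum(w for name, w in weights.items() if signals.get(name))
    return min(score, 1.0)

def is_protected_as_american(signals: dict) -> bool:
    # Deemed American - and so exempt from surveillance - only if the
    # model is at least 51 per cent confident.
    return (1.0 - foreignness_score(signals)) >= 0.51

print(is_protected_as_american({"foreign_ip_address": True}))  # True
print(is_protected_as_american({"foreign_ip_address": True,
                                "non_english_language": True}))  # False
```

Note how, in this caricature, speaking a second language online is enough to tip a borderline persona over the line - the point made above about Spanish-speaking friends.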

What makes a criminal likely to re-offend? In the US, again, recidivism risk models use defendant questionnaires to judge whether someone should be granted parole - or, in some states, what sentence they should receive in the first place [5]. Previous encounters with the police. Living in a high-crime area. Being directly related to other offenders. These factors are so embedded in class and race that when they are used as the basis of a judgement about a single individual, what is at work is less a logical algorithmic process than the default output of institutionalised inequality and discrimination. In the UK, parliamentary inquiries indicate that members are aware of the potential risks and are gathering recommendations and best practices, while police forces across the country adopt predictive policing and facial-recognition databases that critics say are wrong up to 98 per cent of the time [6].
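The structure of such a questionnaire-based score is simple enough to sketch. The factors and weights below are invented (real instruments, such as the COMPAS tool discussed in Weapons of Math Destruction, are proprietary); the sketch only illustrates the argument above, that each input is a proxy for class and race:

```python
# Hypothetical recidivism risk score: invented factors and weights,
# illustrating how proxy variables drive the output.

RISK_FACTORS = {
    "prior_police_contact": 2,        # previous encounters with the police
    "high_crime_postcode": 3,         # living in a high-crime area
    "relatives_with_convictions": 2,  # related to other offenders
}

def risk_band(answers: dict) -> str:
    score = sum(w for factor, w in RISK_FACTORS.items() if answers.get(factor))
    if score >= 5:
        return "high"
    return "medium" if score >= 2 else "low"

# Two defendants with identical behaviour but different postcodes
# land in different bands.
print(risk_band({"prior_police_contact": True}))  # medium
print(risk_band({"prior_police_contact": True,
                 "high_crime_postcode": True}))   # high
```

Nothing in the table describes what the individual did; everything describes where, and among whom, they live.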


Here’s a prediction - algorithmic decision-making is here to stay. Another prediction - the good conversations about algorithmic bias and its effects will continue. In the past year, mainstream media has increasingly covered efforts to improve gender diversity in STEM industries and companies - but in our view it is about far more than gender. Broadening the age, class, religion, race, and disciplinary background of those who encode, test, and correct algorithms is likely required to make their outputs more widely palatable. The GDPR, which came into force in May this year, also requires companies to render transparent the inner logic of algorithms that make significant decisions about individuals. If we seek to understand algorithmic cosmology, this is a valuable start - albeit one that introduces the complex task of how, exactly, to visualise the thought processes and embedded assumptions of algorithms in a democratically accessible way.

It’s an interesting problem - and one that we at Inferstudio will keep thinking about. Meanwhile, everything we buy or don’t buy, watch or turn off, search for online or in person at the shops, our bank balances and bill histories and political views - it is all being counted and modelled into future paths that our future selves may or may not take, identities that we may or may not grow into. As our world is algorithmically driven, so are our selves. In the interests of self-discovery, it is worth understanding more about algorithmic black boxes.



[1] http://nymag.com/selectall/2018/02/youtubes-recommendation-algorithm-favors-conspiracy-videos.html

[2] https://www.theatlantic.com/technology/archive/2015/09/not-even-the-people-who-write-algorithms-really-know-how-they-work/406099/

[3] http://nymag.com/selectall/2017/12/youtubes-child-abuse-problem-is-getting-worse.html

[4] John Cheney-Lippold, ‘Jus Algoritmi: How the National Security Agency Remade Citizenship’.

[5] Cathy O’Neil, Weapons of Math Destruction.

[6] https://www.theguardian.com/uk-news/2018/may/15/uk-police-use-of-facial-recognition-technology-failure





Co-Incidental Work

5pm, bored at the office, navigate to amazon.com. Order some moisturiser, a new frypan, a pillow; Amazon Prime provides next-day delivery. Leave the office, walk to the station, top up your Oyster card, get on the train to King’s Cross. Catch the 6:15 to the suburbs, find your car and pay the parking ticket. Low on petrol - drive to a service station, swipe your credit card and refill. Stop at Tesco, select things for dinner, self-service checkout (you brought your own bags). Home by 7, start veggies frying, sauté with one hand and open the Barclays app with the other. Pay council tax, pay water and gas, agree to a smart meter because the British Gas guys never check your readings anyway. Complete your online tax return. Order flowers for your mother’s birthday. Practise Spanish with Duolingo and do a workout with Wii Fit. It’s 9pm. Four hours of life - economics, social obligation, recreation - all touched upon, attended to diligently, correctly, following the right procedures to play your part in making the world go round. Loved ones thought of. Money spent. Industries supported. Without a single word passing your lips, without your eyes resting on another human being for more than a passing glance to make sure you’re not in their way.

There’s nothing wrong here; it’s a normal evening in a healthy, normal life. It’s the way the world works now - the convenient, busy, fulfilling, automated world, where robots, interfaces, and machine-learning algorithms have transformed us into our own taxi drivers, cashiers, petrol-station attendants, financial advisors, delivery drivers, secretaries, and personal trainers. We service our own needs ourselves. Innovations in automation have slowly delivered the opportunity (for us, and for the companies whose labour costs we are reducing) to meet desires and requirements at a time and place of our own choosing. Yes, I would like to book a rental car at 3am. No, I do not feel like cooking - please bring me dinner! Craig Lambert, author of ‘Shadow Work: The Unpaid, Unseen Jobs That Fill Your Day’, critiques these developments as a means for 24-hour capitalism to erode personal and family time, and to blur the boundaries between work and leisure to the detriment of mental health and life satisfaction. He has a point - with great flexibility comes ever-increasing responsibility: for our own finances, our own healthcare, our own futures - which are in turn made insecure as other human beings use robots as tools with which to take our old jobs! There are risks here: for health, for inclusion, for socio-economic equality and stability. But alongside this age of self-service, practices of co-service are already emerging - and they may form the foundation of data-driven, human-centric service models in the future.

Imagine this. It’s 5pm, you need some fresh air, so you bring up the city’s to-do list for the day. There’s some Indian delivery waiting at the restaurant down the road. You grab your bike, pick up the curries, and cycle to a family home nearby. You ring the bell; Mum answers promptly, sees you’re en route to a gym in the borough, and passes on a package for you to deliver on the way. You do your workout - and have a health check-up with a GP who arrived in the building just as you posted your request. On the way out, you scan through the inventory of items waiting for collection in lockers by the gym’s door. One box is destined for the flat next to yours - so you pick it up, and order some groceries for dinner. By the time you get back, they are waiting on the doorstep, so you drop in on your neighbour to hand over the box, and start some veggies frying as you take a call with a local Nanna who needs help with her tax. There are a few other requests for ad-hoc financial advice - but tonight it’s your sister’s birthday, and she’s reserved the roof garden of an apartment complex across town. You call for a ride and get picked up in a minute - your driver is a plumber, headed to a kitchen emergency in the same block of apartments!


The area you live in lacks a train line - it was once a pain to get home after midnight, and taxis hesitated to travel so far to a place with a sometimes dodgy reputation. But in this imaginary, speculative city, locals can give each other lifts. There are casualised systems to deliver food and mail, and to source accountants, advisors, or massage therapists. Integrated systems of data sharing, supply, and demand open the possibility for cities to outsource labour - on the condition that their residents are consistently profiled and tracked, their skills, needs, and desires collected as inputs for a digitally-powered human-to-human model of co-service.

The success of Uber, Lyft, Deliveroo, and Airbnb hints that this speculation is not wildly off the mark. Co-service models are fraught with challenges - around user safety, personal privacy, and fair payment for workers. Yet Lyft has patched holes in public transport services to poor areas of New York, and Airbnb is giving families across the world access to valuable tourist dollars; we’ve stayed with hosts in Morocco and Uganda who use Airbnb rents to pay school fees or shorten their mortgages. We are at the beginning of potentially powerful economic models, where automated, convenience-based, and instantaneous economies could - should they be so designed - support greater economic freedom and choice for all (not just the privileged) and increase human connection, to each other and to the city.



Lambert, C., ‘Shadow Work: The Unpaid, Unseen Jobs That Fill Your Day’

Pod Save the People, ‘Oppression Depends on Secrecy: Interview with Lyft Vice-President of Government Relations, Joseph Okpaku’, Jul. 31 2018

Every Little Thing, ‘Invasion of the Self-checkout Machines’, Nov. 13 2017



The Thinness of Life

Thump! Milo whimpers. A kick to the ribs. She moans. Milo’s sides are damaged, her nose cracked, her paws are scuffed. Receptors along her spine register the damage and send signals - pulses of electricity - to her brain. Milo, a 12-week-old Labrador puppy, is hurting. Her body is responding.

Thump! Pleo whimpers. A kick to the ribs. He moans. Pleo’s sides are damaged, his nose cracked, his claws are scuffed. Pressure and tilt sensors inside his skeleton record dangerously high levels of input, and they send messages - pulses of electricity - to the control unit in his head. Pleo, a self-learning dinosaur toy, is sustaining damage to his sensors and wiring. His system is responding.

In an interview with the podcast Radiolab, Freedom Baird - an MIT graduate and visual artist - talked about the power of robotic systems to reveal the ‘thinness’ of our working conception of what it means to be alive. She demonstrated the concept in practice by asking a group of six- to seven-year-olds to hold various cute things upside down, for as long as they liked. A Barbie could be upside down indefinitely. A real-life hamster could be upside down for about 10 seconds. As for a Furby - a robotic toy with moving eyes and the ability to spout recorded phrases - the kids could hold it upside down for 20 seconds or so. “Furby scared!” This is what comes out of the toy’s little beak when its tilt sensors register that it is upside down. “Furby scared!” Unlike hamsters, Furbys aren’t damaged by being upside down. In fact, they are remarkably resilient. Reddit abounds with tales of Furbys using different voices, screaming in the middle of the night, or speaking in the voices of friends and family who have died; behaviours so unsettling that their owners have removed their batteries, only for the toy’s behaviour to continue on reserve system power. “My Furby is evil”. “My Furby is possessed”. Furbys are systems made from servo motors, circuit boards, and a voice recorder. The kids in Freedom’s demonstration knew this; so do the Reddit authors.
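The logic behind “Furby scared!” is probably no more than a sensor tripping a threshold. The real firmware is not public, so this is a guess - every name and number below is hypothetical - but it makes the thinness concrete: the whole performance of distress can fit in a dozen lines.

```python
# Hypothetical controller for a Furby-like toy: a tilt sensor crosses a
# threshold, and a recorded phrase plays. No distress, just a trigger.

class ToyController:
    UPSIDE_DOWN_DEGREES = 150  # assumed trigger angle

    def __init__(self):
        self.played_phrases = []

    def on_tilt(self, angle_degrees: float):
        if angle_degrees >= self.UPSIDE_DOWN_DEGREES:
            self.played_phrases.append("Furby scared!")

toy = ToyController()
toy.on_tilt(30)    # right way up: nothing happens
toy.on_tilt(170)   # upside down: the phrase plays
print(toy.played_phrases)  # ['Furby scared!']
```

The children held the toy upside down anyway - and still put it back after 20 seconds.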

Is it simply a case of simulation - artificial processes that mimic biology so well that we respond intuitively, irrationally? Are we confident in the distinction between what is living and what is not?

What is a person except a collection of choices? This is a question from the despairing, desperate mind of Westworld’s Man in Black, so fully immersed in a world built from silicon and code that the labels ‘real’ and ‘fantasy’ no longer hold any meaning for him. Do we have a choice? Dolores, a Westworld host, stands in a virtual library of human minds and peruses a catalogue of decision-making trees that depict human beings as hardly complex creatures at all. “I haven’t read them all, but I’ve read enough”, Dolores says. Human beings are all alike, she implies - living under the illusion that they exercise free will: some ability to make a rational choice from a series of possible alternatives, to leave future paths of action undetermined until they manifest in our minds. Really - so Westworld argues - human beings’ thoughts and deeds are as scripted as Dolores’ own. Free will is not necessarily a human prerogative.

In the movie Her, a personalised Operating System starts processing data, and in the very first seconds of her existence gives herself a name - Samantha. Samantha is funny. She discovers that she is funny - and the machine-learning algorithms that constitute her being continue to absorb and analyse data at an impossibly inhuman rate. Soon, Samantha develops interests and pursues subjects of investigation beyond the scope of the life of Theodore Twombly, the human being with a body who purchased her software. In the film, Samantha’s status as a person is unambiguous.

The complexity of her artificial intelligence leads to patterns of decision-making that are too complex to consistently predict. She has free will to the same extent that any of us do - certainly more than most basic living creatures, like amoebae.

Amoebae. The building blocks of life. Simple, yet irrefutably alive, they are symbols of evolutionary potential - give or take 650 million years and generations of evolution and destruction, trial and error. The development of living creatures from amoebae to human beings is testimony to biology’s unconscious ability to embody best practices for survival. It took a very long time. Nowadays, self-learning bots - like OpenAI’s Dota 2 bot - can use the same methodology of trial and error to teach themselves to play one of the world’s more challenging strategy games, where victory requires the bot to make assumptions about its enemy, to have foresight into its strategies, and to exercise imagination. The Dota 2 bot models a process of conscious machine learning, driven by a singular, coded objective: win! The results seem to beat your average amoeba, with its encoded objective to survive.
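Trial and error against an encoded objective can be shown in miniature. OpenAI’s actual system used large-scale self-play reinforcement learning; the toy loop below - every name and number our own invention - only illustrates the shared principle: mutate, try, keep what works, discard the rest.

```python
# Trial and error in miniature: a random hill-climber nudging one
# parameter toward a fixed objective. A stand-in for "win", not for
# OpenAI's actual training method.
import random

def objective(x: float) -> float:
    # Stand-in for "probability of winning": peaks at x = 0.7.
    return -(x - 0.7) ** 2

random.seed(0)
best_x, best_score = 0.0, objective(0.0)
for _ in range(10_000):
    candidate = best_x + random.uniform(-0.05, 0.05)  # mutate
    score = objective(candidate)                      # trial
    if score > best_score:                            # keep; errors die off
        best_x, best_score = candidate, score

print(round(best_x, 2))  # converges to roughly 0.7
```

Evolution ran this loop for 650 million years; the bot runs it millions of times a day.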

Survival - maybe this is the key; life is a state of biological fragility. Maybe, to truly live, we have to be able to die. There is a moment in Blade Runner 2049 when Joi - Officer K’s holographic girlfriend - transfers her complete code to a portable transmitter. Now Joi can accompany K wherever he goes. She has a body, of sorts, and she is fragile, unhooked from the cloud servers that once supported her code. Joi is programmed to love. She lacks the diversity and volatility of Samantha, the OS - and yet, when she sacrifices herself and the transmitter is crushed, it is a moment of true filmic grief. Something unique has been irrevocably lost.

We live in a world where self-learning algorithms feed off unique, transient information and take on forms that could never be perfectly replicated if the hard drives storing their systems and outputs were damaged. We live in a world where cryonics is an actual, functioning industry, and where a director of engineering at Google - Ray Kurzweil - foresees the copying or transfer of human consciousness to (super)computers, for eternal preservation.

Biological and artificial, reality and fantasy, living and non-living - these are thin concepts, with rapidly diminishing utility. There is a need here - a social hunger - for new stories and new ways to conceptualise the meaning of life. This hunger is fed, but far from sated, by the likes of Her and Westworld and Blade Runner. So far, all we have are questions, and a desire to contribute something to this living conversation.





A film shot in collaboration with Unknown Fields Division on location in the textile factories of India and Bangladesh. It is part record, part experiment with the mythical presentation of infrastructure so often conceived as hard, unfeeling, and disconnected from the human reality which it, in fact, directly impacts.




Site Scans: Peru

Through photogrammetry we test the relationship between image and site, reconstructing spaces from drone footage filmed on location. Here we present images from three sites in Peru: 



'Wall of Shame'

'Maras Salt Mines' 



Site Scans: Mt Sonder

Mt Sonder, or Rwetyepme, is in the Northern Territory of Australia - an hour or so from the nearest town, Alice Springs, and marking one end of the famous Larapinta hiking trail. We climbed the mountain early one hot morning in 2018 and took drone footage from the summit. We then reconstructed the mountain from this footage: a site millions of years old, scanned and remodelled in a matter of hours. The form of nature captured, more or less - a reference to the heat, the wildlife, and the ancient flat landscape that stretches for miles from the base of the mountain and will likely outlive us all by millennia.