James Padolsey's Blog

2023-12-13

Akihabara, and my reflections on the democratization of AI

Sketch of an electronics store aisle with customers and shelves lined with various vintage electronic devices, capturing Akihabara's Electric Town ambiance.

Weaving through alleys of Akihabara, the so-called Electric Town of Tokyo, one notices the overwhelming amount of choice. In the Yodobashi store, an eye-watering nine-storey electronics supermarket, there are multiple aisles dedicated to converters of all varieties through the ages. From SCART to VGA, HDMI to DisplayPort and beyond. It is a chronological tapestry spanning from bygone decades to the cutting-edge. But even the term 'cutting-edge' seems a misnomer. We consumers rarely have such access. We are never quite at that edge. We, meek and wide-eyed, are perpetually behind the cusp, definitively at the very end of the supply chain. We buy goods that were speculated, designed, and manufactured for us, not with us nor by us.

It is not us who filled those shelves. It is the many lone imaginative entrepreneurs who’ve built empires defining decades of technological progress. Steve Jobs of Apple, Shigeru Miyamoto of Nintendo, Akio Morita of Sony, and many more individuals with vision and conviction have defined what objects lie in our living rooms, on our desks, and in our pockets. We are beholden to such companies, yet we seem to have little say in what they deem appropriate to manufacture. After all, we can only buy what’s actually for sale. And what's for sale is deliberated by those empowered few.

You may be thinking that this is fine; the incentives line up beautifully. The bold creators create, the buyers buy, supply meets demand, the market defines itself. Ad nauseam. However, superior technologies sometimes fade into obscurity while their less advanced siblings take the winnings. This was the story of Sony’s Betamax cassette format, Toshiba’s HD-DVD, Apple’s FireWire, Sega’s Dreamcast, Palm PDAs. The list goes on and on. All technically superior for the very brief time they tasted existence. Why are we left with the inferior tech? Better marketing? Better management? Probably. Competition is supposed to serve the consumer, but we are somehow still left with shoddy battery life in our phones, annoying latencies on wireless earbuds, webcams with awful quality, and home printers from the dark ages.

This, in a nutshell, is the story of consumer electronics. We gratefully assimilate into our everyday lives the best items we happen to find on the shelves. Software, however, is a different story. We are much less beholden to the limits set by tech giants. In part, this is thanks to new anti-trust laws preventing the likes of Microsoft and Apple from locking you into their products. But it is also thanks to the ballooning capability of web browsers over the last two decades, making it far easier to create applications with less code and more functionality. This democratization of technology has never really existed in the hardware world, at least not until the era of the Raspberry Pi. But even with that, it’s a simple matter of cost and materials. Lone hackers and creatives in their homes can't get their hands on the materials or machinery needed to pack minuscule transistors at nanometre scales. But software rarely struggles with such intimidating constraints. More and more individuals, with nothing but their laptops and phones, are learning to wield their many apps and computer literacy in savvier ways, subsuming themselves into the cloaks of the “power user”, “coder”, and “hacker.”

The latest addition to our abstract aisles in Yodobashi is that of the very capable – and menacing – Artificial Intelligence. A child of both extraordinary software and hardware, it’s not something we’ve really been able to “see and touch” until now. We’ve heard many dramatic tales, running the gamut of extinction, nuclear war, cyber-espionage, and more. But when we sit down with this supposed menace, tap a few sentences into it, and see what it has to say, our sci-fi doomerism finds itself without a nail to hit.

Cliché-ridden marketing material, kindergarten-level numeracy, avocados turned into sofas, and pictures of anatomically incorrect politicians; these aren’t exactly the four horsemen of the AI apocalypse.

Yet, even with such doubts and reports of disappointment, Generative AI is working its way into Fortune 500 boardrooms and teenagers’ devices alike. The latter wield it for fun, the former out of fear and confusion. For corporates, it’s a mad scramble out of the sticky mud, not in the hope of winning, but merely of surviving. Governments, too, are trembling. This entire episode is reminiscent of the mad rush of cryptocurrencies. Nobody seems to know what’s going on, but they are all shifting their pieces around to make damned sure they’re a winning stakeholder however the dust settles. If it all proves rather underwhelming, they won’t have lost much anyway.

But here's my question for those geared up for such disappointment: If this is all just a meaningless gold rush, and AI doom is a fanciful fiction, and the content that AI generates is mere child's play, then why are technologists and academics alike frothing with excitement and fear? Are they drunk on Kool-Aid, or just hoping their equity ticks up before they clock out? The latest social meme in San Francisco is to exchange each other's p(doom), that is: one's assessment of the probability (p) that AI will wipe out all of humanity. A rather severe social segue, even amongst the more libertarian elites.

I can understand their intensity, though. If you’ve spent any time around the people involved in these companies and on their fringes, you’ll agree that Large Language Models (LLMs), which have largely become synonymous with 'AI', are far more impressive than anyone in these labs and startups could have hoped for. The LLM was intended to be a noteworthy milestone, a nice progression from OpenAI’s early days of Dota-playing bots and AI-ethics punditry. But, instead, it was a scarily huge leap ahead towards a general-purpose AI that may yet bring about the next industrial revolution.

But what of the anecdotes of badly solved riddles and incorrectly rendered anatomies? Those will simply pass by as tiny curiosities in the tapestry, like the ghosts of burnt-in pixels on plasma screens from the year 2000.

There is broad agreement that, by carefully wielding the most capable LLMs and other neural networks, we'll arrive at something more-or-less in the remit of AGI: Artificial General Intelligence, a synthetic intellect not just mirroring but transcending human cognition. It’s either a mountain or a cliff that approaches us; we can’t yet see past the foggy horizon. Thankfully, we are not powerless hobbyists praying that the Apples and Samsungs of this world will grace us with what we will learn to desire. No. We are soon to be, and in some ways already are, the enablers and definers of our own artificial intelligences.

Right now, OpenAI holds the keys to massive capital and mindshare. Anthropic’s PhDs are busying themselves with alignment research. Microsoft is speeding ahead hand-in-hand with OpenAI. Google, meanwhile, pumps out money and fake marketing while twiddling its thumbs. But we, the billions of end-users, are at this moment more empowered than ever before. Not just as users but as creators too. With just a few components, programmers and artists around the world are, with furious excitement, developing new ways of wielding artificial intelligence every day. They do so limited only by their creativity and skill. And new AI models are being trained and released every week. Most recently, Mistral, a small open-source-minded outfit in France, released an LLM that can run on devices as small as our phones. It has the skills that ChatGPT had only a year ago, but in a much tinier package.

Governments and policymakers wait on the sidelines, watching with piercing stares a stream of new possibilities that they simply do not understand, and the mass citizenry are similarly worried, misinformed by scaremongers who idiotically imbue these algorithms with anthropomorphic agency and malevolence akin to Skynet. The policymakers wish to regulate AI more stringently than we do automobiles, keeping the roadways of information safe and well-ordered. It is a good intention that many will nod their heads to, but I ask you this: what if the internet were so guarded? What if, in its early inception, it had been immediately locked down? Imagine all the things we would have lost. AI will either be owned and regulated by a select few powers, or it will be an open space that benefits all.

To my friends in the tech sector, hobbyists, hackers, creators: we can boldly decide to make AI the rising tide that lifts all boats, or we can insularly crowd around technical intrigues and let our peers in the mainstream remain beholden to a select few godly Silicon Valley “pioneers”. And to all of us, whether creators or users: we must vote with our feet more than ever before. The policymakers who represent you and the market leaders who fill your shelves are, in the end, yours to influence. The next few decades, and perhaps centuries, depend on this singular point of inflection.