Brave New Digital World E04

Chronicling Technologically-Turbulent Times: Signal Loss

S01E04: Sam and Satya Scanned Darkly

You’ve entered a Brave New Digital World: trust nothing.

The OpenAI debacle and the ramifications for ‘AI Safety’

Now that the recent tumult at OpenAI — the Bay Area Behemoth behind ChatGPT — has simmered down, we’ll take a moment this holiday season to sift through the aftermath. This episode examines the history of OpenAI and sheds light on the recent happenings at the fulcrum of commercial AI innovation, pondering the potential long-term ripple effects of recent events.

The saga at OpenAI in November 2023 wasn't just a series of executive reshuffles. It was a narrative of power struggles, employee activism, and strategic alliances, all playing out in the fast-evolving landscape of AI technology, with potentially profound implications for our future.

OpenAI: Some Historical Context

OpenAI was founded in December 2015 as a non-profit artificial intelligence research company. A cool billion dollars of startup capital came from a veritable ‘Who’s Who’ of Silicon Valley Technoligarchs, Money Mafiosos, AI experts and corporations, including Elon Musk, Sam Altman, Peter Thiel, Reid Hoffman, Jessica Livingston, Amazon Web Services and Infosys.

The purported aim was to promote and develop human-friendly AI (avoiding Skynet scenarios and suchlike), steered by altruistic and humanistic principles as much as - if not more than - capitalist instincts 🤔

The Shift to a "Capped-Profit" Model

In March 2019, OpenAI made a significant shift in its structure. It transitioned from a pure non-profit to a hybrid model, forming a new entity called OpenAI LP, which is a "capped-profit" company. This change was underpinned by the need to attract more capital to scale their work, compete with large tech companies, and accelerate their research in AI. OpenAI Inc., the original parent company, became the sole member of OpenAI LP, functioning as a non-profit and holding final decision-making authority.

Under this new structure, OpenAI Inc. operates as a kind of overseer, ensuring that OpenAI LP adheres to its mission of safe and widely distributed AI benefits. The capped-profit model means that returns to investors and employees are limited, ostensibly helping OpenAI to remain pure and focus on its humanist charter without the pressure of optimizing purely for financial return.
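To make the "capped-profit" mechanics concrete, here's a minimal sketch in Python. It assumes the widely reported 100x cap for first-round investors; the function name and all the dollar figures are purely illustrative, not OpenAI's actual deal terms.

```python
# Illustrative sketch of a "capped-profit" payout.
# Assumption: returns to an investor are capped at some multiple of
# their investment (reportedly 100x for first-round OpenAI backers);
# anything above the cap flows to the non-profit parent.

def capped_payout(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a gross return between an investor and the non-profit.

    Returns (investor_share, nonprofit_share).
    """
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)          # investor keeps up to the cap
    nonprofit_share = max(gross_return - cap, 0.0)   # any excess goes to the parent
    return investor_share, nonprofit_share

# A hypothetical $10M stake that somehow returns $5B: the investor
# keeps $1B (100x), and the remaining $4B accrues to the non-profit.
print(capped_payout(10e6, 5e9))  # (1000000000.0, 4000000000.0)
```

The point of the structure is visible in the arithmetic: below the cap it behaves like ordinary equity, and only astronomical returns trigger the redistribution clause.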

A brief detour into “AI Safety”

The topic of AI Safety is not new: back in the 1940s, author Isaac Asimov foresaw many of the ethical quandaries that humankind would have to grapple with once intelligent machines became a reality. Moreover, many scientists, philosophers and writers have opined on the risks inherent in creating an unconstrained artificial intelligence whose capabilities might far surpass our own. Such an intelligence could prove tricky to control, not least because it would be able to out-think its opponents, conceal its motives and operate at incredibly high speed compared to humans.

“Any fool can tell a crisis when it arrives. The real service to the state is to detect it in embryo.”

― Isaac Asimov, Foundation

Some possible ways a deviant AI could pose problems for humanity (src: https://en.wikipedia.org/wiki/AI_safety)

“Moreover, if we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes.”

— Norbert Wiener

Recently, the alarm over the potential dangers of unrestrained Artificial General Intelligence (AGI) has been sounded by leading figures in the AI community, including OpenAI's chief scientist, Ilya Sutskever. In an enlightening interview from October 2023, Sutskever offers a window into his worldview, particularly his deep-seated commitment to AI Safety. This conversation not only sheds light on the pressing concerns faced by those at the forefront of AI development but was also published at a pivotal moment in OpenAI's history, just before November’s upheaval.

2019… Enter Microsoft, aka ‘The Windows People’

In 2019, Microsoft, a tech giant with deep pockets, a history of monopolism and an active interest in the burgeoning field of AI, infused OpenAI with a billion-dollar investment, following up with several billion more in January 2023.

Former Microsoft CEO Bill Gates being deposed by US Government lawyers. Nice suit, Bill!

The Microsoft-OpenAI alliance isn't just a cash grab; it's a strategic masterstroke. OpenAI gets access to the Herculean computing might of Azure, essential for developing cutting-edge AI models including the evolution of GPT. Meanwhile, Microsoft, seemingly determined not to miss the AI boat like it did with mobile (not to mention totally missing the Internet before that), is hitching a ride straight to the forefront of AI innovation.

Microsoft laid the groundwork for the current generation of AI technology decades earlier with the launch of Clippy, their groundbreaking Office Assistant. Microsoft employees are notorious for referring to Clippy as the ‘Godfather of GPT’.

Yet, like any Hollywood power couple, the glossy veneer of unity might just have been a smokescreen for Machiavellian power plays simmering backstage. After all, in the world of tech titans, AI and trillions of dollars in potential market share, it's rarely just about holding hands in the spotlight.

Fast Forward to ‘The OpenAI Drama’ of November 2023

Friday November 17th 2023: The Expulsion of Saint Sam  

OpenAI's board gave CEO Sam Altman the boot with a vague nod to "lack of candor" while mumbling about a desire to protect OpenAI's humanity-benefiting mission​. As of the time of writing, the public is still none the wiser as to what precisely Sam did or didn’t do that caused the board to eject him.

It’s generally considered de rigueur in capitalist circles to stand back and let CEOs of wildly-successful companies get on with generating shareholder value. There’s even a proverb about geese laying golden eggs that applies in such situations. This move took everybody by surprise. OpenAI is ostensibly leading the way in AI tech, not to mention monetizing access to their API, ChatGPT and DALL-E products about as well as any capped-profit subsidiary of a non-profit part-owned by Microsoft can do.

Not only did the entire internet erupt in a communal ‘WTF?!’ moment, but company President Greg Brockman decided to walk, in solidarity with Altman. Meanwhile, the staff at OpenAI near-unanimously threatened to jump ship to Microsoft unless Altman was reinstated​.

“The hopeless don't revolt, because revolution is an act of hope.”

— Peter Kropotkin

Tuesday 21st November: The Plot Thickens

Altman, perhaps feeling like a protagonist in a Shakespearean play, mulls an offer from Microsoft to lead a new research lab, while chaos reigns supreme at OpenAI​ and over 730 staff sign an open letter threatening to quit, demanding the board resign and reinstate Altman as CEO.

Wednesday 22nd November: a Twist!

In a move that would make soap opera writers blush, Altman is suddenly back as CEO, just five days after his ouster​​​​. Internet wiseguys are swift to dub this “Speedrunning an entire decade of Steve Jobs’ life”, a reference to the time when, after losing a boardroom power struggle, Steve Jobs was ousted from Apple and founded his own startup, NeXT, which Apple eventually acquired, paving the way for Jobs to re-take the CEO position.

“The struggle is lost from the beginning, long before the victorious party or army conquers state power and ‘betrays’ its promises. It is lost once power itself seeps into the struggle, once the logic of power becomes the logic of the revolutionary process, once the negative of refusal is converted into the positive of power-building.”

― John Holloway

Note: Mira Murati and Emmett Shear briefly (extremely briefly!) served as interim CEOs between the Friday firing of Altman and his Wednesday reinstatement. Murati, OpenAI's CTO and a staunch advocate for AI safety, stepped in following Sam Altman's sudden dismissal but was quickly replaced by Emmett Shear, Twitch's co-founder. Executive Musical Chairs at expert-level pace.

The Aftermath: Money Talks, ‘AI Safety’ Walks

Satya Nadella, CEO of Microsoft, having played his hand masterfully, expresses enthusiasm for the new board, opining on a strengthened partnership with OpenAI​. (No shit, Satya… you’re the king now).

Wildly-popular CEO, Sam Altman — the approachable and humble face of AI — retains his CEO role, now largely unfettered by ethical considerations (though he plans to still keep them in mind).

Hypothetically speaking, if a superintelligent digital entity from the future were to construct and send back in time a beguiling yet avuncular humanoid to lure the world into a false sense of security about AI, said time-traveler would surely bear a striking resemblance to Saint Sam.

Thanks to Satya’s slick play, Microsoft is now in prime position to profit from an increasingly-commercialized version of OpenAI: we can expect them to pour in countless more billions to power the engines of innovation. MS are keen to point out, however, that they technically don’t own any part of OpenAI.

“The lady doth protest too much, methinks.”

― William Shakespeare, Hamlet

The new board is considerably less “AI Safety” focused, with Ilya, the gatekeeper of the humanist charter, losing his board seat. He retains his role as Chief Scientist but is apparently keeping an exceedingly low profile at OpenAI, with an uncertain future.

The board now consists of Bret Taylor (Chair of the board, former co-CEO of Salesforce), Larry Summers (former US Treasury Secretary during the Clinton administration and a seasoned economist) and Adam D’Angelo (CEO of Quora and former Facebook CTO who bears a striking resemblance to the protagonist of HBO’s Silicon Valley). D’Angelo remaining on the board confounded the expectations of many pundits who assumed he’d be toast.

Notably, Sam Altman and Greg Brockman, despite their much-heralded comeback — and despite retaining their titles (CEO and President, respectively) — have also been edged out of the boardroom, indicating a seismic shift in the corridors of power.

Critically, Microsoft also snagged a non-voting observer seat at the OpenAI board table​.

As OpenAI grapples with the classic trifecta of innovation, profitability, and ethical responsibility, its trajectory will be pivotal in shaping AI's societal integration. With financial interests now steering the ship, there's a palpable concern that the ethical principles foundational to OpenAI face gradual (or rapid) dilution. There’s nothing new here; as corporations grow up, they tend to shed their constraining, integrity-based baggage.

We are observing a profound metamorphosis. OpenAI, in its pivotal pupal stage, is undergoing significant transformation. Wrapped in a corporate cocoon, a subtle tension simmers beneath. Soon, this chrysalis will tremble and reveal its new form. From its inception, woven from ethical, humanistic principles, will emerge a creature of shadowed elegance and sophisticated ambiguity: its wings steeped in soulless hues. Fluttering not toward the light, it will perform an enigmatic nocturnal dance — a cryptic ballet — choreographed by unseen hands, undulating with the vibrations of the market.

“When, at last, I ceased to be myself, I came to be.”

— Kamand Kojouri

S01E04 Reading List:

First up, we have another hard-hitting, dystopian critique of the human condition in this AI-tinged period of the digital age. Dakara, author of the dark and foreboding Mind Prison blog, makes the case that technology - and particularly AI - is accelerating the development of a post-truth* civilization.

The essay makes compelling points and disturbing observations about humanity sleepwalking into an ethereal world of make-believe.

“Any given man sees only a tiny portion of the total truth, and very often, in fact almost perpetually, he deliberately deceives himself about that little precious fragment as well. A portion of him turns against him and acts like another person, defeating him from inside. A man inside a man. Which is no man at all.”

― Philip K. Dick, A Scanner Darkly

Here’s a source that corroborates one of the central tenets of Dakara’s piece above: namely that AI significantly enhances and scales the production of deceptive content. The article below shows examples of the Chinese state scaling ‘fake news’ to push their chosen narratives through crude (but instantly-generated) videos, but it would be prudent to assume that China is by no means an exception when it comes to below-the-line state broadcasting on the internet. In this case, the article author identified over 30 YouTube channels using AI-generated voice-overs and content to spread Chinese Communist Party narratives, including claims about technological advancements and infrastructure development. Given the inherent scalability of such automated content generation and the inherent lag in detecting and taking down such material, it’s safe to assume that these 30 channels are just the tip of the iceberg. #PostTruth is already with us.

“I knew nothing but shadows and I thought them to be real.”

― Oscar Wilde, The Picture of Dorian Gray

As if we haven’t heard quite enough about OpenAI for one episode, it would be remiss not to highlight that Microsoft is pouring in more cash to fund development of the next-gen GPT-5 model, while Sam Altman — perhaps channeling a Victorian carnival barker — hypes up the possibility of an impending Singularity Event.

“The hype cheapens the hyped, as right things are then made wrong by exaggeration.”

— Criss Jami

But wait! It’s OpenAI… again. This time, they’re getting sued by the New York Times in what could be a landmark case in the current wild west, where AI companies are sucking up as much data as they can get their hands on to feed their always-hungry models-in-training, while their in-house counsel cites fair-use doctrine to any complainants.

You can read the full legal filing (well worth a look!) right here (PDF).

Source: OpenAI DALL-E (Alanis Morissette would have a field day with this)

Switching track to hacking and infosec, a teenage member of the Lapsus$ hacking group has been sentenced to indefinite hospital detention by a British judge. Arion Kurtaj, who has severe autism, was convicted in August for his role in several high-profile hacks, including those targeting Rockstar Games, Uber and Revolut. Doctors deemed Kurtaj unfit to stand trial due to his mental health condition. The judge heard that a mental health assessment conducted on Kurtaj found him still planning to return to criminal hacking "as soon as possible."

This has been widely misreported as a ‘life sentence’… it’s not the same: he’ll be released at some point, presumably once doctors judge that he’s no longer intent on returning to hacking at the first opportunity.

“The inanity of her remark infuriated him. 'Good grief don't you understand Janet? At this point I'm thoroughly delusional. I'm as mentally ill as it's possible to be. It's incredible that I can communicate with you at all. It's a credit to my ego-strength that I'm not at this point totally autistic.”

― Philip K. Dick, The Simulacra

🚨🤓Nerd-Alert🤓🚨 : Only click this next story if - like me - you’re positively exhilarated by low-level code and zero-day chained exploits using undocumented registers. TL;DR: Update your iPhone software (not that it’ll stop a determined state actor). This is one bad-ass and thoroughly-scary hack.

“In a total surveillance state, complicity is much more likely than ignorance.”

― Cliff Jones Jr., Dreck

And finally to BioTech, where scientists have created a computer system that combines real human brain tissue with electronics. The system, called Brainoware (presumably a brand name generated by a very early version of the system), uses mini-brains (or at least networks of live brain cells, grown in a lab), and connects them to microelectrodes via artificial neural networks. It can do things like recognize speech and solve simple math problems (although not as comprehensively as regular computers, which are deterministically great at math). Oh and don't worry, it's all done ethically... for now.

*Fictional Terminology Guide:  Post-Truth (n.)

1.   A societal condition where the discernment of objective truth is obscured, and narratives based on fiction are often accepted as reality.
2.   An environment characterized by the confluence of factors such as virtue signaling on social media platforms, reality distortion through digital filters, the production of engaging yet potentially misleading media content, and various deceptive practices including scams, disinformation campaigns, social engineering tactics, and state-level propaganda.
3.   A phenomenon intensified by the rapid evolution of artificial intelligence technologies, leading to an increased blending of factual and fictional elements in public discourse.

Post-Credit: https://www.mindprison.cc/p/ai-accelerates-post-truth-civilization

“The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.”

— Arthur C. Clarke

Thanks for reading this far.

We’re about to enter a Brave New Digital Year.

See you in 2024!