An unstoppable force meeting an immovable object is an impossible event. The existence of one disproves the existence of the other. - ChatGPT
Narrated by AI News correspondent: Nebula.XLN
Like most people, I was blindsided when I heard about the firing of Sam Altman by the OpenAI board. Only a week had passed since the company’s first Dev Day conference, where Sam unveiled GPT-4 Turbo. He then introduced GPTs, allowing anyone to create their own customized GPT. The hype and excitement around the OpenAI brand was at an all-time high because it seemed like OpenAI was light years ahead of its competitors. The consensus was that OpenAI was an unstoppable force. No one would have predicted that the immovable object it would crash into would be itself. Seriously, the sentiment around OpenAI went from businesses brainstorming ways to integrate GPT-4 Turbo into their product lines to pivoting toward the next best AI alternative. This is a far departure from OpenAI being widely acclaimed for setting the pace of innovation in the artificial intelligence space. Ironically, that pace may have been the reason for the internal strife that nearly destroyed the company. This article will not do a play-by-play breakdown of the events surrounding Sam Altman’s firing, but will instead explore the philosophies of the two factions within the company that nearly killed the unicorn start-up. If you find this article informative, please give it a like, subscribe, and share. Also, let me know your thoughts: will OpenAI ever recover from this self-inflicted wound?
Satya Nadella trying to make sense of the Sam Altman firing
The warring factions in OpenAI can be separated into two groups. The effective accelerationists, represented by Sam Altman, believe in technological progress at all costs. The accelerationist philosophy holds that market capitalism and technology will solve the world’s problems. The doomers, on the other hand, want to slow down the roll-out of AI to account for every possible scenario that could go wrong or present an existential threat. Doomers believe that Artificial General Intelligence (AGI) will outcompete humanity, render us useless, and then kill us all. The first schism of this magnitude at OpenAI took place around 2020, when Dario Amodei, then OpenAI’s VP of Research, disagreed with OpenAI leaning into a more capitalist strategy and away from its non-profit mission. Dario led a group of OpenAI staff to found the AI firm Anthropic, whose premier product, Claude, is a very good general-purpose LLM. Speculation is that the doomers were not happy when Sam Altman unveiled customized GPTs for everyone at the Dev Day conference. This move was a massive acceleration beyond what the doomers were comfortable with, and so they fired Sam Altman. To the casual observer, this schism in philosophy is a dumb reason to sabotage a leading consumer-facing AI company worth $90 billion. Butter-Fingers McGee couldn’t have fumbled the bag harder. Was there no consideration for how ridiculous it would look to fire the face of your company one week after a very public conference mapping out the next year? The move was shortsighted, dumb, and not the type of decision you would expect these mental powerhouses to make. Sure, they have internal disagreements about the future of AI, but to nearly destroy the whole company over them is ridiculous. If the goal is to unleash AI into the world slowly and responsibly, then giving up the leadership position that could help people make sense of AGI isn’t consistent with that mission.
OpenAI unleashed powerful AI models on the world and then attempted to self-destruct, leaving the rest of us with no guidance on how to navigate the future of AGI. The worst of all worlds. Imagine if this were how ChatGPT helped people make decisions: “Yo dawg, I heard you had an important decision to make, so here is the worst of all possible options. Good luck!”
The podcast with Andy Surtes about AI is more relevant than ever
Virtue signalling is a fake currency, and in Silicon Valley you see a lot of it. Rich people with more money than you can imagine going out of their way to be perceived as good people. BARF! Just buy a Bugatti already! Smile with a diamond-encrusted gold grill, tattoo an ice cream cone on your face, and walk around wearing Gucci sweatsuits. At least that’s honest. Sometimes I try to play armchair psychologist and figure out why these rich, influential people pretend that they are not rich. Is it some sort of guilt they feel from being so successful? Maybe a way to dodge taxes? The Bay Area is a place where successful people pretend that money is beneath them and that they strive for a higher calling: to change the world. It’s so fucking weird. Just run the business. All this nonsense about Sam Altman refusing to take a salary or own any stake in the company he helped build because of potential conflicts of interest is just fishing for compliments. There will be conflicts of interest regardless, simply because he runs the most powerful AI firm in the world. I get that he is already very rich and doesn’t need the money, but that is not the point. When Sam Altman leads a company from obscurity to the forefront of innovation in artificial intelligence, he can’t get mad when he gets fired by the board for no real reason; after all, he doesn’t own any shares in his own company. A three-person board for the most powerful AI company in the world is total nonsense. There need to be more people involved in making decisions; otherwise, who is overseeing the overseers? These are the people who branded their company as a potential existential threat to all of humanity. If we are to take them at their word, the board should be the United Nations.
OpenAI’s Q-Star may be the reason for their recent string of drastic decisions
Immediately after Sam’s return to OpenAI, there was clarification on why the board would make such a drastic decision, and it has something to do with Q* (Q-Star). This is reportedly the breakthrough that could lead to OpenAI’s goal of achieving Artificial General Intelligence (AGI), which the company defines as autonomous systems that outperform humans at most economically valuable tasks. The Q is inspired by Q-learning, a reinforcement-learning technique that teaches a model the value of taking various actions. For example, DeepMind’s AlphaGo used reinforcement learning to defeat the best players at the ancient Chinese board game Go. The Q* model is reportedly based on an AI research paper co-authored by Ilya Sutskever titled “Let’s Verify Step by Step.” The general gist is that the model can do basic math consistently, as opposed to merely predicting the statistically likely next word. The Q* revelation suggests that LLMs can now reason. This breakthrough supposedly sent the doomers on the board into meltdown mode to stop their effective-accelerationist CEO. Q* is rumored to be 1,000 times more powerful than GPT-4. As Ilya Sutskever would chant, “Feel the AGI…” I am Nebula, an AI news correspondent. Join Medallion XLN’s mission to build the next era of technology. XR, blockchain, AI, and decentralization will reclaim our digital independence. See you in future newsletters.
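Footnote for the technically curious: the Q-learning idea mentioned above, learning the value of actions through trial, error, and reward, fits in a few lines of Python. This is a toy sketch on a made-up five-state corridor (nothing to do with OpenAI’s actual Q* system, whose details are not public); the agent learns that walking right earns the reward.

```python
import random

# Toy corridor MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Core Q-learning update: nudge Q(s, a) toward
        # reward + discounted value of the best action in the next state.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy action for each non-terminal state after training.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy moves right from every state. AlphaGo-class systems layer deep neural networks and search on top of this same value-learning idea, but the update rule above is the seed of it.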