OpenAI's New Fellowship: Safety with a Side of Skepticism
OpenAI launches a fellowship program focused on AI safety just as its CEO faces scrutiny. The program offers generous resources, but the timing raises questions.
OpenAI has unveiled a new fellowship program emphasizing safety in artificial intelligence. The announcement came soon after a controversial profile of CEO Sam Altman raised concerns about his commitment to AI safety, making the timing intriguing, to say the least.
The Fellowship Details
Set to run from September 14, 2026, through February 5, 2027, OpenAI's fellowship promises a hefty $15,000 per month in compute resources for its participants. On top of that, fellows will receive a weekly stipend of $3,850, which annualizes to north of $200,000. In total, participants are looking at over $111,000 over the course of the program. That's not too shabby for a few months of work, especially considering the prestige of working with one of the leading AI labs.
But here's where it gets interesting. The announcement came shortly after The New Yorker published a revealing piece about Altman, questioning his reliability and OpenAI's handling of safety concerns. The real question is whether this fellowship is a genuine effort to bolster safety or merely a PR move to counter recent negative press.
What's at Stake?
The questions around AI safety aren't trivial. OpenAI previously disbanded a "superalignment" team that was set to address essential issues like AI models potentially deceiving testers. Now, with this fellowship, OpenAI is inviting external experts to dig into areas critical to AI ethics and safety, such as robustness, scalable mitigations, and privacy-preserving methods.
OpenAI's initiative is strikingly similar to a program already in place at Anthropic, a rival AI firm co-founded by former OpenAI staffers, including Dario Amodei. Anthropic has also been in the news for weakening a core safety pledge. The competition between these companies is as much about perception as it is about innovation. Could this be a calculated move by OpenAI to keep pace with its competitors?
Why You Should Care
So, why does this matter to anyone outside the AI bubble? AI safety isn't just a tech issue; it's a societal one. These fellows are being groomed to tackle the ethical quandaries and potential hazards that could emerge as AI becomes more integrated into daily life. The precedent matters, too. As OpenAI and others pour resources into safety, they're acknowledging the stakes involved. But is it enough? Are these programs genuinely effective, or just window dressing to appease critics?
The fellowship program might just be what OpenAI needs to demonstrate its commitment to ethical AI development, or it might not. The real test is whether these efforts lead to tangible improvements in AI safety. In a rapidly evolving tech landscape, actions speak louder than words.
Key Terms Explained
AI safety: The broad field studying how to build AI systems that are safe, reliable, and beneficial.
Anthropic: An AI safety company founded in 2021 by former OpenAI researchers, including Dario and Daniela Amodei.
Artificial intelligence: The science of creating machines that can perform tasks requiring human-like intelligence — reasoning, learning, perception, language understanding, and decision-making.
Compute: The processing power needed to train and run AI models.