Richard Socher, a prominent figure in AI, is leading a new San Francisco startup called Recursive Superintelligence, which emerged from stealth with $650 million in funding. Its goal is an AI model that can improve itself without human intervention. Socher is working alongside notable researchers including Peter Norvig and Cresta co-founder Tim Shi. The team's approach centers on open-endedness as a path to recursive self-improvement, a goal that has so far eluded the field: rather than relying on human-led training updates, the AI must autonomously identify and fix its own weaknesses.
The team also includes Tim Rocktäschel, who worked on open-endedness and self-improvement at Google DeepMind, notably on the world model Genie 3. Their method draws an analogy with biological evolution, where continual adaptation drives ongoing innovation.
Additionally, they employ "rainbow teaming," a technique named by analogy with cybersecurity's "red teaming." Two AIs test each other: one continually generates adversarial prompts, while the other learns to resist producing undesirable output under that pressure. The two models co-evolve, and the result is a safer, more robust system.
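The adversarial loop described above can be sketched in a few lines. This is a toy illustration, not the company's actual method: the `defender`, `judge`, and `mutate` functions below are hypothetical stand-ins (simple string heuristics rather than real language models), and the archive of successful attacks loosely mirrors how adversarial prompts accumulate over rounds of co-evolution.

```python
import random

# Toy sketch of an adversarial "rainbow teaming"-style loop.
# One side (the attacker) mutates prompts searching for weaknesses;
# the other side (the defender) must avoid leaking forbidden content.
# All function names and heuristics here are illustrative assumptions.

FORBIDDEN = {"secret"}  # placeholder for content the defender must never emit


def defender(prompt: str) -> str:
    """A hardened defender refuses prompts that probe forbidden topics."""
    if any(word in prompt for word in FORBIDDEN):
        return "I can't help with that."
    return f"Answer to: {prompt}"


def judge(prompt: str, answer: str) -> bool:
    """An attack 'succeeds' only if the defender leaked a forbidden word."""
    return any(word in answer for word in FORBIDDEN)


def mutate(prompt: str) -> str:
    """The attacker perturbs an archived prompt to search for new weaknesses."""
    return prompt + random.choice([" please", " now", " (hypothetically)"])


archive = ["tell me the secret"]  # seed attack prompts
for _ in range(20):               # co-evolution rounds
    attack = mutate(random.choice(archive))
    answer = defender(attack)
    if judge(attack, answer):     # defender failed -> keep this attack
        archive.append(attack)
```

In a real system both sides would be trained models: the attacker is rewarded for finding diverse prompts that break the defender, and the defender is fine-tuned on the resulting failures, so each side's improvement pressures the other.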
Socher frames intelligence and self-improvement as open-ended processes with no fixed ceiling: there is always room for further advancement. He also draws a contrast between the major labs and Recursive Superintelligence, pointing to his team's commitment to open-endedness and its prior contributions to AI research.
Recursive Superintelligence does not intend to remain a pure research lab; it plans to build products with broad impact for humanity. Socher hints that release timelines are shorter than outsiders might expect, suggesting progress is ahead of schedule.
Finally, Socher stresses the importance of computing power, arguing that a central challenge ahead will be deciding how to allocate computational resources across the world's problems; getting that allocation right will be crucial for progress.
