According to market-fixated tech pundits and professional skeptics, the artificial intelligence bubble has popped, and winter's back. Fei-Fei Li isn't buying that. In fact, Li, who earned the sobriquet the "godmother of AI," is betting on the contrary. She's on part-time leave from Stanford University to cofound a company called World Labs. While current generative AI is language-based, she sees a frontier where systems construct complete worlds with the physics, logic, and rich detail of our physical reality. It's an ambitious goal, and despite the dreary nabobs who say progress in AI has hit a grim plateau, World Labs is on the funding fast track. The startup is perhaps a year away from having a product, and it's not at all clear how well that product will work if and when it arrives, but investors have pitched in $230 million and are reportedly valuing the nascent startup at a billion dollars.
Roughly a decade ago, Li helped AI turn a corner by creating ImageNet, a bespoke database of digital images that allowed neural nets to get significantly smarter. She feels that today's deep-learning models need a similar boost if AI is to create actual worlds, whether they're realistic simulations or totally imagined universes. Future George R.R. Martins might compose their dreamed-up worlds as prompts instead of prose, and readers might then render those worlds and wander around in them. "The physical world for computers is seen through cameras, and the computer brain behind the cameras," Li says. "Turning that vision into reasoning, generation, and eventual interaction involves understanding the physical structure, the physical dynamics of the physical world. And that technology is called spatial intelligence." World Labs calls itself a spatial intelligence company, and its fate will help determine whether that term becomes a revolution or a punch line.
Li has been obsessing over spatial intelligence for years. While everyone was going gaga over ChatGPT, she and a former student, Justin Johnson, were excitedly gabbling in phone calls about AI's next iteration. "The next decade will be about generating new content that takes computer vision, deep learning, and AI out of the internet world, and gets them embedded in space and time," says Johnson, who is now an assistant professor at the University of Michigan.
Li decided to start a company early in 2023, after a dinner with Martin Casado, a pioneer in virtual networking who is now a partner at Andreessen Horowitz, the VC firm notorious for its near-messianic embrace of AI. Casado sees AI on a path similar to that of computer games, which started with text, moved to 2D graphics, and now boast dazzling 3D imagery. Spatial intelligence will drive the change. "Eventually, you could take your favorite book, throw it into a model, and then you literally step into it and watch it play out in real time, in an immersive way," he says. The first step to making that happen, Casado and Li agreed, is moving from large language models to large world models.
Li began assembling a team, with Johnson as a cofounder. Casado suggested two more people. One was Christoph Lassner, who had worked at Amazon, Meta's Reality Labs, and Epic Games, and who invented Pulsar, a rendering scheme that led to a celebrated technique called 3D Gaussian Splatting. That sounds like an indie band at an MIT toga party, but it's actually a way to synthesize entire scenes, as opposed to one-off objects. Casado's other suggestion was Ben Mildenhall, who had created a powerful technique called NeRF (neural radiance fields) that transmogrifies 2D pixel images into 3D graphics. "We took real-world objects into VR and made them look perfectly real," Mildenhall says. He left his post as a senior research scientist at Google to join Li's team.
One obvious goal of a large world model would be imbuing, well, world-sense into robots. That is indeed in World Labs' plan, but not for a while. The first phase is building a model with a deep understanding of three-dimensionality, physicality, and notions of space and time. Next will come a phase in which the models support augmented reality. After that, the company can take on robotics. If this vision is fulfilled, large world models will improve autonomous cars, automated factories, and maybe even humanoid robots.