According to market-fixated tech pundits and professional skeptics, the artificial intelligence bubble has popped, and winter's back. Fei-Fei Li isn't buying that. In fact, Li, who earned the sobriquet "the godmother of AI," is betting on the contrary. She's on a part-time leave from Stanford University to cofound a company called World Labs. While current generative AI is language-based, she sees a frontier where systems construct complete worlds with the physics, logic, and rich detail of our physical reality. It's an ambitious goal, and despite the dreary nabobs who say progress in AI has hit a grim plateau, World Labs is on the funding fast track. The startup is perhaps a year away from having a product, and it's not at all clear how well it will work when and if it does arrive, but investors have pitched in $230 million and are reportedly valuing the nascent startup at a billion dollars.
Roughly a decade ago, Li helped AI turn a corner by creating ImageNet, a bespoke database of digital images that allowed neural nets to get significantly smarter. She feels that today's deep-learning models need a similar boost if AI is to create actual worlds, whether they're realistic simulations or totally imagined universes. Future George R.R. Martins might compose their dreamed-up worlds as prompts instead of prose, worlds a reader could then render and wander around in. "The physical world for computers is seen through cameras, and the computer brain behind the cameras," Li says. "Turning that vision into reasoning, generation, and eventual interaction involves understanding the physical structure, the physical dynamics of the physical world. And that technology is called spatial intelligence." World Labs calls itself a spatial intelligence company, and its fate will help determine whether that term becomes a revolution or a punch line.
Li has been obsessing over spatial intelligence for years. While everyone was going gaga over ChatGPT, she and a former student, Justin Johnson, were excitedly gabbing in phone calls about AI's next iteration. "The next decade will be about generating new content that takes computer vision, deep learning, and AI out of the internet world, and gets them embedded in space and time," says Johnson, who is now an assistant professor at the University of Michigan.
Li decided to start a company early in 2023, after a dinner with Martin Casado, a pioneer in virtual networking who is now a partner at Andreessen Horowitz. That's the VC firm notorious for its near-messianic embrace of AI. Casado sees AI as being on a path similar to that of computer games, which started with text, moved to 2D graphics, and now have dazzling 3D imagery. Spatial intelligence will drive the change. "Eventually," he says, "you could take your favorite book, throw it into a model, and then you literally step into it and watch it play out in real time, in an immersive way." The first step to making that happen, Casado and Li agreed, is moving from large language models to large world models.
Li began assembling a team, with Johnson as a cofounder. Casado suggested two more people. One was Christoph Lassner, who had worked at Amazon, Meta's Reality Labs, and Epic Games. He is the inventor of Pulsar, a rendering scheme that led to a celebrated technique called 3D Gaussian Splatting. That sounds like an indie band at an MIT toga party, but it's actually a way to synthesize entire scenes, as opposed to one-off objects. Casado's other suggestion was Ben Mildenhall, who had created a powerful technique called NeRF (neural radiance fields) that transmogrifies 2D pixel images into 3D graphics. "We took real-world objects into VR and made them look perfectly real," Mildenhall says. He left his post as a senior research scientist at Google to join Li's team.
One obvious goal of a large world model would be imbuing, well, world-sense into robots. That is indeed in World Labs' plan, but not for a while. The first phase is building a model with a deep understanding of three-dimensionality, physicality, and notions of space and time. Next will come a phase where the models support augmented reality. After that, the company can take on robotics. If this vision is fulfilled, large world models will improve autonomous cars, automated factories, and maybe even humanoid robots.
