Yet Hodgkinson worries that researchers in the field will pay attention to the technique, rather than the science, when trying to reverse engineer why the trio won the prize this year. "What I hope this doesn't do is make researchers inappropriately use chatbots, by wrongly thinking that all AI tools are equivalent," he says.
The fear that this could happen is founded in the explosion of interest around other supposedly transformative technologies. "There's always hype cycles, recent ones being blockchain and graphene," says Hodgkinson. Following graphene's discovery in 2004, some 45,000 academic papers mentioning the material were published between 2005 and 2009, according to Google Scholar. But after Andre Geim and Konstantin Novoselov won the Nobel Prize for its discovery, the number of papers shot up to 454,000 between 2010 and 2014, and to more than a million between 2015 and 2020. This surge in research has arguably had only a modest real-world impact so far.
Hodgkinson believes the energizing effect of multiple researchers being recognized by the Nobel Prize panel for their work in AI could cause others to congregate around the field, which could result in science of variable quality. "Whether there's substance to the proposals and applications [of AI] is another matter," he says.
We've already seen the impact of media and public attention to AI on the academic community. The number of publications on AI tripled between 2010 and 2022, according to research by Stanford University, with nearly a quarter of a million papers published in 2022 alone: more than 660 new publications a day. And that was before the November 2022 release of ChatGPT kickstarted the generative AI revolution.
The extent to which academics are likely to follow the media attention, money, and Nobel Prize committee plaudits is a question that vexes Julian Togelius, an associate professor of computer science at New York University's Tandon School of Engineering who works on AI. "Scientists in general follow some combination of path of least resistance and most bang for their buck," he says. And given the competitive nature of academia, where funding is increasingly scarce and directly linked to researchers' job prospects, a trendy topic that, as of this week, can earn high achievers a Nobel Prize may simply be too tempting to resist.
The risk is that this could stymie innovative new thinking. "Getting more fundamental data out of nature, and coming up with new theories that humans can understand, are hard things to do," says Togelius. That kind of work requires deep thought. It's far more productive for researchers to instead run AI-enabled simulations that support existing theories and involve existing data, producing small hops forward in understanding rather than giant leaps. Togelius foresees that a new generation of scientists will end up doing exactly that, because it's easier.
There's also the risk that overconfident computer scientists, who have helped advance the field of AI, will see AI work being awarded Nobel Prizes in unrelated scientific fields (in this instance, physics and chemistry) and decide to follow suit, encroaching on other people's turf. "Computer scientists have a well-deserved reputation for sticking their noses into fields they know nothing about, injecting some algorithms, and calling it an advance, for better and/or worse," says Togelius. He admits to having previously been tempted to apply deep learning to another field of science and "advance" it, before thinking better of it because he doesn't know much about physics, biology, or geology.
