Then there's Eric Chong, a 37-year-old with a background in dentistry who previously cofounded a startup that simplifies medical billing for dentists. He was placed on the "machine" team.
"I'm gonna be honest and say I'm extremely relieved to be on the machine team," Chong says.
At the hackathon, Chong was building software that uses voice and face recognition to detect autism. Of course, my first question was: Wouldn't there be a wealth of issues with this, like biased data leading to false positives?
"Short answer, yes," Chong says. "I think that there are some false positives that may come out, but I think that with voice and with facial expression, I think we could actually improve the accuracy of early detection."
The AGI "Tacover"
The coworking space, like many AI-related things in San Francisco, has ties to effective altruism.
If you're not familiar with the movement beyond the bombshell fraud headlines, it seeks to maximize the good that can be done with participants' time, money, and resources. The day after this event, the space hosted a discussion about how to leverage YouTube "to communicate important ideas like why people should eat less meat."
On the fourth floor of the building, flyers covered the walls. One, "AI 2027: Will AGI Tacover," advertises a taco party that recently passed; another, titled "Pro-Animal Coworking," provides no other context.
A half hour before the submission deadline, coders munched vegan meatball subs from Ike's and rushed to finish their projects. One floor down, the judges started to arrive: Brian Fioca and Shyamal Hitesh Anadkat from OpenAI's Applied AI team, Marius Buleandra from Anthropic's Applied AI team, and Varin Nair, an engineer from the AI startup Factory (which also cohosted the event).
As the judging kicked off, a member of the METR team, Nate Rush, showed me an Excel table that tracked contestant scores, with AI-powered groups colored green and human-only projects colored red. Each group moved up and down the list as the judges entered their decisions. "Do you see it?" he asked me. No, I didn't: the mishmash of colors showed no clear winner even half an hour into the judging. That was his point. Much to everyone's surprise, man versus machine was a close race.
Show Time
In the end, the finalists were evenly split: three from the "man" side and three from the "machine." After each demo, the crowd was asked to raise their hands and guess whether the team had used AI.
First up was ViewSense, a tool designed to help visually impaired people navigate their surroundings by transcribing live video feeds into text for a screen reader to read aloud. Given the short build time, it was technically impressive, and 60 percent of the room (by the emcee's count) believed it used AI. It didn't.
Next was a team that built a platform for designing websites with pen and paper, using a camera to track sketches in real time, with no AI involved in the coding process. The pianist project advanced to the finals with a system that let users upload piano sessions for AI-generated feedback; it was on the machine side. Another team showcased a tool that generates heat maps of code changes: critical security issues show up in red, while routine edits appear in green. This one did use AI.