It was a pretty lively summer for me, as I was fortunate enough to be invited to a few festivals and talks, all focused on AI. I wanted to share some thoughts and photos from each event to document the fun I had and the wonderful people I met.
HowTheLightGetsIn Hay Festival
The Institute of Art and Ideas hosts the world’s largest philosophy and music festival twice a year; it was a huge honor to be invited to their London venue back in 2023, and a pleasure to return to the stage this May at their Hay-on-Wye location. The scale of the festival is impressive and not at all captured in the video footage of the events, with many parallel debates, lectures, and other special engagements happening simultaneously in tents across the festival grounds.
My debate panel this year was a fiery one, a stark contrast to the tame one I had in 2023. I was joined by Janne Teller and Yanis Varoufakis to discuss the role of technology in autonomy and privacy. Yanis interrupted my opening remarks, and applause rained down from the audience to reinforce his dissenting message. It was a largely tech-fearful gathering, with the other panelists and audience members concerned about the data harvesting performed by Big Tech and its ability to influence our decision-making. As the lone voice from a large tech company, I was perpetually in defense mode and received none of the applause that the others did. Oh well – at least others told me afterwards that I held my own.
The festival also hosts a limited number of small group discussions for a more intimate setting, and I very much enjoyed being able to lead one about the current state of AI. As a mathematician, I am well aware of how much effort it takes to get the public interested in the beauty of mathematics. By contrast, one of the rewarding aspects of being in the field of AI today is that it’s on everyone’s mind, sparking genuine curiosity from people arriving with questions and a real desire to know more.
East Anglia Festival
A few weeks after the HowTheLightGetsIn Festival, I was on a debate panel with mathematician Marcus du Sautoy and human rights lawyer Susie Alegre at the East Anglia Festival, held at the stunning and historic Hedingham Castle estate. Provocatively titled “AI: Scourge or Salvation?”, the session was another opportunity to have a thoughtful discussion with the public about both the risks and promises of artificial intelligence.
Our panel had even more fireworks than the one in Hay. Susie and I got into a bit of a kerfuffle right off the bat over the topic of copyright infringement, which nearly derailed our main focus on artificial intelligence. It was an interesting contrast of personalities: Marcus and Susie were older and more vocal, while I was the younger, calmer speaker. Fortunately, I did receive applause from the audience this time, particularly during the Q&A, where I offered a concluding line that resonated with many: artificial intelligence is a supplement to, not a substitute for, critical thinking.
Cohere Summer School
On July 11, I had the privilege of giving a virtual talk at the Cohere Labs Open Science Community Summer School. This learning initiative brings together leading minds in machine learning to foster open collaboration and make advanced ML research accessible to a global community. I presented my paper “Understanding Transformers via N-gram Statistics,” which was published at NeurIPS 2024. The talk explored how a more traditional statistical lens can shed light on the inner workings of modern transformer models.
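For readers curious what that kind of statistical lens can look like, here is a loose, self-contained sketch of the flavor of comparison involved: gather N-gram continuation statistics from a corpus and measure how far a model’s next-token distribution sits from the simple N-gram prediction. To be clear, this is my own toy illustration, not the paper’s actual pipeline – the corpus, the function names, the stand-in model distribution, and the choice of total variation distance are all assumptions made for the example.

```python
# Toy illustration (not the paper's exact procedure): compare a model's
# next-token distribution against simple N-gram statistics from a corpus.
from collections import Counter, defaultdict

def ngram_continuations(tokens, n=3):
    """Map each (n-1)-token context to a Counter of observed next tokens."""
    table = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i : i + n - 1])
        table[context][tokens[i + n - 1]] += 1
    return table

def to_distribution(counter):
    """Normalize raw counts into a probability distribution."""
    total = sum(counter.values())
    return {tok: c / total for tok, c in counter.items()}

def total_variation(p, q):
    """Total variation distance between two next-token distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in support)

# Toy corpus and a hypothetical model distribution for the context ("the", "cat").
corpus = "the cat sat on the mat the cat ate the fish".split()
table = ngram_continuations(corpus, n=3)
ngram_dist = to_distribution(table[("the", "cat")])
model_dist = {"sat": 0.6, "ate": 0.3, "slept": 0.1}  # stand-in for an LLM's output

print(ngram_dist)                               # {'sat': 0.5, 'ate': 0.5}
print(total_variation(ngram_dist, model_dist))  # 0.2
```

A small distance here would indicate that, for this context, the model’s prediction is well described by a simple N-gram rule; larger distances flag places where the model is doing something the raw corpus statistics don’t capture.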
Machine Learning Street Talk Social
A few days later, Tim Scarfe, a good friend and host of the Machine Learning Street Talk podcast, organized the podcast’s first meetup event here in London, and it was an honor to be invited as a speaker along with Max Bartolo (Cohere), Enzo Blindow (Prolific), and Mark Bishop (professor of cognitive computing). Given the informal setting, I gave a no-slides talk on my “Understanding Transformers via N-gram Statistics” paper, which I think worked well because it kept the discussion high level without getting bogged down in details.
My tendency to get to the bottom of things often leads me into lively exchanges on panels, and this time was no exception. I found myself in a prolonged discussion with Mark Bishop, who was quite pessimistic about the capabilities of large language models. Drawing on his expertise in theory of mind, he adamantly claimed that LLMs do not understand anything – at least not according to a proper interpretation of the word “understand”. While Mark has clearly spent much more time thinking about this issue than I have, I found his remarks overly dismissive, and we did not see eye-to-eye.
However, a fruitful outcome of our discussion was his suggestion that I read John Searle’s original Chinese Room argument paper. Though I was familiar with the argument from its prominence in scientific and philosophical circles, I had never read the paper myself. I’m glad to have now done so, and I can report that it has profoundly influenced my thinking – but the details of that will be for another debate or blog post.