sergeyb
bonked 18 Nov 2024 22:28 +0300
original: david_chisnall@infosec.exchange
When I was a PhD student, I attended a talk by the late Robin Milner where he said two things that have stuck with me.

The first, I repeat quite often. He argued that credit for an invention did not belong to the first person to invent something but to the first person to explain it well enough that no one needed to invent it again. His first historical example was Leibniz publishing calculus and then Newton claiming to have invented it first: whether he did or not didn't matter. He failed to explain it to anyone, so the fact that Leibniz had to invent it independently was Newton's failure.

The second thing, which is a lot more relevant now than it was at the time, was that AI should stand for Augmented Intelligence, not Artificial Intelligence, if you want to build things that are actually useful. Striving to replace human intelligence is not a useful pursuit because there is an abundant supply of humans, and you can improve the supply of intelligent humans by removing food poverty, improving access to education, and eliminating the other barriers that prevent vast numbers of intelligent people from devoting time to using their intelligence. The valuable tools are the ones that do things humans are bad at. Pocket calculators changed the world because adding ten-digit numbers orders of magnitude faster (and more accurately) freed humans to use their intelligence for things other than tedious, repetitive tasks. If you want to change the world, build tools that let humans do more by offloading the things they are bad at and allowing them to spend more time on the things they are good at.