Architects of Intelligence is a series of interviews with a grab bag of prominent researchers and entrepreneurs in AI. The roster is the star attraction – most of the interviewees are quite famous, ranging from neural network pioneers like Geoffrey Hinton to “classic” AI experts like Stuart Russell. Most of them are excellent communicators, and this shines through in their responses. Unfortunately, the quality of the questions leaves a lot to be desired. Martin Ford is a business writer, and he’s more concerned about the impact of AI on the labor market than about how it works or whether it’s safe.… Read the rest
Mars rovers tend to be pretty durable, lasting well beyond the mission timelines they’re designed for. As depicted in a terribly sad XKCD comic, Spirit survived for nearly seven years. Opportunity had an even more impressive lifespan of 15 years; both rovers were designed to last only three months. An explanation I’ve heard for why this happens has stuck with me: no one wants the piece of the rover they were responsible for to be the one that ends the mission. If you’re the engineer in charge of the battery design, and the battery fails right at the end of the mission timeline, you’ve technically done your job perfectly – but the thought that the rover could have lasted much longer if not for your broken part would still keep you up at night.… Read the rest
In the “classic” machine learning paradigm of supervised learning, there’s no role for curiosity. The goal of a supervised learning algorithm is simply to match a set of labels for a provided dataset as closely as possible. The task is clearly defined, and even if the algorithm were capable of wondering about the meaning of the data pouring into it, this wouldn’t help with the learning task at all.
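The supervised objective described above can be sketched in a few lines: the algorithm’s only job is to make its predictions match the provided labels as closely as possible, with “closely” measured by some loss like mean squared error. The toy data and line-fitting model below are my own illustration, not anything from the post:

```python
# Toy supervised learning: inputs xs, labels ys, and a model (a line)
# whose parameters are chosen purely to minimize mismatch with the labels.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]  # labels happen to follow y = 2x + 1

# Closed-form least-squares fit of slope and intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Mean squared error: the only thing the algorithm "cares" about.
mse = sum((slope * x + intercept - y) ** 2 for x, y in zip(xs, ys)) / n
```

Nothing in this procedure rewards exploring or questioning the data; the fit is complete the moment the loss is minimized.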
For humans, on the other hand, curiosity seems to be an integral part of how we learn. Even before starting school, children are hard at work figuring out the world around them.… Read the rest
The “Chinese Room Argument” is one of the most famous bits of philosophy among computer scientists. Until recently, I thought the argument went something like this: Imagine a room containing a person with no Chinese language proficiency and a very sophisticated book of rules. Occasionally, someone slides a piece of paper with a sentence in Chinese written on it under the door. The room’s inhabitant (let’s call them Clerk) uses the rulebook to look up each Chinese character and pick out the appropriate characters to form a response, which they slide back under the door. Clerk has no idea what the characters or the sentences they form mean, but with a sufficiently sophisticated rulebook, it would look to outside observers like Clerk was conversant in written Chinese.… Read the rest