Book review: “Good economics for hard times”

Abhijit V. Banerjee and Esther Duflo won the 2019 Economics Nobel for their work with experiments such as randomized controlled trials (RCTs), so I was excited to read their well-reviewed book (also from 2019). RCTs are designed to tease apart correlation from causation, and I’ve always found it frustrating that economic policy (at least the way it’s presented by politicians) makes little reference to whether we have evidence that something actually works. To make good predictions about the effects of a particular policy, we need a causal understanding of the system we’re trying to improve, and experiments are the best tools we have to get that understanding.

It turns out that Good economics for hard times doesn’t focus on Banerjee and Duflo’s own research, but it is about what (if anything) we actually know works in economics. It stresses that among academic economists with strong reputations, there’s a surprising amount of agreement on many issues (even on things that non-economists think are contentious, like tariffs). This consensus has come about because of a growing emphasis on experimentation rather than purely theoretical economic models.

For example, classic models of supply and demand predict that where there is high demand for labor (and high wages accordingly), workers from other places will want to come and improve their salaries. Since this would increase the supply and depress wages again, the people who already live there have an incentive to prevent anyone else from entering. (This is academic speak for “they’re coming to take our jobs.”) But this clean picture just doesn’t hold up in the real world. Most people are very reluctant to move even if it would be good for them financially. Furthermore, it’s difficult for migrants to directly replace native workers for a variety of reasons, including things like language or cultural barriers, or a lack of local connections. “Natural experiments” (where random events create a natural control group to compare against) show that migration is generally beneficial for migrants and does little if any harm to natives. Listening to politicians would give you the impression that this is a controversial topic, but there is strong agreement among most academic economists!

Other topics get similar treatment, but a major underlying theme is the “stickiness” of real markets: people and firms consistently fail to do the “optimal” thing that economic models predict they should. This isn’t necessarily because they’re irrational; rather (surprise!) people care about things other than their income: family, health, culture, etc. matter, too. Another interesting point is that we have no idea what causes macroeconomic growth, so policies that claim to do this (e.g. by lowering taxes) are essentially snake oil.

Preferences and automation

Along with what is well established in economics, the authors are also clear about what is more speculative, and in the later chapters they delve into topics where the questions are just as important but the answers are murkier. Of particular interest to me were the sections on human preferences and the impact of automation on jobs and human well-being, since both are very relevant to AI and robotics.

Preferences (chapter four) play an important role in economic models, where they’re typically assumed to be stable (not changing over time) and coherent (not contradictory). We know this assumption doesn’t hold in real life. For example, Thinking, fast and slow by Daniel Kahneman famously documents all sorts of experiments where people turn out to have changeable and/or inconsistent preferences. Banerjee and Duflo make the somewhat odd choice to focus the chapter exclusively on one particularly nasty kind of preference: prejudices against groups of people. They argue that because prejudices can be strengthened or weakened by specific factors like media diet or social contact, they don’t fit the stable-and-coherent-preference mold. Therefore, we should focus on designing interventions to reduce prejudices rather than building them into our models as unchangeable facts of life.

I was a bit disappointed with the narrow focus in this chapter. Maybe we can all agree that reducing prejudice is good, but what about other kinds of preferences? In economic models, social welfare is defined in terms of maximizing everyone’s preferences. This is all well and good when our preferences are stable, but if the models allow preferences to change, then changing people’s preferences so that they’re easier to satisfy becomes a perfectly valid way to increase social welfare. If most people want democracy, but an authoritarian government educates them to prefer authoritarianism, are they worse off? If this is bad, why are interventions aimed at reducing prejudiced preferences okay? These kinds of questions are only becoming more urgent as social media recommendation and advertising algorithms have increasing influence over our preferences. To the extent that economics has something useful to add to the discussion, it’s not covered in this book.

Chapter seven focuses on automation. As in other things I’ve read, the authors use the economics of the past to draw lessons for a future in which an increasing proportion of current jobs may be automated. But whereas most people point to the fact that jobs lost to automation in the past were replaced with new (and often better) ones enabled by technological advancements, Banerjee and Duflo focus instead on negative impacts on blue-collar workers and rising income inequality. After the Industrial Revolution in the UK, real blue-collar wages dropped and didn’t recover until 1820, over 60 years later. In the modern era, automation may be helping drive income inequality, which has been rising steadily for the past 40 years, especially in the US. The basic hypothesis is that when low-skilled workers lose their jobs to automation, the new jobs that become available won’t always help them: a warehouse worker can’t transition to programming the robot that replaced them overnight.1

However, there are other factors besides automation that could explain income inequality, such as tax policy and the growth of the finance industry in the US and UK. The available evidence doesn’t clearly support one explanation over the others, and academic economists are split on whether future automation will cause significant unemployment. In any case, since income inequality is already a problem, it makes sense to try to address it regardless of whether AI and robotics make things even worse in the coming years. The book’s last chapter examines policies like universal basic income and conditional cash transfers, with an emphasis on preserving people’s dignity in addition to their financial security.

Last thoughts

Overall, I found this book to be a very worthwhile read. I like that the authors are so transparent about where they are confident and where things are more speculative. They come across as genuinely wanting to help readers understand the landscape of ideas in economics that are most relevant to non-economists, rather than persuade them about any particular idea or position. I feel much better equipped to evaluate economic claims in the wild after reading this. I can’t think of a better way to conclude than Good economics for hard times itself does:

The only recourse we have against bad ideas is to be vigilant, resist the seduction of the “obvious,” be skeptical of promised miracles, question the evidence, be patient with complexity and honest about what we know and what we can know… The call to action is not just for academic economists–it is for all of us who want a better, saner, more humane world. Economics is too important to be left to economists.


Book review: “Architects of Intelligence”

Architects of Intelligence is a series of interviews with a grab bag of prominent researchers and entrepreneurs in AI. The roster is the star attraction – most of the interviewees are pretty famous, ranging from neural network pioneers like Geoffrey Hinton to “classic” AI experts like Stuart Russell. Most of them are excellent communicators, and this shines through in their responses. Unfortunately, the quality of the questions leaves a lot to be desired. Martin Ford is a business writer, and he’s more concerned with the impact of AI on the labor market than with how it works or whether it’s safe. I’m not saying that economics isn’t important, but asking Judea Pearl about whether robots will take our jobs is a bit like asking Anthony Fauci about the economic impact of people taking time off work to get their covid vaccines. It’s especially frustrating when the interviewee makes some interesting point about their research, and Ford completely drops the thread to ask about an unrelated topic outside their field of expertise.

So, you can probably find better interviews with the people featured in this book elsewhere. But in spite of this, I still found the book surprisingly valuable. Why? Because while each interview is unremarkable in isolation, the collection of them shows a fascinating diversity of viewpoints. Ford asks everyone a core set of questions, so we get to see the contrast between researchers in stark relief. Some of the experts are worried about the safety of increasingly sophisticated AI agents; others find the idea of safety concerns so ridiculous that they refuse to discuss it. Deep neural networks are sufficient for human-level AI, or need to be supplemented with some other techniques, or are on the wrong track entirely. AI will worsen inequality, or even the playing field, or neither (because humanity will merge with the AI and transcend these kinds of petty concerns). There’s no shortage of very hot takes. It’s impossible to come away from reading these interviews thinking that there’s consensus about much of anything in AI research.

I also found going through this book to be helpful in building up my mental model of what kinds of research are done by which labs and companies. This knowledge does accrue naturally as you go to talks and read papers, but among all the details it’s easy to lose sight of the big picture. I’ve found myself returning to my notes from Architects of Intelligence many times when I come across a new quote or idea from one of the interviewees, and each time I’ve been grateful for the extra context. Wikipedia is a fantastic resource, but it isn’t a substitute for having a curated set of the most interesting/relevant/useful things you’ve learned about someone. These interviews are a great source of interesting facts about prominent AI researchers and their views.

I hold out hope that someday, someone will put together a similar project with better interviews and more interesting questions. In the meantime, I recommend this book (I guess!).

What’s in a lifespan?

Mars rovers tend to be pretty durable, lasting long beyond the mission timelines they’re designed for. As depicted in a terribly sad XKCD comic, Spirit survived for nearly seven years. Opportunity had an even more impressive lifespan of 15 years; both rovers were designed to last only three months. An explanation I’ve heard for why this happens has stuck with me: no one wants to have the piece of the rover that they were responsible for be the one that ends the mission. If you’re the engineer in charge of the battery design, and the battery fails right at the end of the mission timeline, you’ve technically done your job perfectly – but the thought that the rover could have lasted much longer if not for your broken part would still keep you up at night. If enough engineers feel this way, the risk of rover death gets pushed far below the design spec, and the rover has a high chance of surviving a long time.

I was recently reminded of this story when I learned about an explanation for why we age.1 At first glance, the maximum human lifespan seems almost built-in. Even the healthiest people get weaker and more fragile with time, and the effects are clearly visible. The process appears “programmed”: something that happens with a specific cause and for a specific reason. Given humanity’s desire to cheat death, it’s not surprising that a lot of research goes into possible mechanisms of aging, such as telomere decay.2 That research helps us understand the causes of aging. But the reason why these mechanisms exist in the first place is less clear. People who live longer could reproduce more or use their wisdom and experience to increase the odds of survival of their children, just like Mars rovers with long lifespans get to keep doing science. Why wouldn’t the “blind watchmaker” of evolution, given similar incentives to NASA engineers, allow us to live indefinitely if we can avoid serious accidents or disease?

It turns out that a reasonable response to this question has been worked out. It goes like this: everyone is born with some random mutations in their genes. Over long time scales, evolution works because some of these mutations are helpful for survival and get passed along to the next generation. But because our genomes have already been finely tuned by billions of years of natural selection, most random mutations are useless or even harmful. How strongly these bad mutations are selected against depends on when they have their harmful effect. If they kick in early in a person’s lifespan, that person is less likely to successfully reproduce, and the mutated gene will disappear. However, mutations that have a harmful effect later in life can still get passed on to future generations. The net effect of this is that we’re likely to have things go wrong as we age, simply because there isn’t as much evolutionary pressure to prevent them from happening.

This is only compounded by the fact that the longer your life, the more likely you are to run into trouble of one sort or another. If you have a deadly accident, it doesn’t much matter whether your genes could have supported another hundred years of healthy lifespan. This means that the higher the everyday risk of death is, the less valuable longer life is from an evolutionary perspective. A gene that helps a person survive only once they reach 120 has very little worth, because it’s so unlikely they’ll live that long – whether due to external causes, or simply because of the effects of other genes. As a result, many small harmful effects kick in as we get older and the selective pressure against them falters. Together, these many small internal factors add up to the single externally visible process of aging, to the point where it begins to look like there was a built-in lifespan all along.

If this is right, then neither humans nor Mars rovers have true “lifespans.” We’re both incredibly complex systems that can go wrong in many different ways. One day, one of those failures inevitably catches up with us – but it’s not because we’ve reached some designated end of the road. Individual genes, just like individual engineers, don’t “want” to be responsible for the last straw. But eventually, it’s almost certain that something else will have gone wrong – and as long as a given piece makes it to that point, it can feel secure in the knowledge of a job well done.

“Disney Dreamlight Valley” is Japanese immersion paradise

The language-learning intermediate plateau is tough. You’ve grown out of beginner-friendly resources, and you see diminishing returns from traditional studying because the more advanced grammar points and vocabulary are less frequently used. But at the same time, at this level you’re finally starting to see glimpses of the payoff from all that studying: for the first time you can sort of understand your favorite shows or books or whatever else motivated you to learn the language in the first place, and when this happens it’s magic!

At this point, you want to move beyond textbooks and study apps and find materials in your target language – podcasts, books, TV shows, video games, forums, etc. – and spend as much time with them as possible. But, at least for me, this is easier said than done. Getting the difficulty level right is tricky, of course. It’s also challenging to stay motivated, because using your target language is a lot of work. Ideally, choosing media you enjoy means you get invested in what’s happening, which helps push you through the extra challenge. But if you pick things you really love and force yourself to read/watch/play them in your target language, it can be frustrating because you’re missing out on a lot of the nuance that you’d be able to understand in your native language. On the other hand, if you go too far in the other direction and choose something that you’d never touch normally, it’s difficult to force yourself to spend much time with it.

I finally got past the plateau in French within the past year or so: I can now comfortably watch/read/listen without fear of missing too much or quickly becoming exhausted. I’m still struggling with Japanese. That’s where Disney Dreamlight Valley, a game released in early access last month, comes in. Gameplay-wise, it’s a pretty shameless Animal Crossing clone: you earn money, gather materials, and find, buy, or craft things to customize your character and village. The hook, of course, is that the villagers are all Disney and Pixar characters.

I’m not a Disney fanatic – to be honest, I’m lukewarm on most of the films I’ve seen – but playing this game in Japanese is the best thing that’s happened to my progress since I found the excellent Nihongo con Teppei / Nihongo con Teppei Z podcast. The simple and familiar (read: blatantly copied) gameplay makes the instructions easy to follow. Meanwhile, the cast of Disney characters is an endless source of varied and entertaining dialogue and questlines. The character variety is especially nice for Japanese, where word choice and sentence structure depend on gender, age, and personality. Furthermore, prior familiarity with these characters gives context that helps a lot with guessing the meanings of words I don’t know. The fantasy setting doesn’t hurt, either – a lot of the dialogue revolves around a small set of concepts like “night thorns” or “dream shards,” many of which are English loan words spelled out phonetically.

When I’ve read enough dialogue to set my head spinning, I can cool off by doing some mindless fishing or farming for a bit. This comes with a gentler form of immersion, where I pick up the names of fish or flowers simply by encountering them. And when I’m ready to brave walls of text again, there are always new requests from the villagers to take on. This loop, where I can effectively control the language difficulty in an intuitive way, is my favorite part of the game. Because of it, I’ve not only had no problem persuading myself to stick with Dreamlight Valley, I’m even finding it mildly addictive!

With the caveat that finding good practice materials is very subjective, I can solidly recommend this game for language immersion. In addition to Japanese (and of course English), it’s currently available in French, Italian, Spanish, German, and Mandarin (simplified characters).

Valentines

Inspired by the Planet Money podcast, I’d like to celebrate Valentine’s Day by giving out some virtual valentines to things I’m especially glad are real things that exist.

Water Bears

Whether or not tardigrades, or “water bears,” can really survive quantum entanglement, they still have a lot going for them: they’re insanely tough and oddly charismatic. So I was absolutely delighted to discover a wholesome puzzle game based on rehydrating tardigrades that are in their dried-out hibernation-like “tun” state. The puzzles themselves are only fine, but Water Bears is totally worth playing just for the tardigrades. They even make cute alien-ish purring sounds. The only thing preventing this game from reaching a perfect 10/10 rating is that I can’t pet the tardigrades.

How to Cook Everything by Mark Bittman

The ambitious title of this cookbook is, surprisingly, very nearly justified. You know how many recipes give long lists of highly specific ingredients, down to telling you what kind of salt to use or, heaven forbid, specifying that you need to use organic cranberries? This book is the antonym of that. Each recipe comes with several possible variations and notes about what you can change or substitute. If most recipe books treat cooking like following Ikea instructions, Bittman’s approach is more like teaching you how to paint or write programs: here’s the basic structure and technique, now go nuts. I’ve now made dozens of recipes from this book, and not only have I gotten a lot of delicious food, but I also think that working from this has made me a better cook. The vegetarian version is also excellent.

Effective Altruism: An Introduction podcast

Last year I discovered the 80,000 Hours podcast, which produces long, intense interviews with people working on how to do as much good as possible. Effective Altruism: An Introduction is a set of 10 episodes picked from the archive to help new listeners get up to speed with the most important ideas from the effective altruism community. I’ve found the quality and depth of the interviews in this series to be impressive, and learned about many fascinating questions and perspectives I’d never considered before. I’m glad that there are smart people thinking seriously about how to maximize the long-term welfare of humanity (and what doing so even means), and that I have the opportunity to learn from their insights.

Exhalation by Ted Chiang

This is a collection of short stories about robots and free will. A full year after reading it, I still remember several of the stories in great detail, and think about them regularly. They’re the kind of stories I would have liked to be able to write, if I were an author.

Integrated Chinese textbooks

These textbooks are used by the Mandarin classes I’ve been taking at Berkeley, and the books are a large part of the reason why I’ve stuck with it so long (I’m currently in the fourth semester). When I took French in high school, I was well into my second year before I read an entire paragraph-long story rather than random disjointed sentences. In contrast, the Integrated Chinese books center each lesson around a dialogue or short narrative about a recurring cast of characters. The dialogues come with videos that land squarely in so-bad-it’s-good territory. The reading and listening exercises are frequently humorous, with characters gossiping, complaining, and telling each other off. New vocabulary and grammar are used regularly in subsequent lessons, helping memorization come naturally and creating a feeling of progression and growing empowerment as you gain the ability to understand more complicated topics. The textbooks are aimed at classroom use, but if anyone used similar ideas in an app targeting independent learners, I think it would be incredibly effective.

A car-less person’s guide to hiking around Berkeley

Last week I was happy to have the opportunity to attend a research workshop on causality, hosted by the Simons Institute here at Berkeley. One of the other attendees asked for recommendations for things to do around the East Bay. Maybe because most of my time here has been under the influence of covid (but, honestly, I’m kind of just like this), I found myself rambling about different places I like to walk or hike around. Berkeley is a hiker’s paradise: hills overlooking the bay and San Francisco (with beautiful sunsets to boot), a huge diversity of plant life, and many miles of trails, all within 1.5 hours of the UC Berkeley campus on foot. I’d like to briefly mention a few of my favorite spots, just in case it’s useful to other students or visitors.

Tilden Park

Turtles in Jewel Lake

This is a hilly natural area to the northeast of the Berkeley campus with many hiking trails leading to great views, including Inspiration Point. There may be something to that name – I had an important (to me at least) research insight while walking around this area. When I mentioned this to a lab colleague, I learned that according to local legend, Nobel prize-winning psychologist Daniel Kahneman was also very fond of hiking in Tilden as a way to develop his ideas. If the potential for Nobel-quality research ideas isn’t enticing enough, I’m happy to inform you that Tilden’s Jewel Lake is home to some extremely cute native turtles. As a last bonus, if you don’t mind being surrounded by toddlers, there’s even a place where you can ride around on a working scale model steam train. (I have done this as a grown adult and yes it was worth it.)

Fire trails and Siesta Valley

The Berkeley fire trails start directly next to campus, just to the east of the California Memorial Stadium, so you can get away from car traffic and go on an excellent hike right after finishing classes or meetings. Although technically the Fire Trail is one specific trail that provides access for fire-fighting equipment, I’m really referring to a collection of trails in this area that can be accessed by starting here. Some areas are quite steep, but once you gain some altitude, the beautiful views come frequently along almost any route you might pick. I could give directions to some of my favorite spots, but I think exploring is a large part of the fun. These trails boast an impressive diversity of scenery along just a few miles, from lush forests and views of campus and the bay, to more arid spots looking eastward over grassy areas and wildflowers.

Indian Rock Park

This tiny park is dominated by the titular rock, which towers over nearby houses and provides one of the most easily accessible scenic views of the bay. Although there are now stairs carved into it, the rock played an important role in the history of modern rock climbing, and people still go bouldering there. A small crowd of locals can be found there nearly every evening around sunset, but the view is well worth seeing at any time of day. The rock’s name probably comes from the acorn grinding pits carved into it by the indigenous Ohlone people. (The sunset image at the top of this post is from a path just south of the park.)

Albany Bulb

Last but very much not least, the Albany Bulb is a vaguely bulb-shaped piece of land jutting into the San Francisco Bay north of the Berkeley Marina. It’s the site of a former construction-debris landfill that closed almost 40 years ago. This may not sound like a promising location for a park, but nature has reclaimed the piles of broken concrete and twisted rebar to make something new and strangely beautiful. There are large trees twisted by the wind and vibrant wildflowers. Lizards sun themselves on the concrete, and ground squirrels burrow around it. And because this is Berkeley, local artists have helped the reclaiming process too, by turning the debris into a kind of indie open-air art gallery with a continuously changing rotation of things to discover. There aren’t any explanatory plaques or price tags or curators – only individual artists taking a place that might have been ruined or abandoned, and choosing to celebrate it instead. Whenever I visit the Bulb, I can’t help but feel hopeful that humanity can learn from our past mistakes. You couldn’t ask for a better metaphor for what we might achieve by working alongside the natural world that is our home, and by recognizing and growing from the past instead of attempting to bury it. Whether you buy into my waxing poetic or not, I can’t recommend this place enough.


Curiosity in minds and machine learners

In the “classic” machine learning paradigm of supervised learning, there’s no role for curiosity. The goal of a supervised learning algorithm is simply to match a set of labels for a provided dataset as closely as possible. The task is clearly defined, and even if the algorithm was capable of wondering about the meaning of the data pouring into it, this wouldn’t help with the learning task at all.

For humans, on the other hand, curiosity seems to be an integral part of how we learn. Even before starting school, children are hard at work figuring out the world around them. They crave novelty and surprise, and delight in finding solutions to new challenges. Their choice of things to investigate or focus on is internally motivated: they target objects and skills that present just the right amount of challenge (too little is boring, too much is frustrating). Although adults might provide toys and encouragement, children seem to find exploring and gaining knowledge intrinsically rewarding. (This isn’t just limited to humans, either – for example, mice will endure an electric shock to explore a new environment.1) “Curiosity” is difficult to define precisely, but surely choosing to learn about something just for the sake of it gets at the heart of what it means to be curious.

It shouldn’t come as a surprise that humans and machine learning algorithms learn in very different ways. However, unlike supervised learning, there seems to be a role for curiosity in reinforcement learning (RL). In RL, an algorithm (also called an agent) must choose actions in an attempt to reach some particular goal state, where it receives a reward that allows it to learn which actions are beneficial. For simple goals, an RL agent may be able to stumble upon a solution simply by choosing random actions until it receives a reward for the first time. However, this strategy doesn’t work so well if the task demands a precise sequence of actions. For example, when playing a video game, many hazards may need to be avoided before any points can be scored. 

In these cases, the RL agent has no idea what to do to reach the goal, and it might try random actions for years without success. However, the failures provide important information about what kinds of states don’t lead to a reward. This suggests a new strategy: systematically trying as many new things as possible, while keeping track of what has been tried so far. Various algorithms have been proposed to do this, and when implemented, the RL agent purposefully seeks out new situations, looking very much as if it is curious about its environment. One of these new situations will eventually be the goal, and the RL agent will successfully learn to complete the task.
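One straightforward way to implement the “keep track of what has been tried” idea is a count-based exploration bonus that shrinks every time the agent revisits a state. Here is a minimal sketch in Python; the class name, bonus scale, and the use of hashing to turn observations into dictionary keys are illustrative assumptions rather than any specific published algorithm.

```python
from collections import defaultdict
import math

class CountBasedBonus:
    """Intrinsic reward that decays as the same state is visited repeatedly."""

    def __init__(self, scale=0.1):
        self.scale = scale
        self.visits = defaultdict(int)  # state key -> number of visits so far

    def __call__(self, state):
        key = hash(state)  # assumes states are hashable (or discretized first)
        self.visits[key] += 1
        # Rarely seen states earn a large bonus; familiar states earn almost nothing.
        return self.scale / math.sqrt(self.visits[key])

# In a generic RL loop, the agent would learn from the combined signal:
#   total_reward = environment_reward + bonus(observation)
```

Counting repeat visits like this breaks down when there are so many possible states that the agent essentially never sees the same one twice, which is where the prediction-based approach described next comes in.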

The simplest way to make a “curious” RL agent is to have it try to predict what is going to happen next, and give it an “exploration bonus” when this prediction fails. Intuitively, the agent can make good predictions about situations it has encountered before, but not about novel ones. Therefore, it receives the exploration bonus for learning how to achieve new kinds of states. How well does this behavior reflect human-like curiosity? Well, the exploration bonus causes the agent to seek out novelty, just like a curious human. However, this version of curiosity seems strangely passive: anything that’s unpredictable is equally rewarding to the RL agent, and it has no concept of some kinds of novelty being more interesting or relevant than others. This turns out to be a very practical concern, because the RL agent will seek out any source of unpredictability in its environment, such as random TV static.
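As a rough illustration, here is what that prediction-error bonus might look like in code. This is a sketch using PyTorch, assuming observations and actions arrive as flat vectors; real implementations typically make predictions in a learned feature space rather than over raw observations, but the basic loop is the same.

```python
import torch
import torch.nn as nn

class PredictionErrorCuriosity:
    """Exploration bonus = how badly a learned model predicts the next observation."""

    def __init__(self, obs_dim, act_dim, lr=1e-3):
        # Small forward model: (observation, action) -> predicted next observation.
        self.model = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
            nn.Linear(64, obs_dim),
        )
        self.opt = torch.optim.Adam(self.model.parameters(), lr=lr)

    def bonus(self, obs, action, next_obs):
        pred = self.model(torch.cat([obs, action], dim=-1))
        error = ((pred - next_obs) ** 2).mean()
        # Train the model so that familiar transitions stop being "surprising".
        self.opt.zero_grad()
        error.backward()
        self.opt.step()
        # The remaining prediction error is handed to the agent as extra reward.
        return error.item()
```

The TV-static failure mode is visible right in this sketch: a source of irreducible randomness can never be predicted, so its prediction error – and therefore its bonus – never goes away.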

Several methods proposed in recent years, such as Random Network Distillation (RND) and episodic curiosity, have solved the “TV static problem.” In fact, these kinds of approaches to exploration are so successful that they enabled RL agents to achieve superhuman performance on difficult Atari video games like Montezuma’s Revenge. However, they achieve this through algorithmic tricks that rule out randomness in the environment when calculating exploration bonuses. An RND agent still has no basis for deciding that some kinds of novelty are more important than others.2
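To give a sense of how these tricks work, here is a stripped-down sketch of the RND idea, again in PyTorch with illustrative network sizes (the real method adds observation and reward normalization, among other details). The bonus is the error in imitating a fixed, randomly initialized network, so the prediction target is a deterministic function of the current observation and unpredictable transitions no longer inflate the bonus.

```python
import torch
import torch.nn as nn

class RNDBonus:
    """Random Network Distillation: bonus = error imitating a frozen random network."""

    def __init__(self, obs_dim, feat_dim=32, lr=1e-3):
        def make_net():
            return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
        self.target = make_net()     # randomly initialized and never trained
        self.predictor = make_net()  # trained to match the target's outputs
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.opt = torch.optim.Adam(self.predictor.parameters(), lr=lr)

    def bonus(self, obs):
        with torch.no_grad():
            target_features = self.target(obs)
        error = ((self.predictor(obs) - target_features) ** 2).mean()
        self.opt.zero_grad()
        error.backward()
        self.opt.step()
        # The target depends only on the current observation, not on what happens
        # next, so the bonus rewards seeing new observations rather than rewarding
        # transitions that are merely unpredictable.
        return error.item()
```

Notice that every unfamiliar observation is treated the same way; nothing in this sketch (or in the full method) encodes which kinds of novelty are worth caring about.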

This is important because, unlike in video games, the real world is incredibly complex. Predicting the consequence of every action, especially in the presence of active humans and a dynamic environment, is intractable. Humans deal with this in part by having relatively narrow interests and things that matter to us, either because they are extrinsically rewarding (e.g. income, food), or because we simply get curious about specific things. Exploration and play, especially in children, are almost certainly tied to overall learning, development, and gaining mastery in interacting with the world. But children learn actively; they choose their own problems to solve, mysteries to explore, and questions to try to find answers to. They introduce their own structure into a world that resists simple rules and explanations.

A related idea in machine learning is that of automatic curricula, where algorithms successively choose learning tasks that are neither too easy nor too difficult in an attempt to smooth the learning process. Similarly, in active learning,3 algorithms seek out more information when they’re uncertain. But the truth is that we have very little idea why children (and humans in general) get curious about some things rather than others, even though this is probably crucial to how we learn to make sense of our highly complicated environments. Human curiosity has already proved a fruitful source of inspiration for machine learning; future insights from cognitive scientists about how and why we get curious are likely to propel further developments and even better algorithms.

A note of caution

As AI agents become more competent and have more influence over our lives, ensuring that their behavior is truly beneficial becomes increasingly important. Even when AI agents only optimize for a single human-provided objective, it has been well established that there could be serious unintended consequences (for example, since the AI has to be operational to complete its task, it’s incentivized to resist attempts to turn it off for almost any objective). From this view, the idea of AI agents actively exploring and choosing their own goals sounds especially risky.4 If this turns out to be helpful or even necessary for robust learning in complex environments, how could we ensure that the things an AI decides to attempt aren’t harmful? Learning more about how young children play and explore may help, but children operate within natural and social constraints (like being physically weaker than adults) that may not apply to robots and AI algorithms. Ensuring safety while supporting learning is an important topic for research.

Life is amazing: Fish-eating bat (Myotis vivesi)

Two Myotis vivesi bats roosting in a rock crevice

Bats make up around one-fifth of all mammal species, so it shouldn’t come as a surprise that they’ve adapted to just about every ecological niche you could think of. Even so, I think Myotis vivesi is something special. These little guys subsist almost entirely on seafood that they catch during long fishing expeditions on the open ocean. They have huge claws to grasp fish with, and long, stable wings that allow them to glide efficiently and carry heavy prey. During the day, they roost among the rocks and cliffs of islands in the Gulf of California. Since their land home is quite dry and they spend so much time at sea, they can even get by only on seawater! Given these traits (which sound more like they belong to a seabird than a bat), it’s maybe not so surprising that they can fly up to 50km out to sea in a single evening.1 They’re even comfortable floating on the water.

The fish-eating bat is classified as Vulnerable by the IUCN due to its limited habitat, but fortunately the population seems to be stable, so these lovable weirdos should be able to keep doing their thing for the foreseeable future.

Look at those claws! Photo credit: ginnical (CC BY-NC license).

Header image: Myotis vivesi in a rock crevice on Isla San Ildefonso. Photo credit: Winifred Frick (CC BY-NC license).

A common misconception about the Chinese Room Argument

The “Chinese Room Argument” is one of the most famous bits of philosophy among computer scientists. Until recently, I thought the argument went something like this: Imagine a room containing a person with no Chinese language proficiency and a very sophisticated book of rules. Occasionally, someone slides a piece of paper with a sentence in Chinese written on it under the door. The room’s inhabitant (let’s call them Clerk) uses the rulebook to look up each Chinese character and pick out the appropriate characters to form a response, which they slide back under the door. Clerk has no idea what the characters or the sentences they form mean, but with a sufficiently sophisticated rulebook, it would look to outside observers like Clerk was conversant in written Chinese. The conclusion of the argument was that even if we were to create an AI system that could pass a Turing test in Chinese (or any other language), that wouldn’t be sufficient to conclude that it actually understands Chinese. Understanding, here, means something like conscious awareness of what the Chinese characters mean and what is being said with them.

It turns out that this conclusion is quite different from what Searle, who originally proposed the Chinese Room thought experiment, intended.1 Searle wasn’t trying to argue that consciousness is difficult or impossible to detect in machines; he was arguing that it is impossible for a digital computer to be conscious at all.2 To understand why, consider this version of the thought experiment: someone sends me a program that they claim passes the Turing test. I take the assembly code for this program, print it into a giant manual, and shut myself up in an unused basement in Cory Hall. When a piece of paper with some writing on it is slid under the door, I use the manual to pick the responses, just as Clerk did before. In this fashion, I essentially become the computer running the program. But just like Clerk, I’m not conscious of the meaning of the sentences I receive as input or the reasoning behind selecting one output over another. This means (according to Searle) that a regular computer running this code also couldn’t be conscious of these things, even if it does pass the Turing test. Therefore, no matter how sophisticated a program is, the computer running it won’t achieve consciousness. Searle sums up this viewpoint by saying: “Symbol shuffling… does not give any access to the meanings of the symbols.”

Since humans are conscious and do have access to meanings, Searle believed that there is something special about the brain over and above any digital computer. He is commonly quoted as saying “brains cause minds” (i.e. the “software” running on the brain doesn’t create a mind, at least not by itself – something about the physical brain itself is critical). This stronger conclusion is unsurprisingly not widely accepted among AI researchers, who generally believe that a digital computer (perhaps a very powerful one) running the right kind of software could achieve understanding and consciousness. 

Most philosophers also seem to object to the Chinese Room Argument. One criticism of Searle’s argument is so well-known that it gets its own name: the “systems response.” This argument accepts Searle’s assumption that Clerk wouldn’t understand Chinese simply by manipulating symbols, but notes that we can’t logically conclude from this that the system that includes both Clerk and the rulebook doesn’t understand Chinese. Searle appears to struggle to take this objection seriously – how could a rulebook understand anything?3

The systems response seems a little less absurd when we consider just how sophisticated the rulebook would have to be to pass a serious Turing test. Imagine a human judge asks a computer to explain a bad joke. The computer might respond that explaining jokes ruins them, but when pressed, give an answer that covers the cultural context of the joke and why the punchline is supposed to be amusing.4 A rulebook that could exhibit this kind of behavior would have to be unimaginably complex! The problem with the Chinese Room Argument is that it invites us to imagine manipulating symbols according to a (say) dictionary-size rulebook, then extrapolate our intuition about this scenario to the wondrously complex software that would be needed to exhibit a human-like mastery of language. If you seriously consider just how far this extrapolation needs to go, it’s reasonable to entertain serious doubts as to whether the simple dictionary-rulebook case tells us anything at all about a program that passes the Turing test.

While the systems response and other criticisms make it difficult to take seriously Searle’s conclusion that brains must have a special “consciousness sauce” missing in digital computers, they also don’t establish the other extreme, namely that a Turing test-passing program really would understand language (or be conscious). Therefore, the conclusion I’m left with is quite similar to my original misunderstanding of the Chinese Room Argument: computers may or may not achieve consciousness someday, but knowing for sure whether a future computer thinks or understands may not be possible.