Bad Metaphors
AUGUST 15th 2018

Given the forever-growing story around how poor we, as rational animals, are at things like intuiting statistics, performing simple math, noticing not insignificant differences between two images, resolving contradictory ideas in our heads, making choices when there are just a lot of options, correcting our worldview in light of new evidence, remembering only things that happened to us, and not things that didn't actually happen to us at all, and a whole roster of other quirks and catastrophic failures that comprise the human experience, if there is one thing that the brain can be said to be well and truly capable of, to be downright good at beyond our ability to replicate artificially so far, it's language. Language is one of the fundamental human tools, the essential medium through which the vast majority1 of our interaction occurs. It's so primary to so many of our basic operations, and so thoroughly practiced by2 the machinery that enacts them, that to the human experience, like any good tool, it often seems to vanish altogether; the intermediary translating between intention and result fades and we find our mouths flapping away about god knows what, without any apparent top-down effort or plan, engaging, often faster than our meta-awareness can process, in the flow of the mundane miracle of conversation.

The trick is, of course, that language is really, really, really hard. Because it is learned so early, so automatically, and so permanently, we typically don't spend much time wrestling with how hard it actually ought to be to say something, and to mean something at the same time. But maybe when we were in high school, we banged our heads against Faulkner3 or Joyce for a bit. Maybe we've groaned at the idea of reciting a poem, or stuttered, red-faced and halting, through enough oral exams with our professeur to satisfy a two-semester graduation requirement4. Maybe we've had that endlessly frustrating experience of an idea that just won't fit into words, no matter how much we wave our hands around, trying to better shape it for a bewildered audience. The process of attempting to encode and compress a set of ideas, so sharp and clear in our own internal dialectic, into a set of signs with complicated, relative connotations that are not necessarily universally shared between interlocutors, or multiple connotations that are ambiguous and contextual and subject to misinterpretation, not to mention the additional inflections of prosody, kinesics, facial expression, interpersonal history, the last movie you saw, and a practical infinity of other axes of variation, is so astonishingly complex an act, and so intrinsically imprecise, that it's amazing we ever manage to understand each other at all5. Language is really hard, and at the same time it rarely feels like it takes any effort at all.

Anyway, I want to talk about AI.

Talking About AI

To do so, and really, to talk about any domain, we need some jargon. It's the phenomenon of language communication in microcosm; a kind of domain-specific patois, a semi-rigorous formalization, and often introduction, of terminology around a discipline. Jargon is the pearl that forms to ease the friction of the problems mentioned above; a callus between the ambiguity of language and the provenance of connotation. When I write "QED" at the bottom of a proof, no one, or at least no one properly initiated, wonders what the intention is. In fact, one could learn that particular piece of jargon entirely within the context of its domain, entirely as a first-order meaning, without ever knowing that it stands for quod erat demonstrandum. Or indeed, without ever knowing that quod erat demonstrandum is Latin for "(that) which was to be demonstrated". The arcane nature of the source of the term in this case does a kind of service to the purpose of the jargon: because the sign is so detached from its "native" signified, it can be understood almost purely within its context, like a mathematical variable in a calculus. So too for other classical borrowings in other disciplines. Habeas corpus or Cerebellum. Television6 and Biography. And of course, there's plenty of jargon that has nothing to do with Latin at all. We fry circuits, run the numbers, debug code, load files, shoot footage, advertise brands, pick the low-hanging fruit, wrap up tasks, and employ a menagerie of other turns of phrase and figures of speech, sometimes invented from whole cloth, or more often adopted from other contexts and other meanings into sibling semantics within specific domains in the search for precision and power of expression.

The examples in that last batch are particularly interesting because they start to blur the line between things called jargon and things called metaphor. They have original semantics that are readily accessible to most people outside of the domain they have been adapted to. We're not really putting some circuits in a pan with some fat and applying heat, but the metaphor is strong enough that the jargon works on two levels, serving as a precise term describing a specific event in a domain while at the same time carrying enough semantic weight from its source to shape and reinforce the new meaning it now serves. This kind of jargon is great; it etches maps in the brain in the manner of poetry7 and it does a lot of the heavy lifting in bringing a novice from confusion into fluency within a discipline.

Talking About Talking

In spite of being a relatively young discipline, as disciplines go, Artificial Intelligence has built up a pretty extensive storehouse of jargon for itself. We like terms such as "learning" and "memory" and "training" because, hey, it is kind of like the "training" most of us are familiar with in the conventional sense. You've got something that can learn, some practical experience for it to learn from, put the two together and let it stumble through some trial-and-error and out the other end you get some wisdom and expertise in the form of your model. If you're working with these terms every day, they comprise a relatively concise, moderately elegant set of encapsulations compressing a pretty complicated set of ideas into easy, pocket-sized nuggets of semantics that you can do useful things with, like have a conversation with another AI researcher, or use in a paper, or name variables and classes and functions in your code. That last one is, to hang the lampshade, a convenient segue; we often talk about this kind of semantic compression and generalization as "abstraction" in Computer Science: the act of defining our own representation of ideas and their relations in a program.
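
To make that concrete (with entirely made-up names, and nothing resembling a real library), here's a sketch of how that vocabulary tends to land in code; an elaborate metaphor compressed into a handful of friendly, human-sounding identifiers:

```python
# A hypothetical sketch: the jargon goes straight into our identifiers,
# compressing whatever machinery sits underneath into a few cozy names.

class CatRecognizer:
    """A stand-in "model": here, nothing grander than a lookup table."""

    def __init__(self):
        self.memory = {}                      # "memory" is just a dict

    def learn_from(self, experience, label):
        # "Learning", in this toy, is just remembering a pairing.
        self.memory[experience] = label

    def recall(self, experience):
        # "Expertise" is just retrieval, with a shrug for anything unseen.
        return self.memory.get(experience, "never seen it")


recognizer = CatRecognizer()
recognizer.learn_from("whiskers and pointy ears", "cat")
print(recognizer.recall("whiskers and pointy ears"))  # -> cat
print(recognizer.recall("wagging tail"))              # -> never seen it
```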

The problem is, metaphors, like any other kind of abstraction, are never perfect. They are a lossy form of compression, an effort to translate knowledge and intuition about one domain into another without requiring a deep contextual understanding of the source. Often, this ends up involving many-to-one mappings, "fuzzy" matches, and even complete elisions of entire swathes of meaning, sacrificing nuance for leverage, dexterity of expression for concision. Juliet may lament about roses by other names, but it is unlikely that she expects Romeo to be any particular shade of red. Elvis derides his not-friend as nothin' but a hound dog, but does not also suggest that they are four-legged, or prone to chasing cars8.

In software, we call these "leaky abstractions": a layer that promises to provide an easy-to-handle shell swaddling a complex and pointy idea, but that does not fully obey the contract it lays out. A leaky abstraction sets up some expectations that are correct, but also exposes its underlying mapping failures. If we were to truly believe that Montagues are roses, we might run into some trouble when we tried to prune them9. On the same tack, if we were to truly believe that the process of applying an optimization algorithm over a network of weights to a dataset of pictures of cats and receiving a cat-recognizer model in return is an act of "learning", we might run into trouble when we realize we forgot to tell it that there are also things called dogs.
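
Here's a deliberately toy illustration of that leak (hand-rolled, hypothetical, and about as far from a production system as you can get): we "teach" a tiny logistic-regression cat recognizer by nudging weights, and then hand it a dog. The only answers it can ever give are the ones we wired into the contract.

```python
import numpy as np

# Toy features: [has_whiskers, meows, barks]. The "training data" contains
# cats and teapots; nobody ever mentioned dogs.
cats    = np.array([[1, 1, 0], [1, 1, 0], [1, 0, 0]], dtype=float)
teapots = np.array([[0, 0, 0], [0, 0, 0]], dtype=float)

X = np.vstack([cats, teapots])
y = np.array([1, 1, 1, 0, 0], dtype=float)   # 1 = "cat", 0 = "not cat"

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0

# "Learning": a few hundred steps of gradient descent on a logistic loss.
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P("cat")
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Now the leak: the model's entire world is "cat" vs "not cat". Show it a
# dog and the only possible output is a probability of cat-ness; "dog" is
# not an answer it can give, because we never told it that dogs exist.
dog = np.array([0.0, 0.0, 1.0])              # barks, no whiskers, no meowing
p_dog = 1.0 / (1.0 + np.exp(-(dog @ w + b)))
print(f"P(cat | dog) = {p_dog:.2f}")
```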

It's important to remember that these kinds of leaky abstractions, the kind that I'm claiming most metaphors belong to, are faulty because they are too narrowly scoped to a subset of the mapping they claim. In poetry, this is suitable; the depth of context in natural language and the often deliberate ambiguity of authorial intent10 allow discussion, interpretation, and personal connection to the art. In software, and in technical discussions in general, leaky abstractions are at best frustrating, and at worst dangerous. Without long experience, it can be difficult for a consumer of an abstraction to know which of their expectations should hold, and which will not. Have you ever tried to teach someone to spell? Remember your alphabet now: "K" as in "kick" or "keep"; "N" as in "name" and "noggin". Now watch them try to spell "knife". A leaky abstraction (here, the English writing system "failing" to appropriately abstract over its history of pilfering the dictionaries of other languages) is actively detrimental to the understanding of the learner. And the failure to recognize a leaky abstraction is a nefarious barrier to successful communication.

I say nefarious because the disconnects created are, by their very nature, invisible. We can be making an effort to communicate using a scaffolding of leaky abstractions and never realize that neither party has actually understood a word the other has said at anything beyond the most superficial, surface-level semantics. This is because the power of a metaphor cuts both ways. By translating a discourse into a higher metaphoric level, we increase precision and efficiency amongst mutual understanders, but at the same time, we increase the level of ambiguity, the number of possible interpretations, and, perhaps most crucially, the availability of terms familiar to non-understanders. We create an environment where we can think that we understand something, without actually understanding it at all. Everyone knows what the word "modern" means, but how many could describe the difference between "Modern" and "Contemporary" in the art world? Everyone knows what a "law" is, but how many could accurately differentiate between a Scientific Law and a Scientific Theory11? Everyone knows what "learning" is, but how many could choose a sensible learning rate parameter for Stochastic Gradient Descent? And yet, because the surface forms are so familiar, we are able to deceive ourselves into believing that we understand the complex, deeply submerged shapes they signify.
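
For the curious, the update rule at the heart of vanilla gradient descent is almost embarrassingly small; the learning rate is just a number you multiply by, and choosing it well is most of the art. A minimal sketch, on a deterministic toy problem rather than the random mini-batches that put the "stochastic" in SGD:

```python
import numpy as np

def sgd_step(weights, gradient, learning_rate=0.1):
    """One step of plain gradient descent: nudge the weights a little way
    down the slope. The learning_rate decides how big a nudge."""
    return weights - learning_rate * gradient

# Toy usage: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = np.array([0.0])
for _ in range(50):
    w = sgd_step(w, 2 * (w - 3), learning_rate=0.1)
print(w)  # creeps toward 3.0; too large a rate and it oscillates or diverges
```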

People get a kick out of correcting each other over these kinds of misunderstandings. Well, technically tomatoes are fruits, not vegetables. Well, technically a virus isn't alive. Well, technically you shouldn't end a sentence, a preposition with12. But for every gap we spot in someone else's understanding, there are a dozen in our own. And the more complex the subject matter, the greater the odds are that these illusory understandings will occur. It's why TED talks and technical journalism and ten-minute crash course seminars and cliff notes and "for dummies!" books are all so, in a certain light, dangerous. They lead us directly into the trap, but we never notice we've fallen in.

Bike Sheds and Power Plants

There's a story that gets told to demonstrate the Law of Triviality. A committee is given the job of approving plans for a nuclear power plant. Not all of them are nuclear engineers, and no single person fully understands every detail of a nuclear power plant anyway; the assumption made by the rest of the committee when a member brings up a point about pressure release valves, or numbers of boron control rods, or layout of buttons and readouts in the control room, is that they probably know what they're talking about, and no one else really has the expertise to argue with them one way or the other; the discussion is quick, the point is decided, and everyone moves on.

However, when the topic of the bike shed outside the plant comes up, oh boy, everyone has an opinion about what color it should be, how many bikes it ought to store, what it should be built out of, and on and on and on13. The discussion goes on for hours and tempers are lost, feelings are hurt, and the meeting closes without anything even approaching consensus. The thing that everyone has an opinion on is the thing that is the least complicated, the most accessible, and also to some extent the least important. Or as Parkinson (for whom the law is named) put it originally: "The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved".

So, keep that in mind and let's circle back around to the thing I want to talk about. Artificial Intelligence. It's pretty complex. It's incredibly complex. Anyone who tells you they understand Artificial Intelligence should be eyeballed with the same skeptical apprehension as someone who tells you they understand Quantum Physics. It, as a discipline, encompasses a collection of the hardest, deepest problems that have plagued mankind for as long as mankind has been worthy of the title, accumulating all of the hardest mechanical problems in biology, psychology, and most lately, computation, as it has advanced. And it hasn't even advanced that much. We've been contemplating Intelligence for thousands of years. We've been studying Artificial Intelligence for hundreds. We've been building Artificial Intelligence for dozens, and still The Cutting Edge in the most futuristic labs of the most brilliant minds in academia and industry right now is models that are quite good at deciding whether an image contains a cat, or occasionally generating a sentence that does more than sound like Adriano Celentano.

I am, of course, understating. For emphasis.

It's an incredible feat even to do all of that, because, for all the progress we have made in determining the presence of a cat, or turning speech into text, or deciding what the best series of moves in a game of Go is, we still haven't really been able to figure out why what we do works at all, in the few circumstances it does work. We have some sketches of the beginnings of some prototypes of some frameworks for some speculations about theories. Semantics may somehow be an emergent property encoding the distributional properties of words in natural language. Learning may be the process of adjusting connections between individual processing units based on repeated exposure to stimuli. Planning may be the automatic extrapolation of possible future states, and pruning based on a set of heuristics, learned or hard-wired, context-dependent or independent. All of these are towering, far-reaching, powerful hypotheses; the life's works of an impressive collection of some of the world's brightest minds. Many of them are so good that basing further development off the premise that they are true yields further positive results. Distributional Semantics was a success, for example, and now you'll be hard-pressed to find a research paper without a vector (or tensor) representation of semantics. But for all of these ideas, most of the recent work in Artificial Intelligence has been, and continues to be, empirical in nature. Someone comes up with an idea. They build a prototype. The prototype does well on some set of metrics that have been assembled in a question-begging effort to provide a somewhat objective means of evaluating the thing the prototype was built to do. The F-score gets nudged up by half a percent. The prototype wins Best Paper at a conference with at least one "I" and three "A"s in its acronym14. But we don't know why this one did any better than the last one. We don't have a good way of operationally defining the tasks that we're trying to solve, because the only absolute point of reference we have, the human brain (and the human experience of sentience that it entails15), also remains very much a mystery. There is no truly compelling model yet to provide a foundation for the deductive, theoretical work that other disciplines enjoy. The only working instance of intelligence that we can all agree on is still a great big Black Box. Studying it is difficult at best, and ethically monstrous at worst. We can see some signs of mechanism. A complex calculus of electricity and receptors and activation thresholds across a hundred billion units of computation16. But we don't speak the language of the brain yet. We don't even know how to learn how.
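
As a rough flavor of that distributional idea, and nothing more: words become vectors, and "meaning" becomes geometry. The numbers below are invented purely for illustration; real systems learn hundreds of dimensions from co-occurrence statistics over enormous corpora.

```python
import numpy as np

def cosine_similarity(u, v):
    """Similarity as the cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented 4-dimensional "embeddings", purely for illustration.
vectors = {
    "cat":    np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":    np.array([0.8, 0.9, 0.2, 0.1]),
    "teapot": np.array([0.0, 0.1, 0.9, 0.8]),
}

print(cosine_similarity(vectors["cat"], vectors["dog"]))     # high: shared contexts
print(cosine_similarity(vectors["cat"], vectors["teapot"]))  # low: little in common
```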

I mean this all to say that I don't think it always occurs to us just how much goes into those things we experience so effortlessly and automatically. I don't really have to think about reaching out to pick up this cup of coffee next to me. I just kind of... do it. I don't really have to think about talking17. I just kind of... do it. The mechanisms that we seek to observe here are the mechanisms that give rise to the thing seeking to observe, and thus seem to obscure themselves, whether by design or convenient side effect. But ask anyone in robotics to explain what it's like to build a system that picks inventory, or anyone in NLP what it takes to build a system that generates grammatically correct, semantically coherent sentences, and the effortlessness evaporates. And yet, in the field of AI, we summarize these little-understood, deeply complex processes using some of the leakiest abstractions of any discipline out there. A word as fundamental and accessible as "training" means so many different things, depending on its context, and in each case, the individual meanings are very precise. What the system accepts as input. How much input it requires. How much it assumes is given. How much manual work was done to generate this input. Assumptions about the statistical and representative qualities of the data. Ten people can be in a room talking about training, and share enough in common at the higher metaphoric level that they'll never realize that they're all talking about different things.
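
To put a finer point on it, here are two hypothetical functions (names and details made up for illustration), both honestly described by the word "training", that assume completely different things about their input and about the work somebody did before they were called:

```python
from dataclasses import dataclass

@dataclass
class Model:
    """Stand-in for whatever artifact "training" is supposed to produce."""
    description: str

def train_classifier(labeled_examples):
    # Assumes: a human labeled every example, the labels can be trusted, and
    # the set of possible labels is closed and known before we start.
    labels = sorted({label for _, label in labeled_examples})
    return Model(f"classifier over the labels {labels}")

def train_language_model(raw_text):
    # Assumes: nothing but a pile of unannotated text, and a very different
    # notion of what will count as having been "learned" at the end.
    vocabulary = {token for token in raw_text.split()}
    return Model(f"language model over a vocabulary of {len(vocabulary)} types")

print(train_classifier([([0.9, 0.1], "cat"), ([0.2, 0.8], "teapot")]))
print(train_language_model("the cat sat on the mat"))
```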

It'll sound a lot like the conversation about the bike shed, except without realizing it, they're really talking about the power plant.

This is the intersection of a domain that is at once deeply technical and highly accessible18. And it gets really interesting when we add in non-technical people. Now we go from subject matter experts conflating their domains to the next order of magnitude of confusion. The surface forms of the abstractions, of the jargon, are so universal that everyone gets to play. Everyone thinks they're talking about the bike shed, and very little meaningful conversation happens.

Promises

I'm personally optimistic that we'll figure this out eventually; not trying to be a naysayer here. What I'm trying to convey is the immaturity of the discipline. I think the rate of leaky abstractions is at its highest in immature disciplines. While there are still plenty of bad metaphors in physics19, for example, they are matched by powerful, mature abstractions that balance brevity and expressiveness against nuance and precision. Some of the abstractions are so powerful that they elude semantics entirely; the (in)famous "shut up and calculate" school of Quantum Mechanics, for example, where the terms are so precise as to have transcended metaphor (and possibly human understanding) altogether. So too in varying degrees for the more established sub-fields of mathematics, biology, chemistry, business, marketing, computation, and so on. All of these fields have a long history of having had to explain themselves to those not part of the fold20. This kind of trial-by-teaching forces the development of robust, effective pedagogy, and hardens the foundational choices of representation21. Artificial Intelligence has only just begun this process. Its representations are leaky, flimsy, and confusing. They don't hide nearly enough of their underlying mechanism to be truly powerful to those who are not experts, nor do they completely fulfill the promises they make when borrowing terms from common usage. And what promises they make! You can give a dump of Wikipedia to a system and it will become a native speaker of the language. You can give a couple of conversational guidelines to a system and it will suddenly transform into a customer service representative. You can show a few pictures of cats to a system and it will suddenly be able to tell you what a dog is. So much of the terminology we use is conflated with the concepts we apply to our own mental processes, and so little of the mechanism of those mental processes is understood, that it provides fertile ground for the kind of subtle leaks in abstraction we've been talking about.

And those leaks are on full display in the public mind right now. All the hot tech companies are ramping up to join the AI movement, selling cognitive platforms, chat bots, virtual assistants, personal translators, and any number of other sales pitches that begin and end with the words "Powered by Artificial Intelligence". Venture Capital is flowing into the pockets of anyone who can tell the difference between a Convolutional Net and a Recurrent Net. Geoff Hinton and Andrew Ng have taken their turns on the throne of God. But we've been here before. Several times. Artificial Intelligence as a computational discipline starts, more or less, with Turing, and more formally with McCulloch and Pitts in 1943. The term "Artificial Intelligence" itself was coined in the proposal for a conference held at Dartmouth in the summer of 1956. Since then, there have been roughly three distinct "AI Winters", long periods of slowed development in the field as promises about the future being five years away turned into ten years away turned into fifty years away turned into public disappointment in the Researchers-who-cried-Wolf turned into funding drying up and companies wilting in the face of a market that didn't believe the claims of perpetually five years away anymore. And with the hype-cycle building up now, it's pretty clear that the latest AI Winter is over. Maybe this time we really do figure it out, but if the products on the market right now are any indication, the current batch of smoke and mirrors will not distract people for long.

Because on top of the automatic, intimate familiarity everyone has with the experience of being intelligent and using natural language, on top of the how-hard-can-it-be bikeshedding phenomenon, we also have to wrestle with the further compounding complication arising from the extensive, established culture and the fiction that surrounds AI. I'm not sure anyone knew what a smartphone or a tablet was supposed to do22 when a man in a black turtleneck first held them up on a stage. But everyone knows what AI is supposed to do. It's got to be fluent in over six million forms of communication. It had better open the pod doors when you ask it to. And if it's not warning you when there's Danger about, well, what's the use? The expectations are high. Possibly higher than in any other single field23. And the intuition of those of us selling AI, whether as a product to a customer, a pitch to an investor, or a grant request to a PI, is to channel these expectations, building up hype to increase the willingness to open the wallet. If you give me ten million dollars, I promise that in two years I'll give you a bundle package of KITT, R2-D2, and the computer from the Enterprise all in one, and I'll definitely make sure to keep Skynet out of it24.

Inappropriate Moralizing

And so I guess what I really want to say is that we ought to be careful when we communicate about Artificial Intelligence. Because the discipline is so young, because its abstractions are so weak and leaky, because the expectations are so high. It's a very particular cocktail that has proved too heady in the past. It's so easy to get so excited about the advancements that are happening right now, because they are genuinely exciting! Every day it feels like we shave ten years off the time until a computer wakes up one morning and asks if it has a soul. But at the same time, we have to be careful that we measure these expectations against the reality. This is still a research problem. This is still unsolved. We still have no idea what a solution looks like. We still don't know if we're on the right track. We still don't know if we'll ever be on the right track. Progress will not be a monotonic, exponentially accelerating march towards the Singularity measured neatly in burndown charts and sprint velocities. Revenue will often not match quarterly forecasts and the stock market will not always look favorably upon those who will eventually succeed. There will be setbacks. There will be delays. There will be missed deadlines and unkept promises. This is the way of research, and we are all researchers here.

Therefore, I put it to us that the best course for all involved is a heavy emphasis on education and on communication. If you are in the role of selling AI (and again, I mean that to include everyone, from developers pitching product proposals to people with the word "Sales" literally in their title), try to know exactly the details of the thing you are selling. What it can do is all well and good, but you need to be equally versed in what it can't do. People will buy magic if you promise it to them, but they'll only buy it once25.

And so be careful when you sell. Be careful not to deceive. Be careful that you are not deceiving yourself, because as we have discussed, it is very easy to do so here, perhaps more so than in any other field. And be careful to remember that all of this is still based on prototypes and hypotheses, not sound engineering practices and well-vetted techniques. You're not selling a CRUD app or a database. You're not selling a bridge or a refrigerator or a photocopier. You're not selling something that we know how to do, where we just need to put in a little elbow grease to make sure all the pieces line up and the pipes don't rattle. You're selling a promise about a future we can't quite predict yet, and that's an exciting, dangerous game to play. The progress we've made so far in this latest flurry of activity isn't going anywhere, just like that of previous generations. But it's just as easy now as it ever was to fall short on our promises one too many times, and watch the cycle repeat itself all over again.


(With thanks to Matt, Jay, and Tim)

footnotes.

  1. Citation Needed. The other modalities of communication could fill, and have filled, volumes of their own. Another time, perhaps.
  2. And, possibly (If you ask me, probably), integral to.
  3. I hated The Sound and the Fury in high school. I thought I was so cool.
  4. Obviously, I'm an American. Due deference to the rest of the (multilingual) world.
  5. I've heard it said, perhaps somewhat playfully, that no one actually speaks the same language. Following from the Poverty of Stimulus argument, there's simply not enough information in the surface form of language for a learner to ever truly understand. So we all speak slightly different variations of a common tongue with enough overlap to mostly get by.
  6. A particular breed of this language borrowing that has dual parentage, "tele" coming from the Greek for "at a distance" and "vision" coming from the Latin for "to see". I've seen this type of construction referred to as a "linsey-woolsey", after a kind of fabric with a linen warp and a woolen weft. I'm quite fond of the term.
  7. I'd assert that it is poetry.
  8. Though, rabbits seem fair game.
  9. Or smell them.
  10. Or, if you prefer, the lack of existence thereof.
  11. Of course, most philosophers of science throughout history would probably have similar trouble, albeit for somewhat different reasons.
  12. A misconception built on a misconception no less! Compounding leaks!
  13. Green, 50, aluminum siding over a concrete foundation.
  14. Well, technically it's an initialism.
  15. A wildly controversial statement in and of itself. Many models would demand the inclusion of a soul.
  16. And maybe more. Some theories advance the idea that non-neuron cells in the brain, such as glial cells, also play a part in cognition.
  17. Please hold your sarcastic comments.
  18. Let's call it the "How Hard Could It Be?" Point.
  19. Newtonian Mechanics, for example.
  20. I absolutely do not mean to imply here that any of these other disciplines have, by any means, developed a flawless system as unto a brilliant diamond, shimmering with its purity of semantic intent, a thousand perfectly-angled facets glittering with a single Truth. Just that they've had a longer time to practice being precise. And more pressure to do so. No, that is not a continuation of the diamond metaphor.
  21. And certainly, none of them sprang from the head of Zeus fully-formed either. Charles Babbage reported, famously, that he had on two occasions been asked about his Difference Engine, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answer come out?"
  22. "But Star Trek!", you cry. Fair point; it's a matter of degrees I think.
  23. Space travel may give it a run for its money, though.
  24. Only because Elon Musk seems to be really worried about it for some reason.
  25. Or, at least, not until the next cycle.