Tuesday, July 29, 2008

Daniel Dennett: Autobiography (Part 1)

Dan Dennett is Co-Director of the Center for Cognitive Studies and is Austin B. Fletcher Professor of Philosophy at Tufts University. His latest book is Breaking the Spell (Viking, 2006).

by Daniel Dennett, Philosophy Now

Reposted from:
http://www.philosophynow.org/issue68/68dennett.htm

What makes a philosopher? In the first of a two-part mini-epic, Daniel C. Dennett contemplates a life of the mind – his own. Part 1: The pre-professional years.

It came as a pleasant surprise to me when I learned – around age twelve or thirteen – that not all the delicious and unspeakable thoughts of my childhood had to be kept private. Some of them were called 'philosophy', and there were legitimate, smart people who discussed these fascinating topics in public. While less immediately exciting than some of the other, still unspeakable, topics of my private musings, they were attention-riveting, and they had an aura of secret knowledge. Maybe I was a philosopher. That's what the counselors at Camp Mowglis in New Hampshire suggested, and it seemed that I might be good at it.

My family didn't discourage the idea. My mother and father were both the children of doctors, and both had chosen the humanities. My mother, an English major at Carleton College in Minnesota, went on for a Masters in English from the University of Minnesota, before deciding that she simply had to get out of Minnesota and see the world. Never having been out of the Midwest, and bereft of any foreign languages, she took a job teaching English at the American Community School in Beirut. There she met my father, Daniel C. Dennett Jr, working on his PhD in Islamic history at Harvard while teaching at the American University of Beirut. His father, the first Daniel C. Dennett, was a classic small-town general practitioner in Winchester, Massachusetts, the suburb of Boston where I spent most of my childhood. So yes, I am Daniel C. Dennett III; but since childhood I've disliked the Roman numerals, and so I chose to court confusion among librarians (how can DCD Jr be the father of DCD?) instead of acquiescing in my qualifier.

My father's academic career got off to a fine start, with an oft-reprinted essay, 'Pirenne and Muhammed', which I was thrilled to find on the syllabus of a history course I took as an undergraduate. His first job was at Clark University. When World War II came along, he put his intimate knowledge of the Middle East to use as a secret agent in the OSS, stationed in Beirut. He was killed on a mission, in an airplane crash in Ethiopia in 1947, when I was five. So my mother and two sisters and I moved from Beirut to Winchester, where I grew up in the shadow of everybody's memories of a quite legendary father. In my youth some of my friends were the sons of eminent or even famous professors at Harvard or MIT, and I saw the toll it took on them as they strove to be worthy of their fathers' attention. I shudder to think of what would have become of me if I had had to live up to my own father's actual, living expectations and not just to those extrapolated in absentia by his friends and family. As it was, I was blessed with the bracing presumption that I would excel, and few serious benchmarks against which to test it. It was assumed by all that I would eventually go to Harvard and become a professor – of one humanities discipline or another. The fact that from about the age of five I was fascinated with building things, taking things apart, repairing things, never even prompted the question of whether I might want to become an engineer – a prospect in our circle about as remote as becoming a lion tamer. I might become an artist – a painter, sculptor or musician – but not an engineer.

In my first year in Winchester High School I had two wonderful semesters of ancient history, taught by lively, inspiring interns from the Harvard School of Education. I poured my heart into a term paper on Plato, with a drawing of Rodin's Thinker on the cover. Deep stuff, I thought; but the fact was that I hardly understood a word of what I read for it. More important, really, was that I knew then – thank you, Catherine Laguardia and Michael Greenebaum wherever you are – that I was going to be a teacher. The only question was, what subject?

I spent my last two years of high school at Phillips Exeter Academy, largely because my father's old friends persuaded my mother that this was obligatory for the son of DCD Jr. Thank you, long-departed friends. There I was immersed in a wonderfully intense intellectual stew, where the editor of the literary magazine had more cachet than the captain of the football team; where boys read books that weren't on the assigned reading lists; where I learned to write (and write, and write, and write). My Olivetti Lettera portable typewriter (just like Michael Greenebaum's – cool!) churned out hundreds of pages over two years, but none of it was philosophy yet.

As much to upset the family's expectations as for any other reason, I eschewed Harvard for Wesleyan University, and arrived with advanced placement in math and English, having had excellent teachers in both areas at Exeter. I didn't want to go on in calculus, but they twisted my arm to take an advanced math course, under the mistaken idea that I was some sort of mathematical prodigy. I acquiesced, signing up for something called 'Topics in Modern Mathematics', taught by a young lecturer from Princeton, the logician Henry Kyburg in his first job. Since I and a grad student in the math department were the only two students enrolled in the course, Henry asked for and got our permission to make it a course in mathematical logic. He promptly immersed us in Quine's Mathematical Logic, followed by Kleene, Ramsey, and even Wittgenstein's Tractatus, among other texts. Quite a first course in logic for a seventeen-year-old! If I had been a mathematical prodigy, as advertised, this would no doubt have made pedagogical sense; but I was soon gasping for air and in danger of drowning. Freshman year was turning out to be more challenging than I had expected.

One night as I crammed in the math library, I took a breather and scouted out the shelves. Quine's From a Logical Point of View caught my eye, and I sat down to sample it. By breakfast I had finished my first of several readings of it, and made up my mind to transfer to Harvard. This Quine person was very, very interesting – but wrong. I couldn't yet say exactly how or why, but I was quite sure. So I decided, as only a freshman could, that I had to confront him directly and see what I could learn from him – and teach him! A reading of Descartes' Meditations in my first philosophy course, with Louis Mink, not only confirmed my conviction that I had discovered what it was I was going to teach, but narrowed the field considerably: philosophy of mind and language transfixed my curiosity.

When I showed up at Harvard in the fall of 1960, the first course I signed up for was Quine's philosophy of language course, and the main text was his brand new book, Word and Object. Perfect timing. I devoured the course, and was delighted to find that the other students in the class were really quite as good as I had hoped Harvard students would be. Most were grad students; among them (if memory serves) were David Lewis, Tom Nagel, Saul Kripke, Gil Harman, Margaret Wilson, Michael Slote, David Lyons. A fast class.

When it came to the final exam I had never been so well prepared, with As on both early papers, and every reading chewed over and over. But I froze. I knew too much, had thought too much about the problems and could see, I thought, way beyond the questions posed – too far beyond to enable any answer at all. Quine's teaching assistant, Dagfinn Follesdal, must have taken pity on me, for I received a B- in the course. Follesdal also agreed to be my supervisor when two years later I told him that I'd been working on my senior thesis, 'Quine and Ordinary Language', ever since I'd taken the course. I didn't want Quine to supervise me, since he'd probably show me I was wrong before I got a chance to write it out, and then where would I be? I had sought Quine out, however, for bibliographical help, asking him to direct me to the best anti-Quinians. I needed all the allies I could find. He directed me to Chomsky's Syntactic Structures, the first of Lotfi Zadeh's papers on fuzzy logic, and Wittgenstein's Philosophical Investigations, which I devoured in the summer of 1962, while on my honeymoon job as a sailing and tennis instructor at Salter's Point, a family summer community in Buzzards Bay (my bride, Susan, was the swimming instructor). 1962-3, my senior year at Harvard, was exciting but far from carefree – I was now a married man at the age of 20, and I had to complete my four-year project to Refute Quine, who was very, very interesting but wrong. Freed from the diversions and distractions of student life,
I worked with an intensity I have seldom experienced. I can recall several times reflecting that it really didn't matter in the larger scheme of things whether I was right or wrong: I was engulfed in doing exactly what I wanted to be doing, pursuing a valuable quarry through daunting complexities, and figuring out for myself answers to some of the most perplexing questions I'd ever encountered.
Dagfinn, bless his heart, knew enough not to try to do more than gently steer me away from the most dubious overreachings in my grand scheme. I was not strictly out of control, but I was beyond turning back.

The thesis was duly typed up in triplicate (by a professional typist, back in those days before word-processing) and handed in. I anxiously awaited the day when Quine and young Charles Parsons, my examiners, would let me know what they made of it. Quine showed up with maybe half a dozen single-spaced pages of comments. I knew at that moment that I was going to be a philosopher. (I was also an aspiring sculptor, and had shown some of my pieces in exhibits and competitions in Boston and Cambridge. Quine had taken a fancy to some of my pieces and always remarked positively on them whenever we met, so I had been getting equivocal signals from my hero – was he really telling me to concentrate on sculpture?) On this occasion Quine responded to my arguments with the seriousness of a colleague, conceding a few crucial points (hurrah!) and offering counter-arguments to others (just as good, really). Parsons sided with me on a point of contention. I can't remember what it was, but I was mightily impressed that he would join David against Goliath. The affirmation was exhilarating. Maybe I really was going to be a philosopher.

But if so, I was going to be a rather different philosopher from those around me. I had no taste for much that delighted my Harvard classmates or the graduate students. Ryle's Concept of Mind was one of the few contemporary books in philosophy that I actually liked. (Another was Stephen Toulmin's The Place of Reason in Ethics, which seems to have vanished without a trace, whereas I thought it was clearly superior to the other readings in my ethics courses.) I couldn't see why others found Ryle so unpersuasive. To me, he was obviously and refreshingly right about something deep, in spite of various overstatements and baffling bits. I decided that Ryle would make a logical next step in my education, so I applied to Oxford, to read for the notoriously difficult B.Phil degree. Burton Dreben tried to dissuade me – now that Austin had died, he assured me, there was nobody, really, in Oxford with whom to study. I also applied to Berkeley, though I can't remember why. And I applied to Harvard, but Harvard wisely had a policy of not admitting their own graduates, and I treasured the letter of rejection I got from the then Dean of Graduate Admissions, Nina Dennett: she signed it 'Aunt Nina', although she was a somewhat more distant relative. I also got rejected by all three Oxford colleges to which I had applied. Back then, they had no university-wide admissions system, and I had applied, as it turned out, to three of the most popular colleges among Rhodes and Marshall scholars: Balliol, Magdalen and University. They were oversubscribed with Americans with scholarships and had no room for me, even though I would be paying for myself with a modest legacy from DCD the first, who had died a few years earlier.

But just as I was about to send Berkeley my down payment to reserve a married student apartment for the fall term, out of the blue I received a letter from the Principal of Hertford College, Oxford, telling me that they were prepared to admit me to read for the B.Phil in philosophy. I had not applied to Hertford, and in fact had never even heard of it, and at first I suspected that somebody who knew of my disappointment was playing an evil prank on me. I looked up Hertford College in the Oxford University Bulletin, confirmed its reality, and accepted. It didn't matter which college I was in, reading for the B.Phil: my supervisor would be one of the professors – Ryle, Ayer or Kneale – and I figured that I would almost certainly be able to work with Ryle, although his name hadn't come up in my correspondence with Hertford. Years later, Ryle told me that he'd been on the admissions committee at Magdalen and read Quine's letter of recommendation. Magdalen couldn't fit me in, so he'd sent the application with a little note to a friend in Hertford, where they were eager to get a few American grad students. So I owed more than I guessed to both my mentors.

My wife and I sailed to England in the summer of 1963. I carried with me an idea I had had about qualia, as philosophers call the phenomenal qualities of experiences, such as the smell of coffee or the 'redness' of red. In my epistemology course at Harvard with Roderick Firth, I had had what struck me as an important insight – obvious to me but strangely repugnant to those I had tried it out on. I claimed that what was caused to happen in you when you looked at something red only seemed to be a quale – a homogeneous, unanalyzable, self-intimating 'intrinsic' property. Subjective experiences of color, for instance, couldn't actually owe the way they seemed to their intrinsic properties; their intrinsic properties could in principle change without any subjective change; what mattered for subjectivity were properties that were – I didn't have a word for it then – functional, relational. The same was going to be true of [mental] content properties in general, I thought. The meaning of an idea, or a thought, just couldn't be a self-contained, isolated patch of psychic paint (what I later jocularly called 'figment'); it had to be a complex dispositional property – a set of behavior-guiding, action-prompting triggers. This idea struck me as congenial with, if not implied by, what Ryle was saying. But when I got to Oxford, I found that these ideas seemed even stranger to my fellow graduate students than they had at Harvard.

This was already beyond the heyday and into the decline of 'ordinary language philosophy', but thanks to the lamentable phenomenon of philosophical hysteresis (graduate students tend to crowd onto bandwagons just as they grind to a halt), Oxford was enjoying total domination of Anglophone philosophy. It was a swarming Mecca for dozens – maybe hundreds – of pilgrims from the colonies who wanted to take the cloth and learn the moves. There was the Voltaire Society and the Ockham Society, just for graduate students. At one of their meetings in my first term, in the midst of a discussion of Anscombe's Intention, as I recall, the issue came up of what to say about one's attempts to raise one's arm when it had gone 'asleep' from lying on it. At the time I knew nothing about the nervous system, but it seemed obvious to me that something must be going on in one's brain that somehow amounted to trying to raise one's arm, and it might be illuminating to learn what science knew about this. My suggestion was met with incredulous stares. What on earth did science have to teach philosophy? This was a philosophical puzzle about 'what we would say', not a scientific puzzle about nerves and the like. This was the first of many encounters in which I found my fellow philosophers of mind weirdly complacent in their ignorance of brains and psychology, and I began to define my project as figuring out as a philosopher how brains could be, or support, or explain, or cause, minds. I asked a friend studying medicine at Oxford what brains were made of, and vividly remember him drawing simplified diagrams of neurons, dendrites, axons – all new terms to me. It immediately occurred to me that a neuron, with multiple inputs and a modifiable branching output, would be just the thing that could compose into networks which could learn by a sort of evolutionary process. Many others have had the same idea, of course, before and since. Once you get your head around it, you see that this really is the way – probably, in the end, the only way – to eliminate the middleman, the all-too-knowing librarian or clerk or homunculus who manipulates the ideas or mental representations, sorting them by content.
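In modern terms, the kernel of that insight can be put in a few lines of code. What follows is a minimal illustrative sketch in Python – purely hypothetical, with every name and number invented for the illustration, and nothing drawn from my thesis: a tiny network of neuron-like units, each with multiple inputs and one output, whose connection weights are shaped by blind mutation and selective retention, with no comprehending homunculus anywhere in the loop.

    import math
    import random

    def forward(w, x1, x2):
        # Two hidden neuron-like units feed one output unit;
        # tanh squashes each weighted sum of inputs.
        h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
        h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
        return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

    # Toy task: XOR, which no single neuron can compute by itself.
    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def error(w):
        return sum((forward(w, x1, x2) - y) ** 2 for (x1, x2), y in CASES)

    random.seed(0)
    best = [random.uniform(-1, 1) for _ in range(9)]
    best_err = error(best)
    for _ in range(50000):
        trial = [x + random.gauss(0, 0.1) for x in best]  # blind variation
        trial_err = error(trial)
        if trial_err <= best_err:  # selective retention of fitter settings
            best, best_err = trial, trial_err

    print(round(best_err, 4))  # drifts toward 0 as fitter networks survive

The point is not the toy program but the shape of the process: no component understands the task, nothing sorts ideas by their content, and yet variation and selection leave the network as a whole computing the right answer.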

With this insight driving me, I began to see how to concoct something of a 'centralist' theory of intentionality. (This largely unexamined alternative was suggested by Charles Taylor in his pioneering book, The Explanation of Behaviour in 1964.) The failure of Skinnerian and Pavlovian 'black box' behaviorism to account for human and animal behavior purely in the 'extensional' terms of histories of stimulus and response suggested that we needed to justify a non-extensional, 'intensional' (with an 's') theory of intentionality (with a 't'): a theory that looked inside at the machinery of mind and explained how internal states and events could be about things, and thereby motivate the mental system of which they were a part to decide on courses of action. The result would be what would later be called a functionalist, and then teleofunctionalist, theory of content, in which Brentano and Husserl (thank you, Dagfinn) and Quine could all be put together, but at the subpersonal level. The personal/subpersonal distinction was my own innovation, driven by my attempts to figure out what on earth Ryle was doing and how he could get away with it. It is clear that my brain doesn't understand English – I do – and my hand doesn't sign a contract – I do. But it is also clear that I don't interpret the images on my retinas, and I don't figure out how to make my fingers grasp the pen. We need the subpersonal level of explanation to account for the remarkably intelligent components of me that do the cognitive work that makes it possible for me to do clever things. In order to understand this subpersonal level of explanation, I needed to learn about the brain; so
I spent probably five times as much energy educating myself in Oxford's Radcliffe Science Library as I did reading philosophy articles and books.


I went to Ryle, my supervisor, to tell him that I couldn't possibly succeed in the B.Phil, which required one to submit a (modest) thesis and take three very tough examinations in the space of a few weeks at the end of one's second year. As I have already mentioned, I was an erratic examination-taker under the best of conditions, and I was consumed with passion to write my thesis. I knew to a moral certainty that I would fail at least one of the examinations simply because I couldn't make myself prepare for it while working white hot on the thesis. I proposed to switch to the B.Litt, a thesis-only degree that would let me concentrate on the thesis and then go off to Berkeley for a proper PhD. To my delight and surprise, Ryle said that I might have to settle for a B.Litt as a consolation prize of sorts, but that he was prepared to recommend me for the D.Phil, which also required just a thesis. With that green light, I was off and running, but the days of inspiration were balanced by weeks and months of confusion, desperation and uncertainty. A tantalizing source of alternating inspiration and frustration was Hilary Putnam, whose 'Minds and Machines' (1960) I had found positively earthshaking. I set to work feverishly to build on it in my own work, only to receive an advance copy of Putnam's second paper on the topic, 'Robots: Machines or Artificially Created Life?' from my mole back at Harvard (it was not published until 1964). This scooped my own efforts and then some. No sooner had I recovered and started building my own edifice on Putnam paper number two than I was spirited a copy of Putnam paper number three, 'The Mental Life of Some Machines' (eventually published in 1967) and found myself left behind yet again. So it went. I think I understood Putnam's papers almost as well as he did – which was not quite well enough to see farther than he could what step to take next. Besides, I was trying to put a rather different slant on the whole topic, and it was not at all clear to me that, or how, I could make it work.
Whenever I got totally stumped, I would go for a long, depressed walk in the glorious Parks along the River Cherwell. Marvelous to say, after a few hours of tramping back and forth with my umbrella, muttering to myself and wondering if I should go back to sculpture, a breakthrough would strike me and I'd dash happily back to our flat and my trusty Olivetti for another whack at it. This was such a reliable source of breakthroughs that it became a dangerous crutch; when the going got tough, I'd just pick up my umbrella and head out to the Parks, counting on salvation before suppertime.


Gilbert Ryle himself was the other pillar of support I needed. In many regards he ruled Oxford philosophy at the time, as editor of Mind and informal clearing-house for jobs throughout the Anglophone world, but at the same time he stood somewhat outside the cliques and coteries, the hotbeds of philosophical fashion. He disliked and disapproved of the reigning Oxford fashion of clever, supercilious philosophical one-upmanship, and disrupted it when he could. He never 'fought back'. In vain I tried to provoke him, with elaborately prepared and heavily armed criticisms of his own ideas, but he would genially agree with all my good points as if I were talking about somebody else, and get us thinking about what repairs and improvements we could together make to what remained. It was disorienting, and my opinion of him then – often expressed to my fellow graduate students, I am sad to say – was that while he was wonderful at cheering me up and encouraging me to stay the course, I hadn't learned any philosophy from him.

I finished a presentable draft of my dissertation in the minimum time (six terms or two years) and submitted it with scant expectation that it would be accepted on first go. On the eve of submitting it, I came across an early draft, and compared the final product with its ancestor. To my astonishment, I could see Ryle's influence on every page. How had he done it? Osmosis? Hypnotism? This gave me an early appreciation of the power of indirect methods in philosophy. You seldom talk anybody out of a position by arguing directly with their premises and inferences. Sometimes it is more effective to nudge them sideways with images, examples, helpful formulations that stick to their habits of thought. My examiners were A.J. Ayer and the great neuroanatomist J.Z. Young from London – an unprecedented alien presence at a philosophy viva, occasioned by my insistence on packing my thesis with speculations on brain science. Young too had been struck by the idea of learning as evolution in the brain, and was writing a book on it, so we were kindred spirits on that topic, if not on the philosophy, which he found intriguing but impenetrable. Ayer was reserved. I feared he had not read much of the thesis, but I later found out he was simply made uncomfortable by his friend Young's too-enthusiastic forays into philosophy, and he found silence more useful than intervention. I waited in agony for more than a week before I learned via a cheery postcard from Ryle that the examiners had voted me the degree.

Since I had the degree, I wouldn't need to go to U.C. [University of California] Berkeley after all. So on a wonderful day in May 1965, a few weeks after my 23rd birthday, I sent off two letters to California: I accepted an Assistant Professorship at U.C. Irvine, where A.I. Melden was setting up a philosophy department in a brand new campus of the university; and I declined a Teaching Assistantship at U.C. Berkeley, saying only that I had found another position. I didn't dare say that it was a tenure track position at a sister campus! I was a little worried that there might be some regulations of the University of California prohibiting this sort of thing, whatever sort of thing it was. Ah, those were the glorious expansionist days in American academia, when it was a seller's market in jobs, and I had garnered two solid offers and a few feelers without so much as an interview, let alone a campus visit and job talk.
For formality's sake, Melden asked me to send a curriculum vitae along with my official acceptance letter, and I had to ask around Oxford to find out what such an obscure document might be.


© Prof. Daniel C. Dennett 2008



• This two-part article was written in 2003 but has not previously been published. You will be able to read the second part in the next issue of Philosophy Now.
