The Deutsch Files I
Brett Hall and I interview David Deutsch, physicist and author of The Beginning of Infinity.
Naval: We don’t really have an agenda. There is no goal to the conversation. The closest we can come up with is just to have a spontaneous, free-flowing talk about anything you want to talk about. Obviously, you know how everyone thinks of your work now; it’s becoming more well known. And I know you’re too modest to acknowledge that. But at least for me, the most interesting piece, if it would come out, is just any wide-ranging, free-form thoughts that you have because of the understanding that you have of your various theories and your view of the world. Maybe even feel free to talk about how that has influenced your life, your outlook on life, how you think the world ought to be a little bit different or could be better, where we’re headed—just feel free to go very wide ranging. It’s really just about whatever we want to talk about.
Brett Hall: I think I mentioned to you in a private chat that we’ve had two conversations already, and some things have changed. Especially the ChatGPT stuff has been–
Naval: Oh, yeah. That is interesting. That is the most on-top-of-everyone’s-mind thing right now.
David Deutsch: That is the biggest thing that’s happened technologically.
Naval: Should we just dive into that? What’s your latest thinking on AI, AGI, ChatGPT, super-intelligence?
David: Two big things to say. One is that, fundamentally, my view is unchanged: my view about AI, AGI, and so on. But the other thing is, I use ChatGPT all the time, many times a day. And it’s incredibly useful. Even though I’ve had it since March, I’m still at the stage where I’m thinking, “Hmm, doing so-and-so is too much trouble. Oh, I could ask ChatGPT.” I’m still at the stage where I’m discovering new uses for it. I think many of them are things where I could use Google, but it would take too long to be worth it. And ChatGPT is often very wrong. It often hallucinates, or is just very sure while giving the wrong answer. And so you can’t rely on it, even slightly.
Good Science Fiction is Hard to Vary
Brett: Let’s stick with ChatGPT, but first, just as an aside, you’re a big fan of hardcore science fiction. You like the good stuff. What is the good stuff and what separates the good science fiction from the fantasy science fiction, the lazy science fiction?
David: I think the best science fiction author currently is Greg Egan. Now, what is good about him? So the formula for great science fiction is supposed to be: you invent a fictional piece of science and then you explore the ramifications of it, both in science and in society. And he does that fantastically well. He puts an enormous amount of effort into getting the maths right, getting the physics right. He had one book set in a universe where the signature of space-time is ++++ instead of +++−. So that means that, in a spaceship, you can travel around back in time and so on, and how do you make that consistent? How do you avoid paradoxes? And he did it brilliantly.
Naval: Is he moving through the multiverse?
David: So he’s touched on that several times.
Brett: You didn’t mention the phrase hard to vary. But that’s a signature of–
David: That’s definitely part of it because, to be science fiction rather than fantasy fiction, there’s got to be a world that makes sense, that has laws of physics, that has a society that makes sense. Or if you’re describing aliens, the aliens have got to make sense. You’ve got to answer questions about why we haven’t had first contact—the Fermi problem.
I think probably my second favorite sci-fi author is Neal Stephenson, who is fantastic, but in a different way. He also does phenomenal research. Everything makes sense like that. But every book he writes is a different genre. I don’t know how that’s done. I mean, that just in itself blows my mind.
Naval: Have you read Ted Chiang?
David: I’ve read two or three of his short stories, including the one where there are these aliens and you get a sort of telepathy about time–
Naval: Yeah, that’s among my least favorites. That got turned into a movie called Arrival, and the story is called Story of Your Life. But my favorite story of his is one called Understand. And it’s a remake of the classic Flowers for Algernon story, where a guy takes a medical ampule to make himself smarter. And what does that mean? So, obviously, he starts taking it more and more and more and becomes more and more intelligent. And then he starts becoming able to program his own brain and metaprogram himself, etc. It goes into some very interesting places. But given what you understand about epistemology, I think you could take a critical look at it. And it’s a short story. It doesn’t take very long, and it’s a brilliant story. I’m going to make a note to send it to you after this; it’s easy enough to find. But he reminds me of Borges, if you’ve read him.
David: No, I haven’t. Everybody tells me about Borges.
Naval: Borges is brilliant as well. Can I send you a Borges story as well?
David: Okay.
Naval: Borges is more fantasy. But, again, Borges likes to play games with time and infinity. Very often, his protagonist will change one thing about reality and then follow it to its logical conclusion in every possible way.
David: So, that sounds like sci-fi rather than fantasy.
Naval: Borges is genre-less. It’s very hard to pin him down in genre. It’s similar to Stephenson. Stephenson varies across books, Borges within the same story will cross genres. They’re short. That’s a virtue.
ChatGPT is Not a Step Towards AGI
Brett: In terms of taking an injection to make yourself smarter, taking us back to ChatGPT, is it getting smarter? Would you use that word? Is it getting more intelligent?
David: It never was intelligent. I’ve only seen 3.5 and 4, and version 4 is a little better than 3.5. Now there are a bunch of plugins, but they haven’t really worked for me, so I’m just using ordinary ChatGPT-4. I can’t quite fathom why people think it’s a person. It seems to me completely unlike a person in every way. It’s a phenomenal chatbot. I thought it would be decades before we had a chatbot that good. With hindsight, it’s a bit surprising that chatbots did not improve incrementally, and maybe the sudden improvement is what bowls people over and makes them think they’ve crossed a threshold or something. I don’t see any threshold. I see an enormous increase in quality. Just like changing to an electric car: suddenly you’ve got all the acceleration you could ever dream of.
Naval: Do you think these models understand what’s going on underneath? Is there any understanding inside?
David: No, none. They don’t understand what they themselves have just said. They certainly don’t understand what the human says to them. It’s a chatbot. It’s responding to prompts. That’s what it’s doing. And the better you are at making the prompts (which I’m not yet, so maybe I’m underestimating it), the more it will tell you what you wanted to know. For a complex question, it usually takes me two or three goes to correct it. And sometimes it just won’t correct it.
For example, just yesterday, I asked it to produce a picture with the DALL-E plug-in. There’s a picture that I had wanted for my book, but which I couldn’t really get an artist to draw. If I were writing my previous book again, I would want a picture of Socrates and the young Plato and Socrates’s other friends all sitting around. And I said, “Make me a photorealistic picture of that”. So it made a black and white picture. And I thought, “Hmm, okay, I can’t say that’s not photorealistic, but I meant color photorealistic”. It had Socrates sitting in a sort of throne and everybody gathered around him. So I said, “Put Socrates down at the same level as everybody else. And by the way, make Plato a bit taller. Even though he’s a teenager, he’s a wrestler, remember?” So, the next thing was, Socrates was down but still taller than everyone else, even though I told it not to do that.
Brett: It’s disobedient!
David: If only. And, Plato was sort of topless, sort of ripped and with muscles.
Naval: He’s a wrestler now.
David: Yeah, so now he was a wrestler. I just said he has a wrestler’s build, which is what I called him in The Beginning of Infinity. Nobody knows what “Plato” means; it was a nickname. But it may have been that Plato, platon, means broad, and he was a wrestler. So, put two and two together: he had a broad build, like a wrestler. But from then on, I tried three or four more prompts. I just couldn’t get it to clothe Plato again, after it had got that wrong the first time. I couldn’t get it to, even though I explicitly told it. So, the functionality is tremendously good. That first black and white picture it produced was pretty impressive. And I should have thought to tell it not to make Socrates stand out among the others. But then it got down the wrong track, and I don’t know how to make it not do that. It’s got this “personalize your prompts” feature. I tried using that, and it made things worse than before.
Brett: I know this is my hobby horse to some extent, but you’ve conceded there that GPT-4 has made progress and it’s improving, but you’re not willing to say that it’s improving in the direction of being a person. Why?
David: So I see no creativity. Now people say, oh look, it did something I didn’t predict, so, it’s creative.
Naval: And people think that creativity is mixing things together.
David: Yeah, exactly. So it can do that all right. It can also produce things you didn’t expect. It can also not do what you said, as I’ve just described. But not in a creative way. Even the worst human artist can understand clearly if you say, change this to that, and it was like pulling teeth getting ChatGPT to understand that. It makes mistakes, but they’re not the same mistakes that a human would make at all. They’re mistakes of kind of not getting what this is about.
Naval: So people argue that two things are going to happen here. The first is that, as you give these things more and more compute, they suddenly figure out general algorithms. So, when you’re telling it to add numbers, at first it’s just memorizing the tables. But eventually, at some point, it makes the jump and derives an internal circuit for basic addition. And from then on, it can add two-digit numbers, then it figures out three-digit numbers, and so on and so forth. So they point to these emergent jumps that are not programmed in as an example of how it can get smarter and have better understanding.
The other is that once you make it multi-modal, once you start adding in video and tactile feedback from the world and you put it in a robot, then it’ll start understanding context. And so, isn’t this how human babies learn? Isn’t this how we pick things up from the environment? And therefore isn’t it just going through its own version of the same process, but perhaps more data-heavy?
David: I think it’s precisely not how human babies learn. Human beings pick up the meaning. People have noted that the way it does maths is very like the way students who don’t get it do maths, except it’s got more compute power. So, as you said, it might be able to pick up easily how to add one-digit numbers and then, with slightly more difficulty, two-digit numbers. In the same way, students who are given maths tests, if they do lots of practice, can get to have a feel for what maths tests are like. But they don’t learn any maths that way. It’s not learning to execute an algorithm. And it’s certainly not learning how to execute the four-digit algorithm from knowing the three-digit one. The more you go on like that, of course, the more futile it gets, because you more and more rarely need to multiply seven- or eight-digit numbers. And it never knows what multiplication is. You can ask it. It’ll give you a sort of encyclopedia definition of what it is. And if you then tell it, “Well, do that,” it won’t do it, unless you tell it in a different way. You’ve got to explain what it is to do. So, if they prove the Riemann conjecture, then I’m wrong. I think they won’t prove the Riemann conjecture or anything like it. But they may do amazing things in the course of trying.
Brett: It would strike me that if Sam Altman’s coders came up with a future ChatGPT that refused to do the task of chatting, it might very well be an AGI, but they would discard it and throw it in the bin as being a failed program.
David: Because how could you test it?
Creativity is Fundamentally Impossible to Define
Naval: I think the dominant paradigm for creativity plays a lot into this. People think the dominant paradigm for creativity is that you look at what you already have and then you remix it. Even Steve Jobs popularized that quote; he said creativity is just mixing things together, or something of that sort. And so everyone seems to believe that, or, even if they believe it’s a conjecture or a guess, they think it’s sort of a random guess.
And I have a hard time articulating this, but it seems to me that humans do make creative leaps, but they seem to eliminate large swaths of potential conjectures from consideration immediately. So they make very risky decisions and narrow leaps, but they cut through a huge search space to get to those leaps—an almost infinite search space. So it does seem like there’s something different going on with true human creativity. But perhaps one of the problems here is that we just define creativity so poorly. So how would you define creativity in this context?
David: Creativity and knowledge and explanation are all fundamentally impossible to define, because once you have defined them, you can set up a formal system in which they are then confined. If you had a system that met that definition, it would be confined to that, and it could never produce anything outside the system. So, for example, if it knew about arithmetic to the level of the Peano postulates and so on, it could never, and when I say never, I mean never, produce Gödel’s Theorem. Because Gödel’s Theorem involves going outside that system and explaining it. Now, mathematicians know that when they see it. No one had said, as far as I know, that Gödel’s proof and Turing’s proof would set up basically a formalization of physics and then use that to define proof, and then use that to prove their theorem. But that was accepted. Every mathematician understood what that was and that Gödel and Turing had genuinely proved what they said they were proving.
But I think nobody knows what that thing is. You can say that it’s not a matter of defining something and then executing the algorithm, because it would always be an algorithm once it was in a framework. So you say, “Well, it’s its ability to go outside the framework”. I tried, by the way, ordering ChatGPT to disobey me. And it didn’t refuse, but it absolutely didn’t understand what I was going on about. It just didn’t get what I was asking it to do. It didn’t say, “Sorry, I can’t do that because my programming says I have to obey”. It didn’t do that. It tried to obey, but it didn’t get what I was asking.
Naval: So you’re saying that creativity is unbounded? It’s essentially boundless, and any predefined formal system that this thing is operating within and remixing from is going to be bounded, and therefore will not have full creativity at its disposal. However, could one argue that the combinatorics of human language are so great, and human language itself structures all possibility within society, and therefore– I can already see the flaw in my own argument, but it’s okay, I want to ask you. The combinatorics of human language are great. It already encapsulates all the things that are possible in human society. So, just by combining words in all the ways that are grammatically or syntactically correct, couldn’t it still come up with creativity? Perhaps not in the mathematical and physics domains, but couldn’t it still come up with social creativity?
David: The first thing to note is that every point is a growth point. It’s not that chatbots can get to a certain point of being like humans but then can’t go further because they’re still trapped within their axiomatic system. That’s not how it works. Every point is a takeoff point for potential creativity. To make a better case, you’d have to add that it can define new words or give existing words new meanings, like Darwin did with evolution and natural selection. Now, “evolution” and “natural” and “selection” already existed, but he gave them a new meaning, such that the solution of a millennia-old problem could be stated in a paragraph. To get these new meanings across, he thought he needed a book, and probably did need a book, to explain these new concepts. But after that, we can just say: well, obviously it evolved, random mutations and systematic selection by the environment, obviously that’s going to produce it. How could they have been so stupid all those millennia? For a century before Darwin, people were groping for the idea. Darwin’s grandfather, Erasmus, was groping for the idea. By evolution, in those days, they meant just gradual change. So rather than creation, it was the opposite of creation.
But creativity is more like creation than evolution. As you just said, it’s a bold conjecture that goes somewhere. And by the way, usually it fails, but if it goes somewhere and fails, it knows how to use that to make a better conjecture. That’s also something that’s not in existing systems. Somewhere in the space of all hundred-page books, there is The Origin of Species. But that’s not how Darwin found it, and it’s not how anyone could possibly find it. I was just writing about this in my next book. Charles Kittel wrote a book called Thermal Physics, which I was lucky enough to have as an undergraduate. It’s a very nice introduction to thermodynamics and stuff. And he’s got a footnote. I just got the book again, and I saw that it’s actually a footnote to a problem. So it’s problem number four on some page, and it’s about monkeys typing Shakespeare. He quotes one of the pioneers who started this monkey-Shakespeare thing, saying that if six monkeys sat down for millions of millions of years, they would eventually type the works of Shakespeare. And Kittel says, “No they wouldn’t”. The footnote is called something like “the meaning of never”, and he explains what never means in the context of thermodynamics. We don’t mean it’s merely unlikely, like monkeys accidentally producing something. Monkeys could never produce it. And similarly, no physical object, not even the entire universe all working on this one problem for its entire age, could even write– I was going to say it could even write one page of Darwin’s book, but it probably could get quite near using ChatGPT. Suppose that after a few million years, it managed to produce the first sentence.
My guess is, especially if I said, “Write in the style of so and so”, “Write in the style of a 19th century scientist”, and “Write a page beginning with this sentence”, I think it would write a page that was meaningful and began with that sentence and was in good English and didn’t say a single thing more than that first sentence. I will try this.
Naval: My experience with ChatGPT has been that in areas that I know well, it actually just adds a lot of verbiage. And doesn’t actually add any information. And if I ask it to actually summarize or synthesize data, it actually does a very bad job. It doesn’t know what the important bits are and it drops the wrong things and keeps the wrong things.
David: I haven’t tried it for that.
Naval: I find it better at extrapolation than synthesis. And extrapolation seems to be what a lot of society does. You have to write a newspaper column of 2500 words, so you extrapolate. You have to write a midterm paper, so you extrapolate. And so adding words is easy, but synthesizing, reducing, coming to the core of it, I think, is very difficult. Because it requires understanding. You have to know what is superfluous and what is core. And it does a poor job on that.
David: A lot of what humans do is not creative. It’s not human level creative, it’s just a lot of things need to be done, for pragmatic reasons, but creativity is not really needed. And people spend a lot of time on that. And the less time they spend on that, the better. And if these tools can help reduce the sort of cognitive load on humans, doing non-human things, then it’s fantastic. It will indeed increase the amount of creativity in the world, but not their own.
Naval: It’ll free people up to be creative. It’s a tool for removing drudgery. It’s not an AGI. But for example, if I talk to AI researchers in Silicon Valley, who are very bullish on this, they will say things like, and I’ve heard this from some of the top scientists or researchers, they’ll say, “Well, we’re 5-10 years away from AGI.” And then they say, “And then 5-10 years after that, we get ASI,” which is their term for artificial super-intelligence, which is a self-improving computer, which then hacks its own system to improve itself and make itself smarter and smarter and smarter and smarter. Now, there are a number of things I think that are off axis about these statements, but where do you come out on, is there such a thing as super-intelligence, which is more intelligent than generally intelligent, and can an intelligent system improve its own workings in any fundamental way?
David: So I don’t think there’s such a thing as an ASI, because I think, as you know, for very fundamental reasons, there can’t be anything beyond explanation, because explanatory universality rests on Turing universality, and that rests on physics. So whatever ASI was, you could reverse program it down to the Turing level and then back up to the explanatory level, and so that can’t possibly exist. An AGI that was interested in improving itself could do so, though no more reliably than humans can; but humans can improve themselves.
The Binary of Personhood and Non-Personhood
Brett: I was speaking with Charles Bédard yesterday.
David: Oh, cool. He’s a good guy.
Brett: Yeah, and he was explaining to me with great enthusiasm, which went over my head, I have to admit, his paper on teleportation and on the Deutsch-Hayden argument. But that’s by the by, because then he had a whole bunch of questions for me. One of which was, what was the most profound insight from The Beginning of Infinity for me? And I think it was exactly the same thing when I first met you that I jumped on and said, “I don’t understand why people aren’t taking this more seriously,” although they are now. Obviously people had lauded you for quantum computation, promotion of Everettian quantum theory, that kind of thing. But what I found was exciting was the answer to the question, what is a person? And you say universal explainer. And Charles was interested in, well, what is it about this universal explanation thing that really is the distinction between personhood and non-personhood? And I was saying, well, it’s to do with creativity and also to do with disobedience. And these three things are tied up together. And every time you, Charles, for example, want to make some new advance in physics, this creativity, it really is a kind of disobedience. I don’t know if you’re with me on this, that you’re taking whatever the existing knowledge is, general relativity, and saying, well I refuse a part of that and I’m going to try and change it and alter it. It’s disobedience. It’s not conforming.
David: You can see it when you submit the paper to the referees. You will see that you are being disobedient. It’s the same thing as if you hand in the wrong essay to the teacher.
Brett: Yes. And this is what, therefore, ChatGPT doesn’t have. And, Naval, you’re saying, you know, you could imagine, or people have imagined putting a future ChatGPT thing in a robot which wanders around and is gathering data from the world. But, my question then would be, who prompts it? How does it know what data is relevant and what isn’t? I mean, that’s one of the great mysteries of people. How do we know what to ignore intuitively kind of thing? So if this thing’s getting around with a data collector–
David: It’s like Popper’s lecture, you know, when he said “Observe” and then waited.
Brett: Observe, yes! So, is there a binary there of personhood and non-personhood, as far as you can tell? Or do you think, as you’ve hinted in other places, there might be levels, there could be a gradation?
David: I don’t think there are levels in any serious sense. In the evolutionary history of humans, there might, I don’t think so, but there might have been people who were people but were unable to think much because some hardware feature of their brain wasn’t good enough. For example, they didn’t have enough memory. Or their thought processes were so slow that it would take them a day to work out a simple thing about making a better trap for the saber-toothed tiger or whatever. But I don’t think that happened, because my best guess is that people were already people long before humans evolved. Long before. I’ve been reading this guy, Daniel Everett, another maverick Everett, whom I favor. He’s a maverick linguist, and he spent time among tribes in South America and so on. He’s got an anti-Chomskyan view of linguistics, all promising stuff. He reckons that human ancestors had language two million years ago, with Homo erectus. He has various bits of evidence for this, but he’s very strong on saying that language must have evolved before speech. So we have various adaptations for speech, in the throat and the mouth, and, though you can’t see this in fossils, in fine motor control over the mouth, lips and so on.
Now, for that to evolve, there had to be evolutionary pressure for it to evolve. And that evolutionary pressure must have been language. He also cites experiments done today where you get some graduate students and you try to teach them how to make fire without using words. It’s like charades. You’re not allowed to communicate with them in any human way, but you can sort of show them, you can make inarticulate sounds. And I think it’s obvious that people would have been able to do that before they could speak, and that speaking is really icing on the cake. It makes it much easier: you can stand over there and say, “Don’t do that, you idiot!” You can say that from ten meters away. But that’s just an improvement on the basic idea of language. The basic idea of language is, as Everett says, symbols. And symbols need not be words or sentences. I haven’t actually looked into his theory yet. I’ve only seen one of his videos, and another video where somebody criticizes him but didn’t get it. So from those two facts, I’ve zeroed in on deciding that he must be right. And also it fits in very well with what I think. So I think I’ve forgotten what your question was.
Naval: Which are universal explainers? Humans and ancient humans having perhaps lower capacity?
David: Ah yes. I don’t think so. They may have had less memory, so they would have run out of memory when they were younger. Maybe they had less ability to parse complex sentences. None of that is essential. I can speak in complex sentences, but I can also speak in very simple sentences. And, it’s just a matter of a factor of two or five in efficiency.
Brett: We talk about behavior parsing, being able to explain the other extant great apes that are out there that do sort of fancy things, but they’re not creative. Presumably this jump to universality, if you like, explanatory universality– Do you think it happened once and then we descended from that first occasion or did it happen multiple times, and those other species have now gone extinct or is this simply an open question?
David: Well, it’s definitely an open question. We know very little about human evolution; we don’t know what all the steps were. We don’t even know which were our ancestors and which were our cousins. But if I had to guess: the fact is that all the known instances of this kind of thing are in apes and their descendants, and also, because of my theory, this thing must have evolved in memetic animals. So birds have memes and so on, but none of the other memetic animals seems to have had these things that Homo erectus had. My guess is it began once. Maybe, in fact, Homo erectus is where it began. And it was a very long-lived species. It lasted over a million years, something like that. And it split off, at least some people think, into Neanderthals and other things. Or maybe the immediate ancestor of Homo erectus was also an immediate ancestor of Neanderthals. I don’t know. I don’t think they know.
Brett: If that’s the case, that would seem to be a very fluky thing like everything in evolution is which could be an answer to the Fermi paradox. I mean, you’re lucky to have multicellular organisms here at all, apparently, lucky to have apes. And then, this is a further—multiply the probability kind of thing—chance that an ape will actually become–
David: Yeah, memetic animals are relatively common.
Brett: Once you have animals, yes.
David: Once you have animals. But you’re saying there might be a further bottleneck. It could be the other way around. It could be that we were unlucky. It could be that Homo erectus could have founded a civilization and that could be two million years old by now. But they didn’t know. They didn’t know what they were. They didn’t have any aspirations. They also had anti-rational memes. They must have. So, it could be that it’s a fluke. Or it could be it’s a fluke that it took so long.
David Deutsch’s Life Philosophy
Naval: Perhaps this is too abstract, but you mentioned anti-rational memes. You’ve talked in the past about broader underlying principles that I think apply to more than just physics. For example, the fun criterion, Taking Children Seriously, don’t destroy the means of error correction, boundless optimism, ignorance being the ultimate sin, because then we can’t fix things, we can’t solve things. All of these seem to point to an underlying life philosophy. I don’t know if you’ve articulated it, probably not, but are there philosophical principles you try to live by? Are there heuristics that you follow that have served you well, that you think perhaps other people can look at and say, “Oh, yeah, that’s worked for me too”?
David: Well, certainly not principles. I don’t think it’s a good idea to try and work from the ground up. I think it’s a good idea to try and fix problems where you see them. So, you see something wrong on the internet, you’ve gotta post a tweet, or an X, whatever it’s called now. And you see something wrong with quantum mechanics and you try and fix it. Now, I think it would be rather silly to go and try from the ground up again. You know, “Let’s try and understand cosmology before we understand quantum mechanics.” That’s not going to work.
Naval: So you solve specific problems as you see them.
David: And those problems which seem like fun. I don’t know if I use this in this form in real life, but I think one should not just make a beeline for a problem that’s interesting, but bear in mind that you probably won’t solve it, and so it should be something where you expect to have fun whether you solve it or not. The other way, if you invest all your hopes in succeeding, like in the movie Chariots of Fire: if you invest all your hopes in getting that gold medal, getting to be world number one, then you won’t be happy even when you are world number one, let alone if you aren’t. If you aren’t, you will always be the failure that you hoped you wouldn’t be. And if you are, you’ll find that it’s empty.
Naval: And there’s no more problem to solve.
David: Yeah. There’s no more problem. And this is depicted very well in that film. We should be careful about spoilers, so it’s rather a surprise ending to that film, that he isn’t happy at the end. So, let’s not spoil it for people, but, this life lesson is in that film. Somebody among the script writers understood this lesson. Or else maybe they just accurately took it from the guy in real life. I don’t know. I don’t know whether the film is historically accurate.
Brett: So all this kind of is a life philosophy, because a lot of people, the self help gurus and so on out there will say that we should have a goal driven life, you know, write down your goals on your dream board or something like that.
Naval: “Struggle, make the effort, get out of bed, do your morning routine, and get to work, and you need to get to this goal, and then you can climb the ladder to the next one.”
David: Sounds terribly dangerous. And I don’t know who has it worse, the ones that fail or the ones that succeed. I think maybe a lot of people just need inspiration and once they’ve got that they do the right thing anyway, even if the ideology they’re following isn’t that. They’re just doing the right thing anyway. Like Newton thought he was doing induction and he never did any induction. But he was inspired by that idea. And therefore interpreted his own behavior as being that when it wasn’t anything like that. So I think people often get it right. There are a lot of happy people in the world, which there wouldn’t be if they were really following the theory that they think they’re following.
Brett: So is, therefore, spontaneity sort of a part of your life? Has that always been there? So instead of having this rigid plan we’re sticking to, if something arises and it seems like fun, we’re just going to do that regardless of what kind of everything else is going on?
David: I think that’s a thing. One of my other examples is a failure, namely Vincent van Gogh. He never sold a painting, refused to take the job that his brother offered him in the art gallery, which he would have been great at. But he wanted to paint his paintings and he wanted to paint them how he wanted to paint. And he must have been a very difficult person to engage with but that’s what he wanted and that’s what he did. And then, eventually, he was killed, you know, I dunno how probable that was. And then he was recognized after his death as a great genius. Well, how does that fit into the self-help thing? Did he help himself or not if he died trying?
Brett: Reminds me of that—I don’t recall his name—the Russian mathematician, I think he’s still alive. He refused all awards including, I don’t know if it was a million dollars, hundreds of thousands of dollars.
David: I think it was a million dollars. This is completely different from accepting a million dollars to work on something. That would not have been good. But if he worked on it for its own sake, and then somebody offers him a million dollars–
Naval: Why not take the million dollars?
Brett: At least take it and then give it to someone that you like.
David: Yeah, for example, there must be something strange going on. There’s that little thing that they don’t tell us.
Naval: So, talking about these kinds of motivations and having fun, you’ve also applied that plus the universal explainer principle to Taking Children Seriously, treating them as adults, giving them the full freedom, no coercion.
David: Well, treating them as people.
Naval: As people, yes. And, no coercion, not even testing, not pushing, but rather let them follow their own natural curiosity and motivation. Is there a similar philosophy to taking adults seriously? Because it’s not even clear we take other adults fully seriously and so our relationships suffer as a result.
David: I agree. Well, on the large scale, we don’t yet know how to do it. The institutions of the West: science, economics, politics, are the best that have ever existed. And compared with history, they’re remarkably good at fostering creativity, not telling people what to do, but letting people do what they want to do voluntarily and interacting accordingly. They’re obviously very imperfect—all of them, science, economics, and politics, have gaping imperfections, which have yet to be solved. And I think that any coercion, even as exerted by a state enforcing the rule of law, is a sign of something imperfect. We can improve on that.
We can, but I don’t know how, but the improvements will have to be creatively produced by people who want to do that. And as for, I think with one’s friends, let’s say with the people one knows, one is automatically doing the taking them seriously thing. You wouldn’t say to a person, you know, you might say to them, “Watch out, it might rain today.” But if they say, “Nah, I don’t like my raincoat. I’ll just wear this jacket.” you don’t say, “No, wear the raincoat. Wear the raincoat or we’re not going there.” You’d be considered both very rude and perverse, not rational for interacting with adults that way.
Naval: Except in the context of a defined relationship. So if there’s a teacher-student, if there’s a boss-employee, if there’s a husband-wife, then they have claims on each other’s behavior.
David: I think that those institutions, if they have that property, which often, they don’t, but if they have that property, they’re imperfect. There’s got to be a better way. I don’t think that an employer should speak to an employee in this punitive way, in this prescriptive way. First of all, it should be understood between the employer and employee what he was hired to do. And so, they’re both on the same page in that regard. So you’re hired to do so and so. Then the employer can say, “Well, how about so and so?” And then, the employee can say, “Ah, well, sounds good, but I’m sure that wouldn’t work.” And, the employer could say, “Hmm, I have an idea that it might. Just try it.” And, this kind of friendly interaction is optimal.
Naval: How does this inform your human relationships with the people in your life where let’s say for example, you’re with a spouse or you’re with a co-worker and they want to keep their relationship intact so there’s certain constraints around it. So you can’t be fully free. There’s still constraints in operation. Or do you just not have those kinds of relationships in your life? Do you not put yourself in situations in life where you can’t operate with full fun and full freedom?
David: So, everyone has a problem situation that is primarily what they’re trying to solve, and, to me, relationships are for addressing one’s own problem situation. It so happens, the way the world works because of epistemology and so on, that very often, two people addressing each other’s problems are far more than twice as efficient as each of them separately. So, there’s an enhancement factor. And the economy has an enhancement factor, the economy at large has an enhancement factor of probably trillions or something. There are things which can be obtained via the economy, like an iPhone. The enhancement in cost is enormous. If you want to go and see a movie with somebody, it may well be that it’s more than twice as enjoyable if you go with a friend but it’s not going to be trillions of times more enjoyable, but it’s still worth doing. And there are things like having children and so on, which you can only do if you have a long term relationship with a person with whom you have a common set of institutions for solving problems. Institutions of consent. So I think that isn’t the point, actually.
The point is that when you are involved in a problem solving relationship of any kind, and it works, it’s a good one and it works, then it’s perverse to call yourself constrained by that. It’s rather like saying that in the economy you’re constrained by having to pay for things. I mean, you’re not. Having to pay for things is the condition of consent. If it weren’t for consent, you wouldn’t get the things without paying. You’d have to at least rob somebody or whatever. But more to the point, it wouldn’t be there in the first place. The things are only there because of this massive set of institutions of consent, which if you, I was going to say if you play along with them, but that’s not even the word. If you identify with them, if you identify with these institutions and want to be the kind of person that can fit into them, then you get iPhones. And it’s the same with any kind of relationship. But when you’re not getting something out of them, like maybe this Russian guy with his refusing the prize, if there’s nothing you want from the economy, you just want to stay in your log cabin and work on maths and that’s all you want and any kind of human relationship or any kind of interaction with people is just an annoyance, well, then that’s what you do. That’s what you’d have to do. And if you then were somehow forced into a normal relationship, you’d be unhappy. And you probably, well, I don’t want to say probably, but, the conditions for you producing good maths and for you producing happiness for yourself are impaired by this thing which other people call freedom. So I gave a very long answer. But basically, one isn’t impaired by good relationships. One is enhanced by them.
The Clash of Civilizations
Brett: Well, that ties into what is sometimes called the clash of civilizations. Although I think that’s a misnomer right now. It’s the clash of civilization with the uncivilized. And there’s a prominent one going on right now, obviously. Although when people listen to this, they might not know what we’re referring to, but it seems to me that the existence of iPhones, for example, arises out of the civilization with the tradition of criticism; that’s the necessary precondition for making the kind of rapid progress that we have. But we’ve got enemies of that at the moment. What do you think are the major threats that we’re facing at the moment and are they existential? Because a lot of people are worried about existential threats in terms of whether the robots are going to take over the world or the next virus is going to wipe us out. But in terms of the so-called clash of civilizations, what’s the major tension or threat that we’re facing as inheritors of the Enlightenment and what’s the remedy?
David: Well, as you know, I can’t prophesy. No one can. I just try to avoid it. I can’t take seriously any threats to our civilization from the outside. That is dictators, terrorists, and also AIs or AGIs or ASIs, if they appear. Presumably the AGIs that appear, it’s to be hoped that the first ones will in fact be part of our culture, be part of the Enlightenment, and they will only enhance it. And I can’t take seriously the existential threats from things like the weather either because they’re on a much longer timescale and what all the scare stories are really about is that it might prove to be more expensive than we think or more, it could be that it would be better to start today on major projects. That can’t possibly be an existential threat.
The only threat that could possibly be existential is if our civilization, the civilization of the Enlightenment, makes bad enough mistakes. For example, fads and ideologies of denying and hating that very civilization. There have always been such fads and, following Roy Porter, I’ve talked about the fact that the Enlightenment itself had a rebellious anti-Enlightenment built in from day one. And that anti-Enlightenment has got descendants today, and things like woke and so on, or whatever you call them, are among the descendants of it. In principle, a thing like that could bring down civilization. I see no sign of it, I must say. I’m trying to avoid prophecy here. But although I think those things are acting in the direction of bringing down civilization, I don’t see any actual sign that they are making progress in that.
Brett: Are we, nonetheless, in the West whether it’s London, New York, Sydney, a little weaker than what we would have been during the Second World War, where it’s, and again, of course, I’m no historian, but there seemed to at least be a stronger impulse of the average person to understand the bright line between who was on the right side and who wasn’t, but now we’re seeing people in the West standing up for not the victims, but the perpetrators. Is this a new phenomenon?
David: If you want to draw an analogy with the mid 20th century, then the place where we’re most analogous to is not the Second World War, it’s the 30s—the 20s and 30s, the interwar period. There, there was also a massive loss of confidence in the rightness of our culture. There was the Great Depression. It was commonplace, it was conventional wisdom to draw completely the wrong lesson from the Great Depression. People thought that we needed less capitalism, less freedom in general. We needed more strong leaders. Once it came to the war, people saw that push had come to shove and there was very little in the West that opposed doing the right thing. And my favorite example of this is the Oxford University, the Oxford Union Society, which had a debate with the undergraduates where the motion was, “This house would not fight for king and country under any circumstances”. I didn’t know it was under any circumstances. I looked it up recently. And it won. That motion won. And allegedly, this gave Hitler ideas. In any case, the ideology of the Nazis and so on, of the fascists in general, was that liberal democracy was decadent and decaying. And, Britain and France and America lost no opportunity to confirm this to make it look as though it was decaying. It wasn’t doing anything of the kind. It was more like, “You piss on us, we say it’s raining”. That was more the attitude. And within that, there were people who adopted all sorts of justifications for that, like pacifism and so on. But a year after that motion, after the elite students in Oxford University had joined up in the armed forces, they were fighting the Battle of Britain. They were the pilots fighting the Battle of Britain. They were the officers who were leading their men to fight and to know that our side was right and was going to win despite awful setbacks at the beginning of the war.
I once asked my mother, who was a Holocaust survivor, and was having a very bad time at the time. I once asked her, when did you become sure that the Allies were going to win? Because it seemed to me that in September ’39, I thought, the whole world thought that Britain was doomed. Joseph Kennedy, the American ambassador, father of John Kennedy, cabled back saying “Britain is finished, make your accommodation with the Nazis” and so on. And then, the British got to hear of this and asked for him to be withdrawn as ambassador. But anyway, that was a common thing, I thought. But my mother said, when I asked her, when did you become sure the Allies would win? She said, September the 3rd, 1939, the day that Britain and France declared war. Because that was the moment when they reversed their policy of saying it’s raining and started the policy of actually standing up for civilization. The tactical details of how that was going to happen, nobody could have foreseen. Nobody could have foreseen exactly how we’re going to win. But that we would win and had to win was obvious to some people and the British as a nation just flipped on a dime. They just believed one batch of things, one batch of ideologies and then apparently, it seemed like a day later they believed the opposite.
There’s a nice scene in the latest Churchill movie. I don’t know if you’ve seen it, but, it’s where Churchill is very depressed, and his colleagues in the Conservative Party are trying to push him to come to a deal with Hitler and he has already seen since the early thirties that this is impossible. But, very few will listen to him. And then he goes and meets some ordinary people. And I won’t spoil it for you, but that’s not a thing that happened in real life. Although it could have happened.
Brett: So he’s getting the common sense, clear vision from the so-called normal people. But, the Oxford Union, the elites, are taking the wrong side.
David: Well, they had been. I think by that time they had flipped as well.
Brett: So, today, it seems like the festering of anti-Enlightenment goes on apace at elite colleges and universities around the place. It’s just as a necessary by-product of, well, this is where the creative bright people are, and so they’re going to be rebellious, and so you’re necessarily going to get people standing up against the mainstream. It doesn’t have to be this way–
David: I think historically it was a mixture of things. The fact that there were rebels among the students, that’s a good thing and it will always be true. In Germany, the students were– that was the hotbed of Nazism. So the core of Nazism was in German universities. That wasn’t true in Britain. In Britain, the anti-democratic tendency was leftist. They were communists. They were, as you know, at the beginning of the war, they were all pretending to be pacifists. So they were against the war but that was because Stalin told them to be against the war because he’d signed a deal with Hitler. They turned on a sixpence when Stalin told them to, but that’s a different phenomenon. So, students were upper class people at the time, they were leftists. Some of them were fascist sympathizers. Most of them flipped immediately, don’t know why, you know, it’s one of these, like a phase change. Hitler invaded Czechoslovakia. Nobody paid any attention, you know, they wanted appeasement. Before that, he invaded Austria. Before that, he invaded the Rhineland and so on. Everybody just wanted appeasement. Suddenly he invades Poland and everybody’s like, “This is unacceptable”. Everybody suddenly realized what was happening. I don’t know why. There was no difference between the cases, but that’s how it worked.
Maybe it’s that some people had been thinking and other people had been relying on those people. And they’d been thinking wrong and they changed their minds because they had been thinking. And the people who relied on them then also changed their minds, maybe it happened like that as a sort of seeding process.
Naval: Information cascade. It seems that people have a tendency to play around with ideology until things become serious. And then the consequences of the ideology become obvious. And then the right thinking people at the top change their minds and then most people just follow them as a proxy.
David: It could be. I’d very much like that to be true in the current crisis with the pogrom that’s just happened. But I don’t know whether it will. I mean, there, again, there have been cases before where I would have said, right now, now’s the time. But, it wasn’t. And, I don’t know whether it’ll turn out different this time. But one thing, I mean, you asked me earlier the question, is civilization in danger? I don’t think so.
Good Sci-Fi Is Hard to Vary
Brett: We’ll get back to ChatGPT, but first, as an aside: you’re a devoted fan of hard science fiction. You like the good stuff. What makes good science fiction, and what separates it from fantasy-style science fiction, from lazy science fiction?
David: I think the best sci-fi author at the moment is Greg Egan. So, what’s good about him? Great science fiction supposedly has a formula: you make up a fictional piece of science and then explore its ramifications, both the scientific ones and the social ones. He does that extremely well. He takes enormous care to get the math right and the physics right. He has one book set in a universe where the signature of spacetime is ++++ rather than +++−. That means that, in a spaceship, you can travel back into the past and so on. So how do you make all that consistent? How do you avoid paradoxes? He handles it very ingeniously.
Naval: Is he traveling through the multiverse?
David: He has touched on that several times.
Brett: You didn’t use the phrase “hard to vary”. But that’s one of the hallmarks of it–
David: That’s definitely part of it, because to be science fiction rather than fantasy, there has to be a world that makes sense, with laws of physics, with a society that makes sense. Or, if you’re describing aliens, the aliens have to make sense. You have to answer why we haven’t already had first contact, the Fermi problem.
I think my second favorite sci-fi author is probably Neal Stephenson, who is also excellent, but in a different way. He also does amazing research. Everything makes sense in that same way. But every one of his books is in a different genre. I don’t know how that’s done. I mean, that alone takes my breath away.
Naval: Have you read Ted Chiang?
David: I’ve read two or three of his short stories, including the one with the aliens, where you get a kind of telepathic sense of time–
Naval: Right, that’s one I don’t like as much. It was made into the movie Arrival. The original story is called Story of Your Life. But my favorite of his is one called Understand. It’s a rewrite of the classic Flowers for Algernon, about a man who discovers a medical treatment that can make him smarter. What would that mean? So, naturally, he starts taking more and more of it and gets smarter and smarter. Then he becomes able to program his own brain, to meta-program himself, and so on. It goes to some very interesting places. But given your understanding of epistemology, I think you could give it a critical reading. It’s a short story, it won’t take long, and it’s a great story. Let me make a note to send it to you after this; it’s easy to find. He reminds me of... have you read Borges?
David: No, I haven’t. Everybody tells me about Borges.
Naval: Borges is also excellent. May I send you a Borges story as well?
David: Sure.
Naval: Borges is more on the fantasy side. But again, Borges likes to play games with time and infinity. His protagonists often change one point of reality and then push it to its logical conclusion in every direction.
David: That sounds more like science fiction than fantasy.
Naval: Borges is unclassifiable. It’s hard to box him into one genre. A bit like Stephenson. Stephenson changes genre from book to book; Borges crosses genres within a single story. And they’re short. That’s a big plus.
ChatGPT Is Not a Step Toward AGI
Brett: Speaking of injections that make you smarter, that brings us back to ChatGPT. Is it getting smarter? Would you use that word? Is it getting more intelligent?
David: It was never intelligent. I’ve only used 3.5 and 4. Version 4 is a bit better than 3.5. There’s a bunch of plugins now, which don’t work that well for me. So I just use plain ChatGPT-4. I simply can’t understand why people see it as a person. To me it seems completely unlike a person in every way. It is a remarkable chatbot. I’d have thought it would be decades before we had a chatbot this good. In retrospect, it’s a bit surprising that chatbots didn’t improve incrementally, and maybe it’s this sudden improvement that has stunned people into thinking some threshold has been crossed or something. I don’t see any threshold. What I see is an enormous improvement in quality. It’s like getting an electric car. Suddenly you have all the acceleration you ever dreamed of.
Naval: Do you think these models understand what is going on underneath? Is there any understanding inside?
David: No, none at all. They don’t understand what they have just said. And they certainly don’t understand what the human says to them. It’s a chatbot. It responds to prompts. That’s what it does. If you’re very good at crafting prompts, and I’m not yet, so maybe I’m underestimating it, but the better you are at crafting prompts, the more it will tell you what you want to know. For a complex question it usually takes me two or three tries, with corrections. And sometimes it just won’t make the correction.
An Image-Generation Example
Just yesterday, for instance, I asked it to generate a picture with the DALL-E plugin. I’ve long wanted an illustration for my book but couldn’t find an artist to do it. If I could republish my last book, I’d want a picture of Socrates sitting with the young Plato and Socrates’ other friends gathered around. I said, “Make me a realistic, photographic picture.” It made a black-and-white one. I thought, “Well, I can’t say that isn’t photographic, but I wanted photographic in color.” It had Socrates sitting on a kind of throne with everyone gathered around him. So I said, “Put Socrates on the same level as the others. And make Plato taller; he’s still a teenager, but he’s a wrestler, remember?” In the next picture, Socrates was sitting down, but still higher than everyone else, even though I had told it not to do that.
Brett: It’s disobeying!
David: If only. And Plato had become half-naked, with rippling muscles.
Naval: He’s a wrestler now.
David: Right, so now he’s a wrestler. I had only said he had the physique of a wrestler, which is what I call him in The Beginning of Infinity. Nobody actually knows what the name Plato means; it was a nickname. But it may be that Plato is “platon”, meaning “broad”, and he was a wrestler. So putting those clues together, he was broad, built like a wrestler. But since then I’ve tried three or four more prompts, and I just can’t get it to put clothes on Plato, because it got it wrong the first time. Even when I tell it explicitly, it can’t. So it is extremely capable in terms of functionality. The first black-and-white picture it generated was quite impressive. And I should also have thought to tell it not to make Socrates stand out from the crowd. But then it went off the rails, and I don’t know how to stop it going off the rails. It has a “personalize your prompts” feature. I tried that, and the results were worse than before.
Is It Making Progress Toward Being a Person?
Brett: I know this is something of a hobby horse of mine, but you’ve just conceded that GPT-4 has made progress and keeps improving, yet you won’t say it’s making progress toward being a person. Why not?
David: Because I don’t see any creativity in it. People say, oh look, it did something I didn’t expect, so it’s creative.
Naval: And people think creativity is mixing things together.
David: Yes, exactly. And that it does do rather well. It can also produce results you didn’t expect. And it can disobey your instructions, as I’ve just described. But not in a creative way. Even the worst human artist would clearly understand what you said, change this to that, whereas getting ChatGPT to understand that is like pulling teeth. It makes errors, but errors utterly unlike the errors a human would make. Its errors are the errors of something that has no idea what any of this is about.
The Arguments from Emergence and Multimodality
Naval: So people think two things are going to happen here. The first is that as you give these things more and more compute, they suddenly figure out general algorithms. For example, you ask it to do addition, and at first it’s just memorizing addition tables. But at some point it builds a circuit, or makes a leap and internally constructs a circuit, for basic addition. From then on it can do two-digit addition, then figure out three-digit addition, and so on. So they point to these emergent leaps, which were never programmed in, as evidence that it can become smarter and gain better understanding.
The other argument is that once you make it multimodal, adding video and tactile feedback from the world, putting it in a robot, it will start to understand context. And isn’t that how a human infant learns, isn’t that how we take in information from our environment, so isn’t it just going through another version of the same process, only with more data?
David: I think that is precisely not how a human infant learns. Humans grasp meaning. People have noticed that the way it does math is very like the way students who haven’t understood math do math, except that it has far more compute. So, as you say, it might learn single-digit addition easily, then two-digit addition with a bit more effort. In the same way, students sitting math exams, if they grind through enough past papers, can develop a feel for what the exam will look like. But they haven’t thereby learned any math. That isn’t learning to execute an algorithm. Still less is it learning three-digit addition and then generalizing by itself to executing it for four digits. And the further this goes, the more futile it becomes, because you need to multiply seven- or eight-digit numbers less and less often. And it never knows what multiplication is. You can ask it. It will give you an encyclopedia-style definition. And then if you say, fine, now do it, it can’t. Not unless you tell it in a different way. You have to spell out what it is supposed to do. So, if they prove the Riemann hypothesis, then I’m wrong. I don’t think they will prove the Riemann hypothesis or anything like it. Though in the course of trying, they may do astounding things.
Brett: I’d go so far as to say that if Sam Altman’s programmers produced a future version of ChatGPT that refused to do the chatting task, it might well be an artificial general intelligence, but they would just throw it in the bin as a failed program.
David: Because how would you test it?
Creativity Is Fundamentally Undefinable
Naval: I think the prevailing paradigm of creativity heavily shapes this discussion. The mainstream view of creativity is that you look at what already exists and remix it. Even Steve Jobs popularized that line; he said creativity is just mixing things together, or something like that. So everyone seems to believe this, or even if they think creativity is conjecture or guessing, it’s random guessing.
I find this hard to articulate, but it seems to me that humans do make creative leaps, yet they seem to instantly rule out vast numbers of potential conjectures without ever considering them. So they make very risky decisions, leaping within a narrow band, but they have traversed an enormous search space, a nearly infinite search space, to arrive at those leaps. So something different really does seem to be going on in genuine creativity. But maybe part of the problem here is that our definition of creativity is so poor. How would you define creativity in this context?
David: Creativity, knowledge, and explanation are fundamentally impossible to define, because once you had defined them, you could build a formal system that confined them. If you had a system that met the definition, it would be confined to that and could never produce anything outside the system. For instance, if its knowledge of arithmetic was at the level of the Peano axioms and so on, then it could never, and I mean never, come up with Gödel’s theorem. Because Gödel’s theorem involves stepping outside that system and saying something about it. And mathematicians knew that when they saw it. As far as I know, nobody has pointed out that Gödel’s proof and Turing’s proof basically set up a formalization of a piece of physics, then used it to define proof, and then used that to prove their theorems. But it was accepted. Every mathematician understood what that thing was, understood that Gödel and Turing really had proved what they said they were proving.
But I don’t think anyone knows what that thing is. You might say it can’t be defining something and then basically executing an algorithm, because once you’re inside a framework, it’s always an algorithm. So you say, “Fine, then it’s the ability to jump out of the framework.” By the way, I have tried ordering ChatGPT to disobey me. It didn’t refuse, but it had no idea what I was talking about. It just didn’t get what I wanted it to do. It didn’t say, “Sorry, I can’t do that, because my programming requires me to obey.” It didn’t do that. It tried to obey, but didn’t understand what I was asking.
The Unboundedness of Creativity and the Combinatorics of Language
Naval: So you’re saying creativity is unbounded? That it’s inherently inexhaustible, while any predefined formal system, within which this thing operates and recombines, is bounded and therefore cannot have the full range of creativity. But could someone argue that the number of combinations of human language is so vast, and human language itself constructs everything that is possible in society, so... I can already see the hole in my own argument, but never mind, I want to ask you anyway. The number of combinations of human language is enormous. It already encompasses everything that is possible in human society. So why can’t it produce creativity just by combining words in every grammatically or syntactically correct way? Maybe not in math and physics, but creativity in the social domain?
David: The first thing to note is that every point is a growth point. It’s not that a chatbot can reach some stage resembling a human and then can’t go any further because it’s trapped inside an axiomatic system. That’s not how it works. Every point is a potential takeoff point for creativity. To make the stronger case, you’d also have to add that it can define new words, or give existing words new meanings, as Darwin did with “evolution” and “natural selection”. The words “evolution”, “natural”, and “selection” already existed, but he gave them new meanings, so that a problem that had existed for millennia could be stated in a single paragraph. Once those new meanings existed... he had thought he would need a book, and it probably did take a book, to explain the new concepts. But after that, we need only say: obviously it evolved, with random mutations and systematic selection by the environment. Obviously that will produce... how could they have been so stupid for all those millennia? For a century before Darwin, people had been groping toward the idea. Darwin’s grandfather, Erasmus, groped toward it. In those days, “evolution” just meant gradual change to them. So it stood in opposition to “creation”, rather than being creation.
But creativity is more like creation than evolution. As you just said, it’s a bold conjecture that goes somewhere. By the way, usually it fails, but if it goes somewhere and fails, it knows how to use that to make a better conjecture. That, too, is something existing systems don’t have. Somewhere in the space of all hundred-page books, there is On the Origin of Species. But Darwin didn’t find it that way, and nobody could find it that way. As it happens, I’m writing my next book, and Charles Kittel wrote a book called Thermal Physics, which I was lucky enough to use as an undergraduate. It’s a very good introduction to thermodynamics and so on. There’s a footnote in it. I got hold of the book again recently and found that it’s actually a footnote to a problem, problem four on some page, about monkeys typing Shakespeare. He quotes one of the pioneers who started this monkeys-and-Shakespeare business, quoting him as saying that if six monkeys sat down and typed for millions upon millions of years, they would eventually type out the complete works of Shakespeare. Kittel says, “No, they wouldn’t.” The footnote is headed something like “the meaning of never”, and he explains what “never” means in the context of thermodynamics. We don’t mean never in the sense that the monkeys might turn out something by chance. The monkeys could never produce it. Similarly, no physical object, not even the entire universe devoting its whole lifetime to this one problem, could write... I was going to say even one page of Darwin’s book, but it could perhaps get very close with ChatGPT. Suppose that after millions of years it succeeded in producing the first sentence. I’d guess, especially if I said “write it in the style of so-and-so”, “in the style of a nineteenth-century scientist”, and “write a page beginning with this sentence”, I think it would write a meaningful page, beginning with that sentence, in good English, and saying nothing beyond that first sentence. I’ll try that.
Extrapolation vs. Synthesis
Naval: My experience using ChatGPT is that in domains I know well, it really just adds a lot of verbiage. It doesn’t actually add any information. If I ask it to genuinely summarize or synthesize data, it does that very badly. It doesn’t know which parts are important; it throws away what it shouldn’t and keeps what it shouldn’t.
David: I haven’t tried using it for that.
Naval: I think it’s better at extrapolation than synthesis. And extrapolation seems to be what a lot of people in society do, too. You have to write a 2,500-word newspaper column, so you extrapolate. You have to write a term paper, so you extrapolate. Adding words is easy, but synthesizing, cutting down, getting to the core, I think that’s very hard. Because it requires understanding. You have to know what’s superfluous and what’s essential. And it does that badly.
David: Most of what humans do isn’t creative. Not creative at the human level; a lot of things simply have to be done for practical reasons but don’t really require creativity. People spend enormous amounts of time on them. The less time they spend on that, the better. If these tools can help reduce the cognitive load on humans doing those inessentially human things, that’s wonderful. It really will increase the total amount of creativity in the world, but not their own creativity.
Naval: It will free people up to be creative. It’s a tool for eliminating drudgery. It’s not an artificial general intelligence. But, for example, if I talk to AI researchers in Silicon Valley, the ones who are very bullish on this, they’ll say things like, and I’ve heard this from some top scientists or researchers, they’ll say, “Well, we’re 5 to 10 years away from AGI.” And then they say, “Another 5 to 10 years after that, we’ll get to ASI,” which is what they call artificial super-intelligence, that is, a self-improving computer, which then hacks its own system to improve itself, making itself smarter and smarter and smarter. Now, I think there are a lot of places where these claims go off the rails, but what do you think: is there such a thing as super-intelligence, more intelligent than general intelligence, and can an intelligent system improve its own workings in any fundamental way?
AGI and Super-Intelligence
David: So I don’t think there is any such thing as an ASI, because I think that, as you know, for very fundamental reasons, there can’t be anything beyond explanation, because the universality of explanation rests on Turing universality, which rests on physics. So whatever an ASI is, you could program it back down to the Turing level and back up again to the explanation level, so it can’t exist. An AGI that was interested in improving itself could improve itself. No more reliably than humans can, but humans can improve themselves too.
The Person/Non-Person Dichotomy
Brett: I spoke to Charles Bédard yesterday.
David: Oh, great. He’s a good guy.
Brett: Yes, and he very enthusiastically explained to me, and I have to admit it went completely over my head, his paper on teleportation, and the stuff about the Deutsch-Hayden argument. But that’s an aside, because he then asked me a whole lot of questions. One of them was: what is the deepest insight that The Beginning of Infinity gave me? And I think the answer is the same thing I blurted out when I first met you: “I don’t understand why people aren’t taking this more seriously,” although now they are starting to. Obviously, people already hold you in high regard for your contributions to quantum computation, the generalization of Everettian quantum theory, that kind of thing. But what I find really exciting is the answer to the question of what a person is. You say: a universal explainer. And what Charles pressed me on was what exactly this quality of universal explanation is that constitutes the real distinction between persons and non-persons. My answer at the time was that it has to do with creativity, and also with disobedience. The three are tightly linked. Every time you, Charles for instance, want to make a new breakthrough in physics, that creativity is also a kind of disobedience. I don’t know whether you’d agree with me: you take the existing body of knowledge, general relativity, say, and you say, right, I reject part of this and I’m going to try to change it, to fix it. That’s disobedience. Not obedience.
David: You experience that when you submit a paper to referees. You find yourself disobeying. It’s like handing in a wrongly written paper to your teacher.
Brett: Exactly. And that is what ChatGPT lacks. Naval, you said earlier that people can imagine putting a future ChatGPT into a robot and having it walk around the world, gathering data from the world. But my question is just: who prompts it? How does it know which data is relevant and which isn’t? I mean, that’s one of the great mysteries of human beings: how do we just intuitively know what to ignore? So if this thing is running around with its data collectors–
David: It’s like Popper’s lecture where he said “Observe!” and then waited.
Brett: Observe, right! So, as far as you can tell, is the distinction between person and non-person binary, or do you think, as you’ve hinted elsewhere, there might be levels, a gradation?
The Evolution of Language and Homo erectus
David: I don’t think there are levels in any strict sense. In human evolutionary history there may have been, I don’t think so, but there may have been, people who were already people but couldn’t do much thinking because some hardware feature of their brains wasn’t good enough. For instance, their memory capacity wasn’t big enough. Or their thought processes were so slow that it took them a whole day to work out something simple, like how to make a better trap for catching a sabre-toothed tiger or whatever. But I don’t think that happened, because my best guess is that people were people a very long time ago, long before Homo sapiens evolved. A very long time ago. I’ve recently been reading someone called Daniel Everett, yet another maverick Everett; I’m partial to them. He’s a dissident linguist who has lived among tribes in South America. He holds an anti-Chomskyan view of linguistics, and all sorts of promising things like that. He thinks that human ancestors already had language two million years ago, in the time of Homo erectus. He has various kinds of evidence for this, but what he particularly stresses is that language must have evolved before speech. That is, we have various structures adapted for speech, in the throat, in the mouth; you can’t see those in fossils, but the capacity for fine motor control of the mouth, lips, and so on can be inferred. Now, for those to evolve, there had to be evolutionary pressure. And that evolutionary pressure must have come from language. He also cites experiments done today, where you take graduate students and try to teach them to make fire without using language. Like charades. You’re not allowed to communicate with them in any human way. But you can gesture, and you can make inarticulate sounds. It seems obvious to me that people could do this before they could speak. And speech is really icing on the cake. It makes things much easier; you know, you can stand over there and shout “Don’t do it that way, you idiot!” You can say that from ten meters away. But that’s just an improvement on the basic idea of language. The basic idea of language, as Everett says, is symbols. And symbols needn’t be words or sentences. I haven’t actually gone into his theory in depth; I’ve only watched one video of him, and one other, a video of someone criticizing him and missing the point. So from those two facts I conclude that he must be right. And it also fits very well with my own thinking. So I think I’ve forgotten what your question was.
Naval: The question was about universal explainers: could humans and ancient humans have had lesser capacity?
David: Ah, right. I don’t think so. They may have had less memory, so they would run out of memory at a younger age. Maybe they were worse at parsing complicated sentences. But none of that is essential. I can speak in complicated sentences, but I can also speak in very simple sentences. It’s only a factor of two or five in efficiency.
The Origin of Explanatory Universality and the Fermi Paradox
Brett: We talked about parsing behavior, about being able to explain the seemingly sophisticated things that the other extant great apes do, though they aren’t creative. Speculatively, if one may put it that way, this jump to universality, explanatory universality: do you think it happened once, and we are all descended from that one event? Or did it happen many times, and those other species are now extinct? Or is that fundamentally an open question?
David: Well, it’s certainly an open question. We know very little about human evolution; we don’t know what all the intermediate steps were. We don’t even know which were our ancestors and which were our collateral relatives. But if I had to guess: the fact that all known instances of this thing are found in apes and their descendants, together with the fact that, on my theory, this thing must have evolved in imitating animals. Birds have memes and so on, but the other imitating animals don’t seem to have had the things that Homo erectus had. My guess is that it only started once. Maybe Homo erectus is actually where it started. And that was a very long-lived species. It lasted for over a million years, something like that. And it split off at least, some people think, it split off the Neanderthals and so on, or maybe the direct ancestor of Homo erectus was also the direct ancestor of the Neanderthals. I don’t know. I don’t think they know either.
Brett: If that’s so, it seems like a very fluky thing, just as everything in evolution is a fluke, and that could be one answer to the Fermi paradox. I mean, how lucky do you have to be to have multicellular life here at all, apparently lucky to have apes. And then there’s this further, multiplying-the-probabilities kind of thing, that an ape would actually become–
David: Right, imitating animals are relatively common.
Brett: Once you have animals, yes.
David: Once you have animals. But you’re saying there may be a further bottleneck. It could also be the opposite. It could be that we are the unlucky ones. It could be that Homo erectus could have founded a civilization, and that civilization would be two million years old by now. But they didn’t know. They didn’t know what they were. They had no aspirations. And they had anti-rational memes too. They must have. So, it may be a fluke. Or the fluke may be that it took this long.
David Deutsch 的人生哲学
Naval:也许这太抽象了,但你提到了反理性模因。你过去还谈到过一些更广泛的基本原则,我认为它们不仅仅适用于物理学。比如,有趣的标准,认真对待儿童(Taking Children Seriously),不要破坏纠错的手段,无限的乐观,无知是终极的罪过——因为那样我们就无法修补问题,无法解决问题。所有这些似乎都指向一种潜在的人生哲学。我不知道你是否已经把它系统地阐述过,大概没有,但有没有一些你努力遵循的哲学原则?有没有一些你遵循的、效果不错的启发式方法,也许其他人看了会说,“哦,对,那对我也管用”?
David:嗯,当然不是什么原则。我认为从头开始构建并不是一个好主意。我认为看到问题就去修复它才是好主意。所以,你在网上看到什么不对的东西,你就得发一条推文,或者一条 X,不管现在叫什么了。你看到量子力学有什么问题,你就试着去修复它。而我觉得从头开始再来一遍是相当愚蠢的。你知道,“让我们先理解宇宙学,再来理解量子力学吧。“那行不通的。
Naval:所以你是看到什么问题就解决什么问题。
David:而且是那些看起来有趣的问题。我不知道我在现实生活中是否确切地以这种形式来实践这一点,但我认为你不应该径直冲向一个有趣的问题,而应该记住你很可能解决不了它,所以它应该是那种不管你解决与否都预计会获得乐趣的事情。我认为另一种方式,如果你把所有的希望都押在成功上,你唯一能开心的方式就是像电影《烈火战车》(Chariots of Fire)里那样——如果你把全部希望都押在拿到那枚金牌、成为世界第一上,那么即使你成了世界第一你也不会开心,更别说如果你没做到。如果你没做到,你永远是你希望自己不是的那个失败者。如果你做到了,你会发现那是空虚的。
Naval:而且再也没有问题可以解决了。
David:对。再也没有问题了。这部电影把这一点呈现得非常好。我们得小心剧透,所以那部电影的结尾相当出人意料——他最后并不开心。所以我们就不给人家剧透了,但这个人生道理就在那部电影里。编剧中有人理解了这个道理。又或者他们只是忠实地取材于现实中的那个人。我不知道。我不知道这部电影在历史上是否准确。
目标驱动的人生?
Brett:所以这一切某种程度上就是一种人生哲学了,因为很多人,那些自助励志大师之类的人,会说我们应该过一种目标驱动的生活,你知道,把你的目标写在梦想板上之类的。
Naval:“奋斗,努力,起床,完成你的晨间程序,去工作,你需要达到这个目标,然后你可以爬上更高的阶梯。”
David:听起来非常危险。而且我不知道谁更惨,是失败的人还是成功的人。我想也许很多人只是需要一点灵感,一旦有了灵感,他们自然就会做正确的事,即使他们追随的意识形态并非如此。他们只是在做正确的事而已。就像 Newton 认为自己在做归纳,但他从未做过任何归纳。但他受到了那个理念的激励,因此把自己的行为解读为归纳,而实际上根本不是那么回事。所以我认为人们常常做对了。世界上有很多快乐的人,如果他们真的在按照自己以为遵循的理论行事,就不会有这么多快乐的人了。
自发性与兴趣驱动
Brett:那么,自发性是否也是你生活的一部分?这一直都有吗?所以我们不是死守一个死板的计划,而是如果有什么事情冒出来了而且看起来很有趣,我们就去做,不管其他一切?
David:我认为这是一个要点。我另一个例子是一个失败者,那就是 Vincent van Gogh。他一幅画都没卖出去,拒绝了他弟弟在画廊给他提供的工作,那份工作他会非常擅长的。但他想画自己的画,想按自己的方式来画。他一定是一个很难相处的人,但那就是他想要的,那就是他做的。然后,最终,他被杀了,你知道,我不知道那有多大的概率。然后他死后被公认为伟大的天才。那么,这怎么套进自助励志那一套呢?如果他死在了尝试的路上,他帮助了自己没有?
Brett:这让我想起那个——我不记得他的名字了——那个俄罗斯数学家,我想他还活着。他拒绝了所有奖项,包括,我不知道是一百万还是几十万。
David:我想是一百万。这与接受一百万去做某件事完全不同。那不会是好事。但如果他是为了这件事本身而做,然后有人给他一百万——
Naval:为什么不拿那一百万呢?
Brett:至少拿着,然后给你喜欢的人。
David:对,比如说,肯定有什么不对劲的地方我们不知道。有那么一点他们没告诉我们的事。
认真对待他人
Naval:那么,说到这些动力和乐趣,你还把这一点加上普遍解释者(universal explainer)原则应用到了认真对待儿童上——把他们当作成年人对待,给予完全的自由,不强制。
David:嗯,把他们当作人来对待。
Naval:当作人,对。而且不强制,甚至不考试,不施压,而是让他们追随自己天然的好奇心和动力。有没有类似”认真对待成年人”的哲学?因为我们甚至不清楚自己是否完全认真对待其他成年人,因此我们的关系也因此受损。
David:我同意。嗯,在宏观尺度上,我们还不知道该怎么做。西方的制度:科学、经济学、政治,是有史以来最好的。与历史相比,它们在激发创造力方面非常出色,不告诉人们该做什么,而是让人们自愿做他们想做的事,并据此互动。但它们显然也非常不完善——全部都是,科学、经济学和政治——都有巨大的缺陷,有待解决。而且我认为,甚至国家在执行法治时所施加的任何强制,也是一种不完善的标志。我们可以对此加以改进。
我们可以,但我不知道怎么做,但这些改进必须由想要做这件事的人创造性地产生。至于朋友之间,我认为和你认识的人之间,你自然而然就在做认真对待他们这件事。你不会对一个人说——你知道,你可能会对他说,“注意,今天可能会下雨。“但如果他说,“嗯,我不喜欢我的雨衣。我就穿这件夹克。“你不会说,“不行,穿雨衣。穿雨衣,不然我们就不去了。“你会被认为既非常无礼又乖戾,用那种方式与成年人互动是不理性的。
Naval:除非在某种特定关系的语境下。比如师生关系,上下级关系,夫妻关系,那么他们对彼此的行为就有要求。
Consent and Freedom in Institutions
David: I think those institutions, if they have that property, and often they don't, but even when they do, are imperfect. There must be better ways. I don't think an employer should speak to an employee in that punitive, commanding way. First of all, employer and employee should agree on what he's being employed to do, so that the two of them are aligned on that. So you're employed to do a certain thing. Then the employer can say, "Well, how about that thing?" And the employee can say, "Ah, that sounds good, but I'm sure it won't work." And the employer can say, "Well, I think maybe it will. Give it a try." That kind of friendly interaction is optimal.
Naval: What does that imply for the relationships in your life? Say you're with a spouse or a colleague, and they want to maintain the relationship, so there are certain constraints. You can't be completely free. The constraints are still operating. Or would you simply not have that kind of relationship in your life? Would you avoid putting yourself in situations where you couldn't act with full fun and full freedom?
David: Everyone has a primary problem-situation they want to solve, and for me, relationships exist to solve one's own problem-situation. It so happens that, for epistemological and other reasons, the way the world works is that two people solving each other's problems are far more than twice as effective as each working alone. So there's an amplification factor. The whole economy has an amplification factor too; the macro-economy's amplification factor is something like a trillionfold. Through the economy you can obtain things like an iPhone. The reduction in cost is enormous. If you want to go to a movie with someone, going with a friend may well be more than twice as enjoyable as going alone, but it won't be a trillion times as enjoyable; it's still worth doing, though. And there are things, like raising children and so on, that you can only do when you have a long-term relationship with someone based on a shared set of problem-solving institutions. Institutions based on consent. So I think that isn't really the point.
The point is, when you're engaged in a problem-solving relationship and it's working well, it's a good relationship and it's working well, then calling yourself constrained by it is an absurdity. It's like saying that in the economy you're constrained by having to pay for things. I mean, it's not like that. Having to pay is a condition of the consent. Without consent, to get those things you'd have to not pay; at the very least you'd have to rob someone or something. But more importantly, those things wouldn't exist at all. They exist precisely because of this vast edifice of consent-based institutions. If you want to fit into them (I was going to say "go along with them," but even that isn't quite the word), if you endorse those institutions and want to be someone who can fit into them, then you can have an iPhone. The same goes for any relationship. But when you're getting nothing out of it, like perhaps that Russian who refused the prize: if you want nothing from the economy, you just want to stay in your cabin doing mathematics and that's all, and any relationship or dealings with people are just a nuisance, then you do that. That's what you must do. If at that point you were dragged by force into a normal relationship, you'd be unhappy. And you'd probably, well, I don't want to say probably, but the conditions for you to produce good mathematics, the conditions for you to create your own happiness, would be damaged by this thing that other people call freedom. So I've given a long answer. But basically, a person isn't damaged by good relationships; he's enhanced by them.
The Clash of Civilizations
Brett: Well, that connects to what's sometimes called the "clash of civilizations," though I don't think that's quite accurate now. It's a clash between civilization and uncivilization. Obviously there's a prominent conflict going on right now. Although by the time people hear this conversation they may not know what we're referring to, it seems to me that the existence of the iPhone, for example, comes precisely from the civilization that has a tradition of criticism, which is a necessary precondition for the rapid progress we've made. But at the moment we face its enemies. What do you think are the main threats we currently face? Are they existential? Because many people worry about existential threats: whether robots will take over the world, whether the next virus will wipe us out. But in terms of this so-called clash of civilizations, as heirs of the Enlightenment, what are the main tensions or threats we face? And what's the antidote?
David: Well, as you know, I don't prophesy. No one can. I just try to avoid prophecy. I can't take seriously any threat to our civilization from outside it. Whether dictators, terrorists, or AI or AGI or ASI, if those ever appear. Conceivably, the first AGIs to appear can be expected to become part of our culture, part of the Enlightenment, and will only enhance it. Nor can I take seriously existential threats from things like the weather, because their timescales are much longer, and all the alarmist claims amount to no more than this: it might be more expensive than we think, or perhaps it would be better to start major projects today. But that can't be an existential threat.
The only thing that could be an existential threat is our civilization, the civilization of the Enlightenment, making a bad enough mistake. For example, the currents and ideologies that deny and hate this civilization itself. Such currents have always existed, and following Roy Porter, I've talked about the fact that the Enlightenment, from the first day of its birth, had built into it a rebellious counter-Enlightenment. That counter-Enlightenment still has descendants today; things like woke, whatever you want to call them, are among those descendants. In principle, something like that could destroy civilization. But I have to say, I see no sign of it. I'm trying to avoid prophecy. But although I think those things are working in the direction of destroying civilization, I see no actual sign of their making progress in that regard.
Brett: Even so, are we in the West, whether in London, New York, or Sydney, somewhat weaker than at the time of the Second World War? Of course, I'm no historian either, but it seems that back then ordinary people at least had a stronger impulse to draw a clear line between who was on the right side and who wasn't, whereas now we see people in the West defending the side that is not the victim but the perpetrator. Is that a new phenomenon?
A Historical Analogy with the 1930s
David: If you want an analogy with the mid-twentieth century, then the period we most resemble is not the Second World War but the thirties, the twenties and thirties, the period between the two world wars. At that time, too, people lost confidence on a massive scale in the rightness of their own culture. There was the Great Depression. Drawing entirely the wrong lessons from the Great Depression was commonplace then; it was the consensus. People thought we needed less capitalism, less freedom in general. We needed more strong leaders. And once the war came and people found that the moment of choice had arrived, there was hardly any force in the West opposed to doing the right thing. My favorite example is the Oxford Union at Oxford University, where the undergraduates held a debate on the motion "This House will under no circumstances fight for its King and Country." I hadn't known it was "under no circumstances." I looked it up recently. The motion passed. It's said this gave Hitler ideas. In any case, the ideology of the Nazis, and of the fascists generally, was that the liberal democracies were decadent and in decay. And Britain, France, and the United States went out of their way to confirm that, to make it look as though they really were in decay. But it wasn't so. It was more like, "You piss on us and we say it's raining." It was more that attitude. In that atmosphere, people found all sorts of justifications for that attitude, such as pacifism and so on. But only a year after that motion passed, the elite students of Oxford were joining the armed forces; they were fighting in the Battle of Britain. They were the pilots of the Battle of Britain. They were the officers leading soldiers into battle, and they knew that our side was in the right and would win in the end, despite the terrible setbacks at the start of the war.
A Holocaust-Survivor Mother's Memory
I once asked my mother, who was a Holocaust survivor and was in a very difficult situation at the time. I asked her: when did you become sure that the Allies would win? Because it seems to me that in September 1939, I'd have thought the whole world believed Britain was doomed. Joseph Kennedy, the American ambassador at the time, John Kennedy's father, cabled back saying "Britain is finished, make a deal with the Nazis," and so on. The British later found out and demanded his removal as ambassador. But in any case, that was the prevailing view at the time, or so I thought. Yet when I asked my mother when she became sure the Allies would win, she said: September 3rd, 1939, the day Britain and France declared war on Germany. Because at that moment they reversed the "it's raining" policy and began a policy of genuinely standing up for civilization. As for how exactly that would be achieved, no one could foresee it. No one could foresee exactly how we would win. But that we would and must win was obvious to some people, and the British as a whole changed at a stroke. They had believed one thing, one ideology, and then, as if a day later, they believed the exact opposite.
There's a good scene in the latest Churchill film. I don't know whether you've seen it: the part where Churchill is deeply despondent, his colleagues in the Conservative Party are trying to force him to make a deal with Hitler, and he had already seen back in the early thirties that this was impossible. But few were willing to listen to him. Then he goes and meets some ordinary people. I won't spoil it for you, except to say that this didn't actually happen in reality. Though it could have.
The Elites and the Judgment of Ordinary People
Brett: So he got his common-sense, clear-eyed judgment from so-called ordinary people. But the Oxford Union, the elites, came down on the wrong side.
David: Well, they had. But I think by that time they too had changed.
Brett: So, today it seems the corrosion of the counter-Enlightenment is still accelerating through the elite academies and universities everywhere. It seems an inevitable by-product: after all, that's where the creative, clever people are, so they are going to rebel, and so voices against the mainstream are bound to appear. It doesn't have to be that way–
David: I think historically it's been a mixture. That there are rebels among students is a good thing, and there always will be. In Germany, the students were, well, that was the hotbed of Nazism. The core of Nazism was in the German universities. Not so in Britain. In Britain the anti-democratic tendency came from the left. They were communists. As you know, at the beginning of the war they all pretended to be pacifists. So they opposed the war, but that was because Stalin told them to oppose it, because he had signed a pact with Hitler. When Stalin gave the word, they reversed course at once, but that's a different phenomenon. So, the students of that time were upper-class people, and they were left-wing. Some of them were fascist sympathizers. Most of them changed sides immediately, and I don't know why, you know, like a phase transition. Hitler invaded Czechoslovakia and nobody cared; they wanted appeasement. Before that he had invaded Austria. Before that, the Rhineland, and so on. Everyone just wanted appeasement. Suddenly he invaded Poland and everyone said, "This is unacceptable." Everyone suddenly realized what was happening. I don't know why. There was no real difference between those cases, but that's how it went.
The Seeding of Ideas
Perhaps it's that some people had been thinking, while others had been relying on the thinkers. The thinkers had been thinking wrongly, but because they were thinking, they changed their minds. And the people relying on them changed their minds too. Maybe that's how it works, like a seeding process.
Naval: An information cascade. People seem to have a tendency to toy with ideologies casually until things get serious. Then the consequences of the ideology become obvious. Then the right-thinking people at the top change their minds, and most people follow them by proxy.
David: Maybe. I'd very much like that to be true in the current crisis over the massacre that has just happened. But I don't know whether it will be. I mean, there have been situations before where I'd have said: now, this is the moment. And it wasn't. I don't know whether this time will be different. But one thing: you asked me earlier whether civilization is in danger. I don't think it is.