How to Be an Expert in a Changing World
December 2014
If the world were static, we could have monotonically increasing confidence in our beliefs. The more (and more varied) experience a belief survived, the less likely it would be false. Most people implicitly believe something like this about their opinions. And they’re justified in doing so with opinions about things that don’t change much, like human nature. But you can’t trust your opinions in the same way about things that change, which could include practically everything else.
When experts are wrong, it’s often because they’re experts on an earlier version of the world.
Is it possible to avoid that? Can you protect yourself against obsolete beliefs? To some extent, yes. I spent almost a decade investing in early stage startups, and curiously enough protecting yourself against obsolete beliefs is exactly what you have to do to succeed as a startup investor. Most really good startup ideas look like bad ideas at first, and many of those look bad specifically because some change in the world just switched them from bad to good. I spent a lot of time learning to recognize such ideas, and the techniques I used may be applicable to ideas in general.
The first step is to have an explicit belief in change. People who fall victim to a monotonically increasing confidence in their opinions are implicitly concluding the world is static. If you consciously remind yourself it isn’t, you start to look for change.
Where should one look for it? Beyond the moderately useful generalization that human nature doesn’t change much, the unfortunate fact is that change is hard to predict. This is largely a tautology but worth remembering all the same: change that matters usually comes from an unforeseen quarter.
So I don’t even try to predict it. When I get asked in interviews to predict the future, I always have to struggle to come up with something plausible-sounding on the fly, like a student who hasn’t prepared for an exam. [1] But it’s not out of laziness that I haven’t prepared. It seems to me that beliefs about the future are so rarely correct that they usually aren’t worth the extra rigidity they impose, and that the best strategy is simply to be aggressively open-minded. Instead of trying to point yourself in the right direction, admit you have no idea what the right direction is, and try instead to be super sensitive to the winds of change.
It’s ok to have working hypotheses, even though they may constrain you a bit, because they also motivate you. It’s exciting to chase things and exciting to try to guess answers. But you have to be disciplined about not letting your hypotheses harden into anything more. [2]
I believe this passive m.o. works not just for evaluating new ideas but also for having them. The way to come up with new ideas is not to try explicitly to, but to try to solve problems and simply not discount weird hunches you have in the process.
The winds of change originate in the unconscious minds of domain experts. If you’re sufficiently expert in a field, any weird idea or apparently irrelevant question that occurs to you is ipso facto worth exploring. [3] Within Y Combinator, when an idea is described as crazy, it’s a compliment—in fact, on average probably a higher compliment than when an idea is described as good.
Startup investors have extraordinary incentives for correcting obsolete beliefs. If they can realize before other investors that some apparently unpromising startup isn’t, they can make a huge amount of money. But the incentives are more than just financial. Investors’ opinions are explicitly tested: startups come to them and they have to say yes or no, and then, fairly quickly, they learn whether they guessed right. The investors who say no to a Google (and there were several) will remember it for the rest of their lives.
Anyone who must in some sense bet on ideas rather than merely commenting on them has similar incentives. Which means anyone who wants such incentives can have them, by turning their comments into bets: if you write about a topic in some fairly durable and public form, you’ll find you worry much more about getting things right than most people would in a casual conversation. [4]
Another trick I’ve found to protect myself against obsolete beliefs is to focus initially on people rather than ideas. Though the nature of future discoveries is hard to predict, I’ve found I can predict quite well what sort of people will make them. Good new ideas come from earnest, energetic, independent-minded people.
Betting on people over ideas saved me countless times as an investor. We thought Airbnb was a bad idea, for example. But we could tell the founders were earnest, energetic, and independent-minded. (Indeed, almost pathologically so.) So we suspended disbelief and funded them.
This too seems a technique that should be generally applicable. Surround yourself with the sort of people new ideas come from. If you want to notice quickly when your beliefs become obsolete, you can’t do better than to be friends with the people whose discoveries will make them so.
It’s hard enough already not to become the prisoner of your own expertise, but it will only get harder, because change is accelerating. That’s not a recent trend; change has been accelerating since the paleolithic era. Ideas beget ideas. I don’t expect that to change. But I could be wrong.
Notes
[1] My usual trick is to talk about aspects of the present that most people haven’t noticed yet.
[2] Especially if they become well enough known that people start to identify them with you. You have to be extra skeptical about things you want to believe, and once a hypothesis starts to be identified with you, it will almost certainly start to be in that category.
[3] In practice “sufficiently expert” doesn’t require one to be recognized as an expert—which is a trailing indicator in any case. In many fields a year of focused work plus caring a lot would be enough.
[4] Though they are public and persist indefinitely, comments on e.g. forums and places like Twitter seem empirically to work like casual conversation. The threshold may be whether what you write has a title.
Thanks to Sam Altman, Patrick Collison, and Robert Morris for reading drafts of this.