Paul Graham: ‘What I Worked On’

His thoughts on life choices, education, prestige, and other subjects.

Though I liked programming, I didn’t plan to study it in college. In college I was going to study philosophy, which sounded much more powerful. It seemed, to my naive high school self, to be the study of the ultimate truths, compared to which the things studied in other fields would be mere domain knowledge. What I discovered when I got to college was that the other fields took up so much of the space of ideas that there wasn’t much left for these supposed ultimate truths. All that seemed left for philosophy were edge cases that people in other fields felt could safely be ignored.

[…]

Computer Science is an uneasy alliance between two halves, theory and systems. The theory people prove things, and the systems people build things. I wanted to build things. I had plenty of respect for theory — indeed, a sneaking suspicion that it was the more admirable of the two halves — but building things seemed so much more exciting.

The problem with systems work, though, was that it didn’t last. Any program you wrote today, no matter how good, would be obsolete in a couple decades at best. People might mention your software in footnotes, but no one would actually use it. And indeed, it would seem very feeble work. Only people with a sense of the history of the field would even realize that, in its time, it had been good.

[…]

I wanted not just to build things, but to build things that would last.

One day I went to visit the Carnegie Institute, where I’d spent a lot of time as a kid. While looking at a painting there I realized something that might seem obvious, but was a big surprise to me. There, right on the wall, was something you could make that would last. Paintings didn’t become obsolete. Some of the best ones were hundreds of years old.

[…]

[T]he most important thing I learned, and which I used in both Viaweb and Y Combinator, is that the low end eats the high end: that it’s good to be the “entry level” option, even though that will be less prestigious, because if you’re not, someone else will be, and will squash you against the ceiling. Which in turn means that prestige is a danger sign.

[…]

One day in late 1994 […] there was something on the radio about a famous fund manager. He wasn’t that much older than me, and was super rich. The thought suddenly occurred to me: why don’t I become rich? Then I’ll be able to work on whatever I want.

Meanwhile I’d been hearing more and more about this new thing called the World Wide Web. Robert Morris showed it to me when I visited him in Cambridge, where he was now in grad school at Harvard. It seemed to me that the web would be a big deal. I’d seen what graphical user interfaces had done for the popularity of microcomputers. It seemed like the web would do the same for the internet.

If I wanted to get rich, here was the next train leaving the station.

[…]

I knew that online essays would be a marginal medium at first. Socially they’d seem more like rants posted by nutjobs on their GeoCities sites than the genteel and beautifully typeset compositions published in The New Yorker. But by this point I knew enough to find that encouraging instead of discouraging.

One of the most conspicuous patterns I’ve noticed in my life is how well it has worked, for me at least, to work on things that weren’t prestigious. Still life has always been the least prestigious form of painting. Viaweb and Y Combinator both seemed lame when we started them. I still get the glassy eye from strangers when they ask what I’m writing, and I explain that it’s an essay I’m going to publish on my web site. Even Lisp, though prestigious intellectually in something like the way Latin is, also seems about as hip.

It’s not that unprestigious types of work are good per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it’s a sign both that there’s something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren’t prestigious doesn’t guarantee you’re on the right track, it at least guarantees you’re not on the most common type of wrong one.

[…]

[…] I worked hard even at the parts I didn’t like. I was haunted by something Kevin Hale once said about companies: “No one works harder than the boss.” He meant it both descriptively and prescriptively, and it was the second part that scared me. I wanted YC to be good, so if how hard I worked set the upper bound on how hard everyone else worked, I’d better work very hard.

One note:

[12] There is a general lesson here that our experience with Y Combinator also teaches: Customs continue to constrain you long after the restrictions that caused them have disappeared. […]

Which in turn implies that people who are independent-minded (i.e. less influenced by custom) will have an advantage in fields affected by rapid change (where customs are more likely to be obsolete).


‘On the experience of being poor-ish, for people who aren’t’

The Resident Contrarian:

A few years back, my wife was at a baby shower hosted for a friend by a mutual acquaintance. In a conversation with the hostess, my wife learned they were in a tough financial position – they were always broke, and no amount of budgeting seemed to help them get ahead; they had cut every cost they could and things were just getting worse and worse. She admitted to my wife that she just felt like she was sinking further and further underwater, and didn’t see any way out for her or her family.

Note: The hostess and her husband were both doctors. They had a combined income somewhere upwards of $200,000 a year, and as the conversation developed my wife learned that their problems started and stopped with the hostess not being able to save quite as much as she’d like once the payments on their very nice house and current-year cars were made. At the time she leaned on my wife for emotional support over finances, our family of four’s income was less than $30,000 a year.

You should know the hostess wasn’t mean-spirited in the least, and we liked her then and continue to do so. But she did have a kind of tunnel vision I’ve since noticed is increasingly common: If you came from a family that did pretty well financially, went to college and then immediately started to do pretty well yourself, it’s hard to get any kind of context for what life is like at lower income levels. This isn’t a matter of the relatively wealthy being dumb or insensitive; it’s just legitimately difficult to get a handle on what it’s like in a life you’ve never lived, and easy to be legitimately confused as to why anyone would opt to make less money instead of improving their lot with training and education.

In that spirit, I’d like to offer my services as a sort of has-been-poor guide, to fill you in on what it’s like on the other side of the tracks.

This reminded me of these great (and sad) Reddit threads:


Public toilets in the United States

Nicholas Kristof, in the New York Times:

Here’s a populist slogan for President Biden’s infrastructure plan: Pee for Free!

Sure, we need investments to rebuild bridges, highways and, yes, electrical grids, but perhaps America’s most disgraceful infrastructure failing is its lack of public toilets.

John Cochrane, on his blog:

Now, put on your economist hat. Or even put on your reporter hat. Ask the question: why are there no public toilets in America?

[…]

Because it’s illegal to charge for toilets. There were once abundant pay toilets in America, as there still are in many other countries, where you pay a small fee to use them.

[…]

The absence of pay toilets is in fact a delightful encapsulation of so much that is wrong with American economic policy these days. Activists decide free toilets are a human right, and successfully campaign to ban pay toilets. For a while, existing toilets are free. Within months, upkeep is ignored, attendants disappear, and the toilets become disgusting, dysfunctional and dangerous. Within a few years there are no toilets at all. Fast forward, and we have a resurgence of medieval diseases that come from people relieving themselves al fresco. […]

You will jump to “what about people who can’t afford to pay?” as House did, consuming the majority of her article that should instead have been about practicalities. This too is a great teachable moment. One of the top 10 principles of economics is, don’t silence prices in order to transfer incomes. That dictum is particularly salient here because we’re literally talking about quarters. Let’s add: especially, ludicrously small amounts of income. Is it really wise to silence the incentive to create, provide, and maintain clean safe toilets, in order to transfer a few dollars of income to the less fortunate?

Maybe, you say. But look how well requiring toilets to be free has worked out. Before, a person experiencing homelessness had to beg for a nickel to use a toilet. Now there are no toilets. They are worse off than if we had pay toilets and they had no money. And, really, does your and my life need to be so screwed up, does the government have to interfere in a business’ desire to provide a clean restroom and make a little money, and your and my desire to pay a small fee to relieve a bursting bladder, because of the problem of transferring a few dollars’ income?

[…]

Like so many problems in the US, this one can be solved with one simple policy: Get out of the way. Allow businesses to build, maintain, and charge for toilets. Allow people to pay for a service so dearly needed. If we can’t free a market for a service that literally costs 25 cents, heaven help the rest of the economy.


‘Why do members of the political elite insist that they’re not?’

Samuel Goldman in the New York Times:

America’s most powerful people have a problem. They can’t admit that they’re powerful.

[…]

And it’s not only politicians. Business figures love to present themselves as “disrupters” of stagnant industries. But the origins of the idea are anything but rebellious.

[…]

[T]he problem of insiders pretending to be outsiders cuts across party, gender and field. The question is why.

Part of the explanation is strategic. An outsider pose is appealing because it allows powerful people to distance themselves from the consequences of their decisions. When things go well, they are happy to take credit. When they go badly, it’s useful to blame an incompetent, hostile establishment for thwarting their good intentions or visionary plans.

Another element is generational. Helen Andrews argues that baby boomers have never been comfortable with the economic, cultural and political dominance they achieved in the 1980s. “The rebels took over the establishment,” she writes, “only they wanted to keep preening like revolutionaries as they wielded power.”

[…]

It is hard to change deeply rooted cultural tendencies. But there are strategies that might help us reconcile the performance of disruption with the demands of responsibility.

[…]

We should judge public figures by the arguments they make and the results they deliver, not whether they eat caviar, kale or capocollo.

Next, we need to learn from historical figures who embraced Weber’s “ethic of responsibility.”

[…]

Finally, we need to be honest: America has a de facto ruling class. Since World War II, membership in that class has opened to those with meritocratic credentials. But that should not conceal the truth that it remains heavily influenced by birth. […] Admitting the fact of noblesse might help encourage the ideal of oblige.

But there’s a limit to what can be accomplished by exhortation. Ultimately, the change must come from the powerful themselves. Just once, I’d like to hear a mayor, governor or president say: “Yes, I’m in charge — and I’ve been trying to get here for my entire life. I want you to judge me by how I’ve used that position, not by who I am.”


How to think about conspiracy theories

Ross Douthat, in the New York Times:

In reality, a consensus can be wrong, and a conspiracy theory can sometimes point toward an overlooked or hidden truth […]. If you tell people not to listen to some prominent crank because that person doesn’t represent the establishment view or the consensus position, you’re setting yourself up to be written off as a dupe or deceiver whenever the consensus position fails or falls apart.

[…]

Is there an alternative to leaning so heavily on the organs of consensus? I think there might be. It would start by taking conspiracy thinking a little more seriously — recognizing not only that it’s ineradicable, but also that it’s a reasonable response to both elite failures and the fact that conspiracies and cover-ups often do exist.

If you assume that people will always believe in conspiracies, and that sometimes they should, you can try to give them a tool kit for discriminating among different fringe ideas, so that when they venture into outside-the-consensus territory, they become more reasonable and discerning in the ideas they follow and bring back.

[…]

Here are a few ideas that belong in that kind of tool kit.

He explains these four ideas, with good examples:

  • Prefer simple theories to baroque ones
  • Avoid theories that seem tailored to fit a predetermined conclusion
  • Take fringe theories more seriously when the mainstream narrative has holes
  • Just because you start to believe in one fringe theory, you don’t have to believe them all

And then he concludes:

What we should hope for, reasonably, is not a world where a “reality czar” steers everyone toward perfect consensus about the facts, but a world where a conspiracy-curious uncertainty persists as uncertainty, without hardening into the zeal that drove election truthers to storm the Capitol.

It’s that task that our would-be educators should be taking up: not a rigid defense of conventional wisdom, but the cultivation of a consensus supple enough to accommodate the doubter, instead of making people feel as if their only options are submission or revolt.


A comparison between the ‘New York Times’ and the ‘Los Angeles Times’

In 2016, Deadline executive editor Michael Cieply wrote this about the two newspapers:

For starters, it’s important to accept that the New York Times has always — or at least for many decades — been a far more editor-driven, and self-conscious, publication than many of those with which it competes. Historically, the Los Angeles Times, where I worked twice, for instance, was a reporter-driven, bottom-up newspaper. Most editors wanted to know, every day, before the first morning meeting: “What are you hearing? What have you got?”

It was a shock on arriving at the New York Times in 2004, as the paper’s movie editor, to realize that its editorial dynamic was essentially the reverse. By and large, talented reporters scrambled to match stories with what internally was often called “the narrative.” We were occasionally asked to map a narrative for our various beats a year in advance, square the plan with editors, then generate stories that fit the pre-designated line.

Reality usually had a way of intervening. But I knew one senior reporter who would play solitaire on his computer in the mornings, waiting for his editors to come through with marching orders. Once, in the Los Angeles bureau, I listened to a visiting National staff reporter tell a contact, more or less: “My editor needs someone to say such-and-such, could you say that?”

(Via the Scholar’s Stage.)


Late bloomers or opsimaths?

Henry Oliver in the Common Reader:

We are not very good at knowing how to assess people who have not yet succeeded but who might become impressive later on. Why do some people show no sign of their later promise, and how can we think about the lives of those late bloomers who had precarious journeys to their eventual flourishing?

[…]

Scientific work has shown that while fluid intelligence declines relatively young, crystallised intelligence continues to strengthen until much later in our lives. The distinction between fluid and crystallised intelligence (the difference is between dealing with novel problems vs. being expert in something) is a blunt one. But it does help us see clearly that we are better at the sort of thinking that assimilates and responds to new issues when we are young. This is, for example, why poets are often very young but few historians are.

[…]

The difference between conceptual and experimental thinkers we saw in writers is also seen in Nobel Prize-winning scientists, with the average age for empirical winners being older than that of theoretical winners.

[…]

Taking the ideas of cognitive peaks, fluid and crystallised intelligence, the role of luck and persistence in scientific success, and other recent empirical findings, we should be able to start re-thinking how we write the lives of late bloomers. We might start by dropping the ‘late’ designator altogether.

Rather than thinking of people as late bloomers, people who were in some way held back or prevented from success, we would be better off seeing them as opsimaths: smart people who carried on learning and achieved things when the timing and circumstances were right.


Ayaan Hirsi Ali, racism, and the ‘New York Times’

Douglas Murray, in The Spectator:

A pattern has emerged in which whenever somebody raises the issue of whether or not there are any consequences that result from importing large numbers of mainly male migrants from culturally – ahem – different cultures, the person raising the question is accused of being ‘far-right’ or bigoted. If they are white they are called ‘racist’. If they are black they are called the same thing and more.

[…]

In recent days, this formula has again been employed against Ayaan Hirsi Ali.

[…]

But in the scheme of things, it is the New York Times whose campaign against the book [Prey] will register the most. And so it is worth showing just how false and agenda-laden that piece – written by one Jill Filipovic – actually is.

Murray shows numerous flaws in Filipovic’s review. It’s astounding. And he concludes:

In recent times, the NYT has had a terrible problem – more so than any other mainstream publication – of racism among its staff. The publication has hired writers who make overtly racist comments (Sarah Jeong) and fired other people for allegedly using racist terminology.

I don’t know why the NYT can’t get through a month without an internal racism scandal, but I begin to desire to take it by its own lights and simply accept that the paper in question has a racism problem. And I suppose that a piece like Filipovic’s must be read in this light.

Filipovic seems to think that because Ayaan Hirsi Ali is a black immigrant of Muslim origin she must say only one set of things. When she says a different set of things she must have words put in her mouth by America’s former paper of record. That paper must then muffle the woman’s opinions, defame her and otherwise unvoice her. These have all been tropes in the history of racism. And I suppose that the history of racism is alive, well and continuing at the New York Times. Under the guise of ‘anti-racism’, obviously.


Interview with Martin Baron

In Der Spiegel, shortly before his retirement.

On mistakes in journalism:

We [journalists] make mistakes all the time, regardless of who’s in office. We are a highly imperfect profession, like every profession.

[…]

We have to recognize that we have certain flaws. We’re making decisions in real time, we’re moving quickly, we don’t have time to sit back and think about a lot of the implications of what we do. We should do more of that. But things move at a very fast pace.

On Jeff Bezos as a “patron”:

We’re not a charity. Bezos has made clear from the very beginning that we [Washington Post] would operate like a business. We’ve been profitable for years now. We had another profitable year last year, despite everything. So, that’s how we function. And that’s a good thing because it’s really important that we have a sustainable business model – if we were operated like a charity and some day he [Bezos] was tired of operating this charity, we would be in a precarious place. I don’t think that the future of journalism depends on so-called “patrons”.

[…]

It does depend on good owners who have a long-term view and will invest strategically. You have to come up with the right strategic model, like Jeff Bezos did for the Post. He changed our strategy from being a regional publication to being a national and even international one. That was a very smart move.


Scott Alexander on ‘the tragedy of legible expertise’

In Astral Codex Ten:

WebMD is the Internet’s most important source of medical information. It’s also surprisingly useless. Its most famous problem is that whatever your symptoms, it’ll tell you that you have cancer.

[…]

This is actually a widespread problem in medicine. The worst offender is the FDA, which tends to list every problem anyone had while on a drug as a potential drug side effect, even if it obviously isn’t.

[…]

The essence of Moloch is that if you want to win intense competitions, you have to optimize for winning intense competitions – not for some unrelated thing like giving good medical advice. Google apparently has hard-coded into their search algorithm that WebMD should be on the front page for any medical-related search; I would say they have handily won the intense competition that they’re in. […]

WebMD is too big, too legitimate, and too canonical to be good.

[…]

Dr. Anthony Fauci is the WebMD of people.

[…] He’s a very smart and competent doctor, who wanted to make a positive difference in the US medical establishment, and who quickly learned how to play the game of flattering and placating the right people in order to keep power. In the end, he got power, sometimes he used it well, and other times he struck compromises between using it well and doing dumb things that he needed to do to keep his position.

[…]

Dr. Fauci (and WebMD) are legibly good (or at least legibly okay). They sit on a giant golden throne, with a giant neon arrow pointing to them saying “TRUST THIS GUY”. […] In order to stay on that throne, Dr. Fauci will need to get and keep lots of powerful allies (plus be the sort of person who thinks in terms of how to get allies rather than being minimaxed for COVID-prediction).

[…]

This means experts can play an important role; they’re people who are legibly mediocre. […] I think our system for producing legibly-mediocre people is a good start. It doesn’t always pick the most trustworthy people. But it almost always gets someone in the top 50%, sometimes the top 25%. There are few biologists who deny evolution, few epidemiologists who think vaccines don’t work, and few economists who are outright communists.

Somewhat related, from Paul Graham:

How to get good advice from experts: ask what they’d do in your situation. Many experts feel they should just tell you all the options and let you decide. But they usually know which is the right one, and asking what they’d do gives them permission to tell you.