An empty to-do list is a familiar fantasy. We yearn for the mundane to be complete (inbox empty, laundry clean, plumber called) so that we can turn our attention to the sublime. The novel we’d write, the marathon we’d run, the vacation we’d take if we just had the time.
“The day is never coming when all of the other stuff will be ‘out of the way,’ so you can turn at last to building a life of meaning and accomplishment that hums with vitality,” writes Oliver Burkeman in his book, Meditations for Mortals. “For finite humans, the time for that has to be now.”
That means choosing carefully how we want to use our limited time and letting the rest fade away.
It’s easier to agree with Burkeman than to internalize his advice, particularly as tech companies make increasingly compelling promises to revolutionize human productivity. If ChatGPT drafts our emails and takes notes on our meetings and summarizes every book on our bookshelves so we can read them in 6 minutes instead of 6 hours, maybe we can do it all!
But this is the very siren call that Burkeman has warned us about, luring us into what he calls the “efficiency trap.” Being more productive at work doesn’t mean more novel writing and marathon running. It usually just means more work.
We recently invited Burkeman onto our Behavioral Design Podcast to talk about the true and false promises of AI for productivity. In our conversation, we cover the efficiency trap, what readers lose when writers are no longer people, and how to balance the importance of planning for tomorrow with the delight of showing up today.
An excerpt from our conversation is below. The conversation has been edited for brevity and clarity. You can listen to the full episode here.
Aline Holzwarth: It’s not hyperbole to say that your work has truly, deeply changed my life. But there’s a cliché version of your work that most people would be familiar with, which is the idea that life is short, make the most of it. And it’s tempting to reduce it to that.
Part of what makes your work so special, for me at least, is making concrete the 4,000 weeks of life that we have. Take this idea that we’re struggling to stay “on top of everything”: it’s not just hard, it’s impossible. And once you embrace that impossibility, there’s something freeing.
Oliver Burkeman: One response you can have when you start thinking about how short life is: “Well, then I’d better treat it like it’s incredibly precious and cram as much as I can into every available space. And if I waste a moment of it, that’s terrible, and I’ve got to be doing extraordinary, impressive things with my work time and my leisure time.”
And that’s not only not the point I want to make, it’s almost the whole thing that I’m setting myself up against. If you think more deeply about our limitations, our finitude, and you let it sink under your skin a bit, it’s relaxing and empowering at once.
It isn’t this rather stressful feeling of, “Oh no. That means I’ve got to be hypervigilant in every moment.” It’s good news. It’s de-stressing news, but it’s also empowering and action-oriented news. The other pole that I’m trying to avoid here is that there’s no point in doing anything. Just sit on the couch and eat potato chips.
I don’t think it’s either of those. I hope, because this is what I would like life to be like, that it can be peaceful and also active.
Aline Holzwarth: One of the primary things that you hear when a new AI tool comes out is: “It will 10x, 20x your productivity. Here are the extreme efficiency gains that you’re going to see from using this tool.”
Tell me all the things that are wrong with that.
Oliver Burkeman: I don’t see any reason why the extra efficiency that comes from AI should be an exception to the rule I’ve now written about quite extensively: all things being equal, if you free up time and bandwidth in your work through efficiency savings, that new capacity fills with even more things to do.
The classic example is if you get much quicker at answering email, you reply to more people more quickly, and then they reply to you, and then you have to reply to those replies. And before you know it, you’re doing more email with your life.
Aline Holzwarth: One interesting related case is AI medical scribes. When a doctor and a patient are in a room, the AI scribe is taking notes on their conversation and then can generate an automated summary and assist the physician in writing up their notes for that patient later.
Right now, physician burnout is a huge problem, and easing up on documentation seems like a purely good thing. But rather than using this freed-up time to enjoy their evenings at home without working, or to have more quality time with each patient, physicians are simply seeing more patients.
Oliver Burkeman: Yeah. And this is not a new thing with AI. It’s the idea that the reward for good time management is more work. This is sardonically known as the Kaiser Reward among the people in the Kaiser Permanente health care system because if you see more patients more effectively, the reward will be that you have to see even more patients. And that’s ultimately capitalism. The natural movement of the economic system that we exist in is not going to say to those doctors, “You take it easy with your spare time. And if we need more patients to be seen, we’ll hire some more doctors.” That’s not what would happen.
Aline Holzwarth: What do you think about the possible future where machines are doing all of the work, and we can just lie on the beach?
It’s possible that many of us could do that now, but we choose to work. We enjoy working. Part of me fears that the rise of these AI tools that create all of these efficiencies is making the work that we enjoy less enjoyable. You become a taskmaster of sorts, where all you’re doing is setting goals for the tool rather than doing the fun critical thinking.
Oliver Burkeman: It’s a great point. This is the AI version of a time-honored complaint that if you get good at a job, you get kicked upwards into being a manager of that job instead of doing the job that you came to do because you found it so interesting.
I have a friend who voluntarily demoted himself in a high school setting, away from a management position, because he realized that what he wants to do is teach. And I can easily see us all becoming managers of AI and nothing more. There may be people for whom that is a passion, but there’ll be lots of us for whom it isn’t.
Samuel Salzer: I’ve called this automation drift. I recently came across a writer and a coder who have both experienced it. The writer has these amazing AI tools that turn the lengthy process of writing into just sparking the idea and letting the AI write it automatically. And they find themselves thinking, “Wow, I can write so much more.”
But what they do now is not writing anymore. They’re just editing. And what they used to enjoy doing was the writing part and they hated editing. Thanks to these tools, they basically ended up outsourcing the writing part to the AI and became an editor doing the stuff they hate.
And then the coder, same story. They enjoy the coding task. They enjoy writing code, and now they just have to debug. Their output may have increased, but it has drifted to this point where they’re doing the least enjoyable part of the task.
Oliver Burkeman: An issue that comes up a lot in these arguments (on social media and in online spaces, I mean, not in more specialist contexts) is that somebody will say, “AI could never write a novel as beautiful as a novel by Jane Austen.”
And then someone else will come in and say, “You are so naive, it’s ridiculous. Of course it will. It’s days away. Maybe it’s already done it.”
Sam Altman will come in with his AI-generated short story, and I’m sitting there feeling like, am I going crazy here?
Because what gives a novel written by a human its value is not that it reached a certain level of technical skill that an AI could surely emulate, but that it was written by a conscious, emoting sensibility. I am a conscious, emoting sensibility too. In that connection there is a relationship, and there can’t be one unless you want to argue that the LLMs are sentient, which is a pretty radical position to take. Even then, you’d have to argue that they were sentient in some vaguely similar-to-us way.
Samuel Salzer: The counterargument is that if it passes the Turing test, if you read the story and you find it meaningful, and then you realize it was written by AI, it doesn’t really matter. You had that experience, you had the same emotion.
Oliver Burkeman: I can’t pretend that I care if I find out that the technical instructions for how to use my new dishwasher were written in that way.
But at the end of this call, if I were to discover that in fact you had not really been there, but that it had been a brilliant cloned simulation based on scouring all of your previous output and podcasts and things, the idea that nothing would have been lost is completely crazy to me.
It would make no sense to say, “Well, I had an interesting time and I had some thought-provoking ideas.” That’s not enough. What’s happening is that some people are connecting to each other, and on some level it’s based on my assumption that what’s going on inside your mind is broadly similar to the kind of thing that’s going on inside my mind, rather than that there’s nobody there.
Samuel Salzer: I had that experience when I realized that many nonfiction books are ghostwritten.
Oliver Burkeman: I think there is a real art to good ghostwriting, which is probably more than just ghostwriting. It’s interviewing and drawing out subjects. I forget the ghostwriter’s name, but Life by Keith Richards is a brilliant book. And that’s only because a real skill is used—and probably the biggest skill is this relational one, right? Hours of chatting with Keith Richards.
But there are contexts where once you discover that’s what happened, you feel scammed. I’ve definitely had journalistic colleagues who’ve interviewed celebrities on the occasion of the publication of their autobiography where it’s unclear if the celebrity has read their own autobiography. I’ll name no names.
So something there is lost because again, and I feel more convinced of this with every passing month, the thing that matters is always relational in some way. If something is ghostwritten in a way that feels like it violates a relationship, then something is deeply lost.
It isn’t a problem in contexts where there’s no expectation. You don’t feel that you’re in a relationship with another sensibility in the dishwasher instructions. The fact that this could get a lot worse with AI doesn’t mean it’s not already a problem.
Aline Holzwarth: Another pre-AI problem that’s exacerbated by AI is information overload. How should we deal with this version of AI intruding in our lives?
Oliver Burkeman: I’ve written about this funny moment in the earlier history of the social internet, when some leading people sincerely believed that this problem would go away because the filters would get better and better. So you’d take all these haystacks and you’d have a way of picking out the needles that really mattered to you, and you could forget all the rest.
But Nicholas Carr, the technology critic, points out that the exact opposite has happened. If the supply of incoming information is effectively infinite, then when the filters get better, all that happens is that you end up with a massive amount of pure, distilled stuff that you want. The problem, as he puts it, is not finding a needle in a haystack, but how to deal with haystack-sized piles of needles.
You have to gradually reorient from thinking of these supplies of information as things you’re supposed to get through, get on top of, and deal with, toward thinking of them as things we exist in the midst of and get to pick from.
So the metaphor I use is seeing your “To Read” piles as a river rather than a bucket. Not something that fills up and then it’s your job to empty it, but something that flows past you. You pluck things from it that seem interesting, and you let the others go.
I sometimes think that I save things for later precisely in order to never return to them. I’m sitting there writing something, and then I read a post somewhere, and it triggers something in my mind. So I click save for later, and it’s like noticing a thought in meditation. You can let go of it. It’s gone. And then I can get on with the thing I was doing.
Samuel Salzer: AI brings this promise of distilling the most important part of something into its essence. We’ve lived through some of this pre-AI with these services that say, “Read a book? Who has time for that? Here’s a 10-minute summary.” Selling this idea that you can absorb all of the benefit from that book in 10 minutes. And that is obviously, I think you would agree, a false promise.
Aline Holzwarth: As someone who hoards research articles, sometimes the point of the research article is not to get the incredible enjoyment of reading the research article, but just to know the findings so that you can apply them to your work. So that’s where I’m torn on these summarizer tools because I do very much see an upside in distilling a research article to its main points.
The downside is that the AI summarizer tool doesn’t think critically about the methods or the findings. It doesn’t have the ability at this point to tell you whether you should trust the research or not. So you’re not so much deprived of the enjoyment of reading it, but maybe the ability to critically analyze it.
Oliver Burkeman: Cal Newport, whose work I admire a lot, used to say that these summaries are ideal as a way to reach a judgment about which book to then go and read in full, without tricking yourself that you’re going to know all about it.
There are so many problems with the idea that you should give up reading books because of summarization services. And I’m not even getting to the point of how any books are going to be produced for these summarization services if nobody’s buying the books. Separate question.
For a lot of reading, the reading is the point. When I read a book, certain things stick with me, and maybe I use them to trigger other ideas. Maybe I’ll quote the book in a subsequent piece of my own writing. I’m the one who gets to say what is salient and important. A summary of what AI thinks the author thinks is most important is beside the point.
Even with pre-AI note taking, unless you’re studying a specific work for a university exam, where it’s your job to regurgitate what’s in the book, the point is not to make a summary of what the author was trying to say. It’s to figure out which bits trigger interesting sparks in your own mind.
I’m sure I could think of a hundred books I’ve read where I can tell you, “Alright. The thing I really remember from that is X,” but who knows if the author thought that was a particularly important thing.
I’ve had that experience myself as an author. People say, “I think the thing that really stuck with me from your book was X.” And then I have to think. I did put that in, didn’t I? It just wasn’t particularly salient to me.
Aline Holzwarth: In Meditations for Mortals, you write:
“If there’s a single truth at the heart of the imperfectionist outlook, it’s the one to which we turn as we begin this final week: that this, here and now, is real life. This is it. This portion of your limited time, the part before you’ve managed to get on top of everything, or dealt with your procrastination problem, or graduated or found a partner or retired; and before the survival of democracy or the climate have been secured: this part matters just as much as any other and arguably even more than any other, since the past is gone and the future hasn’t occurred yet, so right now is the only time that really exists.
“If instead you take the other approach—if you see all of this as leading up to some future point when real life will begin, or when you can finally start enjoying yourself, or feeling good about yourself—then you’ll end up treating your actual life as something to ‘get through,’ until one day it’ll be over, without the meaningful part ever having arrived. We have to show up as fully as possible here, in the swim of things as they are.”
As a behavioral scientist, this is extremely interesting and fraught with complexity. So much of my time is spent trying to help people help their future selves. Saving for retirement, exercising now to be healthy later. I feel this real tension between whether I should be designing for people’s future selves or current selves.
Oliver Burkeman: I’m not sure I have a neat answer to resolve all this. Obviously, on some level, it’s both. I think there are a lot of ways in which a lot of us spend too much time thinking about our future selves, but I don’t think that undermines the idea that there are ways in which we definitely should, and in which people like you should be helping us.
I think an awful lot of the project of trying to change your life, trying to get better habits and all this, is motivated by this idea that there’s something both deeply wrong with you now, but also deeply provisional about life now. And the important thing is to “get there.” Of course when you get there, you’re having the same thought about a new future point. So you’re never actually showing up to life.
I think being aware of that risk is totally compatible with using some of your present moments for judicious future planning. That can entail being a bit more indulgent and gentler toward your present self, making sure that your present self is actually having a good time.
I hate any kind of response to this fascinating conundrum that ends in a compromise, but one simple answer is that we need to do both of these things. We need to make sure that the ways in which we are taking care of our future selves don’t become forms of punishment that vacate meaning from our present selves.
Aline Holzwarth: Some of the more effective ways to get people to do the things now that will benefit them later are to make those things more meaningful or fun. To take your exercise example: joining a rock climbing gym with some other dad buddies of yours could achieve both of those goals.
Oliver Burkeman: So there we go. Beyond the compromise into the synthesis. These things are the same thing in some way. And I feel like that is the case incredibly frequently with this sort of puzzle.
Samuel Salzer: A few years back, I went to visit my dad, and I noticed that he was doing the dishes by hand; he didn’t have a dishwasher. I was like, “Dad, have you ever thought about getting a dishwasher? It’s convenient. It will save you time. You won’t have to worry about doing the dishes by hand anymore.”
And he said, “I enjoy doing the dishes.”
I said, “You enjoy doing the dishes?”
And he said, “Yeah. A long time ago, I realized that I’ll have to do the dishes so often in my life. My future is going to have so much dishwashing on a daily basis that I’m going to find a way to enjoy it.”
So he found a way to make dishwashing his daily meditative practice, and I thought that was very inspiring. Maybe the dishwasher is overrated sometimes.
Aline Holzwarth: What is your most controversial opinion about AI?
Oliver Burkeman: A lot of very grand predictions about AI, either very optimistic utopian ones, or especially the really doomer predictions, are best understood as coping mechanisms that the person holding them is engaging in to deal with the fact that they don’t have a clue what’s coming. The controversial opinion there is that anybody who says with a strong degree of certainty that they know what this future is going to look like is dealing with their own emotions, but they don’t have any reason for that certainty.