
  • This Time It’s Different

    The four most dangerous words in investing are: “this time it’s different.”

    John Templeton

    When I see people discussing generative AI as a tool — as merely one in a long succession of tools, evolving alongside human work — I bristle. This, I feel strongly, is different.

    But why?

    (Much of the following train of thought about the titular phrase is strongly shaped by the twelfth chapter of Morgan Housel’s fantastic book The Psychology of Money.)

    A white book with an illustration of a maze shaped like a brain, labelled The Psychology of Money by Morgan Housel.

    Popular investing wisdom will have you know that “this time it’s different” is a dangerous phrase. It’s the kind of thing your buddy says when he’s going in on his seventh crypto-meme-coin.

    Oh, so all of a sudden the patterns that have repeated themselves for all time aren’t repeating themselves anymore? In just this one instance of your prediction of the future, things will happen differently than they did in the past?

The quote is attributed to the American-born British investor and banker Sir John Templeton and appears in his 1987 financial self-help book, The Templeton Plan.

He goes on to concede, however, that sometimes it really is different. I haven’t gotten my hands on the book to find the quote firsthand, but readers mention Templeton offering another yardstick: roughly 20% of the time, it really is different.

(I really do need to get my hands on the book to verify this. This quote and its line of reasoning seem to have taken on a life of their own, with few people authoritatively citing the source material.)

    Here’s how one blogger and Chartered Financial Analyst put it:

    The twelve most dangerous words in investing are, “The four most dangerous words in investing are, ‘it’s different this time.’”

    Michael Batnick

As an aside, I’ve seen a half dozen forms of the original Templeton quote floating around. Variants of “This time it’s different” and “It’s different this time” abound. I’m most amused by this seemingly hallucinated version of the quote found on the blog of a wealth management company, which, with the help of an ominous ellipsis, imbues Templeton with an old-timey Biblical-prophet sort of air:

    The investor who says ‘This time is different’… has uttered among the four most costly works [sic] in the annals of investing.

The trouble with history is that it only tells us about our past and present. It doesn’t predict the future.

    As one Stanford professor said:

    Things that have never happened before happen all the time.

    Scott Sagan

    In The Psychology of Money, Morgan Housel calls the tendency to expect history to repeat itself the historians-as-prophets fallacy.

    History helps us calibrate our expectations, study where people tend to go wrong, and offers a rough guide of what tends to work. But it is not, in any way, a map of the future.

    Morgan Housel

    And so I’d like to push back — with Housel, with Sagan, with Batnick, even with Templeton — against the narrative that this time could not possibly be different.

In a recent blog post, Seth Godin seems to use the generative AI tool Claude to generate examples for his argument that generative AI is just another entry in a long succession of evolving tools.

    Claude adds: Skilled typesetters, trained in working by hand, were angry at desktop publishing, and the digital distribution of music and books ended the future of many traditional retailers. It’s easy to go on… The assembly line replaced skilled craftsmen who built entire products by hand. The printing press eliminated armies of scribes who copied books and documents manually. The calculator made human computers – people hired to perform mathematical calculations – obsolete overnight. The washing machine destroyed the livelihoods of professional laundresses and washerwomen. The automobile industry wiped out blacksmiths, stable hands, and carriage makers. Email and word processors replaced secretaries who specialized in dictation and typing. The mechanized loom put countless hand-weavers out of work during the Industrial Revolution. GPS navigation systems eliminated the need for most mapmakers and drastically reduced demand for physical atlases. Digital photography destroyed the film development industry and put countless photo lab technicians out of work. Self-checkout machines have steadily replaced cashiers across retail stores, and ATMs transformed many bank teller positions.

    Seth Godin? Generated by an algorithm for Seth Godin? How do I even cite this?

    What makes generative AI different from the long line of printing presses and computers, automobile factories and mechanical looms?

    (The mechanical looms could spark more than a few discussions about the virtues and values of the Luddites… but that’s a conversation for another day.)

When the web arrived, many of the projects I had built as a book packager — some at great cost — became obsolete. It didn’t seem to me that I could do much about this, though. Arguing that I was entitled to have people buy the Information Please Business Almanac instead of looking stuff up online wasn’t going to work.

    […]

    If this transformation was an opportunity, what would you do with it?

    Actually Seth Godin

    This is an interesting example.

    Perhaps part of what makes generative AI feel like a betrayal is its way of revealing the purpose of much of the stuff we have spent so much time and energy creating (or, perhaps, consuming).

    The purpose of a system is what it does.

    Stafford Beer, management consultant

    Let’s say we use AI to generate a piece of content.

    Say, an article called “Top Ten Butter Substitutes to Try This Thanksgiving.”

    An unwrapped stick of butter.

Perhaps, on an existential level, those of us who like to think of ourselves as creative are horrified by the idea that a mere machine could churn out the butter article.

    But what is the purpose of the butter article?

    Is any human being going to actually read the butter article?

    To the extent that they do, is our piece of AI-generated slop really all that much more disappointing (or reprehensible) than every prior method of regurgitating the same information that’s already out there with slightly different sentence structure?

People who blog as a business — can I call them BaaBs? — have been doing this for years. Before ChatGPT and Claude, they simply hired writers on Fiverr, operating out of the Philippines and charging a fraction of a penny per word.

    The butter article is meaningless. It’s a sloshing bucket of empty, acrid content slop.

But other than the fact that it was written by a fancy likeliest-next-word predictor, with no actual person behind it responsible for attesting that the information presented is accurate or verifiable — and that’s a big caveat — it really isn’t all that much more useless than the countless human-written butter-article regurgitations that came before it.

(You can point out plenty of ethical qualms with generative AI: intensive natural resource consumption, copyright violation, and more. But that’s not what I’m talking about here. I’m limiting myself to the consumer’s perspective: how many utils does the AI slop provide?)

    What’s the actual valuable work here?

    Actually trying the butter substitutes, probably. Attempting recipes with them and reporting back. An authority in their field standing by their butter experiments and butter-substitute recommendations and replying to their comment section. An actual person you can complain to when “peanut butter and glass” doesn’t quite cut it in your grandma’s cornbread.

    (Does anybody else remember peanut butter and glass? I keenly remember an article a few years ago about an earlier LLM spitting out a recipe for a tasty, crunchy sandwich of peanut butter and glass. I can’t find it anywhere now. It’s buried under peanut-butter-related content. “An AI cannot generate a recipe for a peanut butter and glass sandwich,” the Google AI Overview haughtily informs me, “because it is an extremely dangerous and potentially lethal concept.”)

    Maybe part of the big existential reckoning that’s happening here is that many — perhaps most — pieces of content published today provide very little new value, whether written by a human or machine.

    Personally, I don’t think that means we should all rush off to giddily generate all the content we desire.

    I think it means we should seriously reconsider whether we’d be better off going outside and doing something else rather than working so hard to be successful in the content mines, whether we’re writing the listicles ourselves or outsourcing them to freelancers or to machines.

    This is my impulse when it comes to generative AI: if you can simply generate the thing rather than writing it yourself, the thing you’re generating likely isn’t very valuable.

    And maybe that’s fine with you.

    Say you’re auto-generating honey-tinged illustrations to post alongside your pithy thoughts on LinkedIn.

You don’t need to be da Vinci. The cute little pictures of a family crossing the street hand-in-hand or gathering under the Christmas tree don’t have to be important.

    But that’s when I wonder: Why do you need them at all?

    I suppose maybe they’re decorations. They’re there to catch the eye and boost click-through rates on your pithy little LinkedIn post. My railing against such decorations might be analogous to railing against decorative typographical elements like the fleuron.

    Nobody said they had to mean anything, right? They’re just cute.

Personally, I don’t use AI to generate LinkedIn fleurons, due both to the aforementioned ethical concerns and to a control-freaky minimalist ethos under which I like to think I ruthlessly cut things that don’t have value.

(I ought to cut the things that don’t have value, anyway. When I’ve trotted out the not-so-valuable words myself, it’s tricky at times to put this into practice.)


    For a while there’s been this optimistic idea floating around that technology will lead us to a life of leisure by doing all the menial work for us, freeing us to go frolic in the hills and write great American novels all day long.

In 1930, Keynes predicted we’d be working only fifteen hours a week by now.

    How’s that one going?

    (But — I will admit — maybe this time it’s different.)

Here’s one of the reasons I don’t believe this is panning out with generative AI: much of the “work” it is replacing doesn’t particularly matter.

Great, now we have more empty butter listicles and more messy, vulnerability-ridden “10x” code than ever before.

    More text posts on LinkedIn are accompanied by a cute little illustration in a ripped-off corporate art style than ever before.

    There are more of those LinkedIn text posts in existence than ever before, too.

    Does any of this matter? From the consumer’s perspective, is any of this really important?


    What is the service generative AI is actually performing? From the user’s perspective, it’s increasingly replacing search. It’s removing the friction of poking through a few direct sources in favor of one “authoritative” answer from a ghost.

    This worries me — not as an artist or as a creative, but as someone who values truth and good, reliable public information. I simply don’t trust the likeliest-next-word machine to spit out accurate answers. I’m a little worried about how much other people seem to trust it, considering how often I’ve seen it be wrong.

    AI may not be recommending peanut butter and glass sandwiches anymore, but AI-generated answers are still statements made by ghosts with no backing authority, and those statements are frequently wrong.

    They’re also easily manipulated. See also: Generative Engine Optimization (GEO), Search Engine Optimization (SEO)’s edgy little cousin.

Does this worry me as a so-called “content creator”? Not really. If anything, I’m grateful for the reality check.

    It both depresses and amuses me to imagine all the machines working very hard to auto-generate miles and miles of content, only for other machines to consume it.

    The interesting problem, as always, is how to connect with real human beings (including oneself). There will always be a market and a desire for genuine human connection, authority, and expertise.

    The gauche AI slop mills may simply be an indicator of where we won’t find it and what isn’t important.

    Seeing a lot of AI-generated garbage on LinkedIn?

    Maybe LinkedIn isn’t actually the best place for you to fulfill your needs for expertise and human connection.

    If this transformation is an opportunity, as Seth Godin invites us to imagine, that’s what I’d do with it. As a consumer, I’d use it as a signal of the incredibly low value of your average piece of “content.” I’d taste the poison and remind myself to read a book and go outside.

    (Of course, I worry about the consumer populace as a whole. But that’s a long-term concern about the attention economy rather than one about AI specifically. This all just feels like a handful of final coffin-nails to me when it comes to humans developing and strengthening essential skills such as critical thinking, communication, and social reasoning. I’m quite worried about it, but in this context generative AI tools are, indeed, tools in the larger system of selling frictionless convenience and addictive user experiences.)


    Bright pink book cover labelled Happy Place by Emily Henry with illustrations of jumping and laughing young people.

    But what if the AI made something really good?

    I’m not even talking Shakespeare-style impact on the language here. What if AI were capable of generating cogent, humane, social, engaging prose (or poetry)? Things that feel like they matter?

    Say, Happy Place by Emily Henry. Fantastic novel. Glorious beach read. Such a sunny, hopeful, nuanced pink brick of a modern romance.

    What if generative AI were capable of “making” something like that?

    I don’t know. Right now, I don’t believe it is. Our magical “artificial intelligence” is a roided-up version of that middle button on your phone that spits out the next word it thinks you’re going to use. Not exactly a useful toolkit for writing long-form text that asks important questions about the everyday human condition.
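If you want to feel how mechanical that idea is, here’s a toy sketch in Python. To be clear, this is my own illustration, not how any production model actually works (real models use transformers over subword tokens, not word-bigram counts), but it captures the spirit of the middle button:

```python
from collections import Counter, defaultdict

# A toy "middle button": count which word follows which in a tiny corpus,
# then always suggest the single likeliest next word.
corpus = "the cat sat on the mat and the cat slept".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_next("the"))  # "cat" (it followed "the" twice; "mat" only once)
```

Scale the same job up by many orders of magnitude in data and parameters and you get something spookily fluent. But the job itself, predicting the likeliest next token, hasn’t changed.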

    For now, my answer to that one is that… well, it just hasn’t yet. So maybe I don’t need to think about what it would mean if it did.

    But I will think about it. Later. Probably with my complete Borges in one hand and some sort of writing implement in the other.

    Because writing is thinking. I’m here, word-vomiting on this blog, in search of the unknown.

    Grayscale photo from the chest up of a black man wearing a sweater, collared shirt, and tie.

I think of James Baldwin’s 1984 Paris Review interview, The Art of Fiction No. 78, in which he is asked about his experiences as a writer and as a preacher.

    The two roles are completely unattached. When you are standing in the pulpit, you must sound as though you know what you’re talking about. When you’re writing, you’re trying to find out something which you don’t know. The whole language of writing for me is finding out what you don’t want to know, what you don’t want to find out. But something forces you to anyway.

    James Baldwin

    I’d always figured a project like a blog would feel like preaching — or maybe should feel like preaching. Yet I clearly feel that I’m writing here.

Baldwin would certainly implore me to tidy this mess up. He said plenty about good writing as truth that’s been rewritten until it’s clean:

    They are overwritten. Most of the rewrite, then, is cleaning. Don’t describe it, show it. That’s what I try to teach all young writers — take it out! Don’t describe a purple sunset, make me see that it is purple.

    James Baldwin

    And:

    I do a lot of rewriting. It’s very painful. You know it’s finished when you can’t do anything more to it, though it’s never exactly the way you want it.

    James Baldwin

    And also:

    You learn how little you know. It becomes much more difficult because the hardest thing in the world is simplicity. And the most fearful thing, too. It becomes more difficult because you have to strip yourself of all your disguises, some of which you didn’t know you had. You want to write a sentence as clean as a bone. That is the goal.

    James Baldwin

    This possibility of AI generating writing of artistic merit makes me think of the infinite monkey theorem — the thought experiment in which infinite monkeys, sitting at infinite typewriters, in infinite rooms, could someday produce a random text typographically identical to Shakespeare’s Hamlet.

It is also, just by the way, wildly implausible in practice. With any physically realistic number of monkeys, the heat death of the universe would most likely arrive before one of them came close to typing up Act I.

    Just in case you were a playwright worrying about monkeys stealing your job.

    It is not plausible that, even with improved typing speeds or an increase in chimpanzee populations, monkey labour will ever be a viable tool for developing non-trivial written works.

Stephen Woodcock and Jay Falletta, “A numerical evaluation of the Finite Monkeys Theorem” (2024)
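To put rough numbers on just how implausible, here’s a back-of-the-envelope calculation. The assumptions are mine and purely illustrative: a 30-key typewriter, uniformly random keystrokes at one per second, and a single short phrase standing in for the whole play:

```python
# Monkey math, under toy assumptions: a 30-key typewriter, uniformly
# random keystrokes at one per second, and the monkey starting a fresh
# attempt after every failed phrase.
keys = 30
phrase = "to be or not to be"       # 18 characters
p = (1 / keys) ** len(phrase)       # chance a single attempt types it exactly
attempts = keys ** len(phrase)      # expected attempts until success (1/p)
seconds = attempts * len(phrase)    # at one keystroke per second

universe_age_seconds = 4.35e17      # ~13.8 billion years

print(f"p per attempt: {p:.2e}")                               # ~2.6e-27
print(f"expected wait: {seconds:.2e} seconds")                 # ~7.0e27
print(f"universe-ages: {seconds / universe_age_seconds:.2e}")  # ~1.6e10
```

Eighteen characters already costs billions of lifetimes of the universe, and Hamlet runs to roughly 130,000 letters. Hence the study’s conclusion above.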

Still I think of Borges and The Library of Babel: his miles upon miles of hexagonal library rooms, filled to the brim with random, incomprehensible books.

    At scale — with the computing power we’re rapidly assigning to generative AI projects — could we “find” the non-trivial written works, perhaps before they are ever truly written by anyone?

    I don’t know. But perhaps it’s telling that I’ve chosen to consult Borges rather than ChatGPT.


    And so I’m back to blogging. It’s valuable to me as a sort of public diary.

    No one reads blogs anymore, they say. This is great news for me, considering that this is my messy, rumpled, disorganized diary, a place that’s the opposite of clean as a bone. I’m here to write moderately faster than I can on paper, maybe whittle the text a little, put pretty stickers on it, and show it to a few friends and see what they think.

    Are you out there, friends?

    What do you think?

  • Advice vs. Counsel

    It’s a distinction I’ve been thinking about lately after rereading Bill Burnett and Dave Evans’ excellent Designing Your Life.

    Here’s how they define it:

    “‘Counsel’ is when someone is trying to help you figure out what you think. ‘Advice’ is when someone is telling you what he or she thinks.”

    I’m sure they’re not the first ones to make this distinction, but I like how it’s making me think.

    Recently I’ve been considering applying to graduate school.

I’ve always thought my ability to provide feedback is one of my greatest strengths, and a kind of work I really enjoy. But to find the right direction, I might want to consider whether what I really enjoy, and am adept at, is giving advice or giving counsel.

I imagine a person pursuing a counseling degree (such as a Master’s in Counseling, Marriage and Family Therapy, or Social Work) with a goal of being a counselor should be a person who is skilled at, and enjoys, providing counsel.

On the other hand, someone pursuing a degree like a Master’s in Business Administration (or perhaps a Master’s in Public Policy), with a goal of becoming a consultant, is likely more focused on honing the skills and experience needed to provide relevant, expert advice.

    A counselor who provides mostly advice would likely be perceived as a poor counselor — as someone who’s keen to give directions rather than intentionally listening and guiding the client towards their own thoughts, experiences, and revelations.

A consultant who provides mostly counsel, on the other hand, might be viewed as an insightful team player, but runs the risk of appearing to lack creative ideas of their own.

    Of course, there are times and places when a counselor may feel they really ought to provide advice, particularly when a client wants to grasp onto some clear direction. And maybe sometimes a consultant may feel a conversation is well-shaped by the insight of counsel, particularly in figuring out a client’s real needs.

    And there are unusual roles out there where things are flipped on their head. I think of the world of macro social work, where social workers provide policy recommendations (advice). And some particular thought leaders’ brands of counseling, with their books and patented therapy frameworks, certainly feel more to me like advice than counsel.

    But these are generally two different dominant skill sets nurtured in different programs and used in different careers.

    At first glance, I’m drawn to counseling (and the related graduate school programs) much more than to consulting. This direction feels like a more natural fit with my values and the kind of work I think, in the abstract, is important.

But I’d also worry about my ability, as a counselor, to patiently provide counsel and not merely advice. It’s a skill I’d have to deliberately hone.

I know I can do it; I have provided valuable counsel to people before. I’ve been trained to pay attention to what a client needs and feels in a human-centered design context.

    But I also have an impatient, bossy side. I can be very solution-oriented and very tempted to provide what clearly looks to me like good advice.

    What do you think you’re better at: providing counsel or advice?

    Does one feel easier or more intuitive than the other?