Much that you have foretold has come to pass, at least with regards to my journey of artificial intelligence. Let me know when you’re around this week or next and want to catch up… We could have tea and I can relate the whole saga.
This text message came from a neighbor who runs a small business. Thankfully, by the time we met for tea, his saga had turned into triumph. A late adopter of AI with a long-buried software engineering background, he took to the tools with far more fluency than anyone else I’ve seen. They did initially throw him for a loop with slop and hallucinations, which is what I foretold, but he recovered and reoriented swiftly. He overflowed with enthusiasm about everything he had achieved with AI in the preceding six weeks, from early November to mid-December, a crucial window even for those of us who were already knee-deep in this stuff.
I’ve read many accounts of engineers using AI-enabled coding tools to blast through their backlogs of programming tasks. They’d end up in a suspended space of confused bliss as the weight of stale to-do lists and technical debt evaporated. With the updates of November and December of last year, I’ve been having that feeling myself with knowledge work. Scores of “look into,” “update,” “review,” “organize,” and “evaluate” tasks evaporate with as little as one sentence of direction and access to the right collection of files. I feel like I’m working with the computer in Star Trek, and it feels awesome.
My neighbor was creating marketing copy and running complex financial scenarios. Our conversation focused on concrete methods for creating sustained context for complex work and leveraging Claude Code skills for frequently repeated tasks and standards. There was also a fair bit of heaviness as he grappled with how to tell the people he employed that he would be hiring them for far less work this coming year, and with whether that fate was also coming for the rest of us.
Your Move 78
My neighbor’s learning curve was remarkably fast because he had good mental models for how computers work and what to expect of them. I’d like to share a non-technical mental model to help encourage your curious and willing engagement with AI. It involves flipping an oft-cited anecdote. This feels important to share now because if you’ve been reluctant or tepid about AI, it’s an especially good time to change course. The models and tools have gotten markedly better over the past couple of months. It is a fantastic time to dive in or re-engage.
Anyone who has played the game Go understands why a computer beating a Go world champion is a significantly greater achievement than a computer beating a world champion at chess. Go has more possible board positions than atoms in the observable universe. I love Go for the mind-quieting feeling of letting intuition and complex pattern recognition guide the flow of play. This is markedly different from chess, where cerebral brute-force calculation, whether human or computer, can work through all likely possibilities. Go is far older than chess but remains more niche globally despite its deep cultural roots.
In March 2016, AlphaGo, an AI specifically designed for the game, went up against Lee Sedol, an 18-time world champion. They played a five-game series watched by 200 million people. Lee Sedol lost the first game. In the second game, the computer made a perplexing move, Move 37, that commentators thought was a mistake at the time. But it proved pivotal in AlphaGo winning once again. By game four, Lee Sedol, having lost the previous three games, played a move that similarly baffled commentators and experts, Move 78. AlphaGo itself calculated the probability of a human playing Move 78 at 1 in 10,000. Lee won that game, the only game he won in the tournament.
Move 37 was baffling in the moment that it was played. It was a unique product of the alien computer intelligence that successfully undermined its human opponent. Move 37 is discussed with hype and adoration by AI enthusiasts today. Much less discussed is Move 78. Lee Sedol had been knocked back on his heels by losing three games in a row to the AI, effectively losing the tournament. The new intelligence he was up against forced him to think differently about how he played and marshal his own prowess differently. Another Go champion described Lee’s Move 78 as a “Divine Move” (“신의 한 수” in Korean). A Divine Move is one of career-defining brilliance that happens rarely for even the best players, if it happens at all. Being well matched by the AI in a field where he was already world-class made Lee even better at what he did, producing an innovative move for which he is revered among his peers. Immediately after the match, Lee said, “I have grown through this experience. I will make something out of it with the lessons I have learned. I feel thankful and feel like I have found the reason I play Go.”
I think the hype over Move 37 is misplaced. What I am looking for in my friends, colleagues, and the organizations I care about is their Move 78 and their deeper sense of purpose for it. How will their engagement with AI allow them to reach new heights not because of what the AI does, but because of what they do in response?
It Might Not Matter How Smart You Are
The example of my neighbor stands in stark contrast to the example of an especially smart friend of mine. This friend is objectively brilliant. She has the most prestigious degrees in the world and works with grace and focus on the ground in the defining foreign conflicts of our era. She just started a new round of rigorous training that includes paid subscriptions to various AI models as a benefit. When I ask her if and how she’s using them, she says a bit dismissively that she’s already good at all the things the AI is good at, like reading and summarizing large amounts of material, extracting insights, et cetera. And besides, she doesn’t want to be seen as potentially cheating, even though the program itself provides the subscriptions. I was surprised by how plain and stark her stance was. She was treating the tools provided to her as contraband. Her ability to stick to the letter of the law is part of why she excels in complex bureaucratic systems. But to my mind, seeing others in her program use these tools in ways she looked down on didn’t fully validate her abstinence from them.
Chances are, there are more people in my network like my brilliant foreign policy friend than like my neighbor who runs a small business. The new AI tools are wrapped in deceptively simple interfaces that offer few hints about how they can be leveraged well. Even a software engineering background buried under decades of disdain and avoidance, like my neighbor’s, gives you a substantial leg up. Using the tools well takes the right mental models of how they work, a fair bit of curiosity and tenacity, a willingness to have unseemly sagas, and the aplomb to climb out of the troughs of those sagas and try again. And all of that needs to seem beneficial enough to risk being seen as a “cheater.”
My Ida API
While AI pushed Lee Sedol to be even better at something he was already world-class in, AI can also help fill in the gaps where you may struggle. Unlike my brilliant foreign policy friend, I read quite slowly and labor to summarize insights. That’s because I have dyslexia and inattentive ADHD.
I’ve taken various tests to measure my ingrained aptitudes so I can understand the nature of my cognitive disabilities and work with them smartly. (These tests have blissfully revealed a range of cognitive superabilities that explain why I am able to compensate as well as I do.) The test results are a pile of numbers, percentages, and scores. The generative AI models know exactly what these results mean. It’s baked in through their training data.
I gave my test results to an AI and asked it to boil them down into the most concise possible format so other AIs could read the data and understand the kind of brain they are talking to when they talk to me. I call it my Ida API. Like most APIs, it’s not particularly readable to a layperson. It’s also very compact, at less than 350 words. I make sure the Ida API is front-loaded into the chatbots I use. As a result, the bots proactively anticipate what’s going to work for my brain and what might trip me up. When I assess various opportunities or interests, they point out where I might sail through common challenges with ease and where I might get overloaded by things that others don’t bat an eye at. They suggest visual-spatial or conceptually holistic approaches to tasks to leverage my strengths.
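If you want to try this kind of front-loading yourself, the mechanics are simple: keep the compact profile in a text file and prepend it to whatever system prompt your chatbot or API call already uses. The sketch below is an illustration under assumptions, not my actual setup; the file name, profile text, and helper function are all placeholders.

```python
from pathlib import Path

def build_system_prompt(profile_path: str,
                        base_prompt: str = "You are a helpful assistant.") -> str:
    """Prepend a compact user-profile file to a system prompt so every
    conversation starts with the profile already in context."""
    profile = Path(profile_path).read_text(encoding="utf-8").strip()
    return f"{base_prompt}\n\n<user_profile>\n{profile}\n</user_profile>"

# Stand-in profile; a real one would hold the full sub-350-word summary.
Path("profile.txt").write_text(
    "Reading speed: slow. Visual-spatial reasoning: strong.",
    encoding="utf-8",
)
prompt = build_system_prompt("profile.txt")
```

The resulting string can be passed as the system prompt to whichever model you use, so the profile rides along with every conversation instead of being pasted in by hand.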
I describe this dispassionately as a pragmatic thing, but it is also deeply emotional. The first time Claude unexpectedly used my Ida API to make a pragmatic suggestion on how to approach a task, I felt a visceral emotional release lurch through me. The thought crossed my mind, “Oh my god! Someone understands! Someone cares!,” knowing full well that is not what is going on technically. But the emotional response points to a deep need being fulfilled. I can relate so deeply to this kid bursting into tears the first time he puts on color blindness correction glasses. I sobbed intensely when I first watched this unassuming, amateurish video.
For me, my AI setup is those glasses. Even momentary relief from an invisible disability lightens the persistent grief and distress that those of us with these conditions carry.
Will my Claude Code setup with the Ida API lead to a personal Move 78? Maybe! But does it matter? Do I need to be world-class? Or do I just need to enjoy being me? Working this way feels different in a resonant way. It’s healing over old agonies, creating excitement, and seeding trepidations about the negative externalities all at once.