Will AI save us from ourselves?
Maybe. Some of us, at least.
gm, what-ho and hello!
In this museletter we’ll explore:
why exponential advances in artificial intelligence (AI) beckon a new paradigm (something that cannot be overstated)
why systemic collapse is inevitable (already happening) yet mitigable (potentially)
why AI mirrors our collective mediocrity—and how it might make esoteric and poetic perspective more valuable
why there’s never been a better time to reassess where you invest your attention—what quest art thou upon?
why you should gift yourself this 2023 wall planner the dangerlam made ✨
As ever, I am grateful for your attention.
This is the museletter of Dr Jason Fox, wizard-philosopher. If a friend forwarded this to you, you can join the 11,000+ folks who subscribe. I write on matters pertaining to the mythical ‘future of leadership’, amongst other things.
Exponential AI beckons a new paradigm
So: a few weeks ago OpenAI released the beta for their ChatGPT model, which is currently freely available to use. I recommend trying it out. You could be forgiven for dismissing the exponential advances in AI as rather benign, as many of the more readily available examples are conversational in nature. But the implications of this technology are huge and, potentially, world-breaking.
I’m not one to watch videos on YouTube (I prefer to read longform 💅 #flex)—but I have friends who seem to devour content on the platform. One friend, whom I admire, and who runs a family office for an undisclosed but very wealthy client, suggested I give this video a look. In this conversation investor Raoul Pal interviews Emad Mostaque, the founder and CEO of Stability AI (one of the companies behind Stable Diffusion). Whilst the video is two weeks old now, the insights remain relatively fresh. What AI represents—and what we are witnessing unfurl—is a paradigmatic shift.
I know that we are pretty numb to the meaning of hype-words these days, but allow me to spell this one out. Actually, bah, I’ll just ask the AI.
“A paradigmatic shift is a fundamental change in the way that something is understood or approached. It is not simply an incremental change, but rather a change in the underlying assumptions or theories that form the basis of a particular field or discipline. Paradigmatic shifts can have far-reaching implications, not only within a particular field, but also in related fields and society as a whole.”
An example of this is science. The standard scientific method of hypothesis testing is far too slow, cumbersome and tedious compared to the ability of AI to generate and crunch through null hypotheses and explore avenues beyond what we rationally-bound humans can do. Other examples include agriculture, education, disaster responses… verily: there’s not one facet of life that AI cannot conceivably touch. (I’m not one prone to hyperbole, but in this context the statement seems reasonable.)
This will have significant implications for society.
Systemic collapse is inevitable
AI is already violating many things that we once held sacred. Some artists who willingly uploaded their work onto social media platforms so as to attract attention to themselves (in terms of vanity metrics) and to the platform owners (who monetise the attention via their advertising models) are now shocked to discover that the work they gave away is being used by AI. I feel for all artists (and anyone who lives in the precariat—poets, philosophers, scholars); yet at the same time I am surprised that so many are surprised.
AI emulating your signature style is not the only violation we will see. Intellectual property, inference, influence, identity; very few things will remain sacred in this profane new epoch. Not that I mind a good bit of profanity—but it behooves us to consider the second and third order impacts of this technology. If you aren’t feeling insecure yet—if you aren’t blessed with the anxiety that oft comes part-and-parcel with cognitive complexity—then I highly recommend The Wisdom of Insecurity by Alan Watts.
But if you don’t have time to read—and I mean, who does? We are all so busy. Amirite? Omg yes, so busy. And besides, there are so many good shows on at the moment. Did you watch season two of The White Lotus? Everyone says it’s even better than the first, and don’t get me wrong, we loved it, but we still found the characters in season one more iconic and memorable—just ask the AI to read it for you.
Maybe we will go the path that Iain M Banks writes of in The Culture space opera series, wherein the AI “Minds” have developed into highly intelligent, sentient, benevolent and ethical beings that hold a deep sense of responsibility to humanity.
Maybe. I don’t think so, but maybe.
And yet: I suspect we are due for a few crisis events before then. There will be a systemic collapse of sorts, and perhaps this collapse will trigger a cascade of collapses? A full-on Moloch collapse event? Mass unemployment, food shortages, refugee crises, energy crises and civil unrest to the backdrop of exponential climate change and the sixth mass extinction event already happening? Hoho, maybe. Perhaps we are already in it.
Many have known that the current path of civilisation is unsustainable, and that the civilisation project itself is self-terminating (merry xmas btw). This is amplified by exponential tech coupled with old and opaque systems that favour just a privileged few whilst externalising ecological and societal devastation.
No need to get all doomer, though. Hoho, perish the thought! That would be terrible for engagement.
Besides, even if we are living within the great collapse, we might as well act as if there is cause for hope. Because hey: who knows? (This is the metamodern stance.)

One area that gives me great cause for hope, as you know, is web3 and much of the active research and experimentation in (decentralised) regenerative finance. If web2 is about centralised platforms where wealth and power are concentrated, and where legitimate concerns are green-washed or purpose-washed away—web3 is about public, shared, open source, transparent, verifiable and ultimately: (a progression towards) a decentralisation of wealth and power.
I attended an event put on by RMIT University’s Blockchain Innovation Hub last week, and I was not only impressed by how astute and en pointe the academics were (it is evident they have participatory knowledge)—but also by just how naturally the whole event was orientated towards collective coordination at scale.
I feel as though the general public—the vast majority of whom are yet to even interact with a smart contract—fail to make this connection. Instead, their sense of web3 is usually filtered through mainstream (centralised, web2) media—and thus web3 tends to be cast and perceived in a dim light. Events like the collapse of centralised exchanges like FTX are seen as an issue with ‘crypto’—and not as yet another example of the failures of opaque centralised systems that allow for immense fraud at scale.
Whereas Google’s (in)famous “don’t be evil” motto had the hidden implication that being evil is an option for any centralised player—with web3, protocols are built so that one can’t be evil. The open and transparent code-based nature of web3 protocols means that we don’t have to rely on trusting corporations, governments or oligarchies to make moral choices—‘evil’ choices are ‘impossible’.
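(For the programmatically curious, here is a minimal sketch of what ‘can’t be evil’ gestures at. It is plain Python rather than actual smart contract code, and the ToyVault and its rule are entirely my own illustration: the point is simply that the rule lives in code anyone can inspect, with no operator discretion to appeal to, or be betrayed by.)

```python
# A toy sketch only (plain Python, not real smart contract code).
# The withdrawal rule is fixed in code that anyone can read; there is
# deliberately no admin override, freeze, or seizure path.

class ToyVault:
    """Deposits can only ever be withdrawn by whoever deposited them."""

    def __init__(self):
        self.balances = {}  # maps account name -> balance

    def deposit(self, account, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balances[account] = self.balances.get(account, 0) + amount

    def withdraw(self, account, amount):
        # The only authority consulted here is the rule itself.
        if amount <= 0 or self.balances.get(account, 0) < amount:
            raise ValueError("invalid or insufficient withdrawal")
        self.balances[account] -= amount


vault = ToyVault()
vault.deposit("alice", 100)
vault.withdraw("alice", 40)
assert vault.balances["alice"] == 60
```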
Of course, there are still smart contract risks, and protocols can still be hacked. But due to the composable nature of web3, any such hacks are rapidly (and openly) diagnosed by several naturally incentivised entities—the system is patched and then made stronger, all antifragile-like.
There’s a living quality to this, so it is perhaps no surprise that there seems to be an overlap of interest between Indigenous scholars, complexity practitioners and those with a penchant for infinite play.
I was watching a presentation that Venkatesh Rao gave at a recent Ethereum Developers’ Conference titled There Are Many Alternatives: Unlocking Civilizational Hypercomplexity with Ethereum. In this presentation Venkatesh introduces some wonderfully complexity-congruent principles such as expressivity over explainability, thoroughness over efficiency, inconsistency over incompleteness, exaptation over acceleration and carrier-bag lore over hero’s journey epics.
But perhaps the thing that most struck me—this metamodern-solarlunarpunk-infinite game-b-rogue-scholar-and-archwizard-of-ambiguity-questing-to-cocreate-a-world-more-curious-and-kind—was Venkatesh’s articulation of planetary mutualism over sovereign individualism <— an apt beacon to affix your sextant to. And, hearteningly, this very much seems to resonate with many of the developers in Ethereum and web3, too.
In his presentation, Venkatesh explores the notion of ossification—something that most complexity practitioners and infinite players are concerned about. When a system remains static and unchanging for too long, the likelihood of collapse or decay increases. After all, ‘only that which can change can continue’ as philosopher James P. Carse states.
But can a complex system ~phase-shift or ‘leap’ into higher orders of complexity? Yes, of course (if the conditions manifest such). Okay cool. But can we eliminate crisis pathways in this whole process? As Venkatesh—and other sage folk—suggest: no. No we cannot.
A complex system cannot attain metastability at higher orders of complexity via incrementalism. I’d suggest there’s a parallel with adult development; you cannot develop and grow as a person (in terms of cognitive complexity) without exposure to significantly destabilising events (death, divorce, disaster, displacement—themes I discuss in The ‘Choose One Word Ritual’ of Becoming).
We need entropy and support in order to process and integrate what happens after the inevitable dis-integration and ontological collapse that such life events entail.

But whilst these crisis pathways are unavoidable, it may well be that we can mitigate the negative effects of any systemic collapse. That’s the hope, at least.
And so, on the topic of hope: back to AI.
AI mirrors our collective mediocrity
I was recently very close to giving up; I was sorely tempted to simply play the game of getting paid to say the things the market wants to hear. When I get booked to speak on the future of leadership I usually need to check if they actually want me to talk about the future of leadership—or to just use fancy-yet-vague thematics that reaffirm the current state of leadership. I’m not really good at playing the part of the propaganda puppet, though, so even whilst the folks may be saying ‘the future is human’ as though that means something significant—I’d usually do my best to sow some seeds of doubt.
(Note: if I were working with an audience blessedly full of doubt, my role would be the inverse. I’d help them to see the threads they don’t realise they hold, and I’d generally do what I can to imbue some sense of encouragement and enchantment.)
Something has happened to me wherein I seem to have developed ethics, taste, and a bothersome sense of intellectual honesty. These attributes are incredibly disadvantageous in the cut-throat mercenary world of ideas, wherein publishers award deals based upon one’s ability to capture ideas that will resonate (commercially) with the masses.
I remember listening to my friend Amantha interview Adrian Zackheim—the founder and publisher of the business book imprint Portfolio and conservative political imprint Sentinel (of Penguin Books)—the guy who published Sinek’s books. In this interview Adrian casually and candidly describes what publishers look for when investing in authors. It’s not necessarily the most important ideas that get published—it’s the ones that resonate with the masses; the conversation that is anticipated (crafted?) to emerge in the two years it takes to publish a book. Which makes sense.
Part of me loves the noöspheric attunement that publishers must have. To sit at the nexus of intersecting ideas and trends, and to have enough memetic acuity and nous to have a sense of where the conversational tide will flow. It’s quite profound, and it’s what I do in my profession as a wizard. And yet… I find I don’t always concur with the mainstream flow. And I resist getting swept up and into prevailing narratives. (Or do I?) It thus makes sense to be the countercurrent at times—even if it is much less commercially viable to do so. Sometimes we must join the artists, poets and philosophers in the precariat, if that is what we are called to do. I am not even sure we have a choice but to. (Even better than this either/or framing is the both/and—though it is perhaps a little more perilous.)
The one solace I have now, though, is this: AI is going to make it easier than ever for anyone to publish mediocre work. This is wonderful news. For the past 5–10 years, mediocrity has been the competitive advantage in the thought leadership arena. Mediocre ideas resonate with more people, are more familiar (less alien) and more likely to be accepted—and are thus more popular and profitable.
But AI has achieved mediocre human parity. Which raises the question: why hire a mediocre consultant/artist/speaker/manager/engineer/whatever when you can work with the AI to produce equivalent (or potentially better) results?
(Hoho maybe I just want everyone to come join me in this long existential crisis I have been enjoying, it’s fun!)
I asked the AI to come up with a LinkedIn profile for someone with no qualifications or relevant experience—but to make it seem like they are successful, hitting all the corporate buzzwords without being specific.
“As a dynamic and innovative thought leader, I specialize in driving positive change and fostering a culture of growth and success within organizations. With a strong focus on leadership and culture, I have a proven track record of helping businesses reach their full potential. My ability to think outside the box and challenge the status quo has allowed me to consistently deliver results and drive positive change. I am passionate about empowering others and helping organizations thrive in today's rapidly evolving business landscape. Let's connect and see how I can help your business reach new heights!” 🤢
Jokes aside, AI is already helping in very meaningful ways. The wonderful Aletheia—an artist I admire—shares an optimistic take on how AI can improve an artist’s creative process. Here’s an example, mid-thread.

![a ChatGPT-generated follow-up email asking for payment that is more forceful in tone. it says:
I am writing to follow up on my previous email requesting payment for invoice [number]. as of today, this invoice remains outstanding and I have not received a response from you.
I understand that things come up and sometimes payment can be delayed, but this invoice is now significantly past due. I would greatly appreciate it if you could settle the balance as soon as possible.
if you have any questions or concerns, please let me know. I would be happy to discuss them with you. However, I do expect to receive payment for the outstanding balance on this invoice in a timely manner.
thank you for your prompt attention to this matter,
best regards,
[your name]](https://substackcdn.com/image/fetch/w_600,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fpbs.substack.com%2Fmedia%2FFj_WruvVsAANTNp.png)
A friend of mine—who runs an enterprise working on new tech primitives—had the AI set to debug code and suggest modules. “It can 50x an engineer’s productivity”, he said. “It’s suggesting better ways to do things, or techniques we were unaware of, and it’s right. My CTO is in love. It’s way better than googling problems.”
He even genuinely asked it for advice in management domains. This is helpful and useful!
Thankfully I don’t manage a team—I simply advise other leaders on how to manage theirs (from the lofty safety of my ivory wizard tower). But I do enjoy having the AI provide me with humorous responses to the leading and loaded questions I feed it.
Questions like: “what do mediocre thought leaders talk about these days?”


Which brings me to my point: AI is scarily good at mediocre operations and increasingly good at technical operations that require expert knowledge. (I had Kim ask it some veterinary surgery questions for specific medications and breeds—and it did reasonably well.) What does this mean?
My selfish hope is that it’s going to be increasingly difficult to get away with mediocre ‘thought leadership’—but that it also means we can more efficaciously expand our collective noösphere (our shared expanse of knowledge), too. That, in concert with AI, we can somehow cultivate an ever-more vibrant knowledge commons.
This may be a pipe dream, though. In the interview I originally linked to, Emad Mostaque talks of his concerns that since Elon Musk’s multi-billion dollar donation, OpenAI seems to be becoming more closed. The ideal would be something truly decentralised, collectively owned, and open-sourced—but the infrastructure of DAOs (decentralised autonomous organisations) is not quite ready to support such endeavours (or so Emad claims). Instead, he intends to do an IPO “in every single country”. I worry that the nation state is not an appropriate coordination-container here, and would rather it be something more planetary.
We are witnessing the genesis of a trillion dollar industry, and I suspect Apple, Meta, and many others are already well advanced in their own development. I do not want to see an AI arms race, yet that may be what is already at play. I also do not want the majority of humanity to be priced out of accessing AI. Nor do I want ‘freemium’ models wherein those who own the platforms get all the power, profit and control (as is the case in web2). I guess we’ll see. The next version of Siri will be fun, though.
Where will you invest your attention in 2023?
At this time of the year I like to step back from things, so as to attain some distance and perspective. From this vantage, I tend to notice and clean up some of the entropy and life clutter that my passage through time seems to have had me accrete.
In terms of attention: I’m very much looking forward to a refresh and a reset. The town square of Twitter is now degraded, Meta still feels like a simulation where your every move is creepily watched and recorded, and the wax museum of LinkedIn remains as shallow, mawkish and cringe as ever. Substack will likely give in to the corruption of venture capital demanding a return on investment, but for now I appreciate that it is not yet terrible. I am genuinely considering moving my writing to Ghost, the independent not-for-profit open-source platform—but that’s a distraction for another day.
I intend to give more attention to the periphery/edge/fringe/penumbra. The scholars, shamans, tricksters, rogues and those whose perspective the AI cannot (yet) easily assimilate. I hope to synthesise such insight, and then—like some sort of metamodern public (pseudo-)intellectual—share what I accrue with you here, in The Museletter (honouring all sources, of course). From there you can contrast whatever dubious insight I might accrue with the (AI-)accepted convention and your own sensibilities, making of it all what you will.
It may be too early for you to know what quest you are upon, if any. And it may be too soon for you to have a gauge on what meaningful progress looks like for you, in the year ahead. If so, give it some time.
I personally feel I have had enough of a hiatus from the ritual of becoming that I can return to it afresh. I look forward to the reflection, introspection and projection the program offers, and to the hope of perhaps finding/fabricating new motivation, meaning and myth in the year ahead. (You can still join for free with the hidden code on this page; idk might delete this too one day.)
Finally, here’s something you might want to gift yourself for the year ahead—a 2023 wall calendar, as made by the dangerlam. 🥰

For years the dangerlam and I have been using a large calendar on our bedroom wall to capture gratitudes before bed each evening, and to track keystone habits (like exercise). I’ve loved this—but there were qualities to a wall planner I really craved.
Things like:
Minimalism (not the cold brutal kind—but the warm artful kind)
Faint, low-contrast lines (so you aren’t boxed in—you can cross boundaries)
A hand-drawn feel (so that our relationship to time is less clinical—more organic)
A place for qualitative theming (open boxes to note contextual intent)
The freedom to track progress in whatever way works for you (we use a star-sticker system and coloured dots)
A sense of flow (days and months in an intermittently-continuous continuum; like our experience of time and ‘the illusion of selfhood’ itself)
For the first time in a very long time, I find myself actually excited to Make Plans
—and I’d love for you to have this excitement, too.

A digital download is available to purchase on Kim’s website. Once acquired, you can have your local printer conjure your very own 2023 wall calendar in A1 or A2 format.
And on that note, I genuinely wish you a wonderful solstice, restorative quiescence and end-of-year revelries with those you love. Thank you, so very much, for staying with me as I navigate my own conflicting values and quandaries. Whilst I have never quite had a grasp on social media platforms, I have been writing The Museletter haphazardly for over a decade now. That eleven thousand of you are with me today (sans any marketing or ‘growth hacking’) is something of a marvel. I am so un-ironically hashtag blessed. (Genuinely, I am immensely grateful; thank you.) 🧡🦊🧙🏻‍♂️
Hope is a profitable optic. People will pay good money to be reassured that Everything Is Fine.
On this topic I commend to you Charles Eisenstein’s essay: Neither Hero nor Journey.
I should add that, whilst it is not something I have experienced, I have very smart and wise friends who have made tremendous leaps in their own development and perspective via psychedelics. And likewise, it is possible to experience developmental leaps via genuine STEM pathways (as David Chapman elucidates)—but even here the path still takes us through the nihilism of The Abyss. Is this a necessary threshold? Alas: I suspect so.
Kind of like that Ernest Hemingway passage from A Moveable Feast that converted me to oysters (hat tip JJ):—
“As I ate the oysters with their strong taste of the sea and their faint metallic taste that the cold white wine washed away, leaving only the sea taste and the succulent texture, and as I drank their cold liquid from each shell and washed it down with the crisp taste of the wine, I lost the empty feeling and began to be happy and to make plans.”