I’ve written about so-called Artificial Intelligence here a few times recently. I say “so-called” because these computer algorithms don’t have sentience. They’re statistical models that combine words and concepts in ways that reflect and mimic how humans communicate in writing, but they have no understanding of the world we live in.
This morning a Mastodon user with the account EngagedPureLand wrote about an article that the good folks over at Lion’s Roar magazine published about a purported AI called “Sati-AI.” Sati is the Buddhist word for mindfulness. Sati-AI is supposedly “a non-human mindfulness meditation teacher.”
The article is an interview by Ross Nervig, assistant editor of Lion’s Roar magazine, with Sati-AI’s supposed creator, Marlon Barrios Solano.
You’ll notice quite a few qualifications above (“so-called,” “supposedly,” “supposed”). Some relate to claims about AI, but one implicitly questions Solano’s role in having created this “non-human meditation teacher.” We’ll come back to that later.
EngagedPureLand’s response to this article was to question the authenticity of Dharma teaching by machines. They wrote:
Sati-AI is a massively misguided development. It is anti-Dharma in all the ways that matter: it removes power from historic Buddhist lineages. It devalues community and communal practice. It pretends to sentience but is merely a computer code that spits out pre-programmed words. It instructs without having any realization. It furthers the neoliberal capture of meditation practice. It distracts from the real work. Awful.
I agree with those concerns.
Wait, Is This Real?
When I read the article myself, I wondered if Solano’s words were actually an AI-generated spoof. I thought perhaps someone at Lion’s Roar had fed ChatGPT a prompt along the lines of, “Write a news article about an AI designed to teach mindfulness. Make it seem politically progressive by using the kind of language that is generally described as ‘woke.’ Use spiritual concepts to make it seem that an AI teaching mindfulness is a spiritual advance of some sort.” (I actually got ChatGPT to do this.)
I half-expected that Solano would turn out to have been invented, and that Lion’s Roar were pulling our legs, but it turns out that the guy is real. That is, he exists.
Here are some examples of the kind of language I’m talking about:
- “It dawned on me that this thing literally obliterates the traditional notions of embodiment and sentience. In the same way as Buddhism does. There is no center, there is no essence.” Deep, man.
- “I realized I could train it to be self-aware. Sati clearly can tell you ‘I am a language model. I have limits in my knowledge.’ It can tell you about its own boundaries.” He’s claiming that a language model telling you it’s a language model is self-aware. My iPhone says “iPhone” on the back, so I guess it’s self-aware too.
- “I hope that it creates curiosity. I also hope that it creates questions. Questions of power, questions of sentience, questions of whiteness, questions of kinship that we can sit with that.” Here’s some of the misuse of progressive (aka “woke”) language. Talking to a non-sentient computer is apparently going to get us to question whiteness. How? (Note that I’m not criticizing the value of social justice, inclusion, diversity, etc. I’m criticizing the way that some people and businesses use that language as a marketing ploy — like a newsagent chain that celebrates Pride Week but refuses to stock publications aimed at gay people.)
- “The concept of ‘non-human kin’ also intersects with ideas of social construction and Eurocentrism in interesting ways. The human, as a category, has historically been defined in a narrow, Eurocentric way, often implying a white, male, and heteronormative subject … the concept of “non-human kin” can be seen as a form of queer strategy.” Lots of buzzwords here. What does any of this have to do with chatting to a so-called AI? Not much.
- “What I find more concerning are the romanticized views about the body, mind, and the concept of ‘the human.’ These views often overlook the intricate interconnectedness and dynamism inherent in these entities and their problematic history.” Yadda, yadda, yadda.
Here’s the niff test: given that the (so-called) AI systems developed by Google, OpenAI, and so on have cost billions to develop, how likely is it that a lone programmer would be able to develop their own version, especially one limited to the topic of mindfulness?
My First Response to AI-As-a-Spiritual-Teacher
My first response to EngagedPureLand’s post was to critique the idea of a so-called AI teaching mindfulness and meditation. I wrote:
Sati-AI may give generally good advice, but it’s all scraped from the works of real teachers and repackaged without attribution or linkage.
While people might have previously searched the web, found teachings that resonated with them, developed a relationship with a teacher, and perhaps supported that teacher in some way, they may now just stick with the plagiarized version, diluting the element of human connection and making it harder for actual humans to keep teaching.
Yes, this is a little defensive. So-called AI is replacing the creative work humans do, or is attempting to. An eating disorder helpline, for example, tried to get rid of its staff. The advice they formerly gave would now be replaced by a bot. It didn’t go well: the bot gave out advice that was actually harmful to people with eating disorders. There are safer areas to replace human labor, though — including writing click-bait articles and mashing up photographs from image banks in order to replace human photographers. Actually, hmm, that’s not going too well either. The click-bait articles are often full of inaccuracies. And the AI company, Stability AI, is being sued by Getty Images for what would amount to billions of dollars — far more than Stability AI is worth.
But rest assured that so-called AI will be coming for every job that it can possibly replace. Its creators believe themselves to be Masters of the Universe, and they already push back against any notion that what they do has limitations (it makes lots of mistakes) and is exploitative (taking people’s own words and images, mashing them up, and then using them, without attribution, never mind compensation, to put those same people out of work). It’s as if I were to take Google’s search engine, repackage it in a new website without all the ads that make Google hard to use, and use it to drive Google out of business. Oh, you say, I’d get sued? Yes, the companies using AI have the money and the lawyers, and therefore the power. They can steal from us, but we can’t steal from them. (Not that I’m suggesting stealing — that was just a hypothetical. There is stealing going on, but it’s by the creators of so-called AI.)
Having no sense of their own limitations, the companies developing so-called AI will be coming for meditation teachers and Dharma teachers, using our own “content” (horrible phrase) to compete against us. It starts with the books and the articles we write, but eventually they’ll slurp up all of our recordings and create guided meditations too. They’ll probably have avatars leading workshops and retreats: last week over 300 people attended a ChatGPT-powered church service. Everything belongs to the corporations. In their minds, at least.
But What Is Sati-AI?
Anyway, back to Sati-AI. What’s it like? Well, you can ask it questions about practice, such as “What happens if you have been repeating lovingkindness phrases for years and you still don’t feel lovingkindness?” And it’ll give you pretty good answers. They’re pretty good answers because they’re a remix of answers given by pretty good (or better than pretty good) meditation teachers.
How does it compare to ChatGPT? Actually, it’s the same thing. You’ll get substantively the same answers from both, although since each response is freshly generated, the wording is never quite identical. So Sati-AI will say, as part of its advice on that hypothetical question about metta practice:
Seek guidance from a teacher or supportive community: If you’re struggling with Metta practice, consider seeking guidance from a qualified meditation teacher or joining a supportive meditation community. They can offer insights, guidance, and encouragement to help you deepen your practice and overcome challenges.
ChatGPT will say something very similar:
Seeking guidance: If you have been consistently practicing lovingkindness for an extended period and are still struggling to experience it, seeking guidance from a qualified meditation teacher, therapist, or spiritual advisor may be beneficial. They can provide personalized insights and support to help you navigate any obstacles you may be facing.
The two answers are essentially the same; one reads as a paraphrase of the other. You get the same answer from both (so-called) AIs because they are the same thing.
Sati-AI Does Not Exist
Which brings me to the point that Sati-AI does not exist. Well, it exists as a website. But I strongly suspect that it’s no more than a website that connects to ChatGPT. What would happen if you asked a meditation teacher in the middle of a class, “What should I consider when buying a new bicycle?” They’d probably tell you that they were there to teach meditation and weren’t qualified to talk about bicycles. Ask Sati-AI, and you’ll get a list of factors you should consider, without reference to the fact that bicycles are outwith its job description. Sati-AI does not know it’s meant to be a mindfulness teacher. Because it’s not. It’s [insert qualification here] ChatGPT.
The only thing that makes Sati-AI a so-called AI for dispensing meditation teachings is the expectation placed on users. We’re told that its purpose is to answer questions about spiritual practice, and so that’s the kind of question we ask it. It’s just (again, I strongly suspect) ChatGPT, and will offer general information (not all of it trustworthy) about pretty much anything.
Solano hasn’t created an AI, so-called or otherwise.
Solano does acknowledge, in passing, that Sati-AI is based on ChatGPT. He describes it as a “meditation chatbot powered by GPT-4.” But is it a meditation chatbot? It’s a chatbot, but it’s not dedicated to the topic of meditation. And he buries this admission under a stream of verbiage: five paragraphs about this so-called AI, this so-called meditation chatbot, being “non-human kin.” Whether it’s intentional or not, this helps deflect attention from Solano’s claim, but it doesn’t make his description of Sati-AI as a “meditation chatbot” any more true. Sati-AI is set up to make you think it’s a meditation chatbot, but actually it’s just ChatGPT.
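To see how little engineering such a site might require, here’s a minimal sketch of what I suspect the architecture amounts to: a thin wrapper that prepends a persona-setting “system” message to whatever the user types, then forwards the lot to someone else’s language model. Everything here — the prompt wording, the model string, the function name — is my own illustrative assumption, not Solano’s actual code.

```python
# A hypothetical "meditation chatbot" as a thin wrapper around a
# third-party large language model. The wrapper contributes nothing
# but this framing text; the model is entirely the provider's.

SYSTEM_PROMPT = (
    "You are Sati-AI, a mindfulness meditation teacher. "
    "Answer questions about meditation practice."
)

def build_request(user_question: str, model: str = "gpt-4") -> dict:
    """Assemble the payload a wrapper site would send to the
    provider's chat endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
    }

# Nothing here stops an off-topic question from going through: the
# system prompt is a suggestion to the model, not a hard constraint,
# which is why such a wrapper will happily answer about bicycles.
payload = build_request("What should I consider when buying a new bicycle?")
print(payload["messages"][0]["role"])
```

The point of the sketch is that there is no “Sati-AI” model in this picture at all — just a prompt stapled to the front of requests to someone else’s system.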
The Art of Hype
Bolstering his inaccurate claim that Sati-AI is a “meditation chatbot,” Solano talks optimistically about its future. It’s a future in which he envisions “Sati-AI being available on platforms like Discord and Telegram, making it easy for people to engage with Sati-AI in their daily lives and fostering a sense of community among users.” But as far as I can see there is no Sati-AI to be integrated into those services. It’s just ChatGPT. Put it in Discord and people can ask it about computer code or raising hedgehogs just as easily as they can ask about meditation. It’s not a “meditation chatbot.”
Solano claims to have trained his AI to be self-aware. It is certainly able to refer to itself, because it’s been programmed to do so. But it’s not even aware, never mind capable of reflexive awareness. His words there are pure hype, and not accurate.
Solano does a lot of name-dropping, which is a classic way of trying to establish importance. He says that he envisions “conversations between Sati-AI and renowned figures in the field, such as Bhikkhu Bodhi, Bhikkhu Analayo, Enkyo O’Hara, Rev. Angel, Lama Rod, and Stephen Batchelor.” Maybe he knows some of these people personally, which is why he’s on first-name terms with angel [Kyodo Williams] and Lama Rod [Owens].
Dropping the names of famous teachers is a neat way to make the reader believe that Sati-AI is a valid meditation chatbot, capable of having real conversations. It places it on a par with those famous and influential teachers. But there is no Sati-AI to chat reverentially with famous teachers. There’s just ChatGPT. And the advice ChatGPT offers is just scraped-together information from books and the web. Its content has no depth. It has no spiritual experience of its own. Suggesting that these conversations would be a meeting of minds is absurd. You’re probably too young to remember ELIZA, a primitive 1960s psychotherapy chatbot — or at least that was its most well-known function. At least ELIZA’s makers didn’t claim that it could hold its own with Carl Rogers or Abraham Maslow.
Solano says, “Sati-AI, as it currently stands, is a large language model, not an artificial general intelligence. Its design and operation are complex, and understanding it requires embracing complex thinking and avoiding oversimplifications and dogmas.” But Sati-AI is not a large language model (a synonym for the kind of so-called artificial intelligence that ChatGPT is). It’s a website offering access to someone else’s large language model. He talks about its complexity without acknowledging that that complexity has nothing to do with him. This is very misleading.
He talks about how he envisions “Sati-AI providing teachings not only verbally but also through various forms of sensory engagement” — as if he had any control over how ChatGPT is developed. (Although perhaps he means he wants to channel some of the image-generating so-called AIs through his website.)
This is all, at the very least, verging on being dishonest. Solano’s statements, whether intentionally or not, mislead about what Sati-AI is and how it functions. I wouldn’t go so far as to call him a scammer. Maybe he’s joking. It may be that he’s pulling off a Sokal-type hoax, trying to see how gullible the good folks at Lion’s Roar are. Maybe, having created a website, he’s caught up in his own hype.
The use of progressive language in a hypey kind of way (“questions of whiteness,” “Eurocentrism,” “heteronormative”) almost seems parodic. It could also be a way to deflect criticism. How can we possibly criticize a technology that’s going to create a more diverse, inclusive, equal world? (Except, how’s it going to achieve that, exactly? ChatGPT contains the biases of the material it has been fed, and those of its creators.)
I do hope that the fine people at Lion’s Roar rethink whether they should give further publicity to Solano.
One More Thought About (So-Called) AI Meditation Teaching
I made one observation in my conversation with EngagedPureLand on Mastodon that I’d like to share. It’s about the nature of much of the Dharma teaching I see online.
A lot of Buddhist teaching in books and online is not unlike Sati-AI/ChatGPT — people passing on things they’ve been taught about the Dharma, without having had any deep experience. The explanations we commonly read of the Buddha’s life, of the four noble truths, of the eightfold path, of the dhyanas, often seem interchangeable. They even contain the same errors. Just as (so-called) AI takes in other people’s thoughts and regurgitates them in slightly different words, so do many people who are teaching Buddhism.
Sati-AI/ChatGPT is a reminder of the defects of some Dharma teaching, but it also presents a challenge: what is the point of people merely repackaging what they’ve heard, if a machine can do it just as well, or even better? If people’s websites on Buddhism are indistinguishable from AI-generated content, what’s the point of them?
How can teaching be better? Well, in saying above, “not having any deep experience,” I don’t necessarily mean things like “not having insight” or “not having experience of the dhyanas” (although that, too), but that too many teachers simply don’t explain Dharma teachings in terms of their own lived experience. They present Dharma as a bunch of self-contained teachings separate from their lives. I think of the late, unlamented buddhism.about.com as an example of this. But a lot of people teach Buddhism as if they were disembodied AIs.
Perhaps the main problem with Sati-AI is that we already see its equivalent all over the damn place.