Is Sati-AI, the “non-human mindfulness meditation teacher,” even a real thing?

I’ve written about so-called Artificial Intelligence here a few times recently. I say “so-called” because these computer algorithms don’t have sentience. They’re statistical models that combine words and concepts in ways that reflect and mimic how humans communicate in writing, but they have no understanding of the world we live in.
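The "statistical model" point can be made concrete with a toy example. The sketch below (illustrative only; real language models are vastly more complex) builds a table of word-pairs from a tiny corpus and then "writes" by recombining words according to which pairs it has seen. It mimics the corpus's patterns without any understanding of what the words mean:

```python
import random
from collections import defaultdict

# A toy "language model": record which word follows which in a tiny corpus.
corpus = "mindfulness of breathing calms the mind and the mind settles".split()

bigrams = defaultdict(list)
for first, second in zip(corpus, corpus[1:]):
    bigrams[first].append(second)

def generate(start, length, seed=0):
    """Recombine words purely from observed word-pairs. No meaning involved."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = bigrams.get(words[-1])
        if not followers:
            break  # dead end: this word never had a successor in the corpus
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the", 6))
```

The output will always look corpus-like, because it can only ever echo word sequences it has absorbed. Scaled up by many orders of magnitude, that is the mechanism behind the fluent text these systems produce.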

This morning a Mastodon user with the account EngagedPureLand wrote about an article that the good folks over at Lion’s Roar magazine published about a purported AI called “Sati-AI.” Sati is the Pali word for mindfulness. Sati-AI is supposedly “a non-human mindfulness meditation teacher.”

The article is an interview by Ross Nervig, assistant editor of Lion’s Roar magazine, with Sati-AI’s supposed creator, Marlon Barrios Solano.

You’ll notice quite a few qualifications above (“so-called,” “supposedly,” “supposed”). Some relate to claims about AI, but one implicitly questions Solano’s role in having created this “non-human meditation teacher.” We’ll come back to that later.

EngagedPureLand’s response to this article was to question the authenticity of Dharma teaching by machines. They wrote:

Sati-AI is a massively misguided development. It is anti-Dharma in all the ways that matter: it removes power from historic Buddhist lineages. It devalues community and communal practice. It pretends to sentience but is merely a computer code that spits out pre-programmed words. It instructs without having any realization. It furthers the neoliberal capture of meditation practice. It distracts from the real work. Awful.

I agree with those concerns.

Wait, Is This Real?

When I read the article myself, I wondered if Solano’s words were actually an AI-generated spoof. I thought perhaps someone at Lion’s Roar had fed ChatGPT a prompt along the lines of, “Write a news article about an AI designed to teach mindfulness. Make it seem politically progressive by using the kind of language that is generally described as ‘woke.’ Use spiritual concepts to make it seem that an AI teaching mindfulness is a spiritual advance of some sort.” (I actually got ChatGPT to do this.)

I half-expected that Solano would turn out to have been invented, and that Lion’s Roar were pulling our legs, but it turns out that the guy is real. That is, he exists.

Here are some examples of the kind of language I’m talking about:

  • “It dawned on me that this thing literally obliterates the traditional notions of embodiment and sentience. In the same way as Buddhism does. There is no center, there is no essence.” Deep, man.
  • “I realized I could train it to be self-aware. Sati clearly can tell you ‘I am a language model. I have limits in my knowledge.’ It can tell you about its own boundaries.” He’s claiming that a language model telling you it’s a language model is self-aware. My iPhone says “iPhone” on the back, so I guess it’s self-aware too.
  • “I hope that it creates curiosity. I also hope that it creates questions. Questions of power, questions of sentience, questions of whiteness, questions of kinship that we can sit with that.” Here’s some of the misuse of progressive (aka “woke”) language. Talking to a non-sentient computer is apparently going to get us to question whiteness. How? (Note that I’m not criticizing the value of social justice, inclusion, diversity, etc. I’m criticizing the way that some people and businesses use that language as a marketing ploy — like a newsagent chain that celebrates Pride Week but refuses to stock publications aimed at gay people.)
  • “The concept of ‘non-human kin’ also intersects with ideas of social construction and Eurocentrism in interesting ways. The human, as a category, has historically been defined in a narrow, Eurocentric way, often implying a white, male, and heteronormative subject … the concept of “non-human kin” can be seen as a form of queer strategy.” Lots of buzzwords here. What does any of this have to do with chatting to a so-called AI? Not much.
  • “What I find more concerning are the romanticized views about the body, mind, and the concept of ‘the human.’ These views often overlook the intricate interconnectedness and dynamism inherent in these entities and their problematic history.” Yadda, yadda, yadda.

Here’s the niff test: given that (so-called) AI systems developed by Google, OpenAI, and so on have cost billions to develop, how likely is it that a lone programmer could develop their own version? Especially one limited to the topic of mindfulness?

My First Response to AI-As-a-Spiritual-Teacher

My first response to EngagedPureLand’s post was to critique the idea of a so-called AI teaching mindfulness and meditation. I wrote:

Sati-AI may give generally good advice, but it’s all scraped from the works of real teachers and repackaged without attribution or linkage.

While people might have previously searched the web, found teachings that resonated with them, developed a relationship with a teacher, and perhaps supported that teacher in some way, they may now just stick with the plagiarized version, diluting the element of human connection and making it harder for actual humans to keep teaching.

Yes, this is a little defensive. So-called AI is replacing the creative work humans do, or is attempting to. An eating disorder helpline, for example, tried to get rid of its staff. The advice they formerly gave would now be replaced by a bot. It didn’t go well: the bot gave out advice that was actually harmful to people with eating disorders. There are safer areas to replace human labor, though — including writing click-bait articles and mashing up photographs from image banks in order to replace human photographers. Actually, hmm, that’s not going too well either. The click-bait articles are often full of inaccuracies. And the AI company, Stability AI, is being sued by Getty Images for what would amount to billions of dollars — far more than Stability AI is worth.

But rest assured that so-called AI will be coming for every job that it can possibly replace. Its creators believe themselves to be Masters of the Universe, and they already push back against any notion that what they do has limitations (it makes lots of mistakes) and is exploitative (taking people’s own words and images, mashing them up, and then using them, without attribution, never mind compensation, to put those same people out of work). It’s as if I were to take Google’s search engine, repackage it in a new website without all the ads that make Google hard to use, and use it to drive Google out of business. Oh, you say, I’d get sued? Yes, the companies using AI have the money and the lawyers, and therefore the power. They can steal from us, but we can’t steal from them. (Not that I’m suggesting stealing — that was just a hypothetical. There is stealing going on, but it’s by the creators of so-called AI.)

Having no sense of their own limitations, the companies developing so-called AI will be coming for meditation teachers and Dharma teachers, using our own “content” (horrible phrase) to compete against us. It starts with the books and the articles we write, but eventually they’ll slurp up all of our recordings and create guided meditations too. They’ll probably have avatars leading workshops and retreats: last week over 300 people attended a ChatGPT-powered church service. Everything belongs to the corporations. In their minds, at least.

But What Is Sati-AI?

Anyway, back to Sati-AI. What’s it like? Well, you can ask it questions about practice, such as “What happens if you have been repeating lovingkindness phrases for years and you still don’t feel lovingkindness?” And it’ll give you pretty good answers. They’re pretty good answers because they’re a remix of answers given by pretty good (or better than pretty good) meditation teachers.

How does it compare to ChatGPT? Actually, it’s exactly the same. You’ll get the same answers from both, although the wording, being a rehash, is never quite the same. So Sati-AI will say, as part of its advice on that hypothetical question about metta practice:

Seek guidance from a teacher or supportive community: If you’re struggling with Metta practice, consider seeking guidance from a qualified meditation teacher or joining a supportive meditation community. They can offer insights, guidance, and encouragement to help you deepen your practice and overcome challenges.

ChatGPT will say something very similar:

Seeking guidance: If you have been consistently practicing lovingkindness for an extended period and are still struggling to experience it, seeking guidance from a qualified meditation teacher, therapist, or spiritual advisor may be beneficial. They can provide personalized insights and support to help you navigate any obstacles you may be facing.

The two answers are the same; one is a paraphrase of the other. You get the same answer from both (so-called) AIs because they are both the same thing.

Sati-AI Does Not Exist

Which brings me to the point that Sati-AI does not exist. Well, it exists as a website. But I strongly suspect that it’s no more than a website that connects to ChatGPT. What would happen if you asked a meditation teacher in the middle of a class, “What should I consider when buying a new bicycle?” They’d probably tell you that they were there to teach meditation and weren’t qualified to talk about bicycles. Ask Sati-AI, and you’ll get a list of factors you should consider, without reference to the fact that bicycles are outwith its job description. Sati-AI does not know it’s meant to be a mindfulness teacher. Because it’s not. It’s [insert qualification here] ChatGPT.
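To show how little “creation” such a site can involve, here’s a hypothetical sketch (the prompt text and function names are my inventions, not anything Solano has published) of a “meditation chatbot” built as a thin wrapper around someone else’s general-purpose model. The only meditation-specific part is a single instruction string:

```python
# Hypothetical sketch: a "meditation chatbot" as a thin wrapper around a
# general-purpose language model. The meditation framing lives entirely in
# the system prompt; the underlying model will still answer anything.

SYSTEM_PROMPT = (
    "You are a mindfulness meditation teacher. "
    "Answer questions about spiritual practice."
)

def build_request(user_question: str) -> list:
    """Package a question in the messages format typical of chat-model APIs."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

# The wrapper can't stop off-topic questions: the framing is expectation, not code.
request = build_request("What should I consider when buying a new bicycle?")
```

Pass that request to any chat model and you’ll get bicycle-buying advice, because nothing in the wrapper knows or cares what a mindfulness teacher is qualified to discuss.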

The only thing that makes Sati-AI a so-called AI for dispensing meditation teachings is the expectation placed on users. We’re told that its purpose is to answer questions about spiritual practice, and so that’s the kind of question we ask it. It’s just (again, I strongly suspect) ChatGPT, and will offer general information (not all of it trustworthy) about pretty much anything.

Solano hasn’t created an AI, so-called or otherwise.

Solano does acknowledge, in passing, that Sati-AI is based on ChatGPT. He describes it as a “meditation chatbot powered by GPT-4.” But it’s not a meditation chatbot. It’s a chatbot, and one that isn’t dedicated to the topic of meditation. And he buries this half-true admission under a stream of verbiage — five paragraphs about this so-called AI, so-called meditation chatbot being “non-human kin.” Whether it’s intentional or not, this helps deflect attention from Solano’s claim, but it doesn’t make his statement about Sati-AI being a “meditation chatbot” any more true. Sati-AI is set up to make you think it’s a meditation chatbot, but actually it’s just ChatGPT.

The Art of Hype

Bolstering his inaccurate claim that Sati-AI is a “meditation chatbot,” Solano talks optimistically about its future. It’s a future in which he envisions “Sati-AI being available on platforms like Discord and Telegram, making it easy for people to engage with Sati-AI in their daily lives and fostering a sense of community among users.” But as far as I can see there is no Sati-AI to be integrated into those services. It’s just ChatGPT. Put it in Discord and people can ask it about computer code or raising hedgehogs just as easily as they can ask about meditation. It’s not a “meditation chatbot.”

Solano claims to have trained his AI to be self-aware. It is certainly able to refer to itself, because it’s been programmed to do so. But it’s not even aware, never mind capable of reflexive awareness. His words there are pure hype, and not accurate.

Solano does a lot of name-dropping, which is a classic way of trying to establish importance. He says that he envisions “conversations between Sati-AI and renowned figures in the field, such as Bhikkhu Bodhi, Bhikkhu Analayo, Enkyo O’Hara, Rev. Angel, Lama Rod, and Stephen Batchelor.” Maybe he knows some of these people personally, which is why he’s on first name terms with angel [Kyodo Williams] and Lama Rod [Owens].

Dropping the names of famous teachers is a neat way to make the reader believe that Sati-AI is a valid meditation chatbot, capable of having real conversations. It places it on a par with those famous and influential teachers. But there is no Sati-AI to chat reverentially with famous teachers. There’s just ChatGPT. And the advice ChatGPT offers is just scraped-together information from books and the web. Its content has no depth. It has no spiritual experience of its own. Suggesting that these conversations would be a meeting of minds is absurd. You’re probably too young to remember ELIZA, a primitive 1960s chatbot whose best-known persona was that of a psychotherapist. At least ELIZA’s makers didn’t claim that it could hold its own with Carl Rogers or Abraham Maslow.

Solano says, “Sati-AI, as it currently stands, is a large language model, not an artificial general intelligence. Its design and operation are complex, and understanding it requires embracing complex thinking and avoiding oversimplifications and dogmas.” But Sati-AI is not a large language model (the kind of so-called artificial intelligence that ChatGPT is). It’s a website offering access to someone else’s large language model. He talks about its complexity without acknowledging that that complexity has nothing to do with him. This is very misleading.

He talks about how he envisions “Sati-AI providing teachings not only verbally but also through various forms of sensory engagement” — as if he had any control over how ChatGPT is developed. (Although perhaps he means he wants to channel some of the image-generating so-called AIs through his website.)

This is all, at the very least, verging on being dishonest. Solano’s statements, whether intentionally or not, mislead about what Sati-AI is and how it functions. I wouldn’t go so far as to call him a scammer. Maybe he’s joking. It may be that he’s pulling off a Sokal-type hoax, trying to see how gullible the good folks at Lion’s Roar are. Maybe, having created a website, he’s caught up in his own hype.

The use of progressive language in a hypey kind of way (“questions of whiteness,” “Eurocentrism,” “heteronormative”) almost seems parodic. It could also be a way to deflect criticism. How can we possibly criticize a technology that’s going to create a more diverse, inclusive, equal world? (Except, how’s it going to achieve that, exactly? ChatGPT contains the biases of the material it has been fed, and those of its creators.)

I do hope that the fine people at Lion’s Roar rethink whether they should give further publicity to Solano.

One More Thought About (So-Called) AI Meditation Teaching

I made one observation in my conversation with EngagedPureLand on Mastodon that I’d like to share. It’s about the nature of much of the Dharma teaching I see online.

A lot of Buddhist teaching in books and online is not unlike Sati-AI/ChatGPT — people passing on things they’ve been taught about the Dharma, without having had any deep experience. The explanations we commonly read of the Buddha’s life, of the four noble truths, of the eightfold path, of the dhyanas, often seem interchangeable. They even contain the same errors. Just as (so-called) AI takes in other people’s thoughts and regurgitates them in slightly different words, so do many people who are teaching Buddhism.

Sati-AI/ChatGPT is a reminder of the defects of some Dharma teaching, but it also presents a challenge: what is the point of people merely repackaging what they’ve heard, if a machine can do it just as well, or even better? If people’s websites on Buddhism are indistinguishable from AI-generated content, what’s the point of them?

How can teaching be better? Well, in saying above, “not having any deep experience” I don’t necessarily mean things like “not having insight” or “not having experience of the dhyanas” (although that, too), but that too many teachers simply don’t explain Dharma teachings in terms of their own lived experience. They present Dharma as a bunch of self-contained teachings separate from their lives. I think of the late, unlamented buddhism.about.com as an example of this. But a lot of people teach Buddhism as if they were disembodied AIs.

Perhaps the main problem with Sati-AI is that we already see its equivalent all over the damn place.


6 Comments

  • Maybe Sati could make an interesting art exhibit, like Solano proposed. Other than that I’ll stick to humans to teach me about meditation and Buddhism

    • There is no Sati to turn into an art exhibit. It’s just ChatGPT. If you put it in an exhibition, people will be encouraged to feed it questions about spiritual practice, because that’s what they’re told it’s designed to do. Emily Bender, director of the University of Washington Computational Linguistics Laboratory, uses the analogy of a Magic 8 Ball. It gives answers like yes, no, maybe, ask me later. This conditions us to ask yes/no questions. Similarly, Solano framing ChatGPT as an artificial mindfulness instructor conditions us to ask questions about spiritual practice.

Since it’s actually ChatGPT, you can ask it anything: investment advice, how to solve a particular coding problem, to provide biographical data on a historical figure, what the causes of the First World War were, and so on.

The framing is a hoax. The question is whether Solano is hoaxing as some kind of joke, or whether he’s doing it in order to inflate people’s opinion of him. Either way, he’s not to be trusted.

  • Dear Bodhipaksa,

    Your reaction to this phenomenon I sense is worried when perhaps it should not be. I imagine you access books which are also collected, ‘scraped-together’ from the storehouse of human knowledge. I point to Julia Kristeva’s quote on this matter: “Any text is a construction of quotations. Any text is the absorption and transformation of another.” (from her essay, Word, Dialogue, and Novel in Desire in Language, p. 66).

    Good advice, if indeed it is good advice, remains good wherever it may come from. The question becomes ‘Is it reliable, useful, worthy of our attention and respect?’ This could be a watchword for all seekers of truth and enlightenment, when approaching any source of teaching, for it is not what it is, but what we make of it.

    My comment is not to correct but to clarify. AI is an extension of our intelligence no more, no less. Humans created it and humans will use it. Like a bomb, we created it and we have and will use it. Let us be sober about this. Should we really care if AI is sentient? It certainly has power, and this we should be wary of, and proceed individually and collectively as best we can.

    • Hi, Steven.

      You say we should “be wary” and “proceed individually and collectively as best we can.” I’m glad we’re in agreement with that.

      You suggest that we need to be “sober” about so-called artificial intelligence. That’s exactly what I’m advising. The most ardent fans of AI are those who I think are most in need of sobriety. For example, we’ve had lawyers using AI to put together motions to a judge, only to find that the AI has invented precedents and even entire court cases. We’ve had an eating disorder charity replace its counselors with an AI chatbot, only to find that the AI offered harmful advice. People have been arrested because police officers have more faith in the AI facial recognition systems they use than in their own ability to question people and find out whether they could possibly have committed the crimes they’ve been rounded up for. (And in every one of those cases the person arrested has been black.)

      Oh, and last week Google was assuring people that there are no countries in Africa that begin with the letter K. Their stock price has been affected in the past by such AI-generated nonsense, and possibly was again.

      Given how people are so uncritically embracing so-called AI, I do think some soberness is in order.

      Intoxication — the opposite of soberness — results in a lack of clear thinking. A good example of that, I believe, is your conflation of human creativity and the mix-and-match that AI does.

      To suggest that writing (and by extension, human communication and understanding) is nothing more than a remixing and regurgitating of earlier information is grossly inaccurate. For one thing, it rules out the possibility of insights such as the Buddha’s. Yes, the language and terminology he used preexisted him to a large extent. But he made leaps of insight based on his direct experience as a human being. I’ve written books myself, and know the difference between merely rephrasing others’ insights and having my own.

      An AI may sometimes offer helpful advice, but it’s not the AI’s advice. It’s simply an auto-generated and rephrased summary of what others have said. It can’t add anything new. It can’t answer a question that it hasn’t seen answered before, although it’ll try, and then you’ll get the nonsense output that we’re all familiar with — which the AI companies call “hallucination” in order to manipulate us into thinking that their systems are genuinely conscious.

Remixing and regurgitating is what LLMs do. They’ve been fed vast quantities of information, and through clever statistical analysis they recombine words in ways that appear meaningful to humans. They don’t add anything. They do not think. They don’t have experience. Remaining aware of that is necessary if we’re to be “sober.” If we do that, then we don’t even ask questions such as your “Should we really care if AI is sentient?”, because we know it isn’t. Even asking the question is misleading about the nature of LLMs.

      The false mystique surrounding AI leads to situations like the one I described in this article, where people are misled into believing that an “AI spiritual teacher” has been created, when in fact that’s not the case. I’m describing a scam. We need to be sober to avoid hype and avoid being scammed.

  • Interesting that you did not respond to my post and deleted it. You do not want to engage in debate on this matter. Your mind is closed.

I can see how you might imagine that your post had been deleted. In fact I’ve been busy moving house and dealing with all the complications that involves, while also working and bringing up a family. Sometimes I can’t respond to comments promptly. I prefer to leave comments I intend to reply to unpublished. My list of unpublished comments becomes my to-do list. If I publish them, I forget they exist.

