
Is Seth Godin right – are we as devoid of self as ChatGPT?

May 19, 2023

Seth Godin, back with another banger on his 7,794th consecutive day of publishing: https://seths.blog/2023/05/our-homunculus-is-showing/

This is an unusually weighty and opinionated Seth Godin post. If you like cognitive science, advanced AI, Talmudic neo-Platonism, Jungian philosophy, and Buddhist thinking (let's just call it non-dualism), then you'll want to read this. What follows below is my LinkedIn comment on it, verbatim:

I loved this nuanced take on AI; it's neither for nor against it. Speaking for myself, I crave that right now, as I am just trying to understand this better.

One thing I get out of this is that AI should enhance our thinking but cannot think for us.

Also, that we can incorporate AI into our work, our products, or our office tools, but we have to be aware that there's no thinking person doing any work – and if we make products or works, we have to be transparent with our users and audience about this.

There's also a spiritual/philosophical angle in this article that I am not sure how I feel about. The idea seems to be that we anthropomorphize an ego onto AI because we cling to our own ego. That I get, but does that mean our ego/self and free will don't exist?

_"We're simply code, all the way down, just like ChatGPT._

_It's not that we're now discovering a new sort of magic. It's that the old sort of magic was always an illusion."_

Is that true?

I'm agnostic on this point, but it's interesting, and it could influence how I integrate gen AI into the products I work with.

If both we and AI are just code all the way down, do we just merge our codebases?

(This was originally published on Art of Message – subscribe here)