Regarding the introduction: "I think you could answer this concern by saying that those 'reasonable grounds' aren't applicable to AI systems. We are morphologically fundamentally different."

I don't think that "being morphologically different" is a good reason to conclude that sentience could not apply to AI systems. Such an answer would be theory-laden, in the sense that it already presupposes an answer to the question of the ontological constitution of consciousness.

Even though the physical processes in our brains seem to be necessary for consciousness in our case, that might not hold for other substrates that could constitute consciousness, such as AI systems.

If, in turn, the constitution of consciousness turns out to be morphologically bound to brains, then you are entirely right in arguing that we have no reasonable grounds to ascribe sentience to an AI system.

This is an interesting point in any case. I think I will write a post on this problem. There seems to be a tendency to ascribe something special to the neurobiological substrate, simply because it looks as if only that substrate can create consciousness.