Interesting to read these different philosophical takes, some of which I was not familiar with. As a critique of AI, though, I don’t find a recapitulation of this type of thinking all that persuasive, much as I agree with the proposition that AI has limitations that place a hard limit on what it can achieve.

Many different thinkers have looked at the phenomenology of first-person experience, and each has sliced and diced it into different concepts. These philosophical categories may be personally illuminating or rich, but they also seem idiosyncratic — each philosopher creates a new conceptual system — and therefore they don’t persuade me that they are carving reality at its joints, as it were. The most we can take from all this thought, as far as AI is concerned, is that AI lacks a rich direct experience of the physical world. This is true, and important. But does that in itself imply that AI cannot achieve problem-solving abilities equal to or greater than humans’? There is a distinction between intelligence and consciousness.

Could AI extract patterns of logical problem solving that will allow it to solve difficult conceptual problems that humans can’t? I think that is highly possible, even in the absence of consciousness, especially since it already exceeds me in coding ability in many ways (and I’m a principal software engineer). Would such focused super-intelligence constitute AGI? I don’t know, but the social impacts would be dramatic nonetheless. Whether it experiences what Peirce called Firstness may then not matter all that much.