My friend fdmts wrote a short philosophical essay on AI yesterday. It's interesting stuff. One of the phrases he used got me reflecting on how we assess intelligence and self-awareness, of which the Turing test is probably the most notorious example.
Quoting from my comment to his post:
[I'd propose] an interesting inversion of the Turing test: perhaps we could consider a _computer_ to be intelligent and self-aware when it can reliably tell the difference between a person and a computer pretending to be a person . . . I'd argue that intelligent, self-aware beings are the only judges that intelligent, self-aware beings will accept as to what constitutes an intelligent, self-aware being.
Personally, I'm inclined not to try very hard to define "intelligence" as a binary proposition, e.g., "A is intelligent and B is not"; that implies a specific choice of threshold that's really hard to define meaningfully. Long discussions on this have, at least for the time being, brought me around to the conclusion that I'm much more comfortable with relative statements: "A is more intelligent than B in the following sense..." (And "self-aware"? Fugeddaboudit.)
*ponder*
Date: 28 January 2006 15:45 (UTC)