What kinds of problems do you think AI will eventually master? And when?
NEWS | 27 February 2026
My view is that an LLM is nothing but an insanely complicated indexing system, with no awareness of the meaning or reality of the words it uses, or of itself as an entity, and it cannot be intelligent. The fact that it usually answers questions with statements that sound reasonable, or are even correct, is testimony to the extent of the indexing! The fact that it seems to act and speak as if it WERE a being with an agenda, but no morality, worries me in several ways. It even makes me rethink my own definition of conscious intelligence. That is also worrisome. But fun anyway.
Author: Joseph Howlett.