Thoughts
Hofstadter's problem with regard to AI is that he thought there would be one jump from unintelligent computers to intelligent computers,
and instead we've had several small jumps. We're not 100% at intelligence, but we're not where we were either.
Hofstadter thought that we would invent a method of modeling complex thought, and that that computer would be the first computer to master chess, the first computer to pass the Turing test, and the first computer to be conscious. And we've effectively conquered chess: we have computers that play better than any human, which wasn't the case when GEB was written. And we have computers (GPT-3 and other LLMs) that can generate text at a human level (I'm being careful with my wording here, but text generated by GPT-4 is at an adult human's writing level). And that wasn't the case when GEB was written. (That wasn't the case 3 years ago!)
But Hofstadter thought that those problems and AGI (as we would now call it) were the same problem, and it's very clear now that they are not.