=> https://mobile.twitter.com/3blue1brown/status/1599200613488676866 Makes me wonder if the goal is to create a generally-intelligent AI or
if the goal is to pass the Turing test. Hofstadter and Turing and Asimov understandably equate the two, since computers in their time were dumb compared to humans. Right now it seems like our current AI is neither smarter nor dumber than a human. ChatGPT knows more trivia than any Jeopardy contestant, and can also instantly write hundreds of words of prose or computer code. Sometimes it’s too literal and sometimes it’s too trusting and sometimes it’s not creative enough. But those things are knobs that we’re trying to tune to match a human. Like, the right amount of trust is the amount of trust a human would have. But human trust is a function of our entire lives and every interaction we’ve had. It’s a dumb goal to make an AI assistant that is as gullible as a human. And yet, how else would you define intelligence other than the ability to communicate and understand conversations with humans? The computer has *no* inherent intelligence; we as humans have to be the ones defining intelligence.
I have a recurring dream where I can hold a large exercise ball and jump and kick my feet like I’m swimming, and fly.
A couple of quick thoughts about Hofstadter's AI predictions.
He makes three predictions on page 678 of GEB that I want to talk about. This was written in '79, before computers had beaten humans at chess. Hofstadter predicts that computers will not beat humans at chess until computers achieve general intelligence. General intelligence is a milestone in AI referring to an AI program that mimics the level of sentience of a human. It's not been trained to do one task, but it is generally intelligent enough that it can learn anything that a human can learn. This also implies the ability to formulate distinct sub-goals. So more than just being able to pass a Turing test or carry on a conversation, a general AI is able to, seemingly, create goals and desires for itself, and presumably express them. Hofstadter makes the distinction between algorithmic thought and pattern-recognizing thought. He claims that chess can't be beaten without pattern-recognizing thought. This prediction was wrong: computers were able to beat humans at chess using purely algorithmic solutions and a lot of computing power, and only in the last year or two have chess engines started to incorporate machine learning. However, the point Hofstadter is making holds very true for Go. Go is open-ended enough that pure algorithms can't solve it, and you need a computer that has learned to recognize patterns and apply them in a creative way. In this sense, Hofstadter is right that there is a level of intelligence above straightforward algorithms. On the other hand, Hofstadter is wrong that this pattern-recognition is enough to create general intelligence. A computer that can win at Go has one of the ingredients of general intelligence—the ability to recognize patterns and respond in ways that appear more intelligent than even a human. It's like we've sliced diagonally through a problem that Hofstadter thought of as linear. We're still no closer to creating an AI that has a will, or is capable of being bored of playing chess.
I can’t believe there’s an XKCD about GNU Info (912).
Decided I'm going to do AoC today in Darklang.
They should invent a third foot.
Updated self-diagnosis: I'm allergic to nuts and I have tonsillitis.
Something about the similarities between Plan 9's ACME and Emacs.
Phantom of the Opera extended its closing date! Wired
I think the single most obscure part of my computer use is the process in which I open the app formerly known as iTunes to the list of all
songs, hit command+a then command+c to copy all the songs, and then paste them into Google Sheets, so that I can run SQL queries on my music library.
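For what it’s worth, the "SQL" part is Google Sheets' QUERY function. A sketch of the kind of query I mean, assuming the pasted data lands in columns A through E with song name in column A and artist in column B (the actual column order depends on how your library view is set up):

```
=QUERY(A:E, "select B, count(A) where A is not null group by B order by count(A) desc", 1)
```

That would give a per-artist song count, sorted by who dominates the library.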