Thoughts

mental health break
> The autoregressive nature won't allow the LLMs to create an internally consistent model before answering the question. This is key.
Informally, there's no way for the model to "back up" its train of thought or to take several logical steps before outputting the first token; its working memory is extremely limited. This is the same reason it can't play Akinator / 20 Questions well.
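The commit-as-you-go nature of decoding can be sketched like this (a toy illustration, not a real model; `next_token` is a hypothetical stand-in for a language model's next-token choice):

```python
def next_token(context):
    # Hypothetical stand-in for an LM's next-token distribution:
    # here it just emits a fixed continuation, one token at a time.
    continuation = ["The", "answer", "is", "42", "."]
    return continuation[len(context)] if len(context) < len(continuation) else None

def autoregressive_decode():
    tokens = []
    while True:
        tok = next_token(tokens)
        if tok is None:
            break
        # Each token is appended permanently: there is no mechanism to
        # "back up" and reconsider tokens[0] after seeing how the
        # sentence ends, and no planning step before the first token.
        tokens.append(tok)
    return tokens

print(autoregressive_decode())
```

The point of the sketch is the loop structure itself: every emitted token immediately becomes fixed context for the next step, so any internally consistent "model of the answer" has to be built token by token, in public, rather than before the first token is produced.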
10:41 p.m. Dec 07, 2023 UTC-5