Maybe that’s intelligence. I don’t know. Brains, you know?
It’s not. It’s reflecting its training material. LLMs and other generative AI approaches lack a model of the world, which is obvious from the mistakes they make.
Tabula rasa, piss and cum and saliva soaking into a mattress. It’s all training data and fallibility. Put it together and what have you got? (Bibbidi-bobbidi-boo.) You know what I’m saying?
Magical thinking?
Okay, now you’re definitely projecting poo-flicking, as I said literally nothing in my last comment. It was nonsense. But I bet you don’t think I’m an LLM.

You could say our brain does the same. It just trains in real time and has much better hardware.
What are we doing but applying things we’ve already learnt that are encoded in our neurons? They aren’t called neural networks for nothing.
You could say that, but you’d be wrong.