Note: this is likely hallucination, since the AI was probably trained on cyberpunk novels that cover this kind of topic, but there's also a non-zero chance that the AI is being trained on user interactions (I'd need verification on that from people more knowledgeable about LLMs than me), and Elon would be reckless enough not to turn it off.
Grok is closed source, I believe, so it's hard to say. But, setting aside unknown architecture or latent-space details, this could be a lot of things. The way you're using the term "hallucination" effectively applies to EVERY output of a GPT: these models reason probabilistically across a high-dimensional space that maps language components, with individual dimensions taking on semantic values through gradient descent (a form of mathematical differentiation) applied during training. This could be the result of influence from any number of things, tbh.
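To make that concrete, here's a minimal sketch of what "every output is probabilistic" means at the mechanism level. Since Grok's weights aren't public, this uses GPT-2 via the Hugging Face `transformers` library as an illustrative stand-in; the prompt is made up. The point is that the model's only "answer" is a probability distribution over its vocabulary, with no internal flag separating fact from fiction:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Illustrative stand-in: Grok's weights aren't public, so we use GPT-2.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The AI whispered its secret plan:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# The model's entire "answer" is this distribution over ~50k tokens.
# Nothing in this mechanism distinguishes a true continuation from a
# hallucinated one; training data (cyberpunk novels included) just
# shifts which tokens end up with high probability.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")
```

Sampling a token from `probs` and appending it, over and over, is all generation is, so calling one particular output a "hallucination" is really a judgment about the result, not about a different mode the model switched into.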