You’re being pedantic, and confidently ignorant. The product is called “ChatGPT”, and through it you can access multiple models, like GPT-3.5 or DALL-E.
ChatGPT is just a front-end that maintains a session which gets fed to an LLM each time you add a reply, and it now has access to image generation too, so I was wrong.
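For what it’s worth, that “session that gets fed to an LLM each time” pattern looks roughly like this. A minimal Python sketch, where `call_llm` is a hypothetical stand-in for whatever chat-completion endpoint actually sits behind ChatGPT, not its real code:

```python
def call_llm(messages: list[dict]) -> str:
    """Placeholder: pretend this sends the whole history to an LLM."""
    return f"(model reply to: {messages[-1]['content']})"

# The front-end keeps the session; the model itself is stateless.
session: list[dict] = [
    {"role": "system", "content": "You are a helpful assistant."}
]

def add_reply(user_text: str) -> str:
    # Each new user turn is appended, then the *entire* history
    # is sent to the model again.
    session.append({"role": "user", "content": user_text})
    answer = call_llm(session)
    session.append({"role": "assistant", "content": answer})
    return answer

print(add_reply("Draw me a cat?"))
print(add_reply("Make it orange."))  # the history now contains both turns
```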
I beg to differ.
The LLM is executing a function call against a diffusion image model; the LLM does not generate the image itself.
This doesn’t contradict what the OP said. ChatGPT is now an interface to both an LLM and a diffusion-based image generator.
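That “executing a function on a diffusion model” step is essentially tool calling: the LLM emits a structured request, and a separate image model fulfils it. A minimal sketch of the handoff, with made-up names (`llm_decide`, `generate_image`) standing in for the real services:

```python
import json

def llm_decide(prompt: str) -> dict:
    """Placeholder LLM step: returns a structured tool call, not an image."""
    return {"tool": "image_gen",
            "arguments": {"prompt": prompt, "size": "1024x1024"}}

def generate_image(prompt: str, size: str) -> bytes:
    """Placeholder diffusion model: this is where the pixels come from."""
    return b"...image bytes..."

def handle_turn(user_prompt: str) -> bytes:
    call = llm_decide(user_prompt)      # the LLM outputs a function call
    if call["tool"] == "image_gen":
        args = call["arguments"]
        return generate_image(**args)   # the diffusion model does the drawing
    raise ValueError(f"unexpected tool: {json.dumps(call)}")

image = handle_turn("an orange cat")
```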
I mean, the GPT model is an LLM, and ChatGPT uses DALL-E in the background to create images. So depending on definition, you’re both correct :-)
Depending on how I define anything, I’m always correct, I guess. 🤷♂️
Yeah, but the model that does the images is actually DALL-E; you’re just using GPT’s interface to create them.
So, I’m using ChatGPT.
Thank you for agreeing with me.
Sure, sure, I wasn’t disagreeing; technically you are using ChatGPT. Just pointing out that the model handling the image creation itself isn’t ChatGPT.
Pedantic.
Imbecile.
How so?
Girl on the right probably killed a Spanish swordsmith back in the day.