We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.
This is why I have started to really like lmsys.org’s Chatbot Arena: every time you ask a question, you are directly comparing the responses of two separate chatbots. Two models are much less likely to hallucinate in the same way, and the side-by-side format puts you in the mindset of a critical reader who is actively evaluating the quality of each response.
(what I am talking about: https://arena.lmsys.org/)