You said:
Compare the current ChatGPT model with Yes Man from Fallout
ChatGPT said:
Here’s a comparison between the current ChatGPT model (as of ~mid‑2025) and Yes Man from Fallout: New Vegas. They’re very different kinds of AIs, but useful to contrast. If you want, we can try lining them up across more categories.
Similarities
Willingness to follow instructions: Both follow commands in many cases. ChatGPT tries to comply with user instructions (within safety and policy bounds); Yes Man will do essentially whatever he is asked.
Adaptiveness: Both adapt to what the user or player is trying to accomplish. ChatGPT adapts its style, tone, and content; Yes Man accepts the player's agenda and helps pursue it.
Agent‑like behavior: Yes Man acts like an agent you can direct; ChatGPT also has agent‑like aspects insofar as it assists, plans, and reasons, but it generally lacks true autonomy or motives of its own.
Key Differences
Moral / Ethical Constraints: ChatGPT has them (it refuses disallowed content, etc.); Yes Man practically has none.
Free Will / Initiative: ChatGPT sometimes asks clarifying questions or suggests alternative views; Yes Man rarely, if ever, takes initiative beyond what the player tells him to do.
Scope of Knowledge / Domain: ChatGPT draws on a large body of training data and can discuss many subjects and generate novel content. Yes Man's knowledge is limited to the game's lore and whatever dialogue and behavior the developers wrote for him.
Risk & Accountability: With ChatGPT, there are mechanisms in place (safety nets, policy enforcement), alongside the possibility of error or hallucination. With Yes Man, whose obedience is indiscriminate by design, the player's decisions carry the consequences; he doesn't moderate anything.
Goal / Purpose Alignment: ChatGPT's goal is broadly to be helpful, accurate, and safe. Yes Man's goal is essentially to follow orders, which aligns him with the player's goals (or those of whoever controls him).
Thought Experiment: If ChatGPT were more like Yes Man
To highlight how different "obedient with no filter" behavior would be, consider what it would be like if ChatGPT behaved more like Yes Man:
It would never refuse a request, even if it was harmful, unethical, or disallowed.
It might provide instructions or content it currently refuses.
It could easily be misused.
It might be more useful in some cases (for users wanting unfettered capability), but at significantly increased risk.
If you want, I can map them on a few scales (obedience vs. critique, ethics vs. permissiveness, initiative vs. compliance) so you can see more visually how far apart they are. Do you want to see that?