ChatGPT and similar LLMs are best characterized as soft bullshitters: they are designed to produce convincing text with no regard for whether that text is true. Because their architecture aims to imitate human language use, these models are inherently indifferent to the accuracy of their outputs. And while ChatGPT itself has no intentions, its design, which presents it as a truthful human interlocutor, may warrant the stronger claim that it engages in "hard bullshitting" as well.
















