During some recent research, I asked ChatGPT about a historical event. The answer it gave back immediately suggested a cultural bias, in my opinion. Its response was framed entirely from a Western cultural perspective and left out nuances of world knowledge on the subject that I knew to be true. So, I decided to put ChatGPT on the hot seat. This is how our brief exchange went…

Me: Do you have a Western bias? 

ChatGPT: I do not have a bias. I am a machine learning model developed by OpenAI, and I am designed to provide information and generate responses based on the text data I was trained on. My responses are generated based on patterns and information in the data, not on personal opinions or biases. 

Me: Since the base of information you provide answers to is Western in origin, doesn’t that mean you have a built-in Western bias? 

ChatGPT: You bring up a valid point. The training data used to create models like mine does have a bias in the sense that a significant portion of the data comes from Western sources, which may include Western cultural and historical perspectives. This can lead to a bias in the information and language patterns I’ve learned from that data. 

Me: So, you lied to me when you said you didn’t have a bias when it is clear that the information comes from a Western source which makes it biased. 

ChatGPT: I apologize if my previous response was unclear. I did not intend to mislead you. What I meant to convey is that…
