New Scientist Magazine published an article a couple of days ago on "The Rise of the Emotional Robot" along with the video shown below. But it's not the robots who are getting emotional. Humans, it seems, get emotional about robots.
Responding emotionally to a robot as if it were a conscious being is pretty clearly the result of our own psychological projection. (My bot, of course, is an exception to this rule.) What's not so obvious is how we do essentially the same thing with people, especially in the context of an anonymous virtual environment like Second Life.
Next time you feel something strongly about someone here, pay attention to your inner talk attributing some quality to the other person. Then for each attribute, ask yourself if you're sure it's true. Chances are, you're not.
4 comments:
Are you suggesting that that's different from RL? :)
Next time you feel something strongly about someone in RL, pay attention to your inner talk attributing some quality to the other person. Then for each attribute, ask yourself if you're sure it's true. Chances are, you're not!
Great observation! I think the psychological process is the same. But since people usually have comparatively less factual information about others in SL, the associated mental models are more speculative. Also, the lack of auditory and visual cues in conversation increases the likelihood of misunderstanding.
Very true. It's not a difference of kind, but it is certainly a difference of degree.
On the other hand, the fact that in SL we have fewer facts means that we also have fewer irrelevant / misleading facts. Seeing someone's self-designed AV may give me a much more accurate picture of their reality than seeing their relatively uncustomized RL body would, and I may avoid making incorrect assumptions. Hard to say... :)
I think you're right. So in retrospect, the idea of checking the validity of emotional responses would have been clearer without the virtual/physical world differentiation.