Robots and other AI systems interact with people on a daily basis. People appear to be vulnerable to the bullshit (a.k.a. hallucinations) that these machines produce: robots are believed even when they make statements that are clearly impossible. There is a clear misalignment between the mental models people have of robots and the robots’ actual abilities. This project explores how factors such as persuasiveness, trust, and status influence the acceptance of the bullshit that robots produce. How gullible are people when interacting with robots? And how can we use this knowledge to design robots that elicit exactly the right amount of trust?
Supervisors
Primary Supervisor: Christoph Bartneck
Does the project come with funding?
No – applicants must be self-funded or may apply for relevant scholarships
Final date for receiving applications
Ongoing
How to apply
Please send your CV and academic transcripts to christoph.bartneck@canterbury.ac.nz
Keywords
Psychology, sociology, philosophy, human-robot interaction, human-computer interaction, computer science, robot, AI, hallucinations, perception, trust