Human-Centric Evaluation of NOIR in Real-world Tasks
by @escholar
EScholar: Electronic Academic Papers for Scholars
Too Long; Didn't Read
In a series of experiments, NOIR demonstrates its potential in assisting humans with everyday tasks. Human participants engage in various activities, relying solely on brain signals to communicate with the robots. The study sheds light on the effectiveness and possibilities of human-robot interaction in real-world scenarios.
Authors:
(1) Ruohan Zhang, Department of Computer Science, Stanford University, Institute for Human-Centered AI (HAI), Stanford University & Equally contributed; [email protected];
(2) Sharon Lee, Department of Computer Science, Stanford University & Equally contributed; [email protected];
(3) Minjune Hwang, Department of Computer Science, Stanford University & Equally contributed; [email protected];
(4) Ayano Hiranaka, Department of Mechanical Engineering, Stanford University & Equally contributed; [email protected];
(5) Chen Wang, Department of Computer Science, Stanford University;
(6) Wensi Ai, Department of Computer Science, Stanford University;
(7) Jin Jie Ryan Tan, Department of Computer Science, Stanford University;
(8) Shreya Gupta, Department of Computer Science, Stanford University;
(9) Yilun Hao, Department of Computer Science, Stanford University;
(10) Ruohan Gao, Department of Computer Science, Stanford University;
(11) Anthony Norcia, Department of Psychology, Stanford University;
(12) Li Fei-Fei, Department of Computer Science, Stanford University & Institute for Human-Centered AI (HAI), Stanford University;
(13) Jiajun Wu, Department of Computer Science, Stanford University & Institute for Human-Centered AI (HAI), Stanford University.
Table of Links
Brain-Robot Interface (BRI): Background
Conclusion, Limitations, and Ethical Concerns
Appendix 1: Questions and Answers about NOIR
Appendix 2: Comparison between Different Brain Recording Devices
Appendix 5: Experimental Procedure
Appendix 6: Decoding Algorithms Details
Appendix 7: Robot Learning Algorithm Details
4 Experiments
Tasks. NOIR can greatly benefit people who require assistance with everyday activities. We select tasks from the BEHAVIOR benchmark [69] and Activities of Daily Living [70] to capture actual human needs. The tasks, shown in Fig. 1, consist of 16 tabletop tasks and 4 mobile manipulation tasks spanning several categories: eight meal-preparation tasks, six cleaning tasks, three personal-care tasks, and three entertainment tasks. For systematic evaluation of task success, we provide formal definitions of these activities in the BDDL language format [69, 71], which specifies the initial and goal conditions of a task using first-order logic. Task definitions and figures can be found in Appendix 4.
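To illustrate how first-order-logic goal conditions of this kind can be checked for task success, here is a minimal Python sketch. The world state, object names, and predicates (`ontop`, `filled`) are illustrative assumptions, not taken from the paper's actual BDDL definitions; BDDL itself uses a Lisp-like syntax, which the comment mirrors.

```python
# Hypothetical symbolic world state: ground predicates mapped to truth values.
world = {
    ("ontop", "plate1", "table"): True,
    ("ontop", "plate2", "table"): True,
    ("filled", "mug1"): True,
}
# Objects grouped by category, for quantification.
objects = {"plate": ["plate1", "plate2"], "mug": ["mug1"]}

def holds(pred, *args):
    """Look up a ground predicate in the symbolic world state."""
    return world.get((pred, *args), False)

def forall(category, cond):
    """Universal quantifier over all objects of a category."""
    return all(cond(obj) for obj in objects[category])

# Goal in BDDL-like notation:
#   (and (forall (?p - plate) (ontop ?p table)) (filled mug1))
goal_satisfied = (
    forall("plate", lambda p: holds("ontop", p, "table"))
    and holds("filled", "mug1")
)
print(goal_satisfied)  # True with the state above
```

A task is judged successful when the goal expression evaluates to true against the final world state; the initial conditions constrain the starting state in the same way.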
Procedure. The human study received approval from the Institutional Review Board. Three healthy human participants (two male, one female) performed all 15 Franka tasks; Sukiyaki, the four Tiago tasks, and the learning tasks were performed by one user. We use the EGI NetStation EEG system, which is completely non-invasive, so almost anyone can serve as a subject. Before the experiments, users are familiarized with the task definitions and system interfaces. During the experiments, users stay in an isolated room, remain stationary, watch the robot on a screen, and rely solely on brain signals to communicate with the robots (more details on the procedure can be found in Appendix 5).
This paper is available on arXiv under a CC 4.0 license.