I am a graduate student in Computer Science and Cognitive Science at Tufts University. My research is primarily focused on the development of morally competent artificial agents. My long-term research goal is to develop agents that can learn (both from observing others’ behavior and from direct instruction), obey, reason about, and communicate sophisticated (and sometimes conflicting) human moral and social norms.
(29 Mar 2018)
After reading a new paper arguing that the development of morally competent artificial agents isn’t justified, I started thinking about ethical reasoning capabilities for various physical objects, including the mighty toaster.
(04 Aug 2017)
This post is Part 2 of a three-part series on AI ethics, beginning with my previous post on Asimov’s “Three Laws of Robotics” and culminating in a post describing a recent paper I wrote about interpretable apprenticeship learning.
(30 Jul 2017)
We interrupt my three-part series on AI ethics to talk about a bit of clickbait I came across this morning. I speculate about what may have happened in Facebook’s AI system, and explain why we shouldn’t be concerned.
(29 Jul 2017)
When I tell people that my research interests are in AI ethics, they often respond by introducing me to Asimov’s Three Laws of Robotics. This post goes out to them (it’s also the first post of a three-part series that will culminate in an explanation of my recent paper at the Conference on Decision and Control).