Daniel Kasenberg

PhD candidate in the Human-Robot Interaction Lab at Tufts University


CV

GitHub: dkasenberg

Twitter: @dkasenberg

LinkedIn: Daniel Kasenberg

I am a graduate student in Computer Science and Cognitive Science at Tufts University. My research focuses primarily on developing morally competent artificial agents. My long-term goal is to build agents that can learn (both from observing others’ behavior and from direct instruction), obey, reason about, and communicate sophisticated (and sometimes conflicting) human moral and social norms.

Latest posts:

Moral Toasters and an Internet of Ethical Things (29 Mar 2018)

After reading a new paper arguing that the development of morally competent artificial agents isn’t justified, I started thinking about ethical reasoning capabilities for various physical objects, including the mighty toaster.

AI Ethics: Inverse Reinforcement Learning to the Rescue? (04 Aug 2017)

This post is Part 2 of a three-part series of posts on AI ethics, beginning with my previous post on Asimov’s “Three Laws of Robotics” and culminating in a post describing a recent paper I wrote about interpretable apprenticeship learning.

Facebook's AI Invents New Language. End Times? Hardly. (30 Jul 2017)

We interrupt my three-part series on AI ethics to address a bit of clickbait I came across this morning. I speculate about what may have happened in Facebook’s AI system, and explain why we shouldn’t be concerned.

AI Ethics: Are Asimov's Laws Enough? (29 Jul 2017)

When I tell people that my research interests are in AI ethics, they often respond by introducing me to Asimov’s three laws of robotics. This post goes out to them (it’s also the first post of a three-part series that will culminate in explaining my recent paper at the Conference on Decision and Control).