
You Might Be a Robot


As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI -- not even experts. What's more, technological advances make it harder and harder each day to tell people from robots and robots from "dumb" machines. We've already seen disastrous legal definitions written with one target in mind inadvertently affecting others. In fact, if you're reading this you're (probably) not a robot, but certain laws might already treat you as one.

Definitional challenges like these aren't exclusive to robots and AI. But today, all signs indicate we're approaching an inflection point. Whether it's citywide bans of "robot sex brothels" or nationwide efforts to crack down on "ticket scalping bots," we're witnessing an explosion of interest in regulating robots, human enhancement technologies, and all things in between. And that, in turn, means that typological quandaries once confined to philosophy seminars can no longer be dismissed as academic. Want, for example, to crack down on foreign "influence campaigns" by regulating social media bots? Be careful not to define "bot" too broadly (like the California legislature recently did), or the supercomputer nestled in your pocket might just make you one. Want, instead, to promote traffic safety by regulating drivers? Be careful not to presume that only humans can drive (as our Federal Motor Vehicle Safety Standards do), or you may soon exclude the best drivers on the road.

In this Article, we suggest that the problem isn't simply that we haven't hit upon the right definition. Instead, there may not be a "right" definition for the multifaceted, rapidly evolving technologies we call robots or AI. As we'll demonstrate, even the most thoughtful of definitions risk being overbroad, underinclusive, or simply irrelevant in short order. Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist Alan Turing did when confronted with the challenge of defining robots: embrace their ineffable nature. We offer several strategies to do so. First, whenever possible, laws should regulate behavior, not things (or as we put it, regulate verbs, not nouns). Second, where we must distinguish robots from other entities, the law should apply what we call Turing's Razor, identifying robots on a case-by-case basis. Third, we offer six functional criteria for making these types of "I know it when I see it" determinations and argue that courts are generally better positioned than legislators to apply such standards. Finally, we argue that if we must have definitions rather than apply standards, they should be as short-term and contingent as possible. That, in turn, suggests that regulators--not legislators--should play the defining role.


Keywords: robots, artificial intelligence, regulation

Suggested Citation

Casey, Bryan and Lemley, Mark A., You Might Be a Robot (February 1, 2019). Cornell Law Review, 2019. Available at SSRN.
