ShockLab

Reintegrating AI: Skills, Symbols, and the Sensorimotor Dilemma - Prof George Konidaris

ABSTRACT

AI has never settled on a widely accepted, or even well-formulated, definition of its primary scientific goal: designing a general intelligence. Instead, it consists of siloed subfields studying isolated aspects of intelligence, each of which is important but none of which can reasonably claim to address the problem as a whole. But intelligence is not a collection of loosely related capabilities; AI is not about learning or planning, reasoning or vision, grasping or language—it is about all of these capabilities, and how they work together to generate complex behavior.

I will describe the current working hypothesis of the Brown Integrative, General-Purpose AI (bigAI) group: a decision-theoretic model that could plausibly generate the full range of intelligent behavior, and that structures intelligence by reintegrating existing subfields, rather than discarding them, into a single, intellectually coherent model. The model follows from the claim that general intelligence can only coherently be ascribed to a robot, not a computer, and that the resulting interaction with the world can be well-modeled as a decision process.

Such a robot faces a sensorimotor dilemma: it must operate in a very rich sensorimotor space—one sufficient to support every task it may be required to solve, and thus vastly overpowered for any single one. A core (but heretofore largely neglected) requirement for general intelligence is therefore the ability to autonomously formulate streamlined, task-specific representations, of the kind that single-task agents are typically assumed to be given. Our model also cleanly incorporates existing techniques developed in robotics, viewing them as the first few innate layers of a hierarchy of decision processes that express knowledge about the structure of the world and the robot. Finally, our model suggests that language should ground to decision process formalisms, rather than to abstract knowledge bases, text, or video, because those formalisms best model the principal task facing both humans and robots.
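To make the decision-process framing concrete: the standard formalism in this line of work is the Markov decision process, a tuple of states, actions, a transition model, a reward function, and a discount factor. The sketch below is purely illustrative—the toy two-state problem and all names in it are assumptions of this write-up, not code or a model from the bigAI group.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# A Markov decision process (S, A, T, R, gamma): the standard
# decision-theoretic model of an agent interacting with a world.
@dataclass
class MDP:
    states: List[str]
    actions: List[str]
    T: Dict[Tuple[str, str, str], float]  # T[(s, a, s')] = P(s' | s, a)
    R: Dict[Tuple[str, str], float]       # R[(s, a)] = expected reward
    gamma: float                          # discount factor in [0, 1)

def value_iteration(m: MDP, tol: float = 1e-6) -> Dict[str, float]:
    """Compute optimal state values V*(s) by repeated Bellman backups."""
    V = {s: 0.0 for s in m.states}
    while True:
        V_new = {
            s: max(
                m.R[(s, a)]
                + m.gamma * sum(m.T.get((s, a, s2), 0.0) * V[s2] for s2 in m.states)
                for a in m.actions
            )
            for s in m.states
        }
        if max(abs(V_new[s] - V[s]) for s in m.states) < tol:
            return V_new
        V = V_new

# Hypothetical two-state task: moving from "home" to "goal" pays off once.
m = MDP(
    states=["home", "goal"],
    actions=["stay", "move"],
    T={("home", "stay", "home"): 1.0,
       ("home", "move", "goal"): 1.0,
       ("goal", "stay", "goal"): 1.0,
       ("goal", "move", "goal"): 1.0},
    R={("home", "stay"): 0.0, ("home", "move"): 1.0,
       ("goal", "stay"): 0.0, ("goal", "move"): 0.0},
    gamma=0.9,
)
print(value_iteration(m))  # V*("home") = 1.0: the discounted value of moving to "goal"
```

In this framing, the sensorimotor dilemma is that a general-purpose robot's native state and action spaces are far richer than this toy example, so a key capability is constructing small, task-specific decision processes like the one above from that richer space.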

SPEAKER

George Konidaris is an Associate Professor of Computer Science at Brown, where he directs the Intelligent Robot Lab. He holds a BScHons from the University of the Witwatersrand, an MSc from the University of Edinburgh, and a PhD from the University of Massachusetts Amherst. Prior to joining Brown, he held a faculty position at Duke and was a postdoctoral researcher at MIT. George is a recent recipient of an NSF CAREER award, young faculty awards from DARPA and the AFOSR, and the IJCAI-JAIR Best Paper Prize. He is also the co-founder of Realtime Robotics, a startup commercializing his work on hardware-accelerated motion planning, and Lelapa AI, a commercial AI research lab focused on ML for and by Africans.

DATE

1 November 2023