A developmental perspective on brains, minds and machines (1:17:15)
Date Posted:
June 7, 2016
Date Recorded:
July 22, 2015
CBMM Speaker(s):
Elizabeth Spelke, CBMM Summer Lecture Series
Description:
Elizabeth Spelke, Professor of Psychology and Director of the Laboratory for Developmental Studies at Harvard, presents a developmental perspective on brains, minds and machines. Studies of cognitive development in infants can provide insights about questions like, what makes humans smart? How do we develop new concepts and systems of knowledge that are more powerful than our old ones? Where do abstract ideas come from? Dr. Spelke discusses infants’ development of the concepts of objects, agents, and social interactions.
ELIZABETH SPELKE: So, I work on kind of everything. I mean, I started studying infants' mechanical reasoning, physical reasoning, kind of origins of intuitive physics. And then moved over to look at number and geometry.
And more recently have been migrating towards social things. And Heather thought people here might be interested in that. So, I'm going to focus on that more than anything else today.
But what I really want to try to talk about for the next hour and a half is not so much the details of cognitive development, or how can we study it, or what do infants know and what have we learned about that. But rather, what kind of perspective can studies of development give on some central questions that anyone who is interested in the brain, or the mind, or in building intelligent machines I think is going to want to have to deal with.
So, here are my three favorite questions that I want to throw out for all of you. First of all, when you look at human perceptual systems or action systems and compare them to those of other animals, you see enormous similarities, right?
And in fact, those similarities are probably more than anything else what has made neuroscience progress so dramatically in understanding perception and action. And they've probably been behind a lot of the more dramatic high-profile successes of deep learning and other kinds of computer science efforts to model perception, and action, and so forth, right? You see these huge similarities.
But then when you look at what we do with the world as we perceive it and as we act in it, it seems as if there's this huge gap between how we reason about the world and how any other animals do, right? I mean, every animal has to find food. But we're the only ones who invent agriculture, and cooking, and things like that.
Every animal has to be able to deal with physical objects and figure out how they move and all. But we're the only creatures who refashion our whole world prolifically with tools and have been doing so since prehistory. And the list kind of goes on and on. It seems as if we develop systems of knowledge with flexibility, and a rapidity, and a productiveness that just leaves everybody else, all the other animals, in the dust.
So, that really raises the question, I think, you know, what is it about us that leads us, that propels us on this very different path from the paths that we see any other animals engaged in? And I think we want to answer that question, both because it's interesting in its own right. And also because if we ever really want to understand how the human mind works, or build machines with minds that could at all connect with ours, we're going to want to understand how it is that we do that. Right? So, that is kind of question one that I'd love to see the field make some headway on that I think we can make some headway on.
Second question: over the course of developing knowledge, what we seem to do is not only enrich our base of understanding of the world, like fill in lots of blanks, and figure out what people's names are, and what their job descriptions are, and the kind of information that went around the room here. Rather, we also develop new concepts and whole new systems of knowledge.
You see this most dramatically in the history of science and mathematics, where people went from thinking Euclidean geometry was the only logical possible geometry to discovering there were other logically-possible geometries, to discovering that non-Euclidean geometries actually better capture the spatial structure of the world we're living in than Euclidean geometry does. So, you see these really dramatic changes in our concepts over development.
And it looks like those changes have something to do with learning. I mean, it's not just that we wake up one morning, we've been hit on the head, and we have some new way of viewing the world. Rather, we use what we know to arrive at new conceptions of the world.
But that raises the question, how can we use what we know to change what we know? How can we use what we know to discover that what we thought we knew actually was wrong? And so that's, I think, a second question logically unrelated to the first. But I think connected to it. I mean, in that it's also characteristic of human knowledge.
And then the third question is, when we look at all of these cases in which we develop new knowledge systems, or even cases where we just elaborate knowledge systems that have been around for a long time, it looks like we organize our knowledge around strikingly abstract concepts. And again, I think physics and science and mathematics provide the clearest cases of this. But you see it also in moral reasoning. You see it also in aesthetics, and so forth. At the center of our systems of concepts, and beliefs, and knowledge are a set of concepts that pick out entities that can't be seen in principle and can't be produced in practice, like dimensionless points or infinitely thin, infinitely extended lines. And that raises the question, how do we come to have these concepts that go beyond the bounds of any kind of perceptual system or action system we could ever have, right? I mean, how do these concepts arise in our minds? And how do we work with them and apply them to make sense of the world?
They seem utterly central to us. So, utterly central-- Rebecca already talked to you, right? Utterly obvious, and intuitive, and central that you would understand people's actions in terms of abstract notions like their beliefs, and desires, and so forth. Or that we would understand and evaluate aesthetics in terms of abstract notions like beauty, or goodness, or things like that.
Yet, how do we come to have these ideas? And how do they work in our minds?
OK, so those are the three questions. Are there any other burning questions about human cognition that you would add to these three, put on the table? These are my three favorites.
OK, I'm not going to answer them in the next hour. I'm going to try to give you three messages today. The first message is that these questions are outstanding but answerable, I think. I truly believe these questions are answerable. I think they're answerable in your lifetimes. I think they might even be answerable in my lifetime. I think we have the tools that we need to answer these very old and longstanding questions about human nature and human knowledge. So, that's point one.
Point two is that I think studies of human cognitive development can help to answer them by asking where our knowledge begins, how we characterize the knowledge of a young infant, and then how our knowledge changes with experience, with learning, with growth and development of the brain, and so forth.
By studying infants and children, and looking at knowledge development up through adulthood, we can get insights that can contribute to an answer to these questions. Most of the work I'll talk about focuses on people kind of like us, living in Boston and places like that. But we can also, I think, learn a lot by comparing the course of human development in our culture to cognitive development in other cultures, in people with and without systematic formal education, and people with different kinds of access to the world.
That third picture down is of Nicaraguan Sign Language signers, deaf people who learned their first language at a relatively late age. Or people with interestingly different cognitive abilities, like the guy at the bottom who has Williams syndrome. I think that these studies can help to answer these questions.
But message three is that studies of cognitive development can't do this alone. And they particularly, I think, provide insight when you can consider their findings in relation to findings from a host of related fields, which, conveniently, CBMM all tends to bring together, including studies of animal cognition, studies of the brain at multiple levels of analysis, and studies of intelligent machines and computations.
So, putting that together with development I think can be really useful. And I'm really glad that CBMM has chosen development as one of the poles to focus on in order to do this. But I'd be interested to see what you think as we go through this.
OK, so to give you a concrete possible answer to all three of those questions, let me tell you where I think research on cognitive development is leading us. I think it's leading us to think that at the starting state of human development-- and by the starting state, I just mean when babies are first born, and open their eyes, and encounter an external world for the first time. Of course, there's been a lot of brain and biological development going on in the womb. But their first encounter with the world out there that they're going to have to come to understand and deal with happens at birth.
That starting state includes a set of rather interesting cognitive systems. They include a system for representing objects and making basic predictions about how objects are going to move and interact with each other as they bump into each other, and so forth.
It includes a system for representing approximate numerical magnitudes-- abstract numbers of objects, or sounds, or actions-- that applies to different kinds of things and represents number with approximate precision. It includes two systems that eventually are going to contribute to geometry: one, a system for navigating in relation to the distances and directions of extended surfaces in the layout.
And another, a system for capturing the shapes of small-scale forms, which I think we develop and harness for use in shape-based object recognition. And finally, two social systems that I'll try to focus on more than anything else today-- two systems that both apply to animate beings, one of which analyzes the movements of animate beings as actions with goals that cause changes in the world. Usually the goal of an actor is to change the state of the world, either by changing their own relation to it or by changing the world itself directly.
And the other, a system for reasoning about animate beings as they interact socially with one another, and engage with one another, share information, share states.
Now, I think the evidence suggests that each of these systems operates with considerable independence of the other systems. To a first approximation, they give us a way of carving the infant mind at its joints into different subsystems that are each internally quite coherent and interacting, but quite separable from one another.
I think the evidence also suggests that if you compare what infants know in any of these six domains to what we know as adults, we see that the systems are radically limited. They know far less than we do. And they can do far less with that knowledge than we do. And those limits also interestingly point us to what may be the most kind of fundamental general conceptions of the world that we begin with and that are propelling development forward in these different domains.
The systems are shared by other animals, as best we can tell. That is to say, when we take the same experimental paradigm and apply it to a human infant and apply it to a monkey, or a rat, or a mouse, or a bird, or even a fish, we're very likely to see the same pattern of performance across these different species, OK?
I think these systems in themselves are not telling us what's special about human cognition, because they evolved long before we evolved as a species, and do work for lots of animals.
I think these systems continue to function throughout our lives. We can test for them in adults just like we can test for them in infants. And we find basically the same abilities with the same properties. But one of their functions is to support new concepts and new systems of knowledge. So, we use them as building blocks for learning other things.
So, that's basically point one: that when we look at infants, we see these interesting kinds of cognitive abilities that can be brought together, maybe, to produce new developments.
But the second thing that I think I keep bumping into in all the different domains I've done work in, from numerical development, to development of geometrical intuitions, to social cognition is that I think we have a further cognitive system which is unique to our species, which also begins to function at birth, and which serves as a medium of communication across these different basic core cognitive systems that I described and that I showed you in the previous slide. And it serves as a common medium of representation for combining together representations that come from these different cognitive systems.
Now, a natural thing to think is that this is a basic language of thought that we would share with lots of other animals and that we and any other animal would use in navigating the world, a common format of representation. But I actually think that the way in which we combine information across different cognitive domains is very different from the way other animals do.
We have their abilities to combine information across domains. But we also have this special system that allows us to form not just new individual concepts, but new systems of concepts rapidly and productively. And it seems, in case after case, to be tied to our faculty of natural language. That the system we use to communicate with other people does double duty as a system that we can use within our own minds to bring together information from different cognitive systems to form new concepts.
So, I think these combinations give rise to things like our prolific tool use, our prolific abilities to form systematic taxonomic knowledge of the natural world, which we, people in urban environments, aren't very good at anymore. But historically, humans were super good at that, inventing agriculture and all sorts of stuff.
I think it leads to our basic natural number concepts, which come into their own at the end of the preschool years. It leads to systems of knowledge of Euclidean geometry and systems of knowledge of persons and mental states, which is what I want to focus on today.
I put these up because they're all cases where there's evidence that links the functioning of the basic core systems that we find in infants to the development of these later-emerging knowledge systems: at the end of infancy if it's object taxonomy, or at five years of age if it's natural number, or even older if it's Euclidean geometry. And I think also in the second year for the development of uniquely human social cognitive concepts, like the concept of a person or concepts of mental states.
Now, I think this combinatorial system is unique to humans. Because it depends on a learned natural language and passes, therefore, from one person to another-- we learn our language from other people-- it singles out the concepts that are most useful for us to learn about in our world.
You can use word frequency or frequency of expressions as a really good proxy for what the important concepts are for a young kid to learn, right? Not so important today to learn about crossbows. A lot more important to learn about cell phones, right? Whatever is out in the world today that people are routinely talking about, that's going to be a good marker for what the important concepts are for kids to be learning.
But the thing that's, I think, most magical about natural language that gives us this uniquely human cognitive combinatorial capacity is that language has two-- although languages themselves are variable and vocabularies are variable over historical time and will vary from one culture to another, languages have two general properties.
One is that they have a lexicon of words that can be coined for entities in any domain. So, you can have nouns for people, for places, for kinds of objects, for numbers-- a dozen, OK? All different kinds of things can be captured in the lexicon.
And then it has a set of combinatorial rules that apply to conjoin elements from the lexicon based only on their grammatical properties, not on their meaning. What that means is that language gives you a system where once you learn a set of words and develop a set of concepts that apply to those words, you now have a set of rules that allow you to combine those concepts. And you get the meanings of the combined expressions for free from the meanings of the words and the workings of the rules. So, you have now a productive system that can capture abstract properties of the world by combining together information from very diverse domains, information that's quite separate, I think, in the minds of infants and animals, and to some degree separate in our own minds as adults. We're doing something fancy when we combine this together in accord with a set of abstract concepts. Did that make any sense at all?
AUDIENCE: Yes.
ELIZABETH SPELKE: All right. I should say that I am very interruptible. You can ask me questions or jump in at any point. And we can turn this into a discussion, and I'll be really happy to do that.
OK, so that's what I think these studies are telling us. What are the actual kinds of research that we can do to learn what our starting state cognitive systems are?
I think the work that I have focused on the most has been studies of cognition in young human infants. I like to collaborate with people who study the same things in other animals, or study the same things in people living in far-flung cultures, and so forth.
But I think our primary information about the systems comes from studies of infants. And then the primary information about how infants go beyond the limits of these systems, and form new systems of concepts that productively combine together information from what were separate core systems, comes from studies of cognitive development in children.
So, let me give you a few examples of those. First of all, I want to focus primarily on social cognition, but start with a more general question of how can we even find out what infants know, right?
I mean, they're like the most unpromising creatures to be studying, right? I mean, if you're studying an adult, you can talk to them. By the way, it's not obvious how we find out what adults know either, right? I mean, we can talk to each other. But we don't necessarily have insight into what concepts are really doing the work for us as we're engaged in a particular act of reasoning. But at least you can explain a task to an adult, you can get them to do it. You can put it online and get gazillions of data points on it, and so forth.
At the other extreme, if you're studying animals, you can push them around. You can bring them into the lab and train them. You can food-deprive them and get them really motivated to perform in your tasks, and things like that.
We can't do any of that with infants. So, they're a real pain to study. So, how can we find out what they know? Well, not by communicating with them, right? They don't start talking until about a year of age. And they don't start making sense until three, or four, or five years. So, you know, we're not going to do that.
Their motor-- I mean, humans are pathetic compared to other animals in terms of our motoric capacities at birth, right? Human infants can hardly do anything. They can't actually reach for an object until they're like five months of age. They can't get up and move from one place to another, so forget doing navigation studies with them until they're, you know, the second half of the first year at the earliest for crawling. And they're not walking until the second year.
Do I have that here somewhere? Yeah, their motor abilities are really, really bad. Attention span is bad, and so forth. So, it's hard to study them.
There's one thing they're really good at. And that is doing nothing, sitting, looking around, and taking information in. They're really, really good at that. OK? Most of the studies that I think have actually taught us something about infants have come from simply looking at what they're interested in, what they look at, what they'll take in information about. And I wanted to give you three examples.
So, one thing you can do-- and the most beautiful example of this was done here at MIT-- is you can give babies starting with newborns a choice of two things to look at. You can present them with two things side by side, and simply ask, which of these two things will they look at?
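As a side note for the computationally minded, the logic of this two-choice measure reduces to comparing total looking times at the two displays. A minimal sketch, with all looking times, variable names, and the rounding convention invented for illustration (not data from the actual study):

```python
# Hypothetical two-choice preferential-looking data: seconds an infant
# spent looking at each of two side-by-side displays, per trial.
# All numbers here are illustrative, not from the lecture.

def preference_score(looks_a, looks_b):
    """Proportion of total looking time directed at display A,
    summed over trials. 0.5 means no preference."""
    total_a = sum(looks_a)
    total_b = sum(looks_b)
    return total_a / (total_a + total_b)

# One infant's made-up per-trial looking times (seconds):
display_a = [6.1, 7.4, 5.9, 8.0]   # e.g., the display of interest
display_b = [3.2, 2.8, 4.1, 3.5]   # the comparison display

score = preference_score(display_a, display_b)
print(round(score, 2))  # → 0.67
```

In practice, labs aggregate such scores across many infants and test whether the group mean differs reliably from 0.5.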
And now, if you're a brilliant psychophysicist like Richard Held, who was here in BCS and was the chair of BCS back in the '80s and '90s, you can learn really deep and important things about infant vision by doing that.
So, the question that Held asked was: do infants, when they look out at the world, see a structured array organized in depth? Or do they just see depthless, meaningless sensations? Which is what William James and everybody else thought until sometime not too long before these beautiful experiments that Held did.
So, what he did to answer that question was he put stereo goggles-- he basically put babies in a 3D movie. He put stereo goggles on a baby. Actually, it wasn't a movie. It was more boring than that-- and showed them two arrays of stripes. OK?
So, they're seeing, through each of their two eyes, these two circular arrays of stripes. And the only thing that differs between the two arrays is where the edges of the stripes are in the two eyes relative to each other.
So, for one of the two arrays-- in this diagram it's the one on the left-- the stripes are at the same locations in the two eyes, OK? So, the whole thing, when you fuse it, looks flat.
And the other one, the edges are a little further out or further in. So, different stripes either look-- some of the stripes look as if they're standing forward in depth relative to the other stripes, OK? So, if you have functional stereopsis, one of these arrays looks three-dimensional. And the other array looks two-dimensional.
So, he put these goggles on babies starting at, I think, two months of age. And followed them over time, and looked at what they looked at. And found that initially, if you start them young enough, they're looking equally at the flat arrays and the arrays that look to us like they're in depth.
But sometime between about eight weeks and 16 weeks, you get this huge preference shooting up for the array of stripes that are varying in depth. OK? So, that's telling you-- well, what is that telling you? Does that say babies have depth perception? What do you think?
AUDIENCE: I think that when you're born, [INAUDIBLE] bigger.
ELIZABETH SPELKE: They could just be seeing double images, right? They could be seeing oh, the edges of these stripes don't line up. That's kind of interesting. And the edges of these stripes over here do line up, right? So, they could be just seeing double images. Yeah?
AUDIENCE: Maybe they're just not interested, [INAUDIBLE] can process it better.
ELIZABETH SPELKE: Yeah, so, right. So, maybe there's something in that array that makes it interesting. But I don't know if it's actually depth or not, right? Yeah, right. OK, good.
So, what did Held do to try to get at this? Well, he went to the psychophysical literature on adults, which goes back 150 years studying stereo vision. And what you see from studies of adults is a number of signature properties of stereopsis.
So, one obvious signature property is that this works when the stripes are vertical. If we took these arrays and rotated them so that the stripes were horizontal, the whole effect would go away, because our eyes are arrayed horizontally, not vertically. So, we get disparities in depth, the differing views of an object in depth, from the horizontal displacements of its vertical edges, not its horizontal edges. Did that make sense?
OK, anyway, if we took this array and rotated on its side, you'd still have double images. But you wouldn't have effective information for stereoscopic depth. So, that's one thing you could test for in an infant.
Another thing you can test for in an infant is that for us as adults, stereoscopic depth perception is extremely-- we have extremely high acuity for stereoscopic depth. If you take a normal US dollar and a counterfeit US dollar and look at them side by side, if it's a good counterfeit, you will not be able to tell whether the second one is a real dollar or not.
But if you put them in a stereogram, that's actually a good way of detecting a counterfeit bill. Because stereoscopic depth perception is so sensitive to very, very tiny displacements of an edge from where it ought to be, side by side. So, you have very high acuity for stereoscopic depth.
And it also operates only over a relatively limited range of displacements-- disparities between the edges in the two eyes. So anyway, Held tested for all three of these things in infants. And he found that when infants start showing this preference for the array that has binocular disparity in it, they show it only for those arrays that give rise to perception of stereoscopic depth in adults. So, turn it on its side, and the preference goes away. It works at extremely high acuities and doesn't work when the disparities get too big, and so forth, OK?
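For readers who want the geometry behind these signatures: under the standard pinhole approximation, a point at distance Z in front of two eyes with baseline b projects with horizontal disparity d ≈ f·b/Z, so disparity shrinks as depth grows, which is why stereo acuity is so demanding and why the system operates only over a limited disparity range. A minimal sketch, with all numbers illustrative assumptions rather than values from the lecture:

```python
# Back-of-envelope stereo geometry under the pinhole approximation:
# disparity d = f * b / Z, where f is focal length, b is the
# interocular baseline, and Z is the distance to the point.
# All numbers below are rough illustrative assumptions.

def depth_from_disparity(disparity, focal_length, baseline):
    """Distance Z implied by a horizontal disparity.
    Units: baseline and result in meters; disparity and focal_length
    in the same units as each other."""
    return focal_length * baseline / disparity

# Roughly adult-like interocular baseline (~0.065 m) and a nominal
# eye focal length (~0.017 m), both assumptions for illustration:
z_near = depth_from_disparity(0.0001, 0.017, 0.065)   # larger disparity
z_far = depth_from_disparity(0.00001, 0.017, 0.065)   # smaller disparity
print(z_near < z_far)  # → True: depth grows as disparity shrinks
```

The inverse relationship also shows why very large disparities correspond to implausibly near depths, falling outside the range the system fuses.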
So, it really isn't about gee, it's cool looking at these interesting images that are different from each other. It really is about a system that's attuned to just the information that specifies stereoscopic depth.
Now, Held felt this actually doesn't go all the way to telling us that the babies have an experience of depth-- that there's something more to our conscious perceptual experiences, and it's not clear any psychophysical experiment is going to get at it. I mean, you can test someone in the lab in a psychophysical experiment. But will you really be getting at what their underlying experience is? Are they actually seeing depth the same way you are?
What it does do, though, is it says we can use a simple indicator response like looking time. And bring ourselves up to the same level of understanding, or its limits, depending on how you look at it, about infants that we have when we test adults. We can do the same kind of psychophysical experiment on an infant that we can do on an adult. So, we can address the same kinds of questions. And of course, these methods have been hugely productive in the study of vision.
OK, that was probably too much time to spend on this. But it's kind of a cool experiment. I really love those studies.
OK, we can move to something a little more centrally cognitive. And I'll do that by just mentioning a second way in which one can use looking time. Babies not only show preferences for some things over others, 3D things over flat things. If you show them the same thing again, and again, and again, then, like us, they will get bored with it. And they'll be more interested in looking at something new.
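Operationally, "bored with it" is usually defined by a habituation criterion: trials continue until looking on recent trials drops below some fraction of looking on the first trials. A minimal sketch of one common convention (the window size, the 50% criterion, and the looking times are assumptions for illustration, not details from this talk):

```python
# Sketch of a habituation criterion of the kind used in infant
# looking-time studies. Parameters and data are illustrative.

def habituated(looking_times, window=3, criterion=0.5):
    """True once the mean of the last `window` trials falls below
    `criterion` times the mean of the first `window` trials."""
    if len(looking_times) < 2 * window:
        return False  # not enough trials to compare first vs. recent
    first = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < criterion * first

# Made-up looking times (seconds) declining across trials:
trials = [12.0, 10.5, 11.2, 7.0, 5.1, 4.0, 3.8]
print(habituated(trials))  # → True
```

Once the criterion is met, the test events begin, and longer looking at one test event than another is read as recovery of interest, i.e., novelty or surprise.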
So, we can use that tendency to study what they see as new, what they discriminate from what, and also what changes in arrays they care about versus what changes they care about less. So, here's an example of a study.
Where the question is: if and when objects move in and out of the infant's view-- so something moves behind a screen, it's out of sight, then it comes back into sight-- do infants have intermittent experiences just of visible things? Or are they able to perceive an object as persisting and moving continuously over changes in its visibility to them?
And if they can do the second, can they, in particular, use the continuity of an object's motion to figure out when they're confronting an array that involves just one object versus an array that involves two?
So, the basic study is you've got two screens. In one condition for half the infants, you have a single object that moves continuously back and forth behind these two screens. So, it starts out in view, say on the left side of the array, goes behind the first screen, appears between the two, comes out behind the other screen on the right side, and then reverses direction, and goes back and forth.
And you let the baby see this until they are bored out of their minds with it. And then ask them, in effect, how many objects do you think participated in this scene, by taking the screens away. And in this study, we just give them a choice between one object and two.
So, they see the first of the two-- they see an object begin to move. And it's either alone on the screen or there's a second object there. All right?
And the idea was that if babies have gotten bored with seeing a single object moving back and forth again, and again, and again, then they should still be bored to see a single object there. But they might look longer if there's a second object in the scene-- because they didn't infer that there was a second object there before, all right?
The comparison condition was one where they see exactly the same pattern of motion with one change-- nothing ever appears between the two screens. Now, when you see this it's not a surprising event. But it looks like there's two objects involved.
So, you've got one thing that goes behind the first screen. It stops there. Then there's a pause. And something else comes out from behind the second screen. And those babies were bored with that. And then got to see the same test events.
And what the study found was that there were opposite-looking preferences for those test events across those two groups. So, the kids who had seen the continuous motion looked longer when there were two objects in the test events. The ones who saw the discontinuous motion looked longer when there was one object. Which suggested that when motion is continuous, the babies are inferring a single object over these changes in visibility.
There's a lot of evidence now that when objects go out of infants' sight, under many conditions infants are able to continue to track the existence of that object. There's also a lot of evidence that if you strain their patience by leaving the object out of sight for very long, they will lose track of the object altogether.
And it'll be gone from their minds. But that seems like a problem-- a limit on attention or working memory, and not part of the basic system by which the object is represented in the first place. As long as the occlusion times are short enough, babies will construct a representation of a single continuously existing object over discontinuous motion.
OK, the third use of looking time, the one that most often gets reported in the New York Times, reflects kids' tendencies not just to look at things that are new, but to look at things that don't behave in ways that the babies seem to expect or think that they should.
OK, so for these studies, you don't have to bore the babies. You just have to set up an event and then show them something that we as adults take to be the expected thing for objects to do versus the unexpected thing for objects to do.
So, this is a famous example from Renee Baillargeon, where she starts out by just taking five-month-old babies and showing them a screen. It starts out lying flat on a table. And then rotates upward 180 degrees. So, it goes from one position-- around one edge.
So, it goes from one position flat on the table to another position flat on the table. And they just see the screen can rotate like that. And then after seeing that-- I'll actually pretend that's a screen. After seeing that it can rotate the full 180 degrees, then an object is placed behind the screen. And the question for the babies is OK, what's the screen going to do now?
The screen will rotate upward until the object is fully hidden. And then one of two things happens. Either it gets to the point where the object is and stops, and reverses direction. Or it rotates the full 180 degrees that they saw before. And the object has magically disappeared from the scene.
And the babies look longer at that event. Not because they're bored with the alternative event. But just because the interpretation is they expected the thing to stop when it got to-- they expected that object to continue to exist in that location. And they expected, therefore, that the thing would stop before it got there. Those are just the data from it, providing evidence that the babies are representing this object as persisting and also as solid, taking up space, so therefore, constraining the motion of the other object.
OK, now that's all stuff that babies can do. But I actually think the most interesting thing about infants' object representations is how limited they are, I mean, the things that they can't do.
So, one limit that Fei Xu and Susan Carey discovered is that although babies will use the discontinuity of motion between two objects in a situation like the one on the left to infer that there are two things there, they won't use the most obvious perceptual properties of objects to do that, even when they're presented with objects that we know are familiar to them.
So, the studies showing this failure were done with 10-month-old infants. If you give a 10-month-old infant a truck to play with, they'll immediately take the truck, start rolling it on a table, and go "vroom vroom." This is a meaningful object to them.
But if, instead, you simply allow them to see an event where there's a single screen, a truck comes out on one side and goes back behind the screen, and then a duck comes out on the other side and goes back behind the screen, and you ask babies, in effect, how many objects are there-- by boring them with those events, and then taking the screen away, and there's either one object or two-- their perception is indeterminate as to the number of objects.
Whether this is a caterpillar that turned into a butterfly or whether this is two separate objects to begin with, they're not using the properties of objects at that point, or the categories of objects that those things are in, to determine that there were two things behind that screen, not one.
All right, so that's just one example of many where infants' perception is radically impoverished with respect to that of adults. The range of circumstances in which they can tell you that there are two distinct objects in a scene is tiny compared to the range of circumstances in which we would.
OK, so they're using very limited information for tracking objects. To a first approximation, they're using the way those objects move, and nothing else.
OK, all right. There's also another way in which understanding is limited that a bunch of people, notably Baillargeon, have studied. Which is that although infants expect objects to be solid, objects have all sorts of other physical properties that infants don't know very much about.
So initially, for example, they know very little about the conditions under which an object should be supported. If I-- I'm not going to do it because it's got water in it. If I drop this thing in mid-air, it will fall.
OK, but at a point at which babies expect objects to move continuously and not to pass through each other, when they're like two months old, they don't expect that an unsupported object will fall, as opposed to just standing unsupported in midair.
And then at later ages, they start to get the idea that objects need support. But it takes them forever to work out that, for example, support means it has to be supported from below, not from the side, not from above. And it has to be the center of mass which is supported, not the little tippy end of the object, and so forth.
So, infants are using very limited principles. They're using limited information to track objects, and limited principles to predict what objects are going to do next.
And in general, I think, the basic principles that infants are using are these kind of spatiotemporal principles that specify how things move over time in relation to one another. So, they represent objects as cohesive. Shown an object that's all interconnected, they don't expect it to break into parts. Shown two separate objects, they don't expect them to merge together. They don't expect two objects to occupy the same place at the same time, or one thing to jump discontinuously from one place to another.
And they seem to have an idea that objects will change their motion on contact, not at a distance from each other. There's data supporting all of those notions in young infants. But basically, just about everything else that we know about objects, they seem not to get initially.
OK, a couple further things about these object representations in infants. Are they unique to infants? No. Run the same experiments on other animals, including controlled-reared chicks, and you get the same kinds of responses that you get in infants.
Are they learned? The infant studies tend not to be done with newborns because newborn vision is so radically bad. But that's where other animals can really be useful. And I think I have an example of one study that was done with newborn chicks, that's the Baillargeon Rotating Screen study.
So, here's how this works. You want to know, do animals represent that objects are solid the very first time they encounter an interaction involving two objects in which they collide with each other, or the very first time they're in a situation where they have to make an inference about the objects? So, to get at this, this is a lab in Italy. They ran a series of experiments on chicks.
So, the basic method is the following. It relies on the fact that-- doesn't use preferential looking. It relies on the fact that chicks who are reared alone with no other chicks around-- no hens, no other animals around-- but are given a single moving object in their environment will imprint to that object. And they'll show their imprinting by preferentially moving so as to be close to that object, approaching that object over other objects.
So, now you can run a study like a preferential looking study. But the indicator response is approach to this object, not looking.
So, they rear a chick with a single moving object, say that red cylinder. And then they can show that the chick will subsequently approach that object because if the chick sees the object-- there's two screens-- if the chick sees the object move behind one screen and then is released, they'll go follow the object behind that screen, showing, by the way, that they too represent these objects as continuing to exist when they're out of sight. And they'll do that in either direction. So that's background.
So, then they were able to do the following study. They did one thing different. In the typical imprinting study, the chick is able to move around the object, and touch it, and peck at it. But, of course, that could give information for solidity.
So, what they do is they present the same object. But the chick is in a room where every surface is either completely black-- it's got a black floor and three black walls-- or a front surface that's made of Plexiglas.
So, through the Plexiglas, the chick can see the object on the far side of the Plexiglas. But if he tries to get to the object, he's going to run into the Plexiglas. OK?
So, the idea is the following. This chick is given no information that objects are solid. That if you bump into them, they will resist your touch, OK? The only thing that's experienced as solid is either total blackness if it's the floor or the other walls that the chick bumps into, or else this transparent surface that you can't see at all, that's keeping them away from the object.
OK, so they're being raised with no experience. They only see this one object which moves around. They never see it in relation to any other object. And then they run the following experiment.
There's three phases to the experiment. In the first phase, they get to see that this object can move behind either the screen on the left or the screen on the right. And then when the screens are removed, the object will be wherever it moved. They get to see that, though they don't actually get to go follow it.
That's point one. Point two, in the second phase, the two screens are there. The object starts out midway between them. And then all the lights go off, or a screen comes down, or something. And when it goes up, the object is out of sight.
And then the two screens are removed. And half the time it's on the left. And half the time it's on the right.
So the idea is now they're learning, OK, this object can move in the darkness. And when it does, it will either be behind the screen on the left or the screen on the right. You won't know which one, right?
And then they run the critical experiment. And in the critical experiment, the object, the imprinted object, mom, is midway between these two screens. The lights go off. But when the lights come back on, the screens have been rotated backward to different degrees.
And the question is, will the chick use the degree of rotation to infer that mom could only be behind the screen that's rotated an amount that's consistent given her size and position? And, in fact, over a whole series of studies they showed that the chicks do that, and will systematically look for mom in a place that's consistent with solidity in the Baillargeon Rotating Screen kind of study.
So, how many people think the concept of innateness is a meaningless concept? I would say most people in the field do. They say oh, innateness, this is an incoherent notion.
I think that in the field of cognitive development, claims of innateness are straightforward, empirical claims. Here's what it is to say that knowledge is innate: knowledge is always about something, and it's innate if it's present and functional on first encounters with the thing that it's about.
In this case, knowledge of solidity is about the way in which objects interact with one another when they move. If an animal exhibits that knowledge the first time they see one of those interactions, then I think you have evidence that that knowledge is built in, somewhere and in some way, into the cognitive system.
Anyway: unique, or learned over the first five months? Probably not. OK, here's the Stahl and Feigenson reading. I really wanted to get to this. This came out this year. And I highly recommend it.
OK, one question that's been asked a lot about both studies of animal cognition and studies of cognition in infancy is are we talking about representations that are explicit and accessible as a guide to further reasoning? Or are we talking about representations that are just so encapsulated, like the baby is just looking at these events where the screen rotates and thinking, "doesn't compute?" I mean, there's something wrong. Look longer, something wrong. Or is this more flexible knowledge?
And for the longest time, I thought this was a question we were never going to get to the bottom of. I just didn't know how we could actually figure out whether babies' understanding of objects is at all continuous with our own communicable, stable, explicit understanding of objects.
But I think Stahl and Feigenson close in on it in this reading. And since it got posted so late in the day, I'll go through it.
What they did is the following. They took two classic looking time studies, one from Baillargeon's lab, a study of infant sensitivity to support relations. And what they did in this study is you've got an object that's sitting on a little pedestal. And it gets pushed, either from one location on the pedestal to another location on the pedestal, or from a location on the pedestal off the pedestal, where it hangs in midair, violating support. And earlier studies had shown babies look longer at the support violation by, like, five months of age, I think, or seven months of age, something like that. In these studies, the babies are all 11 months old. So they're way beyond the point at which you'd expect to see this.
And then the other study was the solidity study that we had run a long time ago, similar in logic to the Baillargeon ones. But a different physical setup, where you've got an object that's moving across a stage. It goes behind a screen, and there's a barrier-- there's two barriers there.
And the question is, where is it going to come to rest? And the correct answer would be in front of the first barrier because it can't go through a solid object. And babies either see that or they see the event that violates that. And they look longer at the violation.
OK, that's all the earlier work. Now, in this study they didn't use looking time as an indicator measure. Because they were dealing with 11-month-olds who can actually reach for objects and manipulate them, they didn't have to do that.
So, what they did instead is they presented each of these events for a relatively short period of time. And then they gave infants an opportunity to explore the objects. And they presented both the object whose behavior the baby had seen, and a novel object whose behavior the baby hadn't seen. And the question is how much time they'll spend exploring those objects.
So, here is exploration time. In blue is the case where the object did the expected thing. In red are the cases where the object did the unexpected thing, violating solidity on top, violating gravity on the bottom. And bars to the right are cases where the infant preferentially explored the object they had previously looked at, the one seen in the event. Bars to the left are cases where they explored the new object.
So, for the solidity case where the object acted in the consistent manner, they weren't so interested in that object. They went for the new one. But in the case where it behaved in the inconsistent manner, they went for the old object over the new one. So, their active exploration is being guided by what they had taken in the earlier events.
My favorite finding came from a further analysis they did, where they asked, OK, what did the babies do with these objects? And they specifically coded for two behaviors.
One of them was picking up an object and banging it on the table. And the other was picking up an object and releasing it, OK? And what they did depended on what violation they had seen.
So, the ones who saw the solidity violation banged the thing on the table. And the ones who saw the gravity support violation picked it up and released it. So, this really looks like kind of guided rational exploration of a set of events, right? Now, these are 11-month-olds. That doesn't tell us that two-month-old infants are already processing events in this way. But at least by the end of infancy, it's looking like the kind of knowledge that the babies are exhibiting in these studies is kind of continuous with what we would expect we ourselves would probably do.
I'm going to skip this business about cornerstone of mature reasoning. There's lots of evidence that this object system actually captures numerical information, and that when kids learn about natural number, they do so by bringing that system together with the system of approximate number representation. But I will not get to social cognition at all if I tell you that story. So, take it on faith.
I talked about object representations both because they're so simple and fundamental, and because there's lots and lots of work that bears on them and that provides evidence that there is this early developing system that has these properties. I think there's similarly rich evidence, actually, for three other systems-- two that capture geometric, spatial information about the world, and the approximate number system, which, together with the object system, captures numerical information.
Each of these systems is limited. In each case, what we find is some abilities that we have as adults that are shared by young infants and by inexperienced animals. And then a huge number of abilities that we have as adults that these systems don't capture. Even for the systems of geometry, there are basic geometric properties that aren't captured by each of these systems.
And anyway, so they're highly limited systems. They're shared by other animals, as I said before. Each system is distinct from the others, and you can see that in three different ways. I don't have time to go into details. But just to give you an overview of the kind of evidence that would show you this, they capture different properties from one another.
So, you can't ask a question like, "are babies sensitive to distance?" You have to ask-- you have to specify which subsystem you're talking about. If you give babies a shape discrimination test with objects, they will be insensitive to absolute distance. But if you give them a navigation test, they'll be sensitive to it.
So, the systems are capturing different properties. They're operating on different features of the environment. And most deeply, they're solving different problems. For these two geometry systems, one of them is solving the problem, where am I? How do I get where I want to go?
And the other system is solving the problem, what is this? What is this thing that I'm looking at? And can I use information about its shape to answer that question?
The systems are all innate in the sense I tried to exhibit before-- present and functional on first encounter with the entities that they serve to represent. Sometimes you can show this with newborn infants. In studies of number, you can. Sometimes you have to go to controlled-reared animals.
They continue to function throughout life. And so, you find them in people in all cultures of the world, with and without education. And we construct new concepts by combining together their outputs. And very roughly, as I said, I think we combine the outputs of the object system with the number system to get to representations of exact, large numbers-- the counting numbers, or integers. And we combine the two core systems of geometry to get to abstract geometry, which serves to represent both distance and angle, and crucially, the relationships between them that young infants seem to be quite blind to.
And the evidence is that you see kids making this step toward more abstract systems of knowledge as they master language and other symbol systems for geometry. Spatial symbols are also really important.
But I want to get to core social cognition. And I want to ask what's going on in that domain. Now, I think almost everybody believes that humans have unique, innate talents for social cognition.
OK, this belief comes in many forms. You get Mike Tomasello arguing that what distinguishes our species is our predisposition for sharing and cooperation, which we have and no other animals have.
There are people who argue that what distinguishes us is the complex social networks that we develop, where we track individuals and their relationships to one another over much larger social networks than any other animal does.
There are people who argue that what's special about us is the way in which we learn from others, our kind of-- and Tomasello has argued this as well. Our predisposition for socially-guided learning, or even predisposition for pedagogy, for teaching other people. And for taking in information from other people on the assumption that their intention is to teach us. And that we use what we can infer about their pedagogical intentions in learning things.
And a precursor to all of these ideas is the work of David Premack, who died two or three weeks ago. A great psychologist who was really interested in this question, what's unique about human cognition. And thought that the answer was going to be found in our social nature.
Now, there's no question there's been lots of research showing that by the time kids are about a year of age, they're doing all kinds of things that other animals aren't doing, including pointing at things, sharing attention with people. So, they see an object and you're there, they want you to see that object. They look back and forth between you and the object, and try to get you to look at that object.
If you're Mike Tomasello and you're running this really cool, simple experiment where all you do is you're really friendly with the baby. But you refuse to look at the object that they're trying to get you to look at, they go crazy. You know, they really want you to look at that object. There's no question all of that is going on.
The thing is, though, it all starts in the second year of life. None of that is happening in year one.
All right, so, for me, the question is where are all these abilities coming from? And what will we find when we look harder at the first year of life and ask what kinds of social cognitive abilities we see there?
OK, so that's what I want to do. Oh, and by the way, if you think-- as I'm inclined to-- that what makes us special is our combinatorial capacity and that that, in turn, is connected to natural language, then you really need to look in the first year of life. Because of all of the recent evidence showing that language learning gets going extremely early, OK?
So, the old story is kids get into the business of language learning when they're about a year of age. That's when they start saying things that other people can understand. But we now know that long before they start saying things that other people can understand, they're processing not only the sounds of language, but also critical aspects of the meanings of the speech that they're hearing.
So, at three months of age they have this notion that words connect to objects. If you show, as Sandy Waxman has done, three-month-old infants a succession of pictures of dinosaurs, and then you show them a new dinosaur and a fish, they will look equally at the new dinosaur and the fish.
But if, each time you show them a dinosaur, you say, "look, a dax," they will form the category of dinosaurs, and look longer at the fish. And if, instead of saying "look, a dax," you play the speech in reverse or you play tones or music, you do not get the effect. It's specific to speech.
So, they're connecting speech to objects very early. She doesn't think they're learning the meaning of the word dax. But they are predisposed already at three months of age to connect the speech to the object.
By six months they've learned a bunch of word meanings-- more and more, it's turning out, as experiments continue. And by nine months they're starting to respond in distinctive ways to expressions. This is a point at which-- I told you in the Xu and Carey kinds of studies, if a truck comes out from behind a screen and a duck comes out, the babies don't know if it's one object or two. But if you say "look, a truck," "look, a duck," or if you say "look, a dax," "look, a blick," now they expect two objects. If you say "look, a blick," "look, a blick," they expect one object.
OK, so, they're using the speech already to connect to objects. And by 12 months of age-- this is the development that I get most excited about-- they're starting to combine social speech with talk about objects. This is where they're starting to understand when people say things to them like "look baby, a duck," combining the social address to them with the talk about the object.
The question then is, what are the social cognitive abilities of these infants, you know, before their language has entirely come together? What do we see? What I think we see is an interesting mix of abilities and disabilities, islands of competence in a big sea of incompetence.
So, one much discussed study shows-- I'm not going to go through it in detail-- that babies who are presented with an agent who acts on objects will take account of what's visible to that agent, what's potentially perceptually accessible to that agent, what could be present in that agent's visual field, in making predictions about what that agent is going to act on.
All right, so, if the agent can see that an object has left the scene, they'll make different predictions about his action than if he couldn't see that that object had left the scene. There's also evidence that infants as young as three months do something that sometimes gets called social or moral evaluation.
So, show them events in which one character, that red guy, is trying to climb a hill. And a nice yellow guy helps him up the hill, kind of pushes him up the hill, versus a mean blue guy who knocks him down the hill. The babies will show a preference for the yellow guy over the blue guy if they're given a test where they get to either selectively look at one of the two or they get to reach for one of the two. They'll go for the guy that we would describe as helpful. And they do that in a whole bunch of different situations.
So, this makes it seem as if babies have these really fancy social cognitive abilities. But in other situations, babies do things that seem to fly in the face of that. So, let me give you some examples.
I said babies are sensitive to the perceptual accessibility of an object quite early on. But they don't seem to understand that other people see objects. So there's all kinds of tasks that they fail. They're sensitive to direct gaze. If somebody is looking directly at them, they will look back relative to somebody who is looking away.
However, if somebody turns and looks at an object, they're just as likely to follow their gaze if their eyes are closed or they're wearing a blindfold as if their eyes are open. So, they don't seem to be sensitive to gaze in that situation.
What's more, if there's two objects present and a person turns to look at one, they don't encode that person's action in relation to the object. If the object moves, they don't expect the person's gaze to move along with it, for example. They don't get that right until 12 months of age.
And finally, studies we did that really surprised me, we were sure that if you presented babies with two objects and you turned to look at one and emoted positively to that object, the babies would be more surprised if you picked up the other one than if you picked up that one.
We could not get the result until 14 months of age. The younger babies equally expected you to pick up either of these two objects despite the fact that all of your attention and focus was on one of them and not on the other.
So, it looks like there's just this really limited notion of seeing as visual access. But not a kind of full blown notion of seeing objects, which I think is showing a real limit to infants' mental state attributions in the first year.
Now, in the helping situations, a problem here is that it looks, in the situation where one person is helping another, as if the babies are evaluating the behavior of that person with respect to that person's second order goal. I mean, their goal was to foster the goal of the first character.
But babies can't reason about second order goals even for their own actions or a single person's actions. So, if they see somebody open a box to get an object that's inside-- they open the box and then they reach for the object that's inside-- and you ask the baby, in effect, what was the goal of that person, by varying which object is inside versus where the two boxes are, they represent that action as directed to the box, not to the object that's inside. They get the first order goal. They don't get the second order goal.
So, this kind of raises this question, where babies are looking really smart in a few of these studies. And then they're looking incredibly dumb in a whole bunch of other studies. And I think it really-- this is an opportunity.
When you see this kind of uneven pattern of performance, it can be a real clue that you're dealing with a system that's a highly impoverished version of our own social cognitive abilities. And maybe it gives us a position to learn what our deepest, most fundamental social cognitive notions are by taking apart where the infants are succeeding and where they are failing.
So, let me see if I can do that. Here's a first hypothesis. I think babies have two separate systems for representing other people and their actions. One system represents people as agents who act on objects. And the other system represents people as social beings who engage with other social beings, OK?
Now, I'm not saying they think there's two kinds of people in the world. There's one kind of person out there. And sometimes they're acting in a way that the baby can analyze as the actions of an agent. Sometimes they're acting in a way that the baby can analyze as the engagement, social engagement, the social gestures of a social being, OK? That's the hypothesis.
And, like the other core systems, my hypothesis will be that these systems are innate and universal. They are radically limited compared to our notions of agents and social beings as adults. They're distinct from one another. They're shared by other animals. And they don't, by themselves, account for our unique cognitive achievements.
But somewhere between around 10 months and 12 months, babies may come to combine these two systems together into a new notion of beings who are both agents and social beings, and whose actions, at one and the same time, are directed to objects and are social. Call that the notion of persons, combining these two abilities.
And I think this productive system may be unique to humans. We may be the only creatures who have a concept of persons whose actions are both social and instrumental at the same time and whose mental states are combining information from these two systems as well.
I'll do one slide on agents, OK? What do we know from studies of infants about agents? We know a lot from studies of infants who are about six months of age. There is less work on younger infants and less work on controlled-reared animals. But what I've put on the slide is stuff that I think is empirically supported, actually, in younger infants as well, though it hasn't been studied as much.
First, that agents engage in self-propelled goal-directed motion. So, they move on their own and they direct their motion to goals. They act on things that are perceptually accessible to them, not things that are inaccessible.
They act efficiently. So, if there's an obstacle in the path, they'll go around it. But if the obstacle is removed, they will take a straight path to that object. They make things happen. So, there are these beautiful studies by Sabina Pauen where first she shows babies, side-by-side, a ball and an inert stuffed animal. And babies look about equally at the two.
Then she puts them together. And this is actually a trick toy that kind of moves around in this jerky, agenty, animate-looking way. And they're both undergoing exactly the same motion because they're connected to each other. And then she takes them apart again.
And after seeing the two moving together, when she takes them apart, now the babies look primarily at the animal. Now, in fact, the mechanism that caused the motion was inside the ball. But you see it, and the babies attribute the motion to the guy that's got the animate features, and also attribute to him the power to cause the motion in the other object.
So, when you take them apart, the babies don't expect the ball to move on its own. But they do think that the animal could move on its own.
And, of course, my favorite study of infants' attribution of causal powers to agents is work that Rebecca Saxe did some 10 years ago showing that even if you don't show babies an agent, you just show them that an inert object has moved across the scene, they'll infer that there's an agent out of sight somewhere who set that thing in motion.
What about the social system? Well, I told you babies are sensitive to direct gaze. So are monkeys. The same experiments have been done in monkeys. And they show the same sensitivity to direct gaze that babies do.
Babies like to engage in face-to-face interaction. So do infant monkeys with their mothers, and chimpanzees-- those aren't monkeys. Yeah, it's been done both with chimpanzees and with monkeys. They all like direct gaze.
This sensitivity to direct gaze falls off later in development for non-human primates. But initially, if you compare young human infants to non-human primates, you see the same patterns of sensitivity to gaze in both species.
Infants also imitate other people. So, if Andy [INAUDIBLE] sticks out his tongue at a baby, babies will tend to produce that same tongue gesture. The imitation is subtle. It's sometimes hard to get. But it's now been shown for, like, 40 years. And I think the field is pretty much convinced that this is a real phenomenon. It exists.
It's not unique to humans either. Infant monkeys and chimpanzees both engage in the same patterns of imitation that human infants show. So, we're social. But so are they.
And the final thing that I think we know about infants in their spontaneous interactions with other people and in laboratory experiments where that's been brought into the lab, is that infants will move their attention in the direction in which another face moves its eyes.
So, in these studies, the babies see a photograph or a schematic drawing of a face that's looking directly at them. And then discontinuously, the eyes of the face move either to the left or to the right. And then the face disappears, and an object appears either on the left or on the right.
And the babies will get their eyes to the object faster if it appears on the same side that the face's eyes had moved to. So, the baby is shifting their attention in the same direction that the person is moving their eyes.
Now, this isn't sharing attention. It doesn't happen if the face remains visible. It also, interestingly, doesn't happen if the eyes don't start by looking at the infant. The eyes have to start by looking at the infant in order to get this effect in the first place, OK?
So, I mean, what I think is going on here is something like this: when the eyes of this other person are looking at the infant, the two of them enter a state of engagement. And then, when the movement of the other person's eyes indicates a shift of the other person's phenomenal attention, the baby undergoes the same shift. They move their attention with the shift of attention of the other person.
So, that leads me to the following hypothesis. That the core system has these three components to it. First of all, there are beings in the world who will engage with me. And they'll signal their engagement by direct gaze.
When they're engaged with me, we will align our actions. So, they'll do what I do. I'll do what they do. And the most speculative part-- but I believe it-- is the notion that when social beings engage with one another, they share phenomenal states of attention or emotion.
The literature on empathy is kind of a mess. But as far as you can make sense of it, it looks like empathy is there as early as you can look for it in young infants. And I think there's just this sense of emotion spreading between social beings in states of engagement-- that infants experience this themselves and expect it of other social beings.
So, that's the hypothesis. We're trying to test it. My favorite work to test it is work that Lindsey Powell has been doing for a number of years now, looking at infants' expectations for imitation in third-party contexts, where they're just looking at animated events in which guys interact with each other socially and then do or don't imitate each other, or events in which guys imitate each other and then either approach someone that they've imitated or approach someone that they haven't imitated.
We're using looking-time methods to ask, do infants expect social beings who are engaging with one another to imitate each other? And conversely, do they expect social beings who imitate one another to engage with one another in other ways, by approaching each other, and moving together, and so forth?
And so far, I think the evidence is consistent with the idea that there is this system for reasoning about social beings, and also evidence that it's distinct from the system for reasoning about agents and their actions on objects.
One reason to think it's distinct is that Lindsey can do exactly the same experiments with the same animated characters and just swap out whether the character's motions are responding to another social being versus responding to an inanimate object. And she gets completely different patterns of inferences.
If it's another social being, then what you've done is seen as imitation, and you expect approach to that creature. If it's not another social being, then it's seen as an instrumental action. And you expect maybe the person will do the same instrumental action again. But they're not going to show signs of a social engagement with that creature.
The summary would be something like this. There are two different ways that babies have of construing the actions of other people: either as agents who act on the world, or as social beings who engage with one another and share phenomenal states. OK?
And the idea is that these systems are separate from one another initially. They're both present and functional. They're both highly limited. And they're both shared by other animals.
So, my hypothesis is that there's nothing special about our starting-state social cognitive abilities, relative to the starting-state social cognitive abilities at least of other primates. We don't really know what's going on with more distantly-related animals at this point.
But I wanted to make this point. I think each of these systems supports a different notion of mental states. So, the agent system supports the notion of mental states as intentional relations to objects. They're about objects. And I think the social system supports the notion of mental states as shared phenomenal experiences.
Now, for us as adults, mental states are both these things. Our intentional relations to objects-- our beliefs, and hopes, and desires about objects-- have phenomenal content. And our phenomenal experiences incorporate information about the world; they're experiences of things in the external world.
These really go together for us as adults. My hypothesis is they are separate for infants. And there are little hints. Sometimes when you discover things in infants that aren't true of adults, you then discover there's still traces of that infant system in us as adults.
And I think some of the work in experimental philosophy points to this, in this case of these two notions of mental states. So, there's this work that asks, for example, is it OK to say Microsoft is planning to introduce a new product? That should sound OK. What about, Microsoft feels sad about the way people reacted to its new product. It sort of feels less good, right?
I mean, I think we kind of have this notion that there can be agents who act intentionally on the world, and plan, and cause changes in the world, but don't have shareable, phenomenal mental states, right?
And you can sort of maybe also get the intuition that you can have the second but not the first. Like, when you see a newborn baby and they start to cry, you might feel like you share their phenomenal state. Or if you smile and they smile back, they share yours-- without necessarily attributing to them any capacity to plan, change, or affect the world in any instrumental way, and so forth.
So, my disc is almost full. It's OK, I'm out of time here anyway.
OK, so these notions may stay somewhat separate for adults. But if that's right, then I think there's another way in which we could account for the uniquely human things that emerge at the beginning of the second year. And there really are dramatic things that emerge when kids start sharing attention and communicating about objects.
Around the beginning of the second year, kids start understanding what it is to give a gift or receive a gift. They start giving objects back and forth to other people. And it's like an object-directed action, but it has a social meaning, right?
We've done these studies where we wanted to know what babies' social preferences were. So, we present them with two people who maybe speak with different accents or who behave differently in different ways, who each, then, offers the infant an object.
If a baby is under 10 months of age, here's what they will do. They've got the two people. They've got the two objects being offered. They will look back and forth between the two people. Then they will look back and forth between the two objects, and they'll take whichever object they want.
Starting at around a year of age, 10 months to a year of age, you see a very different pattern. They don't go back and forth between the two people or between the two objects. They go from an object to a person, back to the object, back to the person. And then they decide whether they're going to take that gift that this Trojan stranger is bearing.
So, it's like the action on the object has a social meaning. And conversely, they start imitating actions on objects. They start acting jointly with other people and cooperating with other people, which, again, is combining instrumental and social actions. Though now, with cooperation, often it's the social action that has an instrumental consequence: you and someone else can do something together that neither of you could accomplish alone.
We don't know what causes the changes at the end of the first year. I'm giving you a hypothesis. We don't know if it's correct. I do think we know that language is developing at the right time to be playing this role. That kids are learning words early that can be mapped either to the agent system or to the social system for representing social beings.
And then, they're starting to combine them together into expressions and understand other people's combining them into expressions at about the time that we're seeing the emergence of these new social cognitive abilities.
So the timing is right. But that doesn't mean that the hypothesis is right. I mean, you could come up with a zillion others, including the reverse hypothesis, that there's some maturational reason why these social cognitive abilities are coming in. And that, in turn, is allowing the new steps of language learning to take place. Or there's some third factor. Or it's just a coincidence.
So, I think this is an area that needs to be studied in the same kinds of ways that we've studied numerical and geometrical cognition. Except that the studies would have to happen much, much earlier. And it's going to be way more fun to do them.