Using Chatbots and Ancient Writing to Simulate the Cultural Attitudes of Ancient Civilizations

NEWS | 22 December 2024

Social psychologists could turn artificial-intelligence-powered tools like ChatGPT onto writings from past cultures. Will this help us study ancient civilizations?
Rachel Feltman: There’s been a lot of hype around artificial intelligence lately. Some companies want us to believe that machine learning is powerful enough to practically tell the future. But what about using AI to explore the past—and talk to members of long-dead civilizations?
For Scientific American’s Science Quickly, I’m Rachel Feltman. My guest today is Michael Varnum, social psychology area head and associate professor at Arizona State University. He’s one of the co-authors of a recent opinion paper that proposes a somewhat spooky new use for tools like ChatGPT.
Michael, thanks so much for joining us today.
Michael Varnum: My pleasure. Thanks for having me on.
Feltman: So you have this new paper, kind of a “ghost in the machine” sort of vibe [laughs]. Tell us a little bit about the problem you’re setting out to solve.
Varnum: Yeah, so I’ve been interested in thinking about cultural change for some time, and I’ve done a lot of work in that area. But we run into some limitations when we’re trying to get insight into the mentality or behavior of folks who are no longer with us. We obviously don’t have time machines, right? We can’t bring the dead back and ask them to participate in our experiments or run them through economic games.
And so typically what folks like me have to do is use rather indirect proxies, right? Maybe we get archival data on things like marriage and divorce or crimes, or we look at cultural products like the language folks used in books and we try to infer what kinds of values people might’ve had or what kinds of feelings they might’ve had towards different kinds of groups. But that’s all kind of indirect.
What would be amazing is if we could actually get the kind of data we get from folks today, just from, say, you know, ancient Romans or Vikings or medieval Persians. And one thing that really excited me in the past year or two is folks started to realize that you could simulate at least modern participants with programs like ChatGPT and surprisingly, and I think excitingly, replicate a whole host of classic effects in the behavioral sciences.
And so we thought, “Huh, if we’re able to do this based on these models created from the writings of modern people, maybe we could do this based on the writings of ancient people. This might open up a whole new world of possibilities.”
Feltman: Yeah, could you tell me a little bit more about some of those experiments that have replicated psychological phenomena using large language models?
Varnum: One of the more powerful ones set out to replicate 70 different large-scale survey experiments with simulated participants from ChatGPT, and they found that the results correlated at about 0.9 with what folks had observed with real human beings. And of course, this isn’t what anyone designed Llama or ChatGPT to do ...
Feltman: Mm-hmm.
Varnum: But in the process of making these models that are able to converse with us in very natural kinds of ways, they seem to have captured quite a bit of human psychology.
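To make the simulated-participant approach Varnum describes concrete, here is a minimal, hypothetical sketch: pose a survey item to a chatbot while it role-plays several personas, then correlate the simulated condition means with previously collected human means. The model name, prompt wording, survey items, personas and human numbers are illustrative placeholders rather than details from the replication study, and the sketch assumes the OpenAI Python client with an API key available.

```python
# A minimal, hypothetical sketch of the "simulated participants" idea: ask an
# LLM to answer a survey item as different personas, then correlate the
# simulated condition means with previously collected human means. The model
# name, items, personas and human numbers below are placeholders only.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simulated_rating(persona: str, item: str, model: str = "gpt-4o-mini") -> int:
    """Ask the model to answer one 1-7 survey item while role-playing a persona."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": f"Answer as this person: {persona}. Reply with a single integer from 1 to 7."},
            {"role": "user", "content": item},
        ],
    )
    # Assumes the model complies and returns just an integer.
    return int(response.choices[0].message.content.strip())

# Illustrative experimental conditions and made-up human means for comparison.
conditions = {
    "fair_offer": "You were offered $5 out of $10. How fair was the offer? (1 = very unfair, 7 = very fair)",
    "low_offer": "You were offered $1 out of $10. How fair was the offer? (1 = very unfair, 7 = very fair)",
}
human_means = {"fair_offer": 5.6, "low_offer": 2.3}  # placeholder values only

personas = ["a 34-year-old teacher", "a 52-year-old farmer", "a 21-year-old student"]

sim_means = [np.mean([simulated_rating(p, item) for p in personas]) for item in conditions.values()]
hum_means = [human_means[name] for name in conditions]

# Across many conditions, this correlation is the statistic such replication
# work reports (roughly 0.9 in the study mentioned above).
print(np.corrcoef(sim_means, hum_means)[0, 1])
```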
Feltman: And you mention in the paper that some folks are already using historical texts to train large language models, so what kind of stuff are they doing so far?
Varnum: So far these are just baby steps.
Feltman: Mm-hmm.
Varnum: Folks are just trying to see, “Okay, if we train a model based on medieval European texts, what’s its understanding of the solar system or, you know, medicine or biology?” And it has an incorrect number of planets. It believes in the four humors of the body.

So, so far, to my knowledge, no one has run these kinds of fine-tuned models through modern experiments or surveys, really, but I’m guessing that’s going to start happening soon, and I’m really excited to see what folks find.
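As a rough illustration of that kind of probing, the sketch below asks a historically fine-tuned model a few factual questions. The checkpoint name is a placeholder for a hypothetical model trained on medieval texts, and the example assumes the Hugging Face transformers library.

```python
# A hypothetical sketch of probing a model fine-tuned on medieval texts with
# factual questions. "example-org/medieval-latin-gpt" is a placeholder name,
# not a real checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="example-org/medieval-latin-gpt")

probes = [
    "How many planets are there in the heavens?",
    "What governs the health of the human body?",
]

for question in probes:
    answer = generator(question, max_new_tokens=60, do_sample=False)[0]["generated_text"]
    # A model trained only on medieval sources would be expected to answer from
    # that worldview, e.g. seven classical "planets" and the four humors.
    print(question, "->", answer)
```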
Feltman: Yeah. So one thing that came to mind when I was reading your paper was the inherent bias we see in the fossil record. You know, our sense of what life looked like in the past is influenced by what gets preserved, and that’s influenced by all sorts of factors, like climate and the bodies of the organisms we’re talking about. And, you know, I imagine that in most times and places in history, certain people were way overrepresented in written text. So how are you proposing that researchers navigate that to make sure we don’t get this really, you know, biased sense of what people were like?
Varnum: That’s a really vexing challenge for this kind of proposal ...
Feltman: Mm-hmm.
Varnum: Because, well, for most of human history no one was literate, right? Writing is relatively recent ...
Feltman: Right.
Varnum: And for the period in which some societies had writing, very few people actually knew how to read and write. Even fewer of them wrote things down that survive into the modern era. And so what you’re getting is data that’s gonna skew towards folks who are more elite, who are more educated.
Feltman: Mm.
Varnum: And we think there may be a couple ways to address this, and they’re imperfect, right? But maybe if we use them in combination, we can still deal a bit with the bias that’s gonna be baked into these models.
One way is we know quite a bit about how things like social class affect the psychology of modern populations ...
Feltman: Mm.
Varnum: So potentially we could fine-tune those models a little bit, or we could run them through experiments and surveys and then kind of weight their responses to try to account for that bias. You know, in some cases, we have other sources of historical record and analysis. To the extent that those capture, perhaps more broadly, the mentality or behavioral patterns of past populations, we could see if the results of these historical large language models align with those kinds of conclusions. But it is tricky, for sure. That’s gonna be a, a real challenge to overcome.
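As a concrete illustration of the weighting idea, the minimal sketch below re-weights simulated survey scores from different social strata toward a population’s estimated composition rather than the corpus’s. The strata, proportions and scores are hypothetical placeholders, not historical estimates.

```python
# A minimal sketch of re-weighting simulated responses to correct for a corpus
# that over-represents elite, literate authors. All numbers are hypothetical.
import numpy as np

# Mean simulated survey scores by social stratum (e.g., from persona prompts).
simulated_scores = {"elite_literate": 5.8, "urban_artisan": 4.9, "rural_peasant": 4.1}

# Share of each stratum in the written corpus vs. a historian's population estimate.
corpus_share = {"elite_literate": 0.70, "urban_artisan": 0.20, "rural_peasant": 0.10}
population_share = {"elite_literate": 0.05, "urban_artisan": 0.15, "rural_peasant": 0.80}

strata = list(simulated_scores)
scores = np.array([simulated_scores[s] for s in strata])

naive_mean = np.dot(scores, [corpus_share[s] for s in strata])        # skewed toward elites
weighted_mean = np.dot(scores, [population_share[s] for s in strata])  # closer to the full society

print(f"corpus-weighted mean:     {naive_mean:.2f}")
print(f"population-weighted mean: {weighted_mean:.2f}")
```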
Feltman: Yeah, and of course, that wouldn’t be a challenge that’s unique to using historical data. It’s a challenge we also see in training LLMs with modern data.
Varnum: Oh, absolutely, right? And, you know, one thing that inspired this idea is some work by folks like Mohammad Atari and Yan Tao showing that current large language models really look kind of WEIRD in the sense that they’re more closely aligned with the psychology of folks in Western and Anglophone populations than in many other parts of the world, and I mean, hey, that makes sense, right, given the training data is overrepresenting these societies. But it’s also kind of exciting because it suggests that if you had a different kind of corpus, then you would capture some of that cultural zeitgeist and some of the culturally specific mentality of the folks who produced it.
Feltman: Yeah, could you just tell people what WEIRD stands for in this context? ’Cause I think it’s a really good acronym [laughs], so ...
Varnum: Yeah, so this is an acronym that Joe Henrich developed about a decade and a half ago, and it stands for Western, educated, industrialized, rich [and democratic]. And it turns out only a minority of current human beings live in those kinds of societies.
Feltman: Mm-hmm.
Varnum: But depending on how you slice it, the vast majority of participants in behavioral science research come from these samples.
And this matters because it turns out culture affects how we think and act in a wide variety of ways, from the values that we hold to how much interpersonal distance we like when we’re in public, basic patterns of visual attention and cognition, rates of cooperation. It’s a very long list.
Feltman: No, and I mean, I can definitely imagine—you know, obviously it’s very evocative to talk about ancient history, but I can definitely imagine researchers trying to use, you know, some, like, 19th-, 20th-, even 21st-century text from, like, underrepresented groups to sort of reexamine, you know, these psychological studies that maybe left large swaths of the population out.
Varnum: Yeah, I, I think that’s an incredibly good idea. And in some ways, the less far back in the past we go, the easier it will be to do this kind of research.
Feltman: Yeah.
Varnum: So while it’s exciting to think about pushing the envelope very, very far back, probably the start—you know, the initial starting point will be: “Let’s look back 100 years or 150.”
Feltman: Mm, yeah—well, and speaking of which, you know, imagining for a moment that this idea totally takes off and we’re getting a bunch of sort of, you know, undead psychology projects going, what are some of your, like, dream use cases for this?
Varnum: So I, I do a lot of research that’s informed by evolutionary psychology.
Feltman: Mm-hmm.
Varnum: And sometimes we will run an experiment or a survey, and we’ll try to get data from, you know, every continent in the world, right, to see if some part of human psychology might be universal. And when we find it, it’s really exciting, but we’re making an inferential leap from saying, “It’s, you know, universal, and it makes adaptive sense,” to, “This is how people thought in the past—and especially in the deep past.”
And so being able to push that temporal window back ...
Feltman: Mm-hmm.
Varnum: You know, robustly, folks like [Douglas] Kenrick and [David] Schmitt and others have found differences between men and women in preferred sexual strategies: You know, do you want a large number of partners and uncommitted relationships, or do you prefer to have more exclusive relationships and fewer partners? And that seems to hold all across the globe, but we could have a lot more confidence in these things really being a core part of human nature if we started to see them from societies that lived hundreds or thousands of years ago, I think.
Feltman: Totally.
Varnum: The idea is forward-looking and kind of speculative, right? I, I don’t have one of these things on my computer ready to run, but Sachin Banker and colleagues recently published a paper where they had GPT-4 generate dozens of new hypotheses for social psychology research and then had actual social psychologists generate new hypotheses.
Feltman: Mm.
Varnum: And it turns out other social psychologists thought that the AI was coming up with way more compelling and probably true ideas.
Feltman: Mm, interesting.
Varnum: So we may in the future see AI used not just to simulate participants or code data but even to generate ideas, and you can imagine weird kind of closed loops like that, where folks like me might be out of a job.
Feltman: [Laughs] Well, hopefully not. I think, you know, there will always be room for that unique human factor. But I think it’s, it’s great to think about the ways that AI could, you know, really be an interesting tool for us, so thank you so much for taking the time to come and chat with us today.
Varnum: Oh, thanks, Rachel. This was my pleasure. I enjoyed the conversation.
Feltman: That’s all for this week’s Friday Fascination. We’ll be back on Monday with our weekly news roundup. And on Wednesday we’re chatting about something almost as spooky as AI ghosts: the psychology of Black Friday shopping.
Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper, Madison Goldberg and Jeff DelViscio. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.
For Scientific American, this is Rachel Feltman. Have a great weekend!