Axios Science

By Alison Snyder · Jun 22, 2023

Thanks for reading Axios Science. This edition is 1,578 words, about a 6-minute read.

1 big thing: Social scientists look to AI...

Illustration: Natalie Peeples/Axios

Social scientists are testing whether the AI systems that power ChatGPT and other text- and image-generating tools can be used to better understand the behaviors, beliefs and values of humans themselves.

Why it matters: Chatbots are being used to mimic the output of people, from cover letters to marketing copy to computer code. Some social scientists are now exploring whether they can offer new inroads to key questions about human behavior and help reduce the time and cost of experiments.

How it works: Large language models (LLMs), which power generative AI tools, are trained on text from websites, books and other data, and then find patterns in the relationships between words that allow the AI systems to respond to questions from users.
- Social scientists use surveys, observations, behavioral tests and other tools in search of general patterns of human behavior and social interaction across different populations. Studies are conducted on a subset of people meant to represent a larger group.
Details: Two recent papers look at how social scientists might use large language models to address questions about human decision-making, morality, and a slew of other complex attributes at the heart of what it means to be human.
- One possibility is using LLMs in place of human study participants, researchers wrote last week in the journal Science.
- They reason that LLMs, with their vast training sets, can produce responses that represent a greater diversity of human perspectives than data collected through a much more limited number of questionnaires and other traditional tools of social science. Scientists have already analyzed the word associations in texts to reveal gender or racial bias or how individualism changes in a culture over time.
- "So you can obviously scale it up and use sophisticated models with an agent being a representation of the society," says Igor Grossmann, a professor of psychology at the University of Waterloo and co-author of the article.
- "[A]t a minimum, studies that use simulated participants could be used to generate hypotheses that could then be confirmed in human populations," the authors write.
Zoom in: In a separate article, researchers looked at just how humanlike ChatGPT's judgments are.
- "At first I thought experiments would be off limits — of course, you couldn't do experiments," says Kurt Gray, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill and a co-author of the paper published last week in Trends in Cognitive Sciences.
- But when the researchers gave the ChatGPT API 16 moral scenarios and then evaluated its responses on 464 other scenarios, they found the AI system's responses correlated 95% with human ones (the kind of comparison sketched below).
- "If you can give to GPT and get what humans give you, do you need to give it to humans anyway?" he says.
Other experiments have begun to explore the use of "homo silicus." One study, not yet peer-reviewed, found an LLM could replicate the results of humans in classic experiments like the Ultimatum Game.

Yes, but: Gray and his co-authors are quick to point out their result is "just one anecdote."
- And, "it is literally just an AI system and not the people we are trying to study," Gray says. "The responses of people have been collected and somehow averaged in a very opaque way to respond to a particular prompt you give it — at any stage, there is room for distortion or misrepresentation."
2. ...but AI answers could have pitfalls

Some researchers say current LLMs are just parroting back what they are told.
- Gray argues people do the same thing: "You get your talking points from the media you consume, the books you read, what your parents tell you — you put it in context and elaborate and apply it to a situation. GPT does it on a large scale."
Algorithmic fairness and representation are also concerns for social scientists.
- Today's LLMs are supposed to represent the average across everyone, Grossmann says. But they leave out fringe and minority opinions, "especially people who don't engage in social media, are less vocal, or are using different platforms." Many languages also aren't represented, so key cultural differences aren't captured.
- AI systems run the risk of becoming echo chambers, Gray says. "GPT gives you an average and social psychologists are also very concerned with variability across groups, cultures, identities."
The fidelity of the algorithm itself is also key, Grossmann says.
- Algorithmic fidelity is a measure of how well the patterns of relationships among ideas, attitudes and contexts represented in models reflect those in human populations, one rough version of which is sketched below.
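As one rough, purely illustrative way to operationalize that idea (not a measure taken from Grossmann's paper), you could compare how survey items correlate with one another in human data versus LLM-simulated data:

```python
# Purely illustrative proxy for "algorithmic fidelity": do survey items
# relate to one another the same way in simulated data as in human data?
# All data here is randomly generated, so the score should land near zero.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 8

# Rows are respondents, columns are survey items (fabricated data).
human_responses = rng.normal(size=(n_respondents, n_items))
simulated_responses = rng.normal(size=(n_respondents, n_items))

# Item-by-item correlation matrices for each sample.
human_corr = np.corrcoef(human_responses, rowvar=False)
sim_corr = np.corrcoef(simulated_responses, rowvar=False)

# Fidelity proxy: correlation between the off-diagonal entries of the
# two matrices (1.0 would mean identical relational structure).
mask = ~np.eye(n_items, dtype=bool)
fidelity = np.corrcoef(human_corr[mask], sim_corr[mask])[0, 1]
print(f"Structural fidelity proxy: {fidelity:.2f}")
```

Real fidelity checks would be richer than this, but the intuition is the same: it is the structure of relationships, not just average answers, that has to match.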
The big picture: Those concerns about using AI as a proxy for human participants in experiments echo longstanding questions baked into social science itself.
- Social scientists have always struggled with bias and representation in the data they study. "What is your ground truth marker? How do you know if the sample is representative of the population or human behavior writ large?" Grossmann says.
- "These are questions for any social science study."
What to watch: Engineers spend time and energy trying to root bias out of large language models to represent "the world that should be," Grossmann writes.
- But social scientists are interested in "the world that is," full of nuances and biases that frame and shape any determinations and predictions about humans and how they might act.
- Grossmann and others are calling for researchers to be able to access raw models before they have been tuned to reduce bias. Models are expensive to train and many are moving behind commercial walls, sparking a big debate about access and transparency — and the emergence of more open-source LLMs.
The bottom line: Grossmann says silicon sampling won't be used for everything tomorrow, and Gray and his colleagues write that AI systems may never fully replace humans in social science studies.
- There is also likely to be some handwringing about "the purpose and meaning of it all," while more junior researchers are "just going to start using it," Gray says. "Then those things are going to collide."
3. India joins Artemis Accords for Moon exploration

Illustration: Annelise Capossela/Axios

India has signed the Artemis Accords, a U.S.-led agreement governing peaceful uses of the Moon and its resources, Prime Minister Narendra Modi announced today, Axios' Miriam Kramer reports.

Why it matters: "India has long seen itself as a counter-balance to geopolitical dyads," between the U.S. and the Soviet Union in the past, and between the U.S. and China today, the Secure World Foundation's Victoria Samson tells Axios.
- China and the U.S. are both aiming to send crewed missions to the Moon in the coming decade.
- By agreeing to the Artemis Accords, "India has indicated that it leans more towards the U.S. approach to efforts on the Moon," Samson added.
- Twenty-six other nations have signed on to the Artemis Accords.
Driving the news: "We have taken a big leap forward in our space cooperation," Modi said via a translator during a press briefing today with President Biden.
- NASA and the Indian Space Research Organization are also "developing a strategic framework for human spaceflight cooperation this year," senior administration officials said.
- And ISRO and NASA are planning to send an Indian astronaut to the International Space Station next year.
- The news comes after the White House announced this year that NASA would train an Indian astronaut at the Johnson Space Center in Houston. India has largely relied on Russia to help train its astronauts.
Go deeper.

4. Worthy of your time

Illustration: Annelise Capossela/Axios

Light pollution raises human health concerns (Miriam Kramer — Axios)

Here's how we could begin decoding an alien message using math (Matthew Hutson — Science News)

Frozen in time: Cryopreserved tissues, organs and even whole organisms come back to life (Warren Cornwall — Science)

5. Something wondrous

An umbrella cloud generated by the underwater eruption of the Hunga Tonga-Hunga Ha'apai volcano on Jan. 15, 2022. Credit: NASA Earth Observatory image by Joshua Stevens using GOES imagery courtesy of NOAA and NESDIS

The underwater eruption of the Hunga Tonga-Hunga Ha'apai volcano last year generated a supercharged storm and the most intense lightning event on record.

Why it matters: Observations of the eruption-induced storm could help efforts to one day monitor the hazards from volcanoes using real-time lightning data, the researchers who led the study write.

Details: The underwater eruption created a plume of ash, water and volcanic gas that reached 58 km, or about 36 miles, above sea level, into the mesosphere.
- It then expanded as an umbrella cloud and created powerful gravity waves that rippled outward from the plume, followed by flashes of lightning in a similar ring-shaped pattern that the researchers detected by combining data from sensors for radio waves and light.
- Lightning occurred high in the volcanic cloud, about 20 to 30 km (12 to 18.5 miles) above sea level, researchers report this week in Geophysical Research Letters.
- The storm lasted 11 hours and generated more than 2,600 lightning flashes per minute at its peak intensity, "an astonishing rate of volcanic lightning" and "the most intense lightning rates ever documented in Earth's atmosphere," they write.
The researchers propose that a combination of a very large eruption rate, a fast-expanding umbrella cloud and vaporized seawater in the plume contributed to the unique event.
- It "eventually formed electrifying collisions between volcanic ash, supercooled water and hailstones," per a press release.
The intrigue: The massive lightning event suggests lightning can be triggered in conditions beyond those typically seen on Earth.
- The volcanism in this eruption involved magma erupting through water, a type known as phreatoplinian. Until now, such eruptions had been known only from the geological record, according to the release.
- "It was like unearthing a dinosaur and seeing it walk around on four legs," Alexa Van Eaton, a volcanologist at the United States Geological Survey who led the study, said in the press release.
Big thanks to managing editor Scott Rosenberg for helping to edit this week's edition, to Natalie Peeples on the Axios Visuals team and to copy editor Carolyn DiPaolo.