Need a hypothesis? AI has one

Machine-learning algorithms seem to have insinuated their way into every human activity short of toenail clipping and dog washing, although the tech giants may have solutions in the works for both. If Alexa knows anything about such projects, she’s not saying.

Benedict Carey, The New York Times
Published: 24 Nov 2020, 09:28 AM
Updated: 24 Nov 2020, 09:28 AM

But one thing that algorithms presumably cannot do, besides feel heartbreak, is formulate theories to explain human behaviour or account for the varying blend of motives behind it. They are computer systems; they can’t play Sigmund Freud or Carl Jung, at least not convincingly. Social scientists have used algorithms as tools, to number-crunch and test-drive ideas, and potentially to predict behaviours, like how people will vote or who is likely to engage in self-harm, secure in the knowledge that ultimately humans are the ones who sit in the big-thinking chair.

Enter a team of psychologists intent on understanding human behaviour during the pandemic. Why do some people adhere more closely than others to COVID-19 containment measures such as social distancing and mask wearing? The researchers suspected that people who resisted such orders had some set of values or attitudes in common, regardless of their age or nationality, but had no idea which ones.

The team needed an interesting, testable hypothesis — a real idea. For that, they turned to a machine-learning algorithm.

“We decided, let’s try to think outside the box and get some actionable ideas from a machine-learning model,” said Krishna Savani, a psychologist at Nanyang Technological University’s business school in Singapore, and an author of the resulting study. His co-authors were Abhishek Sheetal, the lead author, who is also at Nanyang; and Zhiyu Feng, at Renmin University of China. “It was Abhishek’s idea,” Savani said.

The paper, published in a recent issue of Psychological Science, may or may not presage a shift in how social science is done. But it provides a good primer, experts said, in using a machine to generate ideas rather than merely test them.

“This study highlights that a theory-blind, data-driven search of predictors can help generate novel hypotheses,” said Wiebke Bleidorn, a psychologist at the University of California, Davis. “And that theory can then be tested and refined.”

The researchers effectively worked backward. They reasoned that people who chose to flout virus containment measures were violating social norms, a kind of ethical lapse. Previous research had not provided clear answers about which shared attitudes or beliefs were associated with ethical standards (for example, a person’s willingness to justify cutting corners) in various scenarios. So the team had a machine-learning algorithm analyse data from the World Values Survey, a project initiated by the University of Michigan in which some 350,000 people from nearly 100 countries answer ethics-related questions, as well as more than 900 other items.

The machine-learning programme pitted different combinations of attitudes and answers against one another to see which sets were most associated with high or low scores on the ethics questionnaires.
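For readers curious what such a theory-blind search might look like in code, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not the authors’ actual pipeline: the data file, the column names and the choice of a gradient-boosting classifier are all hypothetical. The idea is simply to train a model to predict an ethics label from the survey items, then rank the items by how much each contributes to the prediction; the top-ranked items become candidate hypotheses for a follow-up study.

```python
# Illustrative sketch only: a theory-blind search for predictors of an
# ethics score, in the spirit of the study described above. The file name,
# column names and model choice are hypothetical assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical survey table: one row per respondent, hundreds of attitude
# items, plus a binary label derived from the ethics questions.
df = pd.read_csv("world_values_survey_subset.csv")
X = df.drop(columns=["strict_ethics"])   # the attitude/answer items
y = df["strict_ethics"]                  # 1 = strict ethical beliefs

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")

# Rank the survey items by how much each one helps the model predict the
# ethics label; the top items are candidate hypotheses, not conclusions.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```

Note that a ranking like this only flags associations; as the study itself shows, the flagged item (here, optimism about humanity’s future) still has to be manipulated in a controlled experiment before it counts as an explanation.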

They found that the top 10 sets of attitudes linked to having strict ethical beliefs included views on religion, views about crime and confidence in political leadership. Two of those 10 stood out, the authors wrote: the belief that “humanity has a bright future” was associated with a strong ethical code, and the belief that “humanity has a bleak future” was associated with a looser one.

“We wanted something we could manipulate, in a study, and that applied to the situation we’re in right now — what does humanity’s future look like?” Savani said.

In a subsequent study of some 300 US residents, conducted online, half of the participants were asked to read a relatively dire but accurate accounting of how the pandemic was proceeding: China had contained it, but not without severe measures and some luck; the northeastern US had also contained it, but a second wave was underway and might be worse, and so on.

This group, after its reading assignment, was more likely to justify violations of COVID-19 etiquette, like hoarding groceries or going maskless, than the other participants, who had read an upbeat and equally accurate pandemic tale: China and other nations had contained outbreaks entirely, vaccines were on the way, and lockdowns and other measures had worked well.

“In the context of the Covid-19 pandemic,” the authors concluded, “our findings suggest that if we want people to act in an ethical manner, we should give people reasons to be optimistic about the future of the epidemic” through government and mass-media messaging, emphasising the positives.

That’s far easier said than done. No psychology paper is going to drive national policies, at least not without replication and more evidence, outside experts said. But a natural test of the idea may be unfolding: Based on preliminary data, two vaccines now in development are around 95% effective, scientists reported this month. Will that optimistic news spur more-responsible behaviour?

“Our findings would suggest that people are likely to be more ethical in their day-to-day lives, like wearing masks, with the news of all the vaccines,” Savani said in an email.

One common knock against machine-learning programmes is that they are “black boxes”: They find patterns in large pools of complex data, but no one knows what those patterns mean. The computer cannot stop and explain why, for instance, combat veterans of a certain age, medical history and home ZIP code are at elevated risk for suicide, only that that’s what the data reveal. The systems provide predictions, but no real insight. The “deep” learners are shallow indeed.

But by having the machine start with a hypothesis it has helped form, the box is wedged open just a crack. After all, the vast banks of computers already running our lives may have discovered this optimism-ethics connection long ago, but who would know?

For that matter, who knows what other implicit, “learned” psychology theories all those machines are using, besides the obvious ad-driven, commercial ones? The machines may already have cracked hidden codes behind many human behaviours, but it will require live brains to help tease those out.

c.2020 The New York Times Company