Here is part two of Forskningspolitikk’s interview with Dr. Jack Stilgoe, Senior Lecturer in Social Studies of Science at University College London. We talk about the negative and positive effects of research and innovation, how researchers and policy makers understand their role in society, and what this means for policy.
Per Koch, Forskningspolitikk
There is a strong focus on scientific excellence in research and innovation policy. Some university scientists, in particular, argue strongly that scientific excellence – most often understood as publication in top-tier academic journals – should be the main way of measuring the effects of science. So the best researchers, as defined by this way of thinking, would get the funding they needed. To me that is a little bit like disconnecting science from society completely. But it’s very hard for policymakers to disagree.
‘Excellence’ purports to be a discussion about quality as measured by publication in high impact factor journals and all the rest of it, but actually excellence is just the contemporary way in which we talk about that old social contract, you know, excellence is code for «give me money and freedom and I will deliver good stuff.»
That’s the old contract, and, as you quite rightly put it: I will deliver truth and I will deliver progress.
That’s fine, because we know that that’s an effective way to fund certain sorts of science. But we can’t pretend that that’s the only way to fund certain sorts of science, and also we need to be honest about it. You know when scientists say that this is inevitably the best way to support any particular approach to a problem, whether that’s climate change or transport or whatever, we have to say: «Okay, as with any other group we can see how you would say that, but this is our interest as policymakers or as business or as civil society. And therefore let’s negotiate these sorts of things and work out what a sensible well-rounded approach looks like.»
Excellence is a set of political claims about the interests and the power of one particular group. And it’s proven to be a very persuasive set of political claims. It’s the basis for the foundation of organizations like the National Science Foundation.
It runs through almost all science research policy documents that come from policymakers: This language of excellence or the idea that the old social contract is the best way to support science and innovation, and all I would say is that it’s just one way, and that it needs to be complemented with other approaches. Otherwise you risk narrowly reinforcing existing trajectories rather than trying to pluralise them.
I can see that this contract is coming under increased pressure, and you have pointed out some of the reasons why that is the case. People see that science – or at least technology – is partly to blame for climate change, to give one obvious example. You have also pointed to the moon and the ghetto problem, which is really about the inequalities that follow from the funding of research, both in the public sector and in private companies looking for profit. To put it bluntly: Rich white people get more out of this than poor black people.
Do you think that this awareness of the problems that follow from science and technology might be key to changing the way we look at it?
«The moon and the ghetto»: It was Richard Nelson who came up with that question in the 1970s at the same time as we saw a sort of growing awareness of the downsides of technology-led progress.
Silent Spring had been published a few years before. We’d seen the growth of the environmental movement and the creation of the Environmental Protection Agency. So people sensed that we needed more than just «invest in science and technology and it will deliver the good life».
I think awareness of those downsides is certainly a key. There is a problem, though, that comes with any new set of innovations, which is that they emerge with a set of claims that this time it’s going to be different.
Along comes agricultural biotechnology and says: «Well, we won’t have those problems because this is a sustainable technology that will help the world’s poorest to develop crops that are reliable and that will benefit those groups.»
And we have nanotechnology emerging with a grand set of claims about being able to restructure matter in whatever way we like, and how this will enable a radical rethinking of our economic models and lead to reductions in inequality. Each time, a new set of large, exaggerated claims is made about science and technology. There’s the sense that this time is going to be different.
And yet the same issues come up again and again, which is that technologies tend to benefit those people that are powerful and the risks of technologies tend to be imposed on those people that are powerless. And unless we take seriously those big sorts of analyses, then we risk just repeating our same mistakes again and again.
And I think that’s a very difficult message for innovators to hear: That there are systemic things that may be beyond their control. That may mean that the dream cannot be realized as it’s been constructed.
Can we become too pessimistic about this? I am thinking about how mobile phones and texting have liberated third-world farmers from their local value chains and given them increased incomes. In general, poverty is decreasing and health conditions are improving. These statistics tell a story of improvement driven by science and new technologies.
A quite reasonable response to the analysis about innovation and inequality would be to say that, OK, powerful people might benefit but, actually, the world’s poorest are benefiting as well. So even though the gap between rich and poor might be getting bigger, still the world’s poorest are benefiting from innovation and maybe that’s a price worth paying.
And that would be a conventional response and an opinion that might be held on the political right and I can completely see how that’s a reasonable point of view.
My own sense is that we should care about inequality as well, or at the very least stop kidding ourselves that science and technology are inherently emancipatory. We should say: yes, they do enable the build-up of power by some groups, and how you think about that depends on your political persuasion. But at least let’s be honest about that. Let’s not kid ourselves.
Let’s therefore get better, say, at anticipating the growing power of companies in Silicon Valley and ask, before we get to the stage of Facebook coming in front of a Senate committee: How could we have anticipated that earlier? Could we have asked: «Well, how might Facebook’s power build up: their control over news, over the advertising industry, over the information that people see?» And: «Do we like that?»
I think in terms of optimism or pessimism my approach might be characterized as pessimistic because it challenges some of the inevitabilities that are imagined by powerful people.
Still, in critically analyzing things and holding them accountable, I would say we can do better. I’m not saying that things haven’t been good for the people worst off, but I would still want to counter the arguments of people like Steven Pinker, who would say that substantial things have got better in the world and that we should therefore carry on doing what we’re doing.
You have talked about certain social groups that, in the face of technological progress, feel that they are left out. There are certain truths they do believe in and certain truths they do not believe in, and they might even come to the point where they say that they do not care about the truth: «I care about my own future.» That is a choice of values. And at that point you stop listening to scientists if the scientists tell you that you just have to lie down and surrender because your former life is over. We were not prepared for this. Should we have been?
I think we definitely should have been, as social scientists. I feel an acute sense of embarrassment that I and my discipline were taken by surprise about the Brexit referendum and about Trump’s election, because I feel like that exposed a gap in our understanding of society, which should be what our job is.
I also think that we as academics need to be more reflexive and ask how we are implicated in the last couple of decades of progress, such that some people have benefited and other people really have not.
Are we part of an elite that is not just making claims about truth but is also benefiting in other ways from technological progress? I think the discussion of truth and post-truth is an easy way for elites to escape some of those responsibilities: They can say that if only people understood the truth, then they would hold Donald Trump and Rudy Giuliani to account, and therefore everything would be fine.
I would personally like that to be the case, but I don’t want to blame powerless people for misunderstanding things. I want to blame powerful people for misleading people and making false claims in order to enact their own agenda. And that’s where I want to locate accountability.
I am more interested in the social contract about progress as a way of explaining that than I am in the social contract about truth and post-truth. It’s the old familiar question we face when it comes to climate change: «If only everybody understood climate change as we scientists do, then we would take the correct action and everything would be better.» And I think that’s a misunderstanding of the nature of the problem.
You could see situations in which you could get people to act on climate change regardless of whether they believe it is true. You also need to understand what threat people are trying to interpret when they say climate change isn’t happening. The threat they are trying to interpret is a threat to their way of life. It’s about progress. It’s not about proof. It’s dressed up in the language of truth in ways that we understand quite well.
You just need to look at people like Naomi Oreskes and her work on how doubt was manufactured around climate change to understand the fragility of the language of truth. But that doesn’t necessarily mean that, in order to win that argument, you just need to assert the truth even more forcefully. You need to understand what is actually being argued about. It’s about threats to people’s way of life. And there were a lot of people in America who saw climate change as a socialist plot to increase government control, rightly or wrongly: «The more Al Gore says it, the more convinced we are that actually this is about the government trying to control our lives.»
So this is really about policy makers and scientists trying to put themselves into the heads of people, seeing the world from their perspective, and then engaging in a dialogue based on that?
They should definitely do more of that: understand citizens’ needs and aspirations rather than trying to dismiss them, because I think it’s very easy for elites to find reasons to dismiss things.
And the oldest trick in the book has been what we in the social sciences call the deficit model, which is to say that the reason people don’t trust science and technology is that they don’t understand it.
All the evidence suggests that this is a very flawed understanding of people’s trust. It’s actually a very unscientific way of thinking about society, but it’s still a dominant means of justification for universities and certainly for academic elites.
Main photo by Per Koch.