'Earth Experience Design' course with Gerry McGovern
Date: February 19 2022
Where: Online
Tickets: https://www.thisisdoing.com/courses/earth-experience-design-january-2022
About this episode
Is technology a good thing? That was the first question I asked Dr Sharon Richardson, a senior scientist and lecturer in geocomputation at the University of Zurich, where she applies data-intensive methods to improve understanding of, and assist in, human, societal and environmental challenges. Sharon also has a keen interest in exploring the potential and limits of AI in real-world decisions.
This transcript was created using the awesome Descript. It may contain minor errors.
Note: This is an affiliate link, where This is HCD make a small commission if you sign up a Descript account.
00:00:07
S1: Welcome to Worldwide Waste, a podcast about how digital is killing the planet and what to do about it. Is technology a good thing? That was my first question for Dr. Sharon Richardson, who is a senior scientist and lecturer in geocomputation at the University of Zurich, where she applies data-intensive methods to improve understanding of and assist in human, societal and environmental challenges. Sharon also has a keen interest in exploring the potential and limits of AI in real-world decisions.
00:00:40
S2: Yeah, it's an interesting one, though, isn't it? Because I think there's a statistic, and this could be completely wrong, I'd need to check it afterwards, but if we wanted to revert back to, say, a hunter-gatherer era, then we could only sustain around about a quarter of a million people on the planet with that lifestyle. Technology has enabled us to grow as a population. And that's why there's an interesting debate to be had: is that a good thing or not? Because, well, I'm here, and I wouldn't be if we'd said no to that. So I think technology ultimately is a good thing. It's enabled us to get to the billions, which otherwise would not be possible on this planet. And the frustration is that, used well, there shouldn't be the massive downsides that we're currently seeing. It's a little bit like the example of electricity. You can use electricity to make toast or to kill people. Do we then say, oh, well, electricity is too dangerous, we should never have invented it? Of course, that would not be particularly sensible. So it's how we end up using these things. There's a great book I read a few years ago, Physics for Future Presidents, and it talked through the kinds of scenarios relating to physics that a future president should think about: nuclear, climate, all the different scenarios. There was a statistic in it, and again, I'm quoting off the top of my head so I'd need to look at it to confirm, but it went through the fact that every decade the energy savings from technology were enabling us to do more, and that if we continued the progress of the last 100 years, by the end of this century the planet could support a population of nine to 11 billion, all living at the European standard of living, if we followed that progressive track.
And one of the examples given was quite a simple one: if we all still used the types of refrigerator that were around in the 1970s, I think in America alone it would require something like an additional 50 gigawatts of power-plant capacity to run them, compared to the modern ones, which are much, much more efficient. And so there were these scenarios where you thought, wow, that makes sense. What we didn't envisage is technology generating a new form of demand that is in itself creating a huge energy demand. Even if all the data centers become completely carbon neutral, which the Big Tech platforms are claiming they will, it's all the people then using those data centers as well. I mean, how many more TikTok videos do we need? How many can be classed as, say, art versus waste? Because there's an energy cost to every single one that we upload.
00:03:41
S1: I find it hard to understand the data centers, you know, saying that they're going to be carbon neutral, because are they not counting the servers themselves, each of which creates at least a ton of CO2 to be manufactured? I find a lot of these accounting approaches, how these entities account for being neutral, strange, in that they kind of don't account for the actual devices; they just talk about their renewable electricity. But most of the waste is actually in the device itself. It's in the server or the computer or the smartphone that is connected to these data centers. So it's a strange accounting model they have when they talk about being carbon neutral.
00:04:37
S2: Absolutely. You know what happens, for example, if one of the large enterprise players' next big client says, well, we're going to spend X million a year hosting things on your data center, but we're not pursuing carbon neutrality, and we're going to be using equipment that will cause a massive carbon cost? Will the platform say, well, we're not going to have you as a customer? That would be an interesting dilemma to see play out.
00:05:07
S1: When I was researching my book Worldwide Waste, one of the stats that I kept coming across in different variations was that 90 percent of data is not used, either three months after it's created or it's never used to begin with. We have these engines producing data, and in the vast majority of situations a huge amount of it is just of no value to begin with, so it never gets used. And some of it that may well be useful, well, we still don't have great capacity to use it. There's this sense that our capacity to create data vastly outpaces our capacity to use it. What would your experience be of that?
00:06:06
S2: I think a simple example that most people can relate to, it doesn't happen so much these days, although some people still do it, is that there used to be a phase where you would create a careful set of subfolders in your inbox to file away your emails should you ever need to read them again. And I would challenge how many people ever so much as looked inside one of those folders ever again. How many times did you even go back over an old email from six months ago, let alone a year ago? I would wager very, very few people. So yes, this is a huge challenge: much of the data being generated has minimal value, but it's knowing which bits are going to get used. That's the challenge. I don't know off the top of my head what the solution is, or whether you could create some kind of tax or cost to make people more careful about what data does get captured and stored, weighing its cost against its potential value.
00:07:04
S1: That's a really interesting concept. I've been thinking as well that we will need, at some stage, a data tax in some way, because there's such an explosion of data. I was talking to a physicist who was saying that there's a theory, not everyone agrees with it and it's not been totally proven, that data actually has a weight, a physical weight. It's a microscopic weight, but a weight independent of the hard disk or whatever it's stored on. And this physicist estimates that within maybe three, five or six hundred years, depending on the growth of data, this weight will become really evident and very significant in the Earth's mass, because we are creating such unbelievable volumes of data. And he was saying, we ain't seen nothing yet: with the Internet of Things it's going to explode exponentially. So at some stage quite soon, data will really begin to stress organizations and societies with the massive volumes in which it's being created. Is that something you see coming, or do you think those fears are unnecessary? What do you think?
00:08:35
S2: I think it's fascinating that someone is proposing there's some kind of actual weight to information. Beyond that, you could say there already is: it's expressed in physical form the minute you print it out on a piece of paper, or even arguably if it's visible on a computer screen, and certainly when it's stored on a hard drive there's a physical artifact involved. You know, we've both been around digital for a similar period of time, and I've been through that process of working with organizations transitioning from paper to digital, where one of the big arguments was that the sheer reduction in physical storage space required is a significant cost saving. So you could argue that dropping from paper down to digital was a massive reduction. But yes, if that digital then keeps exploding, will it eventually fill the same amount of space, but with reams more information than was possible in physical, paper form?
00:09:38
S1: I do see a danger with a lot of the AI we train, because we train with very poor quality data, and not only that, it's almost like a dump yard, like the bins we feed from. We're feeding in a lot of low-level data because we didn't clean up the data lake, and a lot of the time we don't know whether it's good or bad; we're just feeding in these massive quantities of data. Oftentimes we're not quite sure exactly what's happening in these machines or exactly how they're interpreting it, but isn't there a danger that if we feed them junk, we get a kind of junk AI? And a kind of hungry AI, AI models that actually demand huge quantities of data going forward, when what we want is to make them clean-living. This may sound like crazy talk, but in the sense that they're not designed to live off as little data as possible, they're designed to demand as much data as possible, and that creates energy-inefficient designs.
00:11:05
S2: Yeah, this is where I actually think we will have a breakthrough, where the design of AI changes fundamentally in the not too distant future. I think the most advanced AIs we've reached at the moment are ones that use deep learning and deep reinforcement learning. And they are fed massive amounts of data, or they run massive amounts of simulations as they learn scenarios, from what are often quite simple rules to begin with, building up to quite complex scenarios. We've seen that with the various 'Alpha' incarnations that Google's DeepMind has produced. But at the moment the general approach has been, you know, more data is better, so feed it. It's like training a computer vision algorithm: feed it as many images as you can give it, and it'll ultimately outperform the human, which has been proven. I think it was around 2014 when computer vision algorithms first outperformed humans at object detection in images. But simultaneously, as we're developing the computer science, we're learning more and more about how the brain works, too. The current incarnations of AI algorithms are very much based around neural networks; I think that's no surprise to anybody. But we are starting to have other breakthroughs in terms of, well, how do we process information? Because if I ask you a question and you answer it, I doubt you'll have a conscious awareness that you went and trained a classifier on your vast memory bank of previous experiences of raw data, and then from that classifier came up with a prediction. Instead, you quickly surfaced the abstract concepts that you just know, that you've learned through life, and then probably blended that with your most recent experiences.
And I think that's maybe the kind of breakthrough we'll see going forward, where an AI might have its initial training on quite a large dataset, but then keep training on new, smaller data. I think it will develop better parameters and algorithms that start to hold abstract concepts, rather than requiring a huge amount of raw data each time to make a change. To give an example, some of the DeepMind breakthroughs with playing computer games, I think it was the Capture the Flag scenario, where the computer performed brilliantly. But then if you make one change to the game, change a rule or change a structure, you've got to go right back to the start and retrain the classifier all over again, whereas a human will just look at it and go, yeah, OK, I can kind of envisage what impact that change is going to have, and adapt. At the moment, AIs don't adapt very well. And I think that's the breakthrough we'll have, where we'll see a fundamentally different approach to the amount of data. I think AI will learn to be much less greedy, but much more contextual: what's the current data in this scenario, as opposed to some archive that reflects some past that may or may not be appropriate to learn from?
00:14:08
S1: Well, that would be a major improvement. Sharon, do you see in conversations an awareness among AI researchers that they are gluttonous for energy? That their systems, in a climate crisis, are like massive gas-guzzling Hummers, and that this is not exactly the scientific model and approach we should be developing when we are worried that there may not be a planet that is livable for our children and our grandchildren? Is the AI community aware that its habits and its current scientific practices are more like the oil industry than a sustainable industry?
00:15:12
S2: Yeah, I don't think that phrase "data is the new oil" helps on that front whatsoever. I think there are people within the community that are aware of it and concerned. But there are a lot of different aspects raising concerns at the moment, because obviously ethics is a very, very big one. There's much more awareness now of the bias in AI algorithms, and the reality that an awful lot of them are merely reflecting the cultural biases that have existed in society for years, decades, even centuries, and we're seeing that reflected back at us when we build a machine off that archive dataset. So there's a huge community now really looking at ethics: should we have some kind of equivalent of the medical Hippocratic Oath when building AI, should there be constraints? Because at the moment there are none. I was writing an article recently, and I don't know if you remember, back around 2000, 2001, there was the Enron scandal, which brought down both Enron and their auditors at the time for being fraudulent, and it led to the Sarbanes-Oxley Act to try and prevent it from happening again. We haven't got anything like that yet to constrain AI, despite the numerous examples of abuses that we're seeing, within the criminal justice sector, for example, and within benefits. We saw it with the A-level algorithms fiasco last year, and the untold mental health issues that may have caused for some poor students who were completely unfairly judged by an algorithm that was known to have insufficient data to produce a reliable result. At the moment there are no checks and balances, there's no means of redress. There's a whole host of areas of AI that need tackling.
00:17:00
S1: Yeah, and you just reminded me, it wasn't AI, but the Post Office scandal was another one?
00:17:09
S2: Absolutely. That one's perhaps even worse, because it's so simple and distressing. There was a fault in the software, so it was misreporting. And despite numerous people having no prior history of any criminal activity and being considered upstanding citizens, it didn't matter: the computer says this, and we're going to trust the computer more than any character references or testimonials. They had no means of redress, and it's taken years to now realize that they were all innocent. Some people have died, some have been ostracized from their communities, there have been divorces. It is horrific that it's taken this long to acknowledge that it was a fault in the computer system.
00:17:55
S1: Yeah, it's amazing. And I think we seem to have this inbuilt part of our DNA to look for new gods. It feels in many ways like technology is the new god at the moment: the computer says it, and therefore it must be right. In the process, we externalize this sense that there's this higher thing that has the understanding and that we must follow it. And yet, as you say, it's just a malfunctioning computer program, and that's just such a big loss. That one wasn't an AI system, but when you consider bias, I remember early AI systems, particularly around minorities and particularly gender. I saw early AI systems that recommended that women were just having a panic attack but that men were having a heart attack, because there was a hundred years of data of male chauvinist doctors misdiagnosing women when they'd come in to visit: oh, you're just panicked, there's nothing wrong, you know? And those records of misdiagnosis were fed to the AI, and the AI just became a male chauvinist pig because that was the data.
00:19:35
S2: And I think the Microsoft Tay experiment is the current favorite one to bring up to demonstrate that AI will typically just reflect society back at us, so we might not like what we see. You might remember that Microsoft released the chatbot onto Twitter, and it started off being very friendly, and within a very short time it was turned into a foul-mouthed, abusive, sexist, racist, homophobic, you name it. If it could insult someone, it would. So, absolutely, I think we're a little bit in thrall to data sometimes, and I say that as someone who works with data all the time. When somebody gives me an opinion, I'm trying to say, show me the data on that, because opinions are easily formed, sometimes on the scantest of evidence. But that said, my favorite phrase is this idea that an overreliance on data is just as flawed for a decision as an overreliance on beliefs. The real world is messy. We've created an artificial environment for ourselves and then started building things on top, but human nature and social organizations are messy, ambiguous, sometimes chaotic environments. At the moment we're expecting clean, precise answers from AI, and that's not really fitting with reality. So AI can be just as wrong as asking somebody what, based on their beliefs, they think happened.
00:21:09
S1: And how do we counter some of that? Because I've spent years in web design, where A/B testing was held up as the best decision-making: let's judge how this page works based on A/B testing. I thought that was the holy grail, and, as you say, it's a lot more messy. One of the things that I've noticed, having been involved since the mid-nineties in designing really big websites, I've worked for Microsoft and a lot of other technology companies, is that the skills of what is sometimes called information architecture are actually in decline. Fewer people are skilled at it, and there's less interest in organizing information and coming up with classifications and structures than there has ever been, in my experience. And part of the reason is Google: why do I need a set of folders? I just search for Toyota. So, as data explodes, the essential skills to structure it are actually in decline. Am I crazy, or are you seeing the opposite? That's what I'm seeing: information architecture skills in decline during a period when they have never been more needed.
00:22:42
S2: No, I'm not going to disagree with you, and I'd add to it as well. I think data presentation is woefully under-researched and underappreciated at the moment, too. We're presenting outcomes from increasingly complex algorithms, and the standard criticism of somebody who perhaps doesn't understand what's coming out of the algorithm is, oh, well, we need to improve data literacy, we need to improve people's digital skills. I don't disagree that that would be beneficial for everyone, but there's also a responsibility on those designing the systems to make their systems understandable. I think we need a massive investment in human-computer interaction and user interface design as we start to have medical apps on our phones. There was an example given by Professor Yvonne Rogers of University College London at a talk just last month. She was talking about this exact challenge, saying, well, what if you've got an app where you take a picture of your skin, a model diagnoses it, and it says it's cancerous? What should it tell you? Should it just say, we think it's cancerous, contact a doctor? Should it tell you it's 20 percent likely, or 80 percent likely, to be cancerous? Do these differences matter to the person? There's been very little said about how the results from these algorithms are presented. And look at a lot of the headline cases of the poor use of AI, well, simpler than AI really, such as the COMPAS algorithm in the American criminal justice system, where effectively your color would determine whether or not you were judged likely to commit further crime. I mean, it was just a shocking failure. But that's partly because the information was presented poorly. Nobody thought about how to present the result of what is a complex series of steps that then produces this one number.
What does that one number say, and should it actually be one number? Or should it be a series of criteria with some kind of explanation? Now, I work a lot with spatial data, maps. And I saw a great talk once by Alberto Cairo, who does a lot of work in map visualization, saying, well, maybe we should make the maps harder to read. Maybe we should require people to work a little bit, so they don't just jump to a quick conclusion; they need to dig a little bit deeper into the visualization. So I think data presentation, as well as its organization, those are the skills that we desperately need more of as we rely more and more on these systems.
00:25:14
S1: It's a great point the professor made, that sometimes you need to make things harder. Like, is it always good to make it easy to buy so much stuff? There's a huge problem in e-commerce with returns: it's very easy to buy loads of clothes, but it's not easy to know whether they're going to fit you, or whether you should buy them at all. Or should you get them delivered in 24 hours, or in three days? Three days would probably be better for the planet, because it's a more sustainable delivery method. So with all this making things easy, convenient and quick, sometimes you do need to make it harder, or at least make people think more. These are big design challenges as well, and a lot of the time we don't really delve into them enough.
00:26:24
S2: Absolutely. A book that's quite popular at the moment, I think because he's been very vocal during COVID too, is The Art of Statistics by David Spiegelhalter, who studies risk at Cambridge, I think. He talks about the fact that people are becoming much more aware of Bayesian inference and its role in decisions. I'm very much in favor of Bayesian modelling over frequentist statistics, because it effectively enables you to update your beliefs; it tries to bring in this combination of having both data and beliefs, and how we resolve them. But it's quite fascinating, the fact in his book that this approach isn't allowed in court, because it's considered to have some sort of ambiguity, and that's not seen as a good thing: we want clarity, we want a clear decision. So maybe, as a species, we need to embrace ambiguity a bit more. I think we need to find ways to understand that there isn't one single answer in so many different scenarios, and to be better at understanding the different tradeoffs or the different consequences that each possible outcome is going to produce, and accept that we're going to pick one of them, knowing that it might be optimal in some ways but suboptimal in others, and we need to have thought about what that means. That can then cover so many different scenarios, including the concerns around ethics, bias, climate costs and so forth. At the moment, I don't think we do that. We just look at one possible outcome.
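The Bayesian updating Sharon describes, revising a belief as new data arrives rather than computing one fixed answer, can be sketched in a few lines. This is a hypothetical Beta-Binomial illustration added for readers, not an example from the conversation:

```python
# Beta-Binomial updating: hold a Beta(a, b) belief about a success rate,
# then update it as evidence (counts of successes and failures) arrives.
def update(a, b, successes, failures):
    # Conjugacy: a Beta prior plus binomial evidence gives a Beta posterior,
    # so updating is just adding the new counts to the prior parameters.
    return a + successes, b + failures

a, b = 1, 1                   # uniform prior: no initial belief either way
a, b = update(a, b, 7, 3)     # observe 7 successes and 3 failures
posterior_mean = a / (a + b)  # updated estimate of the rate
print(round(posterior_mean, 2))  # 0.67
```

Unlike a single frequentist point estimate, the posterior carries its ambiguity with it: a wide Beta means low confidence, and each new batch of evidence narrows it, which is exactly the "update your beliefs" behaviour described above.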
00:27:59
S1: I was in touch with some Dutch researchers who were researching self-driving cars, and they're going over their numbers again and again because they can't believe them. But the current numbers they have, and they've cross-checked them multiple times at this stage, are showing that if the Dutch fleet transferred over right now, or within the next couple of years, to self-driving cars, the entire fleet, which is not physically possible, but let's say it was, then, given that on current models each of these cars is throwing off a couple of gigabytes of data a second, it would demand something like 10, 20, 30 times the total electricity supply of the Netherlands to run that fleet. So, some of the solutions to save the planet, so to speak: the argument for self-driving cars is that they'll drive more efficiently, there'll be fewer crashes, there'll be better consumption of electricity, et cetera, et cetera. But the actual data required to keep them on the road, and all the infrastructure around that data, is vastly greater than any efficiencies gained in driving more efficiently. So some of the solutions I'm seeing coming out of the technological space are more likely to kill the planet before they save it. But what sort of data might help us, say, to walk more or get on the bicycle more? Because I think we don't need self-driving cars; we need more bicycles, we need more 15-minute cities like what they're trying to do in Paris. How can data help in those areas, where we can really get the win-win of healthy people bicycling, and data helping them to bicycle more, rather than these other areas which are increasingly seen as not necessarily going to help at all?
They may be wonderful technical solutions, but they may create far more pollution as a side effect than the value of the solution they actually deliver.
00:30:33
S2: Yeah, that's a big one to answer. I mean, the thing with autonomous vehicles in particular, I think, is that it's a little bit like the flying cars of the 1950s, and we still haven't got those. A lot of people are dialing back their predictions as to when we'll have such things on the roads, because of so many different complexities, the energy cost of running them being one. It's a big topic in its own right to answer that one, because a lot of it's ultimately going to come down to leadership visions: what kind of world do we want to see in the future? We're seeing a huge pushback against globalization at the moment; what's that going to mean? We could head off into some very, very big topics here, possibly beyond the realms that either of us can answer. So it is ultimately going to come down to some tough decisions getting made. I mean, in my research on cities, when I did the Smart Cities Masters, a few years ago now, overwhelmingly the best cities were the small ones, time and again. It wasn't the big metropolises; it was generally the smaller ones, like, in the UK, Edinburgh and Winchester, which I remember is quite small, with its cathedral, but still gets city status. And the same in America: in the smaller cities, compared to the huge metropolises, generally people are happier, healthier, and things work. But we're back to a scaling problem in terms of population size as well. The general assumption, based on the numbers, has been that we need the big cities to cope with big populations. I had a stat on my blog a few years ago, and I haven't checked how much has changed, but it was the fact that over half the population now lives in cities, and over half of that population lives in slums. And frankly, they are not concerned about electric vehicles or autonomous vehicles or AI or anything.
They're surviving. So this is where it becomes a challenge: which challenges are we trying to solve? I would be very interested to see when that study comes through, because there's a lot of research going specifically into vehicles, into what the impact would be if we did have autonomous vehicles. It's not just the cost of the data generation and running them; other models show that they would produce more trips, not fewer. There's an argument around at the moment that 40 percent of land in American cities, or thereabouts, I think, is car parks. So would autonomous vehicles suddenly release lots and lots of land for redevelopment? Well, actually, we could end up with more of these vehicles, so unless we get smart parking where we can sink them into the ground, or have skyscraper car parks and what have you, we've still got to put them somewhere. There are so many different challenges that we're a long, long way off solving. Just getting to the point where you've got one of them working on the roads without killing people is a good start.
00:33:47
S1: Yeah. And what do you think of the idea of maybe the best of both worlds, what Paris is really trying to drive: the 15-minute city, a kind of multiple cities within the city, where everything is within a 15-minute walk or a 15-minute cycle? I wonder how data could help that type of thing. I think some people are looking at the return of the milkman or milkwoman, so to speak, and the greengrocer, but in a different concept: a local provider with cargo bikes bringing all sorts of materials, vegetables and stuff, on a kind of constant system in a very localized area. That would probably require quite a bit of coordination, so to speak, in an app, getting everything fresh around an area. I wonder, are there ways we could have small data rather than big data? Because data, if we do it right, makes things work so much better. But maybe it's small data in these 15-minute-city ideas, around helping people to walk more or cycle more, cargo bikes rather than self-driving cars?
00:35:31
S2: Yeah, I mean, I'm quite a fan of the 15 minute city concept, to start taking more of a design thinking approach to cities. Well, that's always been there, but throughout the 20th century, obviously, the car has dominated an awful lot of the approaches taken. So you saw the real shift to zoning, where you've got your suburbs where you live, then you've got your central business district, and you've got a big retail centre district outside, you know, on the outskirts of town, so that everything involved requires a car. In terms of this, there's another vision that's starting to grow in popularity, the 20 minute neighbourhood, which is similar. And it's not just going back to how things used to be. You know, if you look at how London grew organically, it's actually a collection of villages that just joined up over time. And you still see that in the naming conventions and the different high streets in different pockets of London. So London as a city is quite a good example of lots of villages joining up into one giant city, as opposed to the sort of designed-from-day-one city that is just a massive, grid-based metropolis. So, yeah, I mean, data will become essential to those sorts of scenarios, because we're looking at much more hyperlocal distributed systems. You know, there's a big push for urban farming in these scenarios, because farming and agriculture is causing a huge impact on the environment and on the ability of the planet to sustain us. So if you're going to have much more urban farming, it's local, but it's got to be distributed very, very quickly to avoid waste. So absolutely, there's going to be a role for data in those types of scenarios. But it is going to require much more of a holistic systems thinking approach to connect all these ideas together into the kind of future life we want to see.
00:37:26
S1: I saw a fascinating study a number of years ago in China, where they increased crop production by something like five or seven percent, which is very substantial, over a 20 year period. And they did it with practically no new technology, but basically by creating big networks of farmers and policy makers. It started with multiple experiments in crop rotation, planting depth, et cetera, and they just tested different sites and different plantings. But they fed all of the information around to government people, the local farmers, et cetera, using the web, essentially. So nobody bought any new tractors or anything, but the sharing of the data about what was working and what was not working really reduced pesticides, reduced fertilizer and increased crop production. And I think, you know, in the cities, some will have growing underground, with infrared, et cetera. I think the sharing of what's working, what's growing, what's not growing, you know, with 10,000 farmers feeding into a central database, here's what I did today and here's how it grew, means we can learn from each other. This is where I think data is so exciting: it can really help produce better crops, grown more cleanly, developed more locally, delivered just in time. So this is where data could really help the planet.
00:39:16
S2: No, I mean, absolutely. And, you know, we've done well not to mention COVID until this point in the conversation. It's a good example of computational power and data enabling scientists to work globally to come up with solutions that previously took decades, or at least years: bringing that cycle down from decades to years, years to months, months to days, days even to hours. These are some of the possibilities that we see now, where it's normally a good thing to have access to the data.
00:39:50
S1: One thing that unfortunately didn't happen: very early on, certainly in Asia, the scientific consensus was that COVID was airborne. And yet Western societies in particular were very, very slow to accept that data, which had a massive impact on the spread of the virus, versus the conventional wisdom to wash your hands and that it's going to be spread through surfaces. The data was there very early that it was airborne, and yet we weren't able, as organizations, as societies, as medical establishments, to accept that data.
00:40:35
S2: Yeah, I think this was one of the interesting examples, again coming back to what we were talking about earlier, with data and beliefs having a bit of friction. Because, if I understand correctly, and again, I could be wrong here, a lot of the early modelling was built on the belief that it would behave like flu. And so it was modelled like flu, and the focus was very much more on droplet spread and fomites on surfaces, as expected from the general consensus on how flu spreads. And I think there was such a strong belief in that model that it almost overrode seeing what the data was telling them at the time. And we've seen these sorts of examples before: if you've got a strong belief that, well, this is the way things are, it's very difficult, even when you've got data, to push back on that kind of belief.
00:41:30
S1: Yeah. And isn't that the conundrum as well? We need the belief and we need the data, because if we just believe in the data, we can make almost worse decisions than if we just believe in belief.
00:41:45
S2: Absolutely. This is one of the big ones. It's back to this idea that we need these systems of redress, of ways of testing: is there something missing here? What's hidden? What's the data we don't have? It's like that classic example everyone goes back to: figuring out what was causing the weaknesses in planes returning from World War Two. Don't look at where the bullet holes are; look where the bullet holes aren't, because those are the planes that didn't come back. And we still struggle with that. There was a great example with DNA. When DNA testing became a thing, it became such a certainty that if your DNA showed up at a crime scene, you were there, because your DNA is there, so how could you not have been there? And yet there are countless examples now where it's been proven afterwards, and often after people have spent some time in prison, that there was cross-contamination. And there was one example of a gentleman who had an absolutely cast iron alibi. He was in hospital, but his DNA was detected at a crime scene, and he spent six months in prison before they finally got to the bottom of it, which was that he was transported in the same ambulance, and somehow some of his DNA ended up under the fingernails of the victim at the crime scene, transferred by the paramedic that collected the samples. You wouldn't think that was possible, but it happened, and it took six months, despite the fact this man had an alibi with ample witnesses because he was in the hospital at the time of the crime. And yet he was still arrested because they found his DNA. Such was the belief in the absolute accuracy of DNA testing that they didn't think about the potential for contamination. And I think this is why decision maturity needs to improve. We need to be able to stop ourselves and say, OK, I've got a strong belief, and we've got quite a bank of evidence here, but have we got some data missing? Have we got a blind spot we're not addressing?
00:43:45
S1: If you're interested in these sorts of ideas, please check out my book World Wide Waste at gerrymcgovern.com. To hear other interesting podcasts, please visit thisishcd.com.