The World in 2029
I'm on a mission to explore what the world might look like in 2029. The podcast features interviews with tech startup founders and researchers, addressing pressing issues like climate change, hunger, and disease. These changemakers are aiming for a better world in 2029. The future is better than you think!
The Second Half of the Chessboard
Keywords
exponential growth, sigmoid curves, AI, sustainability, technology, climate change, exponential mathematics, solar energy, governance, future predictions, trust in AI
Summary
In this episode, Lars Rinnan and Professor Rickard Sandberg discuss the implications of exponential growth in technology, particularly in AI and sustainability. They explore the differences between linear and exponential thinking, the challenges of AI adoption, and the importance of understanding these concepts for future advancements. The conversation also touches on the role of solar energy as a case study in exponential change, the evolution of technology through sigmoid curves, and the need for trust in AI systems. Ultimately, they emphasize the potential for a positive future shaped by these technologies if we can overcome fear and misunderstanding.
Takeaways
Humans evolved in a linear world but now face exponential growth.
Exponential growth can feel invisible until it suddenly accelerates.
Understanding exponential mathematics is crucial for predicting future developments.
AI adoption is often slowed by fear and misunderstanding.
Solar energy exemplifies exponential change in technology.
The evolution of technology can be visualized through S-curves.
AI has significant potential for sustainability if implemented correctly.
Governance often lags behind technological advancements.
Trust in data and AI systems is essential for successful implementation.
The future holds great promise if we embrace exponential thinking.
Sound bites
"Our brain is less adapted to a linear way."
"AI is a fantastic tool for sustainability."
"The future is better than you think."
Chapters
00:00 Welcome to the Future: Understanding Exponential Change
02:37 Linear vs. Exponential: The Human Perspective
05:43 The Chessboard of Exponential Growth
08:26 The Impact of Misunderstanding Exponential Growth
11:09 AI: The Tipping Point of Exponential Change
13:51 The Challenges of AI Adoption
16:48 Solar Energy: A Case Study in Exponential Change
19:13 The Future of AI: Stacked Sigmoid Curves
22:03 AI and Sustainability: A Dual Approach
24:45 The Energy Challenge: Balancing AI and Sustainability
36:05 Navigating the Complexities of AI and Sustainability
38:49 The Exponential Growth of Technology vs. Linear Governance
42:21 Understanding Exponential Growth and Its Implications
46:36 The Importance of Trust in Data for AI Systems
50:18 Envisioning the Future: AI and Sustainability in 2029
55:05 The Rise of AI Agents and Their Impact
57:33 Exploring Exponential Technologies and Their Future
Lars Rinnan (00:08)
So welcome to The World in 2029, the podcast where we explore how today's innovations are shaping our future. I'm your host, Lars. I'm on a mission to spread positive insights into how today's pressing issues, like climate change, hunger and disease, are being addressed by exponential technologies that most people don't know about. I've worked in the AI space for about 10 years and I have helped numerous tech startups, so this is a topic I know well, and one that's close to my heart. Today we're diving into one of the most misunderstood forces in human history: exponential mathematics. Most people fear that AI is moving too fast, but the real danger isn't that AI is exponential. The real danger is that we still think in straight lines. In other words, the problem is not the machines, it's the math inside our heads. And joining me is Professor Rickard Sandberg from Stockholm School of Economics, a mathematician and educator who works with applied AI and sustainability, and who uses exponential functions in his daily work. Rickard, it's great to have you here.
Rickard Sandberg (01:35)
Thank you very much Lars for inviting me to this podcast. I'm very much looking forward to this. It's a great topic.
Lars Rinnan (01:43)
Fantastic.
So let's start with the basic mismatch. Humans evolved in a linear world, but we now live in an exponential world. If you look at everyday life, like walking, aging, harvesting or production, how would you explain the difference between the kind of linear growth we intuitively understand and the exponential growth that we usually fail to grasp?
Rickard Sandberg (02:13)
Yeah, I think you brought up good examples immediately there, Lars. If you look historically at our evolution, most of the time we've actually been surrounded by things that are linear. You took aging as a good example: next year I will turn 54, and the year after that I will turn 55. It's very easy to understand, it's easy to predict, and it happens in a linear fashion. And even longer ago, say if you were hunting or harvesting, something like that, then you could expect to have the same harvest as last year, and you could expect to hunt the same number of animals that you did the day before, and so on. So pretty much everything surrounding us is actually linear. That means we are exposed to linear phenomena the whole time, and that our brain, more or less, is adapted to think in a linear way.
Lars Rinnan (03:18)
Absolutely, we see that all the time. And I know there's this classic chessboard and grains-of-rice example. Maybe you could walk us through that in simple terms, and help the audience understand why exponential growth feels almost invisible in the beginning, and then suddenly explodes in the second half of the board.
Rickard Sandberg (03:47)
Yeah, it's exactly like that. So now you have a little bit of a feeling that many of the things surrounding us today are linear. But then of course, there are also many examples of exponential growth surrounding us. AI, which we talk a lot about today as a technology, is of course a very good example of that. But with respect to this rice-grain example and the chessboard, that's actually just a consequence of multiplying things rather than adding things. So you know, Lars, that if I would ask you what is 12 plus 13, hopefully you will answer 25.
Lars Rinnan (04:33)
Hopefully.
Rickard Sandberg (04:34)
and hopefully
most of the audience as well. So this is pretty easy, but if you take the same numbers and switch to multiplication, so if I ask you what is 12 times 13? Then perhaps you answer 156, Lars, but I think all of us immediately understand that multiplication is actually more complicated than adding things. This is also part of evolution, and of how we're trained and capable of actually understanding math. So with this multiplication exercise in mind, that's exactly what happens with the number of rice grains on the chessboard. The story is like this: you have one rice grain on the first square, then two on the second, four on the third, eight, and so forth. In the beginning the rice grains are increasing with the number of squares on the chessboard, but it goes pretty slowly. But then suddenly, think about it like a magical threshold, it explodes. Suddenly you have more rice grains than you could ever have imagined. At the end of the day, following this scheme across all the squares on the chessboard, when you reach the last, 64th square, you have more rice grains than our civilization has ever produced. So you start with one, and in only 64 squares you end up with more rice grains than you have ever seen before.
Lars Rinnan (06:05)
Yeah, so you can understand that people actually misunderstand this and are very, very surprised.
Rickard Sandberg (06:21)
Yeah, I mean, I should not say this is a good thing, but having in mind that our brain is capable of reasoning quite well with respect to linear functions, I would say that our intuition fails big time when it comes to these types of examples and phenomena.
Lars Rinnan (06:45)
Yeah, yeah. Do you also remember the exact number of rice grains?
Rickard Sandberg (06:51)
Yeah, I studied this a little bit, actually a long time ago; of course, I remember the story from back in secondary school. But the final number of rice grains that you have on square 64, that is 9.22 times 10 to the power of 18. And to see how much that is: if you're halfway through the chessboard, standing on square 32, then you have about 2.15 billion rice grains. And what is a billion, if you don't know? That's 10 to the power of 9. So it's a huge difference.
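Rickard's figures are easy to verify: square n of the chessboard holds 2 to the power of (n - 1) grains, so square 32 holds 2^31, about 2.15 billion, and square 64 holds 2^63, about 9.22 x 10^18. A minimal Python sketch (illustrative, not from the episode):

```python
# Grains of rice on the chessboard: square n holds 2**(n-1) grains,
# because the count doubles on every square.
def grains_on(n):
    return 2 ** (n - 1)

print(f"{grains_on(32):,}")    # square 32, halfway: 2,147,483,648 (~2.15 billion)
print(f"{grains_on(64):.3e}")  # square 64: 9.223e+18
print(f"{2**64 - 1:.3e}")      # total across all 64 squares: 1.845e+19
```

The total over the whole board, 2^64 - 1, is roughly twice the last square alone, which is another counterintuitive property of doubling: each square holds more than all the previous squares combined.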
Lars Rinnan (07:39)
Yeah.
Yeah, it's amazing. And you talked about this, that even compound interest, you know, we all have, hopefully, some money in the bank with some kind of interest rate, usually quite low, at least in the countries we live in. But even compound interest is really hard for people to understand. We really can't grasp it.
Rickard Sandberg (08:10)
No. So then again, our intuition fails; we are not trained to understand it properly. But also, to be fair, maybe you need a pocket calculator or a computer to calculate the exact amount in a compound-interest example. Going back to multiplication, this compound-interest example is again an example of multiplication. Say you have an interest rate of 7.5% and 1,000 kronor in your account, and I ask you: if you reinvest this for 10 years, so you have interest on interest, that is, compound interest, how much money do you have in 10 years? It's 2,000 something, but you know that you need a calculator to get it. Whereas if I'm very...
Lars Rinnan (08:57)
Yeah, I don't know.
Yeah.
Rickard Sandberg (09:08)
conservative, like grandma, say, and only put money in a mattress. There is no interest in a mattress, I'd say. Then it's easy to understand that if you put 100 kronor in the mattress every year, after 10 years you have saved 1,000 kronor.
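The two savings strategies compared here can be written out in a few lines of Python; the numbers (7.5% interest, 1,000 kronor, 10 years) are the ones from the conversation, and the function names are just illustrative:

```python
def compound(principal, rate, years):
    """Balance after reinvesting the interest each year -- multiplication."""
    return principal * (1 + rate) ** years

def mattress(amount_per_year, years):
    """Grandma's strategy: no interest, purely additive."""
    return amount_per_year * years

print(round(compound(1000, 0.075, 10)))  # 2061 kronor: the "2000 something"
print(mattress(100, 10))                 # 1000 kronor
```

The bank balance more than doubles because each year's interest is earned on the previous interest as well, exactly the multiplicative effect our linear intuition misses.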
Lars Rinnan (09:26)
Yeah.
So do you think that if people actually understood this, they would save more money? Do you think so?
Rickard Sandberg (09:36)
At least perhaps only the older people. I don't know if anyone is saving money in a mattress anymore. But more honestly, I actually think that if you really trust the banks and the system, and if you really understand and think about the pension to come...
Rickard Sandberg (09:58)
So then, if you save money for that, and this compound-interest effect kicks in, and you can actually play around with that, then I think, yeah, you will like it.
Lars Rinnan (10:11)
Yeah, yeah, absolutely. And of course, we like to talk about AI. We have the same thing there: it felt like it arrived overnight. I heard this all the time, like in 2022, the 30th of November I guess it was, when ChatGPT was launched. People were so surprised: where did this come from? And we know that AI really dates back to 1956. So that's a long time ago. And we've all used it in products like Spotify or Google Maps for a number of years, I would guess at least 15 to 20 years, almost. But from a mathematical perspective, how does exponential growth explain this, let's say, sudden visibility of AI?
Rickard Sandberg (11:07)
That's a very good question, Lars. I mean, the mathematics is actually pretty clear. I think we can think about this in two ways. If you first try to reason about why we saw AI exploding overnight, then I think you have to have a user-friendly interface, so to say. ChatGPT became like a common tool for everyone, and that's why most of us actually felt that...
Lars Rinnan (11:32)
Mm.
Rickard Sandberg (11:41)
We have AI now and we are working with AI, and in principle everyone understood AI, at least ChatGPT's AI, if that is AI. But then from a mathematical point of view, I think you have to have a threshold. And you know that when you cross that threshold, you will see this as something that's growing exponentially. But it may take a while. In this example, it's from 1956 to, say, 2022. So we understand this as a linear development, but then suddenly you reach a threshold, a magical point, and from that point on, it takes off.
Lars Rinnan (12:10)
Hmm.
Yeah, so it's a tipping point really.
Rickard Sandberg (12:31)
So it's exactly that. Tipping point was the word I was looking for.
Lars Rinnan (12:36)
Yeah, fantastic. So what would you say are some of the, let's say, concrete consequences when people misunderstand these exponentials? What happens to our predictions? Do we make wrong predictions? Do we become more fearful because we don't understand it and think it's going so rapidly? What happens to decisions? Are we paralyzed? What happens, really?
Rickard Sandberg (13:11)
I think you're right on all your predictions, Lars. First of all, since we most likely think that things are developing in a linear way, including AI as a technology, when in fact it's nonlinear, it's exponential growth, the more you have, the more you get, it means that you underestimate. It means that you underestimate the speed of the development of the technology. And that's exactly, if you ask me, what we see. We are already now, say starting from 2022, though I think it started before that, really underestimating this. And for every year that goes by, this underestimation just grows, actually.
Lars Rinnan (14:09)
Yeah, so the gap between what is actually happening and what we think is going to happen kind of widens.
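That widening gap can be made concrete with a small sketch: fit a straight line to the first year of an exponential process and watch the forecast error grow. The 50%-per-year growth rate below is purely illustrative, not a claim about AI:

```python
# Exponential reality vs. a linear forecast extrapolated from year one:
# the underestimation gap itself widens faster and faster.
def actual(year, base=1.0, growth=1.5):
    return base * growth ** year

def linear_forecast(year, base=1.0, growth=1.5):
    slope = actual(1, base, growth) - actual(0, base, growth)
    return base + slope * year

for year in (1, 3, 5, 8):
    gap = actual(year) - linear_forecast(year)
    print(year, round(gap, 2))
```

At year one the forecast is exact; a few years later the linear estimate is off by several multiples of the original value, which is exactly the "widening gap" described above.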
Rickard Sandberg (14:20)
Absolutely. And you're also asking: what are the consequences? I think that in many of the predictions we make, it's a fact that we underestimate this. And then what are the consequences for us, for society, or for me as a researcher? You said fear, and yes, perhaps it's actually fear.
Lars Rinnan (14:21)
Hmm.
Rickard Sandberg (14:45)
And I think this is also quite natural for human beings, because what we don't understand, it's like a basic instinct to fear it. And being paralyzed, yeah, I think that's quite close: if you're scared enough, then you become paralyzed, you know, and you just hide away.
Lars Rinnan (14:53)
Mm-hmm.
Mm-hmm.
Yeah.
Do you think this is also the reason why adoption of AI is actually slower than you might think? I mean, there was an MIT report some weeks back saying that 95% of all AI implementations or projects were unsuccessful, and AI adoption is slower than you might expect. Do you think this is also the reason for that?
Rickard Sandberg (15:37)
Partly. So sure, underestimation and fear, or if not fear, then it also requires, actually, a lot of misunderstanding. It's definitely like an AI arms race, and everyone would like to talk about AI and say that they're implementing and using AI. But as I see it, the biggest problem now is that they miss out in the AI implementation. I think that we have a great technology; it's like we have a Ferrari, but when people try to implement it, they run it at the maximum speed of a Fiat. So I really think that has an impact on AI adoption, for sure. But I also think, as I've been indicating, that strategy is missing when it comes to implementation.
Lars Rinnan (16:20)
Yeah.
Yeah, that's a good point. That doesn't really have anything to do with exponentials, I guess. Or it might be paralysis, also leading to no strategy for implementing.
Rickard Sandberg (16:53)
But I would think about it like this: if you understand this properly, that means you should not underestimate it. So if you understand that it is exponential, then of course you adapt your strategy to that, and then, boom, you have something extremely powerful.
Lars Rinnan (17:00)
Mm.
Yeah.
Yeah, and of course, the strategy angle is actually quite interesting, you know, because if you do understand this, you also understand that technologies that might not be perfect today will be a lot better next year, and the year after they might be close to perfect. And you can actually start experimenting while the rest of the market doesn't understand what's going on. So there's definitely a really strong link to strategy as well.
Rickard Sandberg (17:41)
And I also think, about this MIT report, I mean, of course, it's a good report in a sense, but I also think it's a little bit to be expected. And to AI as a technology, I think it's a little bit unfair. Maybe you blame the technology, but perhaps you should blame the people implementing it.
Lars Rinnan (18:06)
Yeah, it's usually down to the people; it's usually not the technology that goes wrong. So if we're misreading the math, we're probably also misreading the timeline. And you talked about the second half of the chessboard, and I just love that term. So let's move to that second half of the chessboard, where things get really interesting. You know, solar energy is often used as one of the real-world examples of exponential change. How would you describe exponential change in terms of solar panels and solar energy?
Rickard Sandberg (18:56)
Again, a very good example. So we have a few of these, what should we say, empirical laws. You can think of them as grounded in mathematics, but typically they're derived in an empirical fashion, like rules of thumb. This would be an example of Wright's law, and you can think about solar panels and solar power as an example. Say we take solar as an example: around 2010, 15 years ago, maybe we had a price of 45 Swedish kronor per watt. Of course, it was perhaps a little bit underdeveloped and fairly new on the market. But then gradually the price drops, and if you look at the price for 2025, maybe it's down to 6-7 kronor per watt. So you started from 45 and today we have 6-7, but you cannot connect these points by straight lines. It's actually curving: an exponential decay with respect to cost. So that is one aspect: it becomes cheaper and cheaper in an exponential way.
Lars Rinnan (20:05)
Hmm.
Hmm.
Rickard Sandberg (20:25)
But there's more good news with respect to solar power: the panels also become more and more efficient. So it's not only becoming cheaper, you also get more watts per krona, so to say. That's another empirical regularity we can rely upon, or perhaps both of them are referred to as Wright's law.
Lars Rinnan (20:40)
Exactly.
And what do you call it when you have both at the same time: capacity increasing exponentially and price decreasing exponentially? Is there a name for that?
Rickard Sandberg (21:08)
Yeah, I think that's what Wright's law actually is. If I remember correctly, Wright's law says that when the capacity is doubled, say if you have some watt measure with respect to solar power, and it increases from 1,500 watts to, say, 3,000 watts, that's a doubling.
Lars Rinnan (21:11)
Okay, yeah, so it's usually just called that.
Rickard Sandberg (21:38)
But then, when you get this doubling, the costs are reduced by around 30%. So that's like an empirical rule that we do see.
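Wright's law as Rickard states it, each doubling of capacity cutting unit cost by about 30%, can be sketched in a few lines. The 30% learning rate is his quoted figure (published estimates for solar vary), and the doubling counts below are illustrative, chosen to connect the 45 kr/W and 6-7 kr/W price points from earlier in the conversation:

```python
def wright_cost(initial_cost, doublings, learning_rate=0.30):
    """Unit cost after n doublings of cumulative capacity (Wright's law):
    each doubling multiplies the cost by (1 - learning_rate)."""
    return initial_cost * (1 - learning_rate) ** doublings

# Starting from ~45 kr/W around 2010, five to six doublings land in the
# 6-7 kr/W range mentioned for 2025.
print(round(wright_cost(45, 5), 1))  # 7.6
print(round(wright_cost(45, 6), 1))  # 5.3
```

Note that the driver is cumulative production, not calendar time, which is why the decline looks exponential in years only as long as deployment itself keeps doubling.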
Lars Rinnan (21:51)
But again, I'm guessing that if you stop 10 people on the street in Stockholm, most of them will have no idea that this is happening.
Rickard Sandberg (22:00)
No, not at all. And I think this is really about blind spots, and a lack of information, actually. And AI today is in almost everything. I think we really should be more clear about this.
Lars Rinnan (22:24)
And if we go back to AI, we've seen decades of buildup and then, let's say, explosive periods, or at least in perception. So how would you place AI on the chessboard metaphor? Are we already in the second half? And if we are, what indicators do you look at to decide that?
Rickard Sandberg (22:52)
Oh, that's a very good question, Lars. Personally, I think that we are on the second half of the chessboard. I think so. But how far we have got into the second half, that's a delicate question, and I think you will get a different answer from whichever person you ask. But maybe I can frame the answer like this: we now have this AI technology...
Lars Rinnan (23:08)
Hmm.
Rickard Sandberg (23:22)
and say that it started in 1956, though it has taken different shapes, of course. If it started in 1956 and now we are in 2025, then we are perhaps pretty far along. But we also start to talk about new technologies, though they are still framed as AI technologies. Already now we can see another technology taking shape, and that's generative AI. So actually ChatGPT-3, which was also born in 2022, is a very good example of large language models and generative AI, Lars. So I think we are definitely on the second half of the chessboard. I don't know exactly the number of the square, Lars. Yeah. So yes.
Lars Rinnan (23:54)
Mm.
Hmm.
Which square? That would be a bold guess.
Rickard Sandberg (24:21)
Maybe I should, yeah, I don't know. Maybe I can ask you tomorrow, Lars.
Lars Rinnan (24:23)
Yeah, it's probably
square 42, I would guess.
Rickard Sandberg (24:28)
Then I'll say 45.
Lars Rinnan (24:32)
Yeah, it's really hard. I mean, even take ChatGPT, which was launched 30th November 2022: that was GPT-3.5. And my company started working with GPT-3 in 2020, but generative AI goes back to 2017 or thereabouts, at least the first reports and papers. But of course people didn't know about that either, and then they were super, super surprised when ChatGPT came out. You probably weren't surprised. I wasn't surprised either, because I knew this was coming; I knew it was going to be exponentially better. But I was surprised about the user interface. That was, you know, that was mind-blowing.
Rickard Sandberg (25:01)
Yeah.
Yeah.
But to continue on that: now we have newer ChatGPT versions after that, of course. So if you take version five-something and compare it to version three-something, and I'm talking about exponential performance improvement, then you can also just look at the number of parameters being trained. We actually have an exponential increase in the number of parameters used per model, so in a sense, you can also expect the performance to be exponential.
Lars Rinnan (26:12)
Yeah. But the interesting question then is perhaps: does it continue all the way, or are there some kind of thresholds? And I know that a lot of people have been talking about thresholds. You know, do we have enough data? Do we have enough compute? Do we have enough energy to actually power all those data centers? And you have said that technological progress is often not a pure exponential, but rather a sequence of sigmoid curves. And that is also true for, like, steam or electricity, computers, anything. So could you explain how these stacked sigmoids work, and why, from a distance, they still feel like one unstoppable exponential trend?
Rickard Sandberg (26:49)
Yeah, exactly. I think this is really the key to understanding the evolution of technology, and now we're talking about AI. But first of all, I guess that to most of the audience, the sigmoid function is unknown. The layman's term for it is an S-curve; it just looks like a squeezed S. It means that it's pretty slow in the beginning. Again, I'm back to the chessboard: on the first squares of the chessboard, it feels like a linear evolution. Then suddenly it kicks off, and you have this exponential phase; then we have entered the second half of the chessboard. But with almost all technologies, after this exponential phase, it flattens out, and then you obtain this S-curve. I think this is also what you can expect from AI as a technology, and there are good reasons to think about it in that way. And as we said before, there are other technologies coming after...
Lars Rinnan (28:06)
Hmm.
Rickard Sandberg (28:24)
so you know that it's like a relay race, and you just pass the baton to the next technology to run further.
Lars Rinnan (28:32)
So perhaps the advent of generative AI was actually one of those, let's say, transitions from one sigmoid curve to another. Whereas before that you had more, let's say, traditional AI, which is really strange to say, predictive AI, et cetera. And then you had generative AI and...
Rickard Sandberg (28:46)
Yeah.
Lars Rinnan (29:02)
I know that these days, like, the godmother of AI, Fei-Fei Li, is working on world models, which again might be, you know, another sigmoid-curve transition, perhaps.
Rickard Sandberg (29:20)
Thanks for reminding me, Lars. So there I tried to explain the sigmoid curve, and it's a good functional form to describe the evolution of technology. But I didn't explain carefully enough why we perhaps perceive this from the outside as still an exponential increase of technology, whatever we call it. The idea is that you have one sigmoid curve, and then on top of this one, you put another one. You can actually think about it as summing them up. So you have AI, then generative AI, then agentic AI, and you just sum them up. It becomes like a continuum of technologies, something like that. That's one explanation, but it could also very well be that, if one technology is one S-curve, the next one starts pretty close to it, and the third one even closer. And it can also be the case that they start at a higher level.
Lars Rinnan (30:27)
Mm.
Rickard Sandberg (30:27)
So it's not only that they add up; they build or stack on top of each other. And if that is the case, then even though we have generations of different AI, you feel that it's just one exponential growth of AI.
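The stacking idea can be sketched numerically: sum a few logistic S-curves whose midpoints come later and whose ceilings sit higher, and the total keeps accelerating like one smooth exponential. All wave parameters below are illustrative, not fitted to real data:

```python
import math

def sigmoid(t, midpoint, height=1.0):
    """One S-curve: slow start, rapid middle, flat saturation."""
    return height / (1 + math.exp(-(t - midpoint)))

def stacked(t):
    """Three successive technology waves; each starts later and
    tops out higher than the one before."""
    waves = [(10, 1.0), (20, 2.0), (30, 4.0)]  # (midpoint, height) pairs
    return sum(sigmoid(t, m, h) for m, h in waves)

# Sampled from a distance, the sum keeps accelerating like one exponential.
for t in (5, 15, 25, 35):
    print(t, round(stacked(t), 2))
```

Each individual wave flattens out, but because the next one takes off just as its predecessor saturates, the aggregate never visibly plateaus, which is exactly why the relay of S-curves reads as one unbroken exponential from the outside.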
Lars Rinnan (30:33)
Exactly.
Yeah, yeah, exactly. I think that's a very good explanation, really. You know, I was doing a keynote a little while back for an insurance company, talking about the exponential development of AI. And of course, in insurance companies you have a lot of mathematicians, and of course there was this one guy in the audience who asked me:
Rickard Sandberg (31:07)
Yes.
Lars Rinnan (31:11)
"Are you really sure this is exponential, or do you think it might be a sigmoid curve?" he said. And I just, you know, had to think on my feet: a sigmoid curve, what was that again? Luckily, I remembered the S-curve, so yeah, I kind of delivered an answer that he sort of accepted, but...
Rickard Sandberg (31:18)
Yes.
Yeah.
Lars Rinnan (31:37)
Then I had to go back and read some more about sigmoid curves. So yeah, I was really looking forward to your explanation on this; it's a really good one. But I mean, if technology accelerates like this, in stacked waves of sigmoid curves, what does that mean for climate and sustainability and the big questions?
Rickard Sandberg (31:50)
Thank you.
Lars Rinnan (32:07)
I mean, we all see that climate change is getting worse every year. You know, we have droughts, we have floods, we have wildfires, we have glaciers melting. But maybe some of the solutions we already have, like AI-driven optimisation of solar or batteries, are not always deployed at the speed that the exponential curve, or the stacked sigmoid curves, kind of suggests. Why is that, do you think?
Rickard Sandberg (32:42)
Yeah, again, a very good reflection, Lars. In the introduction, I think you mentioned that I'm working on a project called AI for Sustainability and Sustainable AI, and this question fits very well under that umbrella. It also fits because there are a lot of external collaborations: we're actually working on a daily basis trying to implement these technologies in firms and companies to reduce emissions. My experience there, as you were saying, Lars, is that we have the technology to cut energy use by, say, 10 percent; it's low-hanging fruit, and it's technology we already have. So again, I think a little bit, or a lot actually, is in the implementation phase, and I think that large parts of society, and also the corporate world, know too little. So they are a little bit afraid; it's an investment cost, and they are not entirely sure what they're investing in. If I tell them they should go all-in on this AI technology, it may be that they don't trust me.
Lars Rinnan (34:08)
Yeah, but what is one of the biggest opportunities you see that is, let's say, currently underutilized when it comes to cutting emissions?
Rickard Sandberg (34:20)
Yeah, there I think you can think about this in the categorization of Scope 1, 2 and 3 emissions. Very briefly, Scope 1 emissions are the emissions directly from a company. Scope 2 emissions are, for instance, the electricity that companies are buying, but still within the company. Scope 3 emissions are all emissions, say, from the supply chain. So with that in mind, you can apply AI to all of these different scopes and then reduce emissions: say, saving energy by optimization in most factories and industries. Absolutely. Some are already far along and are doing this AI optimization quite well, but my experience is that there are way too many companies that could still pick this low-hanging fruit by simply using the technology. And that'll be for...
Lars Rinnan (35:18)
Mm.
Yeah, yeah. And you also worked with Assa Abloy on something like this. So what did they actually do, and how does that actually translate into CO2 reductions, and probably also cost savings?
Rickard Sandberg (35:26)
Yeah. Yeah.
Yeah, absolutely. So this is a research project that we started with Assa Abloy, a multinational company with 45,000 employees. The first and major task is to help them reduce CO2 emissions from Scope 1 and Scope 2. All these companies have a net-zero journey: they would like to halve these Scope 1 and 2 emissions by 2030, in about four years, and then be net zero in 2045. So that's an extremely challenging task, and I
Lars Rinnan (36:15)
Hmm. Hmm.
Rickard Sandberg (36:26)
should say that it means you have to use all available means to obtain it. AI is a fantastic lever in that context, but yeah, you must rely upon other technologies as well.
Lars Rinnan (36:40)
And you've also been talking about AI for sustainability and sustainable AI, and you have made a point of drawing a distinction between the two. Could you explain that difference, and why it's important that we focus on both at the same time?
Rickard Sandberg (37:00)
Yeah, so it's like AI for good and AI for bad, but now with respect to the climate. I must first really emphasize that I've written a few articles where I demonstrate, with facts and numbers, what you can save emissions-wise and money-wise by utilizing AI in the climate context. That said,
AI of course is also emitting CO2, and that's mainly because of the Scope 3 part. I think all of us know that data centers are invaluable for AI, because they are the engine for AI, and they require a lot of energy. In Sweden
we need something like three terawatt hours, and in just a few years we think this will triple to around 10 terawatt hours. So that's one problem. But then there are also the emissions from use: when we use ChatGPT, for instance, there will be some emissions just because we're using it, but mainly because there are
billions of parameters being trained.
Lars Rinnan (38:28)
Hmm. Yeah. So then you need more energy, seemingly exponentially more energy, and then you need the exponential technologies to actually make that energy. So what is Sweden looking at in terms of increasing energy production? Is it actually solar, or is it nuclear, or is it...
Rickard Sandberg (38:36)
A lot.
Yes.
Lars Rinnan (38:58)
hydro?
Rickard Sandberg (38:59)
Yeah, this is a... It's leaning towards nuclear power actually, but nothing is set yet. And we have an election coming up in 2026, so it's also a little bit... what should I say? You have to be political when you answer. So you...
Lars Rinnan (39:10)
Mm.
Yeah.
Rickard Sandberg (39:24)
All of us know that if it's going to be nuclear power, we'll have it in effect in 10 years. If we should instead rely upon solar, wind or water, we can use it much earlier. But then there will also be cost issues.
Lars Rinnan (39:30)
Mm.
Yeah,
it's complicated. Maybe we should leave that to the politicians for now.
Rickard Sandberg (39:54)
I think they are better placed to answer that question. I have a firm opinion, but that should not be brought forward in the podcast.
Lars Rinnan (39:58)
Ha ha.
Yeah, yeah.
Yeah, it's tempting to dive into that, but let's try to resist. But let's...
Rickard Sandberg (40:19)
But I
can just add to this, the good side and the bad side, AI doing good for sustainability and doing bad for sustainability, so to say. So we have these data centers, and Microsoft has been investing throughout Europe. And of course, if they wouldn't do that, what is the alternative? So I think that really these green
Lars Rinnan (40:42)
Hmm.
Rickard Sandberg (40:46)
or regenerative data centers that are popping up now, I think we should actually be happy about them. They claim that they are net zero, though they still have some emissions.
Lars Rinnan (40:54)
Yeah.
Yeah, I think we're going to see a lot of development in that area in the coming years. So what we talked about then is, I mean, we have really powerful tools, and they are developing exponentially. The math suggests that they could perhaps move even faster, but then these tools hit politics,
Rickard Sandberg (41:08)
Yeah.
Lars Rinnan (41:28)
like we talked about, and institutions and regulation. And then you get that gap that you were describing. So it's a little bit like exponential tech meeting stone-age governance. We humans, of course, haven't had a brain upgrade in millions of years, institutions move slowly, and politicians worry about reelection cycles,
like you talked about; you'll have to wait until next year, after the election, to see what the energy policy is going to be. But meanwhile, the tech is still accelerating, no matter what the politicians are thinking or not thinking, doing or not doing. So if you compare the pace of technological development
Rickard Sandberg (42:05)
Yeah.
Lars Rinnan (42:23)
with the pace of change in institutions, how would you describe that gap, and what kind of mathematical shapes would you put on each of those curves?
Rickard Sandberg (42:35)
Yeah, I think this is a good example of when linearity meets exponentiality. I mean, we have this exponential evolution and development of technology, a clear exponential path, whereas governance, regulations and so forth are maybe trying to catch up in a linear way,
at a steady pace, or perhaps a pretty slow steady pace, or maybe even constant in some periods. So a consequence of that is that the gap between the technology and the regulations and governance will widen over time. So one can of course reason about
Lars Rinnan (43:26)
Yeah.
Rickard Sandberg (43:31)
why so and is it anything we can do about that? Or should we do anything about that?
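The widening gap can be made concrete with a toy calculation. The growth rates below are invented purely for illustration: technology doubling every two years versus governance improving by a fixed step per year.

```python
# Compare exponential tech capability with linear governance capacity.
# The rates (doubling every 2 years vs. +1 unit/year) are illustrative only.
def tech(year):
    return 2 ** (year / 2)   # doubles every two years

def governance(year):
    return 1 + 1.0 * year    # steady linear improvement

# The gap between the curves at 5-year intervals:
gaps = [tech(y) - governance(y) for y in range(0, 21, 5)]
print([round(g, 1) for g in gaps])
```

Notice that early on the linear curve can even be ahead, which is exactly why exponential change "feels invisible" at first; once the doubling takes hold, the gap does not just grow, it accelerates.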
Lars Rinnan (43:39)
Yeah, it seems like, I mean, if it is exponential, and it is, then of course that gap is just going to get wider and wider.
Rickard Sandberg (43:52)
So then there, I think then again, Lars, if we should talk about the politicians again: they are the ones making the laws and holding the ultimate governance, so to say. Of course you also have self-regulation and compliance, mainly on our side, but if you look at the regulating bodies, they are always lagging when it comes to information. I think they have a pretty good understanding, but it's a different thing to regulate and actually keep up with the exponential explosion that you see.
So I think the bottom line is that these governance regulations are risk-minimizing. I think that's key.
Lars Rinnan (44:43)
Yeah.
Yeah. No, I definitely agree. Actually, in the last podcast episode I was interviewing a guy from the European AI Office in Brussels about regulation. And he said that, well, we have some really good people on board, they understand this completely, but they're still unable to do much about it, because they're trying to get 27 nations to agree
on regulations, and that's really hard. I think it's probably even worse when it comes to politicians. Do you think they actually understand concepts like exponential growth or accelerating returns? Or do you think many of them are still thinking in linear terms?
Rickard Sandberg (45:38)
I think many are still thinking in linear terms, just because of simplicity, and then also, I don't know, perhaps a little bit because of education. But I think all of us have heard about exponential growth when it comes to virus spread or something like that.
So then you understand that it's exponential growth. But I think it's different in the context of technology. And it's also really about understanding, again, where on the chessboard we actually are. So you predicted, was it 44? Or 42? I said 45. And maybe the politicians, maybe they are at square 10, you know.
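The chessboard squares they are trading here come from the classic rice-on-the-chessboard story: the grains double on each square, so square n holds 2^(n-1) grains. A quick sketch shows why the "second half" is so dramatic:

```python
# Rice-on-the-chessboard arithmetic: square n holds 2**(n-1) grains.
def grains_on_square(n):
    return 2 ** (n - 1)

def grains_up_to(n):
    """Total grains on squares 1..n; a geometric sum equal to 2**n - 1."""
    return 2 ** n - 1

first_half = grains_up_to(32)      # everything on the first 32 squares
square_33 = grains_on_square(33)   # the first square of the second half
print(square_33 > first_half)      # True
```

Square 33, the very first square of the second half, already holds more grains than all 32 squares of the first half combined, which is why whether we are at square 42, 44 or 45 matters so much.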
Lars Rinnan (46:06)
Mm.
42.
Yeah, exactly. And that means, I mean, that you definitely are at risk of making the wrong decisions, because you have the wrong model of how the world is actually developing. I think that's really risky.
Rickard Sandberg (46:33)
Yeah.
But it's also not entirely fair
to blame them 100%, because they have enormous responsibility. But then again, yeah, risk-minimizing again.
Lars Rinnan (47:02)
Yeah,
it's really hard. And of course, at the basis of this is something that a lot of people usually just call Moore's Law, after Gordon Moore, the co-founder of Intel. But you also have Ray Kurzweil, he is a favorite of mine. I've read all his books.
Rickard Sandberg (47:28)
Yes. Yeah,
yeah.
Lars Rinnan (47:29)
and
he has what he calls the law of accelerating returns. So maybe you could explain for us how Kurzweil's view is different from Moore's law.
Rickard Sandberg (47:49)
Yeah, so Moore's Law is just telling us, in principle, Intel guy as he is, that computational power doubles every one and a half to two years. So it's still exponential. But Kurzweil, and I think we already touched upon this, is explaining
this sigmoid function to describe a technology. It's not only AI; there were many other technologies before this with the same shape. But his story, his empirical law, is that you're actually stacking these sigmoid curves on top of each other, such that you get this exponential increase in the development of AI as a technology.
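Kurzweil's stacking argument can be sketched numerically: each individual technology follows a logistic (sigmoid) curve that saturates, but if successive curves arrive with ever-higher ceilings, their sum hugs an exponential envelope. The midpoints and ceilings below are invented purely for illustration.

```python
import math

def sigmoid(t, midpoint, ceiling, steepness=1.0):
    """Logistic S-curve: slow start, rapid middle, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def stacked(t):
    """Sum of successive S-curves; each new 'technology' has a later
    midpoint and a 10x higher ceiling (illustrative parameters only)."""
    return sum(sigmoid(t, midpoint=10 * k, ceiling=10 ** k) for k in range(1, 5))

# Each individual sigmoid flattens out, yet the stacked total keeps
# growing roughly tenfold per step in this toy parameterization:
print([round(stacked(t)) for t in (10, 20, 30, 40)])
```

Every curve in the sum eventually saturates; the exponential appearance comes entirely from new curves taking over where the old ones flatten, which is exactly the stacking Kurzweil describes.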
Lars Rinnan (48:38)
Mm.
He goes a lot further back in history, doesn't he? Than Gordon Moore, which is from the 60s, I think 1965 or something. But Kurzweil goes, you know, a hundred years further back into history.
Rickard Sandberg (48:53)
Yeah.
Yeah, and that's also to bring evidence to it. I mean, we have had other technologies, in whatever way we define technologies: you have the development of language, typing, and many other examples. So he demonstrated, or
manifested, what the sigmoid curve looks like for these types of technologies.
Lars Rinnan (49:29)
So it's not just, let's say, digital technologies with transistors and Moore's law; it also goes further back, with other technologies stacked on top of each other.
Rickard Sandberg (49:45)
But I think a key difference, I mean, I really love it, it's an essay and also a book by Kurzweil. But it's important to keep in mind that we are not only talking about exponential growth,
but also about the speed of development of new technologies. It's not only that we have these sigmoid curves; they are closer to each other in time, and maybe even steeper. If you just look back historically on other technological
revolutions, you have steam power, electricity, computers, and then AI. They used to be spaced some hundred years apart, and suddenly it's 30 years, then 20 years, 10 years, and maybe two years between new technologies.
Lars Rinnan (50:40)
Yeah.
And if the time horizon between them keeps shortening and shortening, what happens in the end? Is that the singularity? Yeah.
Rickard Sandberg (50:54)
Yeah, then I think so. And
it can actually also be connected to a proper definition of the exponential function, because the exponential function is defined as a limit. In another podcast, or over a coffee when we meet, I can sketch the proof for you, and then you will see it.
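The limit Sandberg is presumably alluding to is the standard one: e^x = lim as n goes to infinity of (1 + x/n)^n, that is, compound growth with ever-finer compounding steps. A quick numerical check, not the proof itself:

```python
import math

def compound(x, n):
    """Growth factor when a rate x is split into n compounding steps."""
    return (1.0 + x / n) ** n

# As n grows, compounding approaches the exponential function e**x:
for n in (1, 12, 365, 10**6):
    print(n, compound(1.0, n))   # approaches e = 2.71828...
```

With x = 1, annual compounding gives 2.0, monthly about 2.613, daily about 2.7146, and a million steps is already within a few millionths of e.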
Lars Rinnan (51:18)
Yeah,
I'm looking forward to that. That sounds good. So you also worked on the idea of trust in data in an industry context. How important is that foundation of trusted data for building trust in AI systems? And what needs to change in how organizations and public institutions handle data, if we can't get governance to catch up with the tech curve, which it probably won't?
Rickard Sandberg (51:50)
Yeah.
I think trust is really key. You know that data is the food for AI, so to say, and if you cannot ensure or guarantee the quality of the data, then we will have all these
Lars Rinnan (52:01)
Mm.
Rickard Sandberg (52:09)
examples of failures: algorithms that hallucinate, that are not fair, that are not transparent, that are biased, and so forth. So first of all you have to trust the quality of the data. And once you have trust in the data, you must build trust in the algorithm actually delivering what's expected.
And it requires great trust, because you will not understand it; you have to make decisions coming from a black box. And that goes against human instinct. It's difficult to let someone else drive your own car; you would like to drive it yourself. You don't trust many others than yourself to drive the car. And then suddenly you should steer a whole company based on black boxes.
Lars Rinnan (52:45)
Hmm.
Mm.
Yeah, autonomous company.
Rickard Sandberg (53:07)
So trust is really, yeah.
So in that book that I co-authored with Capgemini, we had four pillars. The first was aligning data and AI with the business model or strategy. And the second pillar was engendering trust in data. So, yeah.
Lars Rinnan (53:25)
Hmm.
Hmm.
Rickard Sandberg (53:33)
You should not rank them as such, but there were four pillars, and these were the first two.
Lars Rinnan (53:38)
Hmm, that's very interesting. I definitely agree that we need to trust the data, and also that the data is safe. Hmm. So...
Rickard Sandberg (53:51)
But then also,
I think that from a trust perspective, once you're convinced, you just need to look at yourself. Again, taking the car as an example: if I can drive the car, so to say, then I trust it.
I don't have to understand it. I really trust the engineers who built the car for me. In a sense, I am not interested in the engine as such; I'm just interested in the car taking me from A to B.
Lars Rinnan (54:09)
Mm.
Yeah, yeah, in a safe way. But you know, autonomous cars are probably a lot safer, better drivers than people. I think the statistic from last year is that 1.3 million people were killed in traffic, mainly by human error, which is not good.
Rickard Sandberg (54:25)
Yes.
Yeah.
And I mean, definitely, just look at all the accidents caused by driving tired or even drunk. Or texting, yeah, absolutely. That's a big one.
Lars Rinnan (54:54)
Yeah, texting. I think that's a big one actually.
So this podcast is called The World in 2029. So if you put on your long-term vision goggles and look into 2029 in terms of, you
know, exponential development, what do you see? Do you see a hopeful version, where everything is good and AI has helped reduce climate change and provided us with energy, et cetera? Or do you see a scarier version?
Rickard Sandberg (55:47)
I don't necessarily see a scary version, Lars. In 2029, if I say it like this, I really hope that we are better at AI implementation and AI adoption. There are buzzwords now, still generative AI, but we talk a lot about agentic AI and autonomous AI.
Lars Rinnan (56:13)
Hmm.
Rickard Sandberg (56:13)
And I
think that is fantastic. And I'm sure we will get there one day, but I'm not entirely sure we will have reached that point by 2029. We would like to think that will be the case, but perhaps it will be a little bit postponed for various reasons. One day, for sure, we will be there.
Lars Rinnan (56:28)
Hmm.
Rickard Sandberg (56:39)
And then there is great faith in, again, what you mentioned: sustainability, tackling climate change and so forth. That is, of course, a vision and an ambition from my side. And I think there is good potential for almost all companies to adopt AI such that we can improve the climate.
Lars Rinnan (57:02)
Hmm.
Yeah.
Rickard Sandberg (57:07)
But of course, there
are many more examples where you can imagine that 2029 will be a great year.
Lars Rinnan (57:15)
Yeah, let's just dive a little bit into AI agents, because I think a lot of people are very interested in AI agents these days. It's not an old technology; if I remember correctly, it only really started in January this year (2025). So it's still very young, and I've tried to follow it quite closely. In the beginning, it was really...
Rickard Sandberg (57:34)
Mm-hmm.
Lars Rinnan (57:44)
cumbersome, and there weren't very many good solutions. These days you have so many fantastic solutions. I think I saw just the other day that AWS had implemented agents that could work five days straight, which is pretty impressive. And then, of course,
Rickard Sandberg (58:08)
Yeah, sure.
Lars Rinnan (58:13)
you also have the same kind of exponential development in that small subfield.
Rickard Sandberg (58:21)
Yeah, that's a really good example. And I think we're really getting there. But then again, we are a little bit back to the trust issue. It means that we must monitor these agents doing whatever they are supposed to do, and once we have a green light on that, okay, then we can implement it at scale, so to say. But I'm using agentic AI
Lars Rinnan (58:31)
Hmm.
Rickard Sandberg (58:49)
almost on a daily basis, but they're like baby agents. Still, it's extremely useful for simulating; you can think about banking, price quotes and so forth. So it's pretty cool.
Lars Rinnan (58:54)
Mm.
Yeah. Yeah,
yeah. I think it's fantastic. I mean, even the small, like you say, baby agents. I built this baby agent that just reminds me before every meeting: in 10 minutes you're going to have a meeting with Richard. Who's Richard? Well, he's a professor at the Stockholm School of Economics. What are you going to talk about? Well, this
Rickard Sandberg (59:23)
Yeah.
you
Lars Rinnan (59:34)
is what you discussed in the last 10 emails, and everything. It just takes me 30 seconds to skim through that message, and then I'm more or less prepared. If you have wall-to-wall meetings, it has saved me so many times. And it's very, very simple, though. So what kind of areas do you think these agents
Rickard Sandberg (59:47)
Yeah.
Lars Rinnan (1:00:02)
are going to be most prevalent in? Do you think it's supply chain, finance, logistics?
Rickard Sandberg (1:00:09)
Yeah,
I think almost everywhere. But again, here I think you should pay attention to the EU AI Act: all AI systems, agents included, will have a risk classification. So I think we will see them prosper, so to say, where the risk that things can go wrong is the lowest,
Lars Rinnan (1:00:23)
Hmm.
Hmm.
Rickard Sandberg (1:00:36)
so we can
manage it. Again, I think it will be this risk perspective. That's one take, but then it's also about where they have the best possibility to be successful. In which domains do we have the best data, the best data quality? From an industry perspective, I think agents have a great future in the near future.
Lars Rinnan (1:00:54)
Mm.
Yeah, absolutely. And going back to exponential development and stacked sigmoid curves, looking ahead to 2029, which domains do you think are most likely to hit the next, let's say, steep S-curve phase? Is it energy? Is it materials technology? Is it biology? Or is it something else? Any thoughts on that?
Rickard Sandberg (1:01:32)
Couldn't we say energy, then? I should not rank them, but maybe that's my answer. Yeah.
Lars Rinnan (1:01:35)
Yeah, I think that would be good. I mean, you
mentioned nuclear, and you also said that it probably takes about 10 years to develop. I think that in the US it's actually more like 15 years on average before you have it decided, built and in operation.
Rickard Sandberg (1:01:49)
Yeah.
Yeah.
Lars Rinnan (1:02:05)
But in
China, it's less than half. It's between six and seven years.
Rickard Sandberg (1:02:10)
Yeah,
so they are efficient, and there are also different regulations. But of course, when we talk about nuclear power, you also have these mini power plants. I think they can be implemented in six to seven years, but I don't know their efficiency, or how much power they actually generate. Maybe it's an intermediate solution.
Lars Rinnan (1:02:16)
Yep, absolutely.
Mm.
Yeah, yeah. So these are the small modular reactors. Yeah.
Rickard Sandberg (1:02:40)
Yeah.
Lars Rinnan (1:02:43)
Yeah, maybe that is an exponential technology as well. I don't know.
Rickard Sandberg (1:02:52)
Yeah, no, I don't think I am the right person to answer that actually.
Lars Rinnan (1:02:58)
Hmm, we
need to look into that. Then I have something to do tonight.
Rickard Sandberg (1:03:02)
Yeah, you have to bring a
really proper energy expert along too.
Lars Rinnan (1:03:09)
Yeah, yeah. But we do know that when it comes to solar, and batteries, that is definitely exponential. Do you think this will change the reality of sustainability going forward?
Rickard Sandberg (1:03:16)
Yeah.
Yeah, I think so. If you go back to the Assa Abloy example and their mission of net zero: what options do you then have? When it comes to energy, you have fossil energy, you have gas or anything coming from gas, and then the alternative is most likely solar panels.
Lars Rinnan (1:03:40)
Hmm.
Mm.
Rickard Sandberg (1:03:55)
And I mean, these industries, in a sense, have the space, so that's not a problem. And that the panels are ugly looking, that's not a problem either; they have a lot of facilities, so they just put them on the roof. I think that technology is becoming ever cheaper, so it's definitely a viable option compared to other types of energy.
Lars Rinnan (1:04:20)
Yeah, I saw an energy report the other day which said that almost all the net growth in energy production in the world came from sustainable sources, primarily solar, but also a little from wind and hydro. Solar was by far the biggest.
Rickard Sandberg (1:04:38)
Yeah.
So
now that we're talking about energy: there are plans in Sweden, I don't know how far along they have come, but I think it's with Morocco. It's like a solar pipeline, so we would buy solar energy from Morocco.
Lars Rinnan (1:05:00)
Yeah, exactly. I was actually talking to a guy from the Moroccan embassy just a few days ago, and he talked about solar in Morocco. I didn't have a clue; I wasn't aware that was a thing.
Rickard Sandberg (1:05:16)
No? But I think that might actually become reality.
Lars Rinnan (1:05:21)
Yeah, he showed me some pictures, so that's definitely real. And then we also know that Elon Musk is going to have satellites going up next year by SpaceX, also harvesting solar energy from space.
Rickard Sandberg (1:05:25)
Yeah.
Mm-hmm.
Yeah.
Lars Rinnan (1:05:46)
So
this is, again, maybe another S-curve on top of one that was already there.
Rickard Sandberg (1:05:54)
Yeah,
yeah, eventually. But at least, whatever you think of them, I think it's a great bonus that they're open to exploring these possibilities. Though, I mean, Musk is in a sense obsessed with space, and has been since he was a kid, actually. But yeah.
Lars Rinnan (1:06:17)
Yeah, but
it's probably a good thing. He's a little bit crazy as well, but he does some remarkable stuff. And of course, if we want to go really into sci-fi regions, you could look at a Dyson sphere, built around the sun.
Rickard Sandberg (1:06:22)
Yeah.
Yeah.
Lars Rinnan (1:06:43)
But that's probably not going to happen before 2029; that's a bit further off. Okay. Well, we've talked a lot about exponentials and sigmoids, et cetera. Probably a lot of people will never understand exponential math deeply, just like most of us don't understand how our smartphones actually work.
So what would you say to someone who feels a little overwhelmed by the speed of change? What mindset should they have to still, let's say, benefit from these technologies in 2029, without needing to be a mathematician?
Rickard Sandberg (1:07:32)
I think it's back to the driving-a-car example, and I think it must be like that: there is no way everyone can understand everything. So then, yeah, I think
we have to trust this. And I also think it will become pretty evident. I mean, if we can't trust a system, okay, then it fails, but then it will be repaired, in a sense, until we can trust it. And once we trust it, we will just be happy that it exists and does these favors for us.
Lars Rinnan (1:08:03)
Mm.
Yeah. Maybe people should Google "second half of the chessboard"; that should give them some interesting hits. Maybe also Moore's Law and Ray Kurzweil's law of accelerating returns.
Rickard Sandberg (1:08:35)
But I think
that for most people, if they just dig into the compound interest example, that will be pretty good. I mean, most of us have a savings account, and we can just try to understand how the money has grown over the last 10 years, say. If you understand that, you're well on your way.
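The savings-account exercise, sketched with made-up numbers: the same compounding rule drives a 4 percent savings account and, say, a 40-percent-a-year technology metric; only the rate differs, but the outcomes diverge wildly. The 40 percent figure is an assumption chosen for contrast, not a measured rate for any technology.

```python
def grow(principal, annual_rate, years):
    """Value after compounding once per year at `annual_rate`."""
    return principal * (1.0 + annual_rate) ** years

savings = grow(1000.0, 0.04, 10)   # 4% interest: roughly 1.48x in 10 years
tech = grow(1000.0, 0.40, 10)      # 40% growth: roughly 28.9x in 10 years
print(round(savings, 2), round(tech, 2))
```

Same formula, same 10 years; only the exponent's base changes, and that is the whole difference between a savings account and an exponential technology curve.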
Lars Rinnan (1:08:59)
Yeah. And if that's like 3 or 4 percent annually, you can compare it to the exponentials in technology, which are much steeper. So it's the same concept, but very different speeds. But yeah.
This has been really interesting, Richard. You've helped us decode the hidden mathematics behind the technologies that are shaping our world. You have to have a feeling for this, and of course these curves, whether they're exponential or sigmoid, change everything. And I think that understanding them also helps us replace fear
with clarity. I think that's a really, really good thing. So thank you for your great explanations and your great examples, Richard. That's greatly appreciated. And to everyone listening: if this episode helped you see AI, climate or the future in a new light, follow the podcast and share it with someone who still thinks in straight lines. And remember,
The future is better than you think. Thank you.
Rickard Sandberg (1:10:29)
Thank you very much Lars for letting me join this fantastic podcast. Bye bye everyone.
Lars Rinnan (1:10:35)
Thank you. Thank you, Richard.