The World in 2029

Regulation vs Innovation

Lars Rinnan Season 1 Episode 7

In this conversation, Lars Rinnan and Alex Moltzau dive into the complexities of AI regulation, particularly within the European context. 

Alex Moltzau is a former policy officer at the European AI Office and spent two years at the center of EU regulation of AI. 

They discuss the philosophical underpinnings of why AI should be regulated, the challenges posed by the rapid evolution of AI technologies, and the geopolitical dynamics influencing regulatory frameworks. The conversation also touches on misconceptions surrounding the EU AI Act, the importance of human rights in policy-making, and the need for measurement standards in AI. They explore the implications of open-source AI models, the intersection of AI with social democracy, and the future of data control and portability. 

Ultimately, they envision a more organized regulatory landscape by 2029 that prioritizes safety and innovation while addressing the challenges posed by AI.


Takeaways: 

- AI is evolving faster than regulation can keep up.

- The philosophy of regulation is rooted in defining a good life.

- Regulation aims to establish common rules for technology use.

- Geopolitical dynamics significantly impact AI policy discussions.

- Human rights are central to the EU's regulatory framework.

- Misconceptions about the EU AI Act can hinder innovation.

- Self-regulation by AI companies is often insufficient.

- Open-source AI models present unique regulatory challenges.

- Data control and portability remain complex issues.

- A vision for 2029 includes better-organized AI regulations.


Chapters: 

00:00 Introduction to AI Regulation and Its Importance

04:10 The Challenges of Regulating Rapidly Evolving AI

08:01 Geopolitical Dynamics in AI Regulation

12:32 Misconceptions and the Cooling Effect of Regulation

16:59 The Competence Behind AI Regulation

21:31 Key Risks and Areas of Focus in AI Regulation

25:49 The Role of Truth and Authenticity in AI Generated Content

35:22 The Role of AI in Customer Interactions

39:19 Self-Regulation vs. Government Oversight

40:48 Europe's Position in the Global AI Race

45:11 Regulatory Challenges in AI

49:55 The Impact of Open Source AI Models

53:56 Data Ownership and Control

58:10 Intellectual Property Rights in the Age of AI

01:06:33 Envisioning AI Regulation in 2029

Lars Rinnan (00:03)
Welcome to The World in 2029, the podcast where we explore how today's innovations are shaping our future. I'm your host Lars, and I'm on a mission to spread positive insights into how today's pressing issues like climate change, hunger, and disease are being addressed by exponential technologies that most people don't know about.

I have worked with artificial intelligence for the last 10 years and I have helped numerous tech startups. So this is a topic I know well, and one that's extremely close to my heart. Today's episode brings us into the engine room of global AI regulation. My guest spent nearly the last two years inside the European AI Office, one of the world's most intense policy environments,

helping shape how the next generation of AI systems enters the world. Alex Moltzau, welcome to the show.

Alex Moltzau (01:09)
Thank you so much. It's really a pleasure to join you here. And I know about your excitement for tech, because I've seen you in the community for many years. So being invited here is a great honor.

Lars Rinnan (01:22)
It's definitely my honor having you here, Alex. Fantastic. So Alex, let's start with the fear that sits behind all of this. AI is moving at a speed no government has ever regulated before, and we're already watching deepfakes blurring truth, models making high-stakes decisions, and autonomous agents acting in ways that surprise even their creators.

So let's anchor this. Why regulate AI at all? What's the fundamental purpose, especially from inside the European AI Office?

Alex Moltzau (02:06)
I mean, I cannot speak for the whole AI office necessarily. And I think this is also what you will hear as a typical disclaimer from a lot of people who have worked in the European Commission: I do not talk on behalf of the European Commission, and so on. But I personally think it's a really deep, fundamental question that is at the core of philosophy in a way: what is a good life? And I think

it's a really important question to take into consideration, because in your everyday life you come into a range of situations. I think we would also like to use AI products and services in ways that seem beneficial to us, in a way where we are not exploited by technology, but use it safely, in a way that respects our boundaries and also ensures that we can live

good, fulfilling lives in a way that respects our human autonomy, but also recognizes that we are in a world that is constantly changing. The regulation that I have been working with, as we will talk about soon, is related to product safety. And I can appreciate safe products in many ways: when you plug something into a plug socket,

it just doesn't go wrong, usually, right? And that's pretty positive. So when we experience AI at a range of contact points, as human beings, as people just trying to navigate our own personal world, that should hopefully be a good experience and a safe one. That is something

that we maybe take for granted. But like a lot of technological progress, it's never obvious straight away. Like the seatbelt: it's not something that just happens; it's a large-scale social change that happens over time. And this is also part of what regulation is. So technology can be rapid, but as a society

we can adapt to rapid change. We are constantly adapting to rapid change, and regulation is just a way to agree on some common rules of the world.

Lars Rinnan (04:36)
Yeah, I think speed is perhaps a central point here, because regulation, at its core, moves, let's say, slowly. We can definitely discuss that, but it takes years; the EU AI Act took several years. And we all know that the AI evolution, or some say revolution,

develops really rapidly. So you have regulation moving in the space of years, and then you have AI developing at the speed of weeks. I mean, there are new models coming out just about every week. And you probably felt this tension firsthand. So how do policymakers even begin to work under those conditions?

And how do you regulate something whose behavior changes every month?

Alex Moltzau (05:40)
So I think, of course, it's not the first time that we've had to regulate technology, and this is pretty important to recognize. Several iterations of new technology in the last few decades have been a bit of a surprise to many. Like the internet: this is something that is open and free.

Social media: wow, open and free, there were a lot of good intentions there. But in the end, we also discovered that we have something called power, and nation states, and negotiations, and infrastructure; the underlying bits and pieces that make technology work are owned by different actors,

and there are relationships there. And this is also the case with the field of artificial intelligence. But I mean, technology is fun and exciting, and I'm optimistic. My entry point into AI and technology was through an interest in sustainability and the sustainable development goals, and that's why I have spent the last 10 years working with technology. But I also feel that there is a shift now that is

different from some of the previous technologies and technological changes, in that it fundamentally asks questions about what it means to be human, and what human expression means: replicating the way that we communicate, the way that we talk, the way that we see and experience,

through all these different models, in multiple ways, with multi-modality, right? We have been interested in talking to computers for a long time, with Eliza and all those earliest experiments, where we thought, my God, this is like talking to a human, although it was just rule-based. But it is a fundamental shift now.

Lars Rinnan (07:58)
Yeah.

Alex Moltzau (08:04)
And I think it is picking up speed. As you say, from the last three months to now, there are these sudden changes, it feels like. So policy is difficult, right? The geopolitics of this is intense. You have the largest states in the world having this as the highest political priority, and that is extremely different from when I started working with AI policy 10 years ago.

I mean, it's just completely different. The doors were closed. It was not necessarily an open discussion, at least on the national level. But now the doors to political leadership are much more open to having these kinds of discussions, and it's of the highest political priority, more than I think we have ever seen.

Lars Rinnan (08:34)
Yeah.

Yeah. But does that translate into, let's say, more speed when it comes to regulation, or even more cooperation? Or is it like a power game? We've heard Putin say that whoever controls AI controls the world. Coming from that source, it's quite chilling. And of course, they're perhaps not expressing it

that directly in either China or the US, but basically, that's what they're saying too. So you have the three main, let's say, power areas of the world all focusing on AI. What does that mean when it comes to regulation and policymaking?

Alex Moltzau (09:47)
Yes, I think today as well is interesting, because earlier today I was at the launch of a report in Oslo about superintelligence, power and human rights, with the Technology Council and the Norwegian Institute for Human Rights. They outlined all these different scenarios: what would we see going forward, what is it going to look like? And I think

the EU, as you are likely aware, is built on fundamental rights. It has a very rights-based foundation, and this is how it operates in most of the policy that it shapes and works through. In many ways, this is hopefully positive: that we have human rights

as a core foundation that we build our policy upon. For Americans, it's the constitution; it's a bit more of a constitutionalist approach. Of course, we will see that the way this policy is shaped relates to power. We have had a lot of political gatherings over the last year centered on AI.

And you have seen that too; maybe you have been at some of them as well. I was at the summit in Paris earlier this year. JD Vance gave a speech there, and it was very anti-regulation, very challenging to the EU, as the current US government has been and still is. And there was an ongoing discussion as well on standards,

Lars Rinnan (11:31)
Mm-hmm.

Alex Moltzau (11:42)
and when should the standards and the rules apply? You saw the 'stop the clock' movement, a lobbying movement from the US that was very, very present in Brussels. So yes, there is a game of power between different actors, also when it comes to models being released. Private sector companies release some models; suddenly you see some open models from other countries.

Perhaps there is a bit of a movement and counter-movement dynamic to the geopolitics of model releases. And there is a wish for safety that was, and still is, present among many actors. Now you also have Yoshua Bengio going on a European tour, visiting seven different state leaders in Europe, the Norwegian prime minister being one of them.

Lars Rinnan (12:18)
Mm.

Mm.

Alex Moltzau (12:42)
And I was also eating lunch with Yoshua Bengio when he was here. I think there is a significant shift now when it comes to who actually owns these different models, and what it means when you do not have access to, or control of, how information is being

spread. And as you said in the introduction, it is a really philosophical question, because it relates to how we ask questions and what kind of answers we get. And it's not like this was not present with search, with search engine optimization, with Google as an actor that said it was not going to be evil. They made search

Lars Rinnan (13:18)
Mm.

You

Alex Moltzau (13:40)
a bit more a question of: we want you to find the right information. Maybe now it's more ad-based, more commercial. Was it ever not? And now ChatGPT and OpenAI have announced that they will do ads, as they said a short time ago. So we see these cycles in new technology, with its promises and perils. So I don't think it necessarily is bad that the EU started to think about the

Lars Rinnan (14:00)
Yeah.

Alex Moltzau (14:09)
regulatory landscape of this, because I think they realized that this would be the case. The policymakers are not idiots; they have seen some of these movements before, and it will and does affect citizens.

Lars Rinnan (14:24)
Yeah, no, absolutely. And of course, history repeats itself. But it seems that history is also moving faster than ever before, which is interesting.

Alex Moltzau (14:37)
Yeah, so I mean,

there's this saying as well, that history does not repeat, but it rhymes, or something like that. And I do feel it is the case now that we see a lot of rhymes. We can say what we want about it, but there are patterns, you know.

Lars Rinnan (14:42)
Yeah.

Yeah, that's what it is. I think we are going to visit some of these topics a little bit later on. I was just wanting to touch on one thing. There's also something emerging: I hear a lot of people saying that certain things are illegal according to regulation when they're actually not, and that certain things are required when they're not. So actually,

confusion about regulation, it seems, is becoming its own type of risk. So have you noticed this? And if you have, what is the biggest misconception you see out there about, let's say, the AI Act? What does and doesn't it allow?

Alex Moltzau (15:45)
Yeah, so I think, you know, I cannot give you a perfect picture of this, but what you're describing is a possible cooling effect, right? You introduce something, it sounds a bit scary, and people hold back a bit. They think that the EU is going to come and slap them on the hands, say, get away, we'll give you a big fine, and that it will happen very suddenly.

From a lot of what I have seen, the EU is fairly proactive in some ways. They also try to work with industry, and they work in very inclusive ways. With the code of practice, there were about a thousand different actors involved; most of the large tech companies were part of that process, and Yoshua Bengio was leading the technical side of things. I mean, he's the most cited

Lars Rinnan (16:34)
Mm. Mm.

Alex Moltzau (16:40)
currently living researcher in the world, and certainly one of the most cited AI researchers, if not the most, being one of the foundational players when it comes to deep learning. So if you were to claim that the regulatory actors were lax and not trying to pick up new things, that would be a hard statement to make. But as you were saying, the cooling effect:

what actors actually think is something there is ongoing research on, and we will see more empirical information about it. But on a heuristic level, I would say that people are a bit scared that the EU will suddenly swoop down, point the finger at them and say: no, you're doing the bad thing. But I think the

EU currently wants a lot of innovation. They want sovereignty. They want to see actual products and services being developed in the EU. That's the current wish, as I see it, from the political leadership: to increase compute capacity with these AI factories, the gigafactories, and a range of initiatives that attempt to actually move towards innovation. But the financing landscape, you know,

Lars Rinnan (17:45)
Hmm.

Alex Moltzau (18:06)
people are talking about VC money, about venture, and about things that need to be sorted when it comes to the overall European market. But the AI Act and similar regulations are intended to build a more uniform European market. In the US, there are a lot of laws when it comes to AI. People talk about Europe having regulations, but the US is really regulated as well.

Lars Rinnan (18:35)
Hehe.

Alex Moltzau (18:35)
I mean, if you're

a company operating in the US, then you have to deal with a lot of state legislation. When it comes to AI, there's more and more state legislation being introduced in the US, to the point that earlier this year they wanted to have a moratorium, right? This was part of the negotiations around the big, beautiful bill, if you remember that, the BBB. Part of that negotiation was a wish to have

Lars Rinnan (18:52)
Mm-hmm.

Yeah.

Alex Moltzau (19:03)
a moratorium on legislation at the state level, and then more centralized power for the US government when it came to AI regulation. So this is a bit of a back and forth: centralized, local, what do we do? But when it comes to the EU, at least there is a division of labor: GPAI, general-purpose AI models, is a central competence in Brussels located in the European AI Office, while high risk is spread more across the national level.

Lars Rinnan (19:11)
Mm.

Alex Moltzau (19:33)
So there is a bit of a division of labor when it comes to the largest models in the world. And there is actually a very competent team sitting in Brussels working on this, which I think we talked a little bit about on another occasion, Lars. I can certainly tell you a bit more about them at some point.

Lars Rinnan (19:54)
Yeah, it seems you have some really top minds involved in this. I mean, you mentioned Yoshua Bengio, people who have written papers together with Stuart Russell, again one of the grandfathers of AI, Oxford PhDs, RAND veterans, medical experts, et cetera. That's pretty impressive. I don't think many people outside of that environment actually know about this. What did you...

What did you think about, let's say, the internal mindset of that office? Were people optimistic? Were they worried? Were they exhausted? What do you think?

Alex Moltzau (20:33)
Yeah, they were working hard. Some people have this image of a lazy Brussels bureaucrat sitting and pushing papers and clocking out, and that image seems to persist among certain people. But at least in DG CONNECT and in the AI office, it is the extreme opposite of what I saw, because these people work really hard, maybe too hard sometimes.

Lars Rinnan (20:35)
Yeah.

Alex Moltzau (21:01)
And a lot of them are extremely competent. You mentioned some of them, like the AI safety workstream lead, Simon Miller, who worked seven or eight years as a senior engineer at Google, on YouTube. He's extremely competent and his reputation is good. And you also have Jan Brauner, who co-wrote with Stuart Russell.

Frederike, who has an interesting background from the UK security institute. And the list goes on; there are 20 or 30 of those people in the safety unit, along with some of the most competent AI law people in the world, because they wrote the AI Act. They negotiated it. So if you get to work in the AI office,

I think you're really lucky, and I feel like I also was really lucky. It's a once-in-a-lifetime experience, and I'm extremely grateful. I feel a great deal of respect for their craft, for their work, for the team. So this is my impression, after being so fortunate to work with some of the best people in the field, in their areas.

Lars Rinnan (22:15)
Mm.

Alex Moltzau (22:28)
It's a huge privilege and I will be grateful for the rest of my life.

Lars Rinnan (22:33)
Yeah, it sounds fantastic. I think it's really reassuring as well, having that level of competence and experience involved in making the regulations. Because if that wasn't the case, if there was a huge divide in competence between the people making the models and the people regulating the models,

I think that would be, let's say, maybe even a democratic problem. So I'm really relieved that you worked with those kinds of people, and also a little bit envious; it sounds fantastic. But we talked about why we regulate, so let's move on to what we regulate. That's perhaps where most of the fear comes from.

I mean, the nightmare scenario would be regulating the wrong things and letting the real dangers and the real risks slip by unnoticed: regulating the edges while the center collapses. So when you look at the AI Act, what are the parts that truly matter for everyday people? What are the risks we really must regulate?

Alex Moltzau (23:58)
Yeah, I think this is an extremely good question. And it's a range of risks, right? So we talk about high risk: there are certain areas where the use of AI is considered a bit more of a risk. But the AI Act is not meant to regulate everything all the time. It's also standards-based. So it's also about

trying to find out how we measure it and what the technical requirements are. These are things that happen in other industries. And it's important to say that metrology plays a part here. That sounds very arcane. Do you know what metrology is? Not the weather science, but metrology, measurement science. NIST in the US just

Lars Rinnan (24:50)
Mm.

Alex Moltzau (24:55)
put out a report on the measurement science of AI: how do we measure AI, what do we do about that? To a lot of people that sounds a bit arcane, but it's quite important, because people are actually working to measure different units of these different technical procedures. And it's something that you only notice if you know about it.

Lars Rinnan (25:00)
Mm.

Alex Moltzau (25:24)
I don't have a gasoline car or anything like that, but I have seen the marks of the measurement science community in some places in Norway; it's a bit invisible. For example, with international shipping, measurement science standards have played a crucial role, right? What would international shipping be without standardized containers?

Lars Rinnan (25:52)
Mm-hmm.

Alex Moltzau (25:53)
It

wouldn't really be a thing. So in that sense, finding units of measurement that we agree on in the field of AI, I don't think that's a stupid idea. It makes sense for international trade as well, to find ways to agree on how we measure things. What does it mean when we say a certain thing? What does it actually mean? So to me, that's

Lars Rinnan (25:55)
It wouldn't work.

Mm. Mm.

Exactly.

Alex Moltzau (26:21)
something that makes a lot of sense, but also seems a bit arcane, and it's hard to agree on, right? Because countries don't always agree.
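A quick illustration of why this matters in practice: the same model outputs can produce different headline numbers depending on which definition of "accuracy" you pick. The sketch below is purely illustrative, with made-up data, and is not drawn from the AI Act or any NIST methodology; it just shows two plausible scoring rules disagreeing on identical outputs, which is exactly the kind of ambiguity that shared measurement standards are meant to remove.

```python
# Illustrative only: the same (prediction, reference) pairs scored under two
# plausible definitions of "accuracy" give different numbers. The data is
# made up; the point is that benchmark results depend on the measurement rule.

def normalize(text: str) -> str:
    """Lenient normalization: lowercase, drop punctuation, trim whitespace."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

# Hypothetical model outputs paired with reference answers.
pairs = [
    ("Paris", "paris"),
    ("The answer is 42.", "42"),
    ("H2O", "H2O"),
    ("unknown", "water"),
]

# Rule 1: strict exact string match.
strict = sum(pred == ref for pred, ref in pairs) / len(pairs)

# Rule 2: lenient match, normalized reference contained in normalized prediction.
lenient = sum(normalize(ref) in normalize(pred) for pred, ref in pairs) / len(pairs)

print(f"strict exact-match accuracy:  {strict:.2f}")   # 0.25
print(f"lenient containment accuracy: {lenient:.2f}")  # 0.75
```

Same model, same outputs, a threefold difference in the reported score; agreeing on the rule is the metrology problem.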

Lars Rinnan (26:29)
I mean, yeah, we can't even agree on measurements of length and time and weight. But let's not go into that one, because that's a rabbit hole. Oh my god.

Alex Moltzau (26:38)
Yeah, yeah.

Miles or kilometers, gallons, you know, that's crazy. I

was living in the UK for some time. I had a field day with that.

Lars Rinnan (26:53)
Yeah, yeah.

You have to figure out your weight in stone. Who knows? But let's go back to regulation. I think one of the things people are really scared about with regulation, perhaps mostly in the US but also elsewhere, is its effect on innovation. But I'm guessing there's no sentence in the whole EU AI Act that says: stop building advanced models.

Alex Moltzau (26:58)
Yeah.

Yeah, that's correct. But it's true that it creates a range of compliance requirements for certain products and services. So I understand that it's not completely free. Companies also have to think about how much it will cost to make sure they fulfill the requirements of what the EU considers a safe

Lars Rinnan (27:26)
That's not the focus.

Alex Moltzau (27:55)
product. But think even about products that have fairly serious consequences: children and children's toys. There have been a lot of cases in Norway of late where we have children's toys that are just toxic, made of toxic materials. There's a reason why we mark products, and there's a reason why we build products in certain ways

that have huge repercussions for the people that we love and care about. To me, a loving approach to how we use AI is to think about how we make these systems safe. And product safety, yeah, it costs something. But if it builds a safer and more trusted market, I think it's a worthwhile endeavor. I'm not going to say it's super easy, but there are a lot of areas that are hard to regulate

yet are fairly regulated, and we benefit from that greatly. Think about air traffic: is air traffic not regulated? When you get on a plane, quite advanced instruments and processes are involved. But you still get on the plane and you get from A to B, and maybe even to C, D, E, F, G, wherever you have to go.

So that's an extremely regulated space, right? But it works when we want it to work, I guess. And when the people are not there to make it work, like air traffic controllers, then it doesn't work. So it's a question of what good governance of AI means. And that is something Europe has not had. As I saw it 10 years ago, when I started working more with AI policy, it was really not

Lars Rinnan (29:28)
Yeah.

Alex Moltzau (29:46)
clear who was governing AI or making AI policy on the national level in different European states. Let's just say that was not really the case, and not the case for Norway either. It was not really clear which person in the ministries was responsible for AI policy on a continuous basis. That person and that role honestly did not really exist. It was a bit of a stunted sort of thing: hey, let's do some AI.

Lars Rinnan (29:51)
Mm, mm, mm.

Alex Moltzau (30:15)
The EU is asking us to do a national AI strategy. Should we do that? Yeah, let's do that. And then the person jumps on to some kind of Nordic conference or whatever. There was no long-term, dedicated resource, as I see it, for AI policy. That was not the case then, but now it's different.

Lars Rinnan (30:30)
Yeah,

there have been a lot of changes, positive changes: more people coming in, more organizations being involved. I think that's a really good thing. And it's a very good comparison, comparing with airline traffic or chemistry or DNA research or whatever. I mean,

they are regulated, heavily regulated, and thank God they are.

Alex Moltzau (31:05)
Yeah, I mean, you don't want

those things to not be regulated, like dangerous chemical components. Imagine saying: we need to take a step back, people; all this chemistry stuff will be fine on its own. I think people would be a bit surprised if you started saying that.

Lars Rinnan (31:08)
No. No.

That wouldn't

happen. So it's really strange that some people are advocating exactly that in the AI space. And of course, AI is not just one thing; it's a huge field and it's moving very rapidly. And you have all these different kinds of risks, like the collapse of truth. You now have AI-generated Tai Chi videos with impossible physics,

Alex Moltzau (31:39)
Yes.

Yes!

Lars Rinnan (31:55)
political deepfakes, I

Alex Moltzau (31:56)
Oh my god.

Lars Rinnan (31:56)
mean, we've seen that in both the conflicts in Gaza and in Ukraine. You see Hollywood-level video generation on demand. And then you have other people claiming that real events are fake. So there's suddenly a discussion about what is true. How big is this

Alex Moltzau (32:09)
Yeah.

Lars Rinnan (32:24)
authenticity crisis, and does regulation stand a chance of regulating this? What do you think?

Alex Moltzau (32:34)
No. But we've got to try, Lars. It feels pretty hopeless, to be honest. But I think there is really no other option than to try to make the best of this. The world is going to change, but we have to do what we can to build a just and a better society. That's what we have to do. But it feels pretty hopeless, actually, when you see all these things

happening and coming out, and a collapse of reality. But man, let's talk about these Tai Chi videos. They're just filling my feed. I don't know why I'm getting them, and I'm so frustrated with all this Tai Chi content. At least in Europe, and I don't know if this is the case in the US, it's always marked with "this is an AI-generated video", because it is required by law.

Lars Rinnan (33:19)
Yeah.

Alex Moltzau (33:30)
As I understand it, that's also part of the AI Act. But honestly, if these were not marked as AI-generated, I'm not sure that I would recognize them in all contexts, because some of these Tai Chi videos seem pretty real. It's this guy with a six-pack just talking about Tai Chi: have you heard about Tai Chi? Tai Chi is amazing. You should try it out.

Lars Rinnan (33:56)
Yeah.

Alex Moltzau (33:59)
And: I have a six-pack, look at me. And then another generated actor says: oh, this would maybe work for my dad, I should tell him about it. Honestly, I need something else on my feed. I even press "I want less of this", and it still keeps coming.

Lars Rinnan (34:07)
Yeah.

But Alex, you get the feed that you deserve. You're making it yourself. We know how this works.

Alex Moltzau (34:23)
Thank you, Lars. Thank you. I know, I know. It's all my own fault. It's my fault.

I actually fairly like my LinkedIn feed, but the Facebook feed is just dead. It's broken. That's where I get the Tai Chi stuff; I seldom scroll on Facebook. Different platforms, different feeds. And that's machine learning, that's AI as well.

Lars Rinnan (34:42)
Mm.

Yeah.

Alex Moltzau (34:52)
So

Lars Rinnan (34:52)
Absolutely.

That's what's behind it. But going back to truth, which is the opposite of these Tai Chi videos: truth is a huge concept, and it's not merely up to the regulators to fix it. Of course, it also has to do with politicians, huge companies, the AI companies of course, but it also comes

Alex Moltzau (34:56)
Yeah, can, yeah, can.

Yeah, I have another point actually.

Lars Rinnan (35:22)
down to individuals. But yeah, go ahead.

Alex Moltzau (35:28)
Yeah, so I mean, I think this is a bit of a moot point. For fun, I have some personal AI projects. I'm sure you do as well, Lars, because you need to have some personal projects. We have too much spare time, don't we? No, exactly. So we also have to do AI in our spare time. I'm a bit of a terrible person when it comes to that. So, one of my personal projects: I play

Lars Rinnan (35:41)
Hmm

No.

Alex Moltzau (35:58)
piano. I play classical piano and I quite enjoy it, but I'm not particularly amazing; I will never be a classical pianist. This is just the bane of my existence, that we cannot do everything at once, right? But for fun, I wanted to explore these music generation models myself. And over the last few months, I made

Lars Rinnan (36:12)
Mm-hmm.

Alex Moltzau (36:27)
a range of piano drafts, and I also made my own sort of classical piano album. So I put that out; you can put a link to it from the pod. I would say it's not particularly good, but it's really interesting to see what you can do now with AI. I made my own

Lars Rinnan (36:42)
Sure.

Alex Moltzau (36:55)
piano drafts, and then suno.ai has this digital audio workstation, and also generation models. So you can basically make a musical draft and submit it into the AI model. Then you can do some prompting on top of the melodic draft that you have, and you get different instrumentation: orchestral instrumentation, choir, classical guitar. You just

Lars Rinnan (37:06)
Mm.

You

Alex Moltzau (37:24)
prompt these things on top of the musical draft that you created, and then you get a result. Honestly, there are a lot of those drafts that I would not be able to distinguish from human-made music myself. Before, half a year ago... no, it's actually almost a year ago, because I was the leader of the social committee in the European AI Office,

and I also organized the New Year's party. Maybe I shouldn't say this, but it's a podcast and I'm not working in the AI office anymore, so I can be a bit more open, in podcast style, of course. What I did was, for fun, generate different songs about the different work streams in the AI office.

Lars Rinnan (37:56)
Mm-hmm.

It'll be between you and me.

Alex Moltzau (38:21)
And I played them to people at the New Year's party. Honestly, the songs were not bad, but you could really tell that they were AI-generated. Now that's changed, actually. If I generated those things today, I would not be able to tell the difference; in almost a year there's just been such a development. And it's also the case with Sora 2,

Lars Rinnan (38:33)
Hmm.

Yeah.

Alex Moltzau (38:49)
and I guess that's why South Park made fun of it too. They had Butters create all these videos to make fun of the other characters. It's become a public consciousness thing that we are now at a time where truth is really, really hard to find and understand. And even if it is there, it's questionable, as was the point with the South Park episode when they generated

Lars Rinnan (38:57)
Yeah.

Alex Moltzau (39:19)
those videos and there was actually some kind of bad fallout. You can watch the episode to understand what it was. But then everyone doubted that it was real, because the state just told them: this is actually just a fake video, so it is not real. And at that point, as you said, maybe there is a bit of a collapse of truth.
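For listeners curious what the draft-plus-prompt music workflow Alex described a few minutes earlier can look like in code, here is a minimal sketch. The endpoint, parameters, and response shape are hypothetical stand-ins, not Suno's actual API or any real service; the shape of the flow is the point: record a draft, send it with a text prompt, get rendered audio back.

```python
# A sketch of the draft-plus-prompt flow: upload a short musical draft plus a
# text prompt, receive an arranged rendering. The URL, fields, and response
# format here are hypothetical, not any real service's documented API.
import requests

API_URL = "https://api.music-gen.example/v1/arrange"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder for whatever service you actually use

def arrange_draft(draft_path: str, prompt: str) -> bytes:
    """Send a recorded draft and a prompt; return the rendered audio bytes."""
    with open(draft_path, "rb") as draft:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"draft": draft},   # e.g. a piano recording
            data={"prompt": prompt},  # desired instrumentation/style
            timeout=120,
        )
    response.raise_for_status()
    return response.content

audio = arrange_draft(
    "piano_draft.wav",
    "orchestral arrangement: strings, choir and classical guitar over the melody",
)
with open("arranged.wav", "wb") as out:
    out.write(audio)
```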

Lars Rinnan (39:43)
Yeah, yeah. We could talk for hours about this. Some of the most played songs on Spotify now are actually AI-generated. I just saw 50 Cent do a soul version of some of his most popular tunes, and it was amazing, but of course AI-generated. So it's really hard. These

videos, these images, these songs, whatever is AI-generated, need to be labeled. And like you said, in the EU, or maybe in Europe, that's regulated. But in other parts of the world it's not, and you get a lot of videos or images that don't say they are AI-generated.

But they should, of course. And this also goes back to talking to a customer service agent: if that agent is synthetic, they definitely need to state that they are an AI, that you're talking to an AI. I think this makes a world of difference. But we still have some way to go before we're,

let's say, fully regulated, fully marked, fully labeled in this area.
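As an aside on what machine-readable labeling can look like: the toy sketch below attaches a sidecar manifest declaring a file AI-generated, and checks for it later. It is a simplified illustration, not the AI Act's prescribed mechanism and not a provenance standard like C2PA, but it shows the basic idea of a disclosure that software can verify.

```python
# Toy disclosure scheme: write a sidecar manifest next to a media file that
# declares it AI-generated, and check for that manifest later. Illustrative
# only; real systems embed signed provenance in the file rather than using
# a loose, easily-detached sidecar like this.
import json
from pathlib import Path

def mark_ai_generated(media_path: str, generator: str) -> None:
    """Write <file>.provenance.json declaring the media AI-generated."""
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "disclosure": "This content was generated or materially altered by AI.",
    }
    Path(media_path + ".provenance.json").write_text(json.dumps(manifest, indent=2))

def is_marked_ai_generated(media_path: str) -> bool:
    """Return True if a sidecar manifest declares the media AI-generated."""
    sidecar = Path(media_path + ".provenance.json")
    if not sidecar.exists():
        return False
    return bool(json.loads(sidecar.read_text()).get("ai_generated"))

mark_ai_generated("taichi_video.mp4", generator="some-video-model")
print(is_marked_ai_generated("taichi_video.mp4"))  # True
```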

Alex Moltzau (41:16)
Yeah, and

also, briefly, before we go to the next thing: I have a friend who went to a specific place in the US and just tried to book something. And they were met by a range of AI agents that didn't really express that they were AI agents. But he actually did some test prompts, about a baking recipe or whatever it was, I can't remember.

And he found out that on almost every single platform he was on, he was talking to AI agents or chatbots instead of an actual person. And it was really not declared at all. So that's the status quo in the US; that's really what is happening right now.

Lars Rinnan (42:03)
Yeah, exactly. I also see that some of these AI companies are actually publishing their own safety policies these days. I saw that Anthropic has this Responsible Scaling Policy, and I had a look at it. It looks really good: thoughtful, transparent. But they made it themselves.

There's no external party regulating them; they're just self-regulating. Can AI companies realistically regulate themselves, or is that just fantasy?

Alex Moltzau (42:46)
That's just fantasy. It's a beautiful fantasy, though. It would be so great if it worked. But unfortunately, time and time again, it has been shown that it does not work.

Lars Rinnan (42:48)
Thank you for a short and very specific answer. I also think that's fantasy.

It doesn't work. But do they do that because there's a lack of, let's say, state or government-wide regulation? Is that the reason?

Alex Moltzau (43:17)
Yeah, let's take a little step sideways maybe, and say that a lot of companies do contribute to the constructive development of safety rules. I think Anthropic is one of those actors that seems to be doing that fairly wholeheartedly, because they have made a lot of approaches and they publish a lot of work on safety, and they seem

to be working with it quite in detail. Of course, that makes sense from a safety and security perspective. And as they said in their reports, when it comes to AI agents and cyber attacks and a range of quite difficult topics, it's important that they deliberate and also share information about that.

But you need a regulatory unit, and you also need evaluations from a government perspective. This is also what the AI safety unit in the European AI Office is working on: model evaluations, and trying to understand how you can approach that from a governance but also a technical perspective. So the dream of self-regulation...

it has unfortunately been broken too many times for us to keep believing in it. It turns out to be more of a nightmare, unfortunately. I do think it is important for government and these companies to work together and have a good collaboration. But that means you have to have a trusted collaboration.

Lars Rinnan (44:55)
Yeah.

Alex Moltzau (45:10)
And that is also very challenging, because it depends on the political climate, right?

Lars Rinnan (45:15)
Exactly. And if we follow that path to the geopolitical battlefield: regulation doesn't happen in a vacuum, it happens in a world of competing powers. And you've probably seen the meme: the US innovates, China imitates, and the EU regulates.

Which, you know, Americans love; they really find it fantastic, but also a little bit hilarious. And at a certain level, there's probably some truth to it as well. I saw some statistics the other day that Europe actually has only 3% of Nvidia's A100 chips.

Of course, that's not the latest version; the latest version, the H100, is fairly new, so this is probably the better statistic to look at. But when Europe has only 3% of, let's say, the highest-performing chips used for training AI models, that is quite alarming, actually. So what do you think?

Might Europe become a permanent passenger in the global AI race? Is that, let's say, due to regulation, or due to other factors?

Alex Moltzau (46:51)
Yeah, I think this is probably one of the toughest geopolitical questions to answer right now. But I think it's very clear that Europe needs more sovereignty, for various reasons. It's something that has been observed in the case of recent conflicts and disagreements that can have consequences. And if we don't have a degree of digital or technological sovereignty,

it certainly would influence the way Europe deals with different situations going forward. As I see it, it's the highest political priority, and it's being worked on; there seems to be a large increase in investment going into this space as well. Will it be successful? That depends on us, Lars. It's also our responsibility to make sure that Europe is successful

Lars Rinnan (47:46)
Hmm.

Alex Moltzau (47:51)
in this endeavor, because there will be huge repercussions and consequences if we don't. But there are also many times in the past where Europe seemed completely redundant and then we pulled together and did something, as with Airbus or CERN or other common European projects, where

it was really not obvious that Europe would take any significant role, and then there was a shift. That was also a policy shift. It has to be the case that industry and policy and governance work across those spheres, and I would say civil society too, because the way we distinguish ourselves as Europeans is that we are quite a diverse

Lars Rinnan (48:28)
Mm.

Alex Moltzau (48:45)
region. We have a lot of languages, a lot of cultures, and I think that's beautiful. That's also something I saw working in the AI office: so many people from different places in Europe, whether Italians, Croatians, Germans, Swedes, Danes. There are so many different nationalities that come together

to build something, and I think that's really quite good. And now with AI, when it comes to language understanding and translation and a lot of added benefits, I think the European Union and the European area actually benefit a lot from these advances, in decreasing the cost of understanding each other across a range

of documents and a range of ways of expression. It actually makes the European project even more viable than it ever was. Because that has been a huge thing, I have to say: there is an operational cost that has to be considered when the EU, and Europe, works together, with so many different governments, so many different approaches and so many languages. But that operational cost, with the

advances in language technology, is going to get lower and lower. This is not a perspective I've shared anywhere before, but I think it actually makes the European project even more viable than it ever was, and in a way that respects the diversity a bit more and makes the different things that have been created more accessible to the general public.

Lars Rinnan (50:25)
Mm.

Yeah, I agree. I think diversity is basically a good thing, though it also makes things a little more challenging. Even in my former company, NextBridge, we had 13 different nationalities from four different continents. That was very stimulating, but also a little bit challenging,

Alex Moltzau (50:56)
Yeah.

Lars Rinnan (51:05)
because of language. Even though with language models, language is basically solved, you also have culture, which is harder to put into some kind of algorithm. But did you get a feeling, let's say within the European AI Office, that this was a concern: that the EU was regulating more

than the US and China? I do know that both the US and China have regulation on AI. It's different from the European kind, and you probably know a lot more about that than I do. But was that something that was discussed?

Alex Moltzau (51:53)
Yeah, that was constantly front of mind for political leadership, I can say that, and also a discussion between the office and the leadership. They meet a lot of people who have and share those concerns. And it is the case that the people, at least those I worked with, were listening and trying to

take that into consideration. But it's, of course, not easy; regulating is not a simple job, I would say. And with horizontal regulation, as you were describing, this applies across a range of different domains, whether it's health, energy or maritime. There is a case that,

on the governance level as well, you have to create these collaborations between different supervisory authorities, so that the supervisory authority for health, for example, has a good understanding of what that means in its domain: when you get more computer vision into radiology, or when you get more

HR management software into the logistics side of things, what does that mean? How can you make sure those products and services are safe, both for the people working at the hospital and the people receiving care? And there are a lot of operational mechanisms as well. Now I sound like a huge bureaucrat. I mean, wow.

The AI Board, for example, is a gathering of representatives from the different member states, with Norway and Iceland as observers. They work very concretely on different topics, and they have subgroups working on specific shared interests, with representatives there as well. So there is a lot more detailed

policy and governance work going on at the European level on these different topics, which I think is positive. But in the end, it's a difficult topic to handle.

Lars Rinnan (54:25)
Yeah, it's really difficult. I'm definitely of the opinion that you need regulation in a whole range of areas within the AI sector. So I try to push back when all these Americans push that meme in front of you and just laugh, because I think they're missing something important.

Alex Moltzau (54:51)
Yeah, and just one comment as well: China is super heavily regulated. It's a way of governing technology where certain things are allowed and certain things are not allowed. So it's extremely heavily regulated, in certain ways a lot more than the EU ever will be, of course. The EU's approach, by contrast, is not very invasive.

Lars Rinnan (54:56)
Mm-hmm.

Alex Moltzau (55:21)
As I see it, the regulations that have been created in the EU are mostly about actually trying to make sure there are good products, and having standards for them. I'm not sure how invasive you could call that. They will not knock on your door anytime soon unless something very crazy happens. It's just to make sure that people have actually documented the products they are building and the work they are doing. And these safety standards exist in so many other sectors.

There is such a range of existing practices in a lot of other sectors that the tech sector has not necessarily had to follow. So it's not that crazy when you think about it; it's not that groundbreaking. It's just trying to now also apply those principles to

a sector that obviously has a huge impact on our communities and our society. It has such a massive footprint on the way we live our lives that it's kind of strange it hasn't had more stringent safety procedures. So we are trying to make sure that now changes.

Lars Rinnan (56:31)
Mm.

Yes.

Yeah, I think that's good. You mentioned China. Over the past six, seven months we have seen some Chinese open-weight models entering the global market. And yeah, they're pretty good, actually very good, and they're improving fast. Maybe they'll be increasingly attractive as trust in the US vendors dips. But

how do these, let's say, open-source, open-weight Chinese models complicate the regulatory landscape? And do you think that open-source, open-weight models from anywhere are a risk in terms of, let's say, terrorist groups using them for malicious purposes?

Alex Moltzau (57:31)
Yes and yes. It's a challenge, and the whole risk perspective is present in the AI office too; they have their own team for biosecurity, for example. And as David Evan Harris has been commenting: a lot of models are released

Lars Rinnan (57:33)
Yeah.

Alex Moltzau (58:01)
with safety procedures implemented, but often after a fairly short amount of time they are cracked; you can bypass those safety procedures and find versions of the models accessible on the market that you can use as you see fit. And certain models have less stringent safety procedures, such as Grok. There is a level of safety

Lars Rinnan (58:28)
Mm.

Alex Moltzau (58:29)
implemented

that is more or less lax, also depending on how you view it. For example, when it comes to producing images of famous people or politically exposed persons, you would probably be able to do that more easily in Grok than in some other models. So it depends on the model provider. But also, I guess,

it is true that there is this dynamic in the world where you have competing factions, and there is for sure a certain political timing to when DeepSeek releases its models, as you could see from the DeepSeek model being released at the beginning of the Trump presidency.

Of course, a lot more money was spent than was stated, and a lot of researchers have shown this, because there was this outrageous sum: look at how little China spent on AI models. And of course, that was completely fictitious; a lot more money was spent on this, and it's a huge investment by the Chinese government in many ways.

Lars Rinnan (59:40)
Mm.

Alex Moltzau (59:55)
So it poses challenges, because it begs the question of which model, which world model, we work from. It affects not just the values and the weights in the models themselves, but also what values we aspire to, and which questions

can be answered in certain ways, and what answers are provided. It is different, right? Empirically, as evidence, it is very different. When you work with the open models, or through the apps of certain companies, they answer questions in certain ways. I'm trying to be diplomatic here, Lars. But I also have an interest in China. I have a range of

Lars Rinnan (1:00:28)
Yeah.

Alex Moltzau (1:00:51)
interests; I've been learning Mandarin, and I like to try to understand the Chinese political context. It's not one of my specialties, but I think it's a really fascinating, huge country, with a lot of different things going on in the tech sphere, but also when it comes to business.

Lars Rinnan (1:01:08)
Yeah.

Alex Moltzau (1:01:20)
And how can we not be interested in more than a billion people, whether it's China or India? I feel we should take a deeper interest in what India is doing. And with the AI Impact Summit in India coming up in February, it's going to be interesting to see what India wants to present on this global stage when it comes to AI.

Lars Rinnan (1:01:47)
Absolutely. I believe that India is actually going to rise quite quickly to become the next superpower, though of course they have some structural problems as well. That's a huge topic, definitely for another podcast some other time. Let's talk a little bit about data, which is, of course,

the fuel behind all of this. AI systems are trained on oceans of data; we know this. And yet ordinary people struggle to meaningfully see, move or delete that data. So it seems like we are feeding the machine, but we can't control the machine. Why is data ownership

still so murky, even after all the different regulations: GDPR, the DMA, the DSA, and now the EU AI Act? Do we really control our own data, or is that just an illusion?

Alex Moltzau (1:03:07)
It's just an illusion.

Lars Rinnan (1:03:09)
Yeah, that's sad. But do you believe that real data portability will ever exist, or will it remain a legal fiction?

Alex Moltzau (1:03:15)
Yeah.

No. But as I said earlier in the conversation, even if the regulations maybe are not fully working, it doesn't mean that we shouldn't strive for something. And it is something that has changed significantly over the last decade. There was a data directive; it didn't work

to a great extent, so they introduced the General Data Protection Regulation, GDPR, and it has been working and not working. That's a huge academic and empirical discussion that goes way beyond cookies, right? But I would say that it has hopefully pushed things in a more rights-based direction; there have been some fines, and there have been value statements,

an important discussion and like we have gained more access to our data. I mean, I think that would be fair to say. ⁓ Also like, for example, for me on Facebook, I could port all my data away at least like seemingly and that was different before, you know, but at least now it's a bit easier than it used to be, right? At least we can say that, right? Yeah, I've tried it. I mean, like I've tried to download all my Facebook data. ⁓

Lars Rinnan (1:04:42)
Mmm.

Yeah. Have you tried it?

Yeah.

Alex Moltzau (1:04:55)
Out of interest, of course, yeah.

Lars Rinnan (1:04:57)
Exactly. Have they made it easy to do that, or is it quite hard?

Alex Moltzau (1:05:03)
It's easier than it used to be, but they made it hard. It is maybe in their interest to make it as hard as possible, but thanks to regulatory and political pressure it has become easier than it used to be, I would say. So does regulation work? I mean, yes. Yes, it does work.

Lars Rinnan (1:05:04)
Hmm. Exactly.

Yeah, exactly.

Yeah, yeah.
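To make the portability point concrete, here is a minimal sketch of inspecting such a download. The folder name and JSON layout below are assumptions, since export formats vary by platform and change over time; the point is only that the exported data arrives as plain JSON you can walk through yourself.

```python
# Minimal sketch: tally what is inside an unzipped personal-data export
# (GDPR-style data portability). The directory name and JSON structure
# are hypothetical -- real exports differ by platform and over time.
import json
from collections import Counter
from pathlib import Path

EXPORT_DIR = Path("facebook-export")  # hypothetical unzipped export folder

def tally_export(root: Path) -> Counter:
    """Count top-level JSON keys across every .json file in the export."""
    keys: Counter = Counter()
    for path in root.rglob("*.json"):
        try:
            data = json.loads(path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue  # skip files that are not valid JSON
        if isinstance(data, dict):
            keys.update(data.keys())
    return keys

if __name__ == "__main__":
    for key, count in tally_export(EXPORT_DIR).most_common(20):
        print(f"{count:6d}  {key}")
```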

Alex Moltzau (1:05:32)
But have we solved data access? No, it's pretty clear we haven't. And there are also new acts on how we manage and govern data, like the Data Governance Act. We don't talk a lot about that, and it's not my specialty, but there are different mechanisms in it. There is also more sharing of data in the European

Lars Rinnan (1:05:52)
Mm.

Alex Moltzau (1:06:01)
region with these different data spaces, and I think one of the most significant is the European Health Data Space. Because if you go to Italy and suddenly have a health incident there, it's really nice that they have access to your health data, and that when you travel in the European region they can at least make sure they help you in the best way possible. So I think that's highly positive for Europe as a region. And yeah, there's a lot of constructive

Lars Rinnan (1:06:08)
Mm.

Mm.

Alex Moltzau (1:06:30)
work underway that could benefit us as citizens. So I'm hopeful, but we haven't solved it. We haven't fixed it.

Lars Rinnan (1:06:34)
Yeah.

Yeah, so there's some positive development. That's really good. Of course, you also have intellectual property rights. You've seen authors, artists, and others filing lawsuits against AI companies for "stealing", quote unquote, their data in order to build LLMs, image and video generators, and, as you talked about, music generators. So

Alex Moltzau (1:06:59)
Yeah, of course.

Lars Rinnan (1:07:08)
what is your view on this, and perhaps what is the European AI Office's official view on this?

Alex Moltzau (1:07:08)
Yeah.

Yeah, I feel like I'm blushing, Lars, because I feel like I just made some AI slop myself, my classical-music AI slop that I just uncritically shared. But for me, the differentiator is whether this is a hobby thing or whether it's something that is used

Alex Moltzau (1:07:45)
on a really large, massive basis to create commercial gain, which becomes copyright infringement. This is also the case with open-source models: people can at least download and use certain models for themselves, in different contexts, on a personal level.

But it is different if you create a company and specifically use the IP of someone else to generate massive income. If it was based on someone's IP and suddenly you've generated billions of dollars, as is the case with these AI companies, for example with books, then there was a lawsuit, and a successful one, right? So they had to pay out money to the rights holders. And of course that's...

That's quite important, and it's only fair, when human creativity and human labor are the basis of all these models (they're trained on human creativity, trained on human labor, and trained with human labor), that the rights holders are remunerated, and that the people who undertake this work are remunerated too. With the tagging, for example, as you know, there was a huge

Lars Rinnan (1:08:50)
Mm.

Alex Moltzau (1:09:11)
article that put the focus on a lot of this data-labeling work in places such as Kenya, where workers were unionizing against these tech companies to make sure they at least got a living wage; even a living wage was really not provided for this sort of data-labeling activity. So there's a lot to say about what is fair in the AI value chain as well,

Lars Rinnan (1:09:16)
Hmm.

Alex Moltzau (1:09:40)
when all of these things are not generated out of nothing. It takes a lot of materials and a lot of labor. And in the interest of sustainability and climate, we know that AI is very material, right? It's rooted in physical resources. A really interesting book on this, of course, is Kate Crawford's Atlas of AI, maybe a book you've also perused. And I think it's

really quite interesting to try to understand these different value chains of AI, the materials and the labor and everything involved in actually making it happen, so that it doesn't just become "look at this shiny thing that made this thing".

Lars Rinnan (1:10:18)
Mm.

Yeah, there's a little bit more to it. Absolutely. But you know, I did see these lawsuits, and I saw that some of them were successful. But it's really hard to decide on the level of remuneration for your intellectual property. I mean, okay, so you wrote a book. Good. That was one of the, let's say, three billion books that went into Gemini, alongside Wikipedia and just about the whole internet. And then it is used, not directly, no copies. But how should that kind of remuneration system really work?

Should every author get some kind of payment upfront? Are we talking about some form of universal basic income for them? Maybe that's good. I actually think that most artists and authors should get some UBI. Most artists can't survive on doing their art, so they take other work, which makes their art not as good as it could be, because they don't have much time to work on it; they have to survive. Maybe this is a huge question, maybe it's really hard to answer. Maybe it doesn't have anything to do with regulation directly, but it goes back to data and data ownership, those kinds of things. Do you have any thoughts on this, Alex?
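For scale, here is the back-of-the-envelope arithmetic behind the question. All figures are purely hypothetical illustrations, not numbers from any actual lawsuit or company: the point is that even a very large licensing pool spreads thin across a corpus of millions or billions of works.

```python
# Back-of-the-envelope: how thin a one-off licensing pool spreads
# across a training corpus. All figures below are hypothetical.
def per_work_payout(pool_usd: float, n_works: int) -> float:
    """Equal split of a licensing pool across all works in the corpus."""
    return pool_usd / n_works

pools = [100_000_000, 1_000_000_000, 10_000_000_000]   # $100M, $1B, $10B
corpora = [1_000_000, 100_000_000, 3_000_000_000]      # 1M .. "3 billion" works

for pool in pools:
    for n in corpora:
        print(f"pool ${pool:>14,} across {n:>13,} works "
              f"-> ${per_work_payout(pool, n):,.4f} per work")
```

On these assumptions, a one-billion-dollar pool split equally over three billion works comes to roughly 33 cents per work, which puts Lars's later joke about wanting 10 cents from Google in perspective.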

Alex Moltzau (1:12:25)
Yeah, I have a lot of thoughts on it. Of course, this has also been a discussion in Norway. As you know, the National Library has been working with rights organizations, paying them a certain amount to gather certain materials and build certain models, especially with the different media companies that have quite large repositories, but also others.

Lars Rinnan (1:12:42)
Hmm.

Alex Moltzau (1:12:55)
So I think it's possible to say that it can be done. But as you say, the question is how much. Spotify, for example, famously does not pay its artists a lot of money; it's a bit of a running joke. And still, I think, they try to gather as much music as they can and then remunerate it as well as possible.

I guess Ecas (?) is mentioned in Sweden. And there's also a large financial upside to this kind of endeavor; with the tech companies, it moves the financial gain away from the society where the value is generated, or where the value is provided. And...

This is a thing we sometimes call tax, you know, and it isn't necessarily paid. It's a huge basis for Norwegian society: we trust the government, we pay our taxes, we receive services, and we trust the government based on receiving those services. That would look very different if

Lars Rinnan (1:14:15)
Mm.

Alex Moltzau (1:14:21)
the money that was supposed to go to tax, and the value created, were not returning to the area where the value was created or provided. In a way, Norway doesn't have UBI, but we do have a lot of financing for people who come into a situation where they unfortunately are unable

Lars Rinnan (1:14:33)
Yep.

Alex Moltzau (1:14:47)
to work. The welfare protections in Norway are a range of very, very strong measures to make sure that people don't fall out of society. In San Francisco, by contrast, you are basically going to fall out of society if you make a mistake, right? It has a huge community of homeless people; most people that have visited San Francisco know this, very viscerally and visibly:

Lars Rinnan (1:15:03)
Mm.

Alex Moltzau (1:15:15)
the city does not protect its weakest citizens, and it certainly does not prioritize them. In a way, if the value that is created does not go back to society, and we have a general idea that we have to take care of each other regardless of background, that maybe I make a little bit more money but some of that money goes to taking care of those who are worse off, then

we no longer have a social democracy, right? And that's what Norway, at least, is based on. It is a social democracy, built on the idea of labor that is fairly remunerated, not overly compensated, because we have a society where you are supposed to make enough money, but not so much that it damages the foundations upon which the society is built.

Lars Rinnan (1:16:12)
Yeah, I think that's a good society. Yeah.

Alex Moltzau (1:16:13)
But of course, this is a difficult discussion. So yeah, I quite like living here.

Lars Rinnan (1:16:19)
Yeah, absolutely.

I think this is a fairly good segue to, let's say, the last question, because as you know, this podcast is called The World in 2029, and it actually feeds into my upcoming book, The World in 2029. So at some point, hopefully, fingers crossed, I'm going to be an author. And maybe Gemini 7 is going to harvest my book, and I want 10 cents from Google for that. Anyway, jokes aside: if you look into the world in 2029 in terms of regulation, what are you seeing?

Alex Moltzau (1:16:56)
Yay.

It's just so impossible to predict. I mean, I like that you're trying, and that's probably why I will want to read your book when it comes out. I think it's a worthwhile endeavor, but it's really not my forte, not my strong suit. I don't think I've ever had the goal of doing this kind of futuring well, because usually you need to have such a range

Lars Rinnan (1:17:24)
I know.

Alex Moltzau (1:17:50)
of sources to base it on. What I really respect and appreciate about you, Lars, is that you have this wide vision and you talk to a lot of different people, which also seems to be part of the goal of this podcast: to get a lot of different perspectives that feed into what you're doing. What I would say is what I would like it to look like, if I'm futuring a positive scenario.

Lars Rinnan (1:18:07)
Hmm. Yeah, that's right.

Alex Moltzau (1:18:19)
Number one, I would love to see that the world is more peaceful. I'm entering the Miss Universe contest now, Lars: world peace. But no, I do want it to be more peaceful. I think we are seeing a lot of securitization, as they call it in political science.

Lars Rinnan (1:18:28)
Mm.

Alex Moltzau (1:18:42)
It's the case that AI as a field is very securitized now, far more than it used to be. The political discussion is drifting very much from product safety over to security and war. This is just the status quo, and I think we have to recognize that geopolitically. But I would love to see it go towards something more peaceful, so that we could spend a bit more time trying to figure out

Lars Rinnan (1:19:01)
Mm.

Alex Moltzau (1:19:12)
the larger things that could matter to us as an overall human society, and to have a bit more in the way of common projects, like tackling the climate crisis that is still ongoing, that will affect us all and make our lives worse if we don't deal with it, and other pressing global issues and matters.

Lars Rinnan (1:19:31)
Mm.

Alex Moltzau (1:19:37)
And secondly, with the regulatory landscape, I think we will see it much better organized, in the sense that there is some agreement on what measurements and units of measurement the field of AI uses in certain spaces, like text or images or voice, so that it seems a bit more organized and there is a better way to evaluate things. And when

Lars Rinnan (1:20:00)
Mm.

Alex Moltzau (1:20:07)
there are so many products, there are going to be a lot of bad products, Lars. There are going to be a lot of terrible products; I will just predict that as a scenario. But we have to make sure that when we buy those products or put them into use, at least in Europe, and I would hope also in Norway, we have a very good way to evaluate and test whether they actually work before we spend

Lars Rinnan (1:20:12)
Hehe.

Alex Moltzau (1:20:32)
hundreds of millions or even billions buying them and putting them into use in society. So I think we'll see that in 2029, and I will try to make sure we work towards it. It would be nice not to waste hundreds of millions on products that don't work. I think that's a nice thing.
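As a toy illustration of what such pre-procurement testing could look like, here is a minimal acceptance-test sketch. Everything in it is hypothetical: the tasks, the threshold, and the model stand in for whatever system and agreed benchmark a buyer would actually use.

```python
# Minimal sketch of a pre-procurement acceptance test: run a candidate
# system on an agreed task set and only "buy" if it clears a threshold.
# The tasks, threshold, and model below are all hypothetical stand-ins.
from typing import Callable, List, Tuple

def acceptance_rate(model: Callable[[str], str],
                    tasks: List[Tuple[str, str]]) -> float:
    """Fraction of tasks where the model's output matches the expected answer."""
    correct = sum(1 for prompt, expected in tasks
                  if model(prompt).strip().lower() == expected.strip().lower())
    return correct / len(tasks)

def procurement_decision(score: float, threshold: float = 0.9) -> str:
    """Accept the product only if it clears the agreed evaluation bar."""
    return "buy" if score >= threshold else "reject"

if __name__ == "__main__":
    tasks = [("capital of Norway?", "Oslo"), ("2+2?", "4")]

    def bad_model(prompt: str) -> str:
        return "42"  # deliberately wrong, to show a rejection

    score = acceptance_rate(bad_model, tasks)
    print(score, procurement_decision(score))  # prints: 0.0 reject
```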

Lars Rinnan (1:20:52)
Yeah, it is a nice thing. I think a lot of the things that you've been talking about actually give me hope. I think regulation is going to improve, hopefully also across borders and across cultures, because we probably need global regulation, or let's say at least a global agreement on the basis, the core, of the regulation.

Hopefully, by 2029 we can see that AI didn't spin out of control, because I think that could be quite harmful. But maybe 2029 is actually the tipping point, where we finally learn how to steer, control, and regulate it in a suitable fashion. I would love to see that. And it seems that you agree, Alex.

Yeah.

Alex Moltzau (1:21:50)
Yeah, I agree on that. I think that's a good way to put it.

Lars Rinnan (1:21:53)
Yeah, fantastic. It's been a huge pleasure for me talking to you, learning more about the European AI Office, how you work, how you think, everything going on under the hood, which is something most people don't get to learn about. This has been really, really rewarding. So thank you so much for joining. It's been fantastic.

And to everyone listening: if you enjoyed this conversation, subscribe to the podcast, share the episode with someone who's curious about the future, and follow along for more discussions shaping the chapters of my upcoming book, The World in 2029. And remember, the future is better than you think. Thank you.


