Final episode on AI and generative AI, covering the start-up and VC landscape, the regulatory and privacy environment, and what the future holds, with answers to such important questions as: can AI kill us? (Spoiler alert: yes, it can.)
Navigation:
- Intro (01:33)
- Start-up and VC Landscape (02:13)
- Open-Source (08:31)
- Regulatory & Privacy Environment (14:31)
- The Future (21:18)
- Conclusion (31:17)
- Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt
- Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro
Intro (01:34)
— Introduction —
Nuno
Welcome to Episode 44 of Tech DECIPHERED. This is our second and last episode on Artificial Intelligence and generative AI.
Nuno
In the last episode, we introduced AI. We talked about what’s happening around generative AI, as well as the verticals and what the big guys are doing. In this episode, we will go further into what’s happening in the startup and venture capital landscape, the open source landscape, and the regulatory and privacy environment, and we’ll end by discussing whether AI will save the world or kill us all.
— Start-up and VC Landscape —
Nuno
Maybe moving to startups: obviously, there’s been a lot of funding into companies that are now at the forefront of some of these big shifts. We talked about Stability AI, which had raised over 100 million from players like Lightspeed and, I believe, Coatue, and which is responsible for Stable Diffusion. We’ve seen in the past very well funded startups in the AI space that didn’t necessarily scale or do very well. But at the end of the day, there’s definitely been a lot of funding. What is the current crux of the matter if you’re a venture capital firm looking at this landscape?
Nuno
The crux of the matter is noise. You see all these pitches: “Okay, I’m ChatGPT for something,” or “I’m an app that’s going to run on top of existing platforms using generative AI.”
Nuno
Generative AI is the new blockchain. It’s the new Web3. Two or three years back, all pitches had Web3, tokenization, token economics, et cetera. Now everyone’s like, “It has generative AI.” My, again, relatively simplistic view is to think of it as an app economy. In the same way, when the App Store launched in 2008 and we had mobile apps, initially everyone said, “Oh, that’s not an app economy. This thing is never going to amount to an economy.” It did. We now know that mobile-first and mobile apps are an economy. We have two proofs of that on this podcast.
Nuno
It is also true that I believe what we’re seeing right now is a similar thing to an app economy. This doesn’t mean that we’re not going to have some significant revolutions around AI and new platforms emerging that everything is going to be based on. I think we will have that as well.
Nuno
But at the same time, when we start seeing people saying, “I’m going to use the tools and platforms that exist today to build an application specifically around this, which will be really cool and will take productivity to the next level,” most of these will fail, like most apps failed. Some will potentially win.
Nuno
The notion of “generative AI first,” as I look at it, is for me a bit analogous to the app economy we saw with mobile. Some will rise to the top, very few. Most will fail dramatically. There will be tons of noise. For us as investors, as venture capitalists, the complexity is to understand: how can I reduce the noise level? How can I identify companies that are real, not fraud, not BS? And within the companies that are real, which have a high likelihood of having a shot at this? Are they creating a new market, or tapping into an existing market that they can corner?
Nuno
I think that’s the real dilemma of VCs right now. I’ve seen a lot of VCs that have never talked about AI before now talking about AI. I think it’s a bit facetious and maybe a little intellectually dishonest at this stage. It’s becoming the new thing; everyone needs to jump into AI. I feel there’s an app economy coming, and you can invest in some of these companies and really build behind that.
Nuno
The second part of our thesis, certainly at Chamaeleon, is very much that AI is going to be in everything. A lot of the companies we invested in were not even generative AI companies. They were actually using deep learning methodologies and techniques in their business, but they were not generative AI companies. For them, it’s a feature extension: having a generative AI proposition in their product line makes sense.
Nuno
It’s not a new thing. It just makes sense to amplify their feature set with a generative AI play that they can use to cross-sell or upsell to their current and new clients.
Bertrand Schmitt
Yes, it’s clear that as an investor or, of course, as an entrepreneur, you really have to question where you want to position yourself in that range of possibilities. You clearly see the biggest tech companies in the world investing like crazy in AI. In some ways that’s scary. Usually you want to invest in spaces where they cannot invest for some reason, or where they are not investing or not focused. In AI, for sure, they are investing.
Bertrand Schmitt
Two, you can see some incredibly well funded startups as well, from OpenAI to Anthropic. It’s a tough space to be a pure AI player today. Then you can think about investing in infrastructure, and we can see a lot of interesting companies in the infrastructure space, Hugging Face, for instance.
Bertrand Schmitt
There is a space there, as usual. There is also, as you said, the question of startups who are just going to do better what they already do, thanks to AI, as they increase their efficiency and propose new features that make their product more attractive. One question will be: are there opportunities for startups to disrupt existing software, the same way that SaaS disrupted traditional licensed, CapEx software, and mobile disrupted traditional desktop software?
Bertrand Schmitt
Is there a way to disrupt some existing space thanks to AI? I think that would be the question: a new version of what we do today, but in a more efficient way. Or is it just an opportunity for the existing players to do better? They have some advantages because they already have customer data, obviously. As an investor, that would be a big question. Do you want startups with AI in their name, or do you want startups that are simply focused on a value proposition they can finally deliver better, solve, or do in a more cost-efficient way?
Nuno
Maybe four or five years down the road, there won’t be any distinction. AI will be literally everywhere. It’s more: are you an AI-first company or not? Are you creating a platform or infrastructure for AI, or are you creating an app on top of AI? It will be more of a stack discussion rather than “are you using artificial intelligence?” Everyone will be using artificial intelligence, I have no doubt.
Bertrand Schmitt
In a similar way, everyone had to go to mobile. It’s pretty rare now that you don’t have a mobile offering. In the same way, you don’t talk about the fact that you have a database running your system. At some point, having a database, a relational database, was a new thing. Now it’s obvious everyone has one. It’s not a differentiating factor.
Bertrand Schmitt
If you remember 20 years ago, the other new thing was to have a website. With some of this, over time, you either adapt or die. For sure, some new entrants will try to take advantage of this to position themselves against existing players that are slow to move.
Nuno
Well, we’re getting back to the… I have to give a mea culpa here. The definition that AI is like electricity, that it’s going to be everywhere: I used to opine that it was a non-nuanced definition, but I now have to say I probably agree with it. It will effectively be like electricity at some point, or different degrees of electricity distributed in different ways.
— Open-Source —
Nuno
Let’s talk a little bit about open source, the real open source. We have incredible movements around open source, and we see players like Google being a little bit concerned about it. Is there a fundamental moat or not?
Bertrand Schmitt
There was this fantastic article that looks like a leak from someone at Google, titled “We Have No Moat, and Neither Does OpenAI,” acknowledging that all this talk, that you need a huge quantity of data to train an AI and a huge quantity of capital to build advanced AI, might actually be wrong.
Bertrand Schmitt
That’s a very interesting take, because today that’s one of the biggest questions, at least if you look at chat: will proprietary models win, from OpenAI, from Anthropic, from Google, or will open source models win? That’s a very big question, because it will drive very different fundamental cost structures, but also monopolistic behavior.
Bertrand Schmitt
One interesting reference point is images. Generative AI for images went, in 12 months, from OpenAI being transformational and leading a new wave to Stable Diffusion taking over and becoming the leading solution, except that Stable Diffusion is open source. It’s an obvious parallel, and it happened very recently. The question is: will the same happen to chat? Personally, I think it’s a strong possibility. It’s clear that some costs are going down like crazy, very fast, because of new ways to train models that might be more efficient.
Bertrand Schmitt
We are also seeing a standardization of training data sets, making it easier to build models, because if you already have the data scraped in one place and standardized, you can really separate the problems. We are also seeing new tools to evaluate the capability of models. Obviously, an open source product can be trained with user feedback at scale pretty quickly, because you can imagine that a lot of people would just want to use the cheapest product and therefore would provide a lot of feedback that makes the product better. What’s your take, knowing this?
Nuno
The skeptic product manager and computer engineer in me would say the big guys and the well-funded guys will ultimately win. The reason I say that: I think open source has a role. It’s a great forcing mechanism for the big guys to behave, and a great mechanism for data to be treated properly and for the federation of data to be a little more equitable than it probably is today. But the ultimate play is that, in this world, product management benefits from enlightened dictatorship.
Nuno
It benefits from having clear direction on what we need to deliver and how we need to deliver it. That’s one advantage of having a company doing it rather than an open source community. A second big advantage is resources. Resources are still needed, whatever we want to say about needing less data to train, less data in general, et cetera. Resources win. Having billions of dollars you can throw at it is different from a community that self-organizes and maybe raises funding. I think open source has a role.
Nuno
I think open source will continue having a role. I do believe that most of the end-user value, B2C and B2B, which is how I think, at least as an investor, will be created through companies. Certainly part of their stack will be proprietary. They might reuse open source stuff, but a lot of their stack will be proprietary. Again, I have no data to base this on, just my feeling and what’s happened over the last two big shifts in computing history. But that would be my two cents on this.
Bertrand Schmitt
I might not be in agreement with you there. We can see how Linux, for instance, won the data center. And more recently, we saw how, in image generation, open source seems to have won the day. What’s interesting when you think about open source is that of the two biggest open source models, one is backed by the UAE. You could argue they have near-unlimited funding if they want.
Nuno
But resources are not just funding, resources are also talent, et cetera.
Bertrand Schmitt
But with funding, you can bring talent.
Nuno
I’m not sure at scale, but anyway.
Bertrand Schmitt
The other big provider of open source, surprisingly in some ways, has been Meta. The granddaddy of open source chat models is LLaMA, partially open-sourced by Meta. It benefits from an insane amount of resources and one of the best AI teams in the world. You could argue Meta might have an interest in disrupting what Apple, Google, Microsoft, and OpenAI are planning to do with proprietary solutions, and might have the perspective that, you know what, we can play a different game: open source it, bring everyone to our computing environment and computing paradigm, and that might benefit us. That might just be a very different business strategy backed by, again, very similar funding to the other big guys.
Nuno
We’ll talk about Marc Andreessen’s article, his opinion piece “Why AI Will Save the World.” He talks about this. I don’t like it because it has Christian undertones that I’m not sure are very accurate for this discussion, but he talks about the Bootleggers and the Baptists. The Baptists in this camp would probably be the open source people, and the Bootleggers would probably be the big corporations. Normally, the Bootleggers win, I think. He’s obviously referencing the Prohibition era in the US, et cetera. That’s the analogy.
Nuno
Short of being too philosophical about this, I think we’ll probably disagree. I do think that we will have for-profit corporations, they will create stacks, and there will probably be a lot more innovation at that top end. I also agree with you that there will likely be verticals, not necessarily even niches, that might be well addressed by open source platforms. Therefore, we will see the coexistence of both. But my skew is still that the big guys will win the mass market. That’s what I would go with.
Bertrand Schmitt
Let’s see how it goes. What’s interesting is that, I guess, we’ll get some answers in the coming 6 to 18 months. I don’t think it will take a decade to see how it goes.
Nuno
We’ll revisit the topic, yes.
— Regulatory & Privacy Environment —
Bertrand Schmitt
Let’s talk about the regulatory perspective. It’s a very strange place, I must say, at least from my perspective. I’ve never seen regulatory interest coming so fast to a new space that barely existed two, three years ago. It was there, but behind the scenes, at small scale, and suddenly it has become so scary for some. We have seen a wide variety of reactions.
Nuno
Can I make a joke, Bertrand?
Bertrand Schmitt
Yes, please, of course.
Nuno
Then you can go serious again, but I want to make this joke. I don’t know if you’ve ever watched The IT Crowd, but it’s one of my favorite British comedy series. It’s basically about an IT team within a corporation that is, as always, very dysfunctional. The IT team is also dysfunctional, by the way. Spoiler warning: if you’ve never watched the last episode, please skip the next, I’d say, 30, 40 seconds. In the last episode, the manager of the team, who is actually totally incompetent because she understands nothing about technology or software or system administration or anything…
Nuno
They convince her that the internet is in a box. She goes in front of the whole company saying, “The internet is all in this box.” A little bit by accident, the box falls and breaks. It’s chaos, because they all think they broke the internet. The two guys from the IT team are just looking at each other like, “Oh my God, not only did she buy it, everyone believes this.” They couldn’t even make her look bad; everyone thought the internet really was in that box. What you’re about to talk about in terms of regulation reminds me of just that.
Nuno
It’s a bunch of people who understand nothing about anything saying, “We need to regulate this. We need to regulate this or it’s going to kill us. It’s going to kill us tomorrow.” It might kill us, but not tomorrow, I think.
Bertrand Schmitt
Yes. I think regulation is coming from multiple sides. One, we can talk about IP rights. Do you have the right to scrape some of this data? Two, what is the output? Is the output a clear copy-paste of some existing content? That’s one big question, and I will talk a bit more about it.
Bertrand Schmitt
Then there are privacy regulations, especially coming from the EU. Then we obviously have things around bias. Is it as biased as a human being? Is it as biased as what you see on the internet, or is it even more biased? That’s a fair question. Then there is the biggest one of all: “Oh my God, this is the end of the world. AI is going to kill us all. It’s coming sooner than you can expect.”
Bertrand Schmitt
The doom people, for me, are pretty interesting, because if we want to destroy ourselves, we have had weapons of mass destruction at an unimaginable scale for the past 60, 70 years. Unfortunately, those are still present among us. We also obviously have the risk of viruses, the risk of labs making mistakes. When you start to read a bit, it’s clear these labs are not very well regulated.
Bertrand Schmitt
Basically, we have very clear and present dangers that we either don’t think about, don’t really care about, or don’t really regulate that much; if anything, it has gotten worse over the past decades. And now we talk about something where there is very little probability, and where there is absolutely nothing right now, and that is what we rush to take care of.
Nuno
Let’s go one by one. I mean, the privacy issues are fair, but they are privacy issues that exist already, right?
Bertrand Schmitt
There is nothing new. We are talking about scraping the internet.
Nuno
Okay, this thing scrapes more than the other things that exist out there, but the privacy issues already exist. I’d say they are fair, but they are not necessarily new. Maybe they take on a different dimension, given that the LLMs and everything else out there are scraping a lot more information. But it certainly doesn’t feel to me like a new topic per se. It’s a valid topic, but it’s not a new topic.
Nuno
“The world is going to end” is something we need to spend time on, and we’ll talk about it in our next section. There are some fair concerns; not everything is rosy. But I don’t think it’s a fair concern for tomorrow; it’s maybe a little bit too doomsday. There needs to be thinking around this, and I personally have some thoughts we’ll discuss in the next section, but I don’t think it’s a hugely valid concern right now.
Nuno
The IP rights part is absolutely a spot-on, valid concern. That needs to be fleshed out. The fact that I can access information I shouldn’t have access to in any case and then generate images out of it, et cetera, is a big concern. I think it’s rightful for Getty Images to say, “Well, wait a second, that image is ours. That’s our watermark. How does this work?” IP infringement, et cetera, needs to be thought through. It’s going to take a while, because this technology is advancing very quickly and regulation, as we know, takes time.
Nuno
The IP piece, I think, of the three that we just discussed is probably the most significant one.
Bertrand Schmitt
If I can add on this topic, there has been some interesting news from Japan, where they apparently want to provide a free, easy, well-protected training ground, training without risk of infringement. That’s an interesting development if suddenly everyone starts to train on data sets from Japan.
Nuno
Yeah, sandbox stuff. Japan takes the lead on being the sandbox for training.
Nuno
The final piece: we’ve seen some people using these three things, privacy, “it’s going to kill us all,” and IP rights, in various combinations, all three, one of the three, et cetera, to say, “We should pause progress, we should pause the next level.” Be careful with the messenger.
Nuno
Some of the messengers making these comments, saying, “Let’s slow down, this is dangerous, whatever,” are people with actual parochial interests, people with access to platforms from which they might themselves create LLMs and large generative AI platforms. I won’t name names. You know who they are.
Nuno
Just be careful when people say, “We need to pause for six months,” while at the same time they’re hiring talent and taking things to the next level within their own companies, because they have large data sets and want to be competitors. The same people are saying, “Stop for six months.” But it’s not “stop for six months because we have all these problems.” It’s “stop for six months so that I can catch up.” That’s not okay.
Bertrand Schmitt
It’s either “so I can catch up” or “so I can stay at the top of the world, because I want others to stop developing.” It’s very typical of regulatory capture. You are the big guy. You are friends with the regulators. You can influence regulators, and you push for regulation even before the space has really started to mature, which is quite insane.
Bertrand Schmitt
Again, we have never seen that before, so fast, so early. That, for sure, is very scary for me, because as you say, I think we have to think about what’s happening behind the scenes and what’s really being masterminded by some companies, by some people.
Bertrand Schmitt
Personally, I would not trust this proposition of a pause. Again, there might be some risk, but not as of today, and not compared to other existential risks for humanity. Right now, actually, and we will maybe talk more about it later, I believe there are strong benefits for humanity in using AI more in the short term, in being more efficient. That will benefit us way more than it can harm us.
— The Future —
Nuno
Maybe that’s the segue to the future. Will it save us? Will it potentially kill us? Could it kill us? Will it be racist? Will it be biased? The answer is yes to all of those. It will potentially save us. It will potentially kill us. It will potentially improve our productivity dramatically. It will definitely be biased, because it will pick up on all the biases that we have as humans.
Nuno
My co-managing partner and co-founder at Chamaeleon, Dr. Songyee Yoon, published a very interesting article on VentureBeat about exactly that, on the neocolonization of generative AI, because it’s bringing all the biases that exist from Western thinking and methodologies, et cetera, back into the forefront of the world. They’re accumulated, they’re there, the data sets are there, so they’re going to be brought in.
Nuno
Let me maybe go on the whole “will it kill us, or will it change us and make our lives better?” It has the potential to make our lives significantly better. It has the potential to make us much more productive. It has dramatic potential to free us from having to work five days a week, unless we absolutely want to, or seven days a week if we absolutely want to, and instead work three days a week, or two.
Nuno
It has the potential to fundamentally shift professions over time. As we discussed, what we’re seeing today is not general intelligence in the sense of a massive godlike entity that can see everything and predict everything. But it is not out of the realm of possibility that we will see AGI in our lifetimes. Who knows? There are dangers in this.
Nuno
The danger, again, put simply, is what we give these agents, these artificial intelligence agents, access to. Access means input and output: access in terms of the data sets they can scrape and look at, et cetera, and access in terms of outputs, what systems they can touch and act upon.
Nuno
Now, if you give some of these agents access to systems like national grids, can they kill us? Oh yeah, they can. But that’s already true today. We have embedded software written into medical devices, and medical devices could kill us if that embedded software was badly written.
Nuno
By the way, this has actually happened. I remember it as a case study when I was doing distributed systems in college: someone failed to account for what’s called a race condition in the code, and patients actually died. It wasn’t AI; it was ordinary software. Whoever wrote it didn’t take into account that there might be a condition under which two different parts of the code would run at the same time and the outcome wouldn’t be deterministic. Alas, it happened a couple of times and people died; they were given too much radiation by a medical device.
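To make that failure mode concrete for readers, here is a minimal, hypothetical Python sketch of a race condition (not the actual device code, which was written decades ago in a different language): two threads each read a shared value and write back an update, and without a lock one thread’s update can silently overwrite the other’s.

```python
import threading
import time

counter = 0
lock = threading.Lock()

def unsafe_increment():
    # Unsynchronized read-modify-write: several threads can read the same
    # value, then each write back value+1, so updates are silently lost.
    global counter
    observed = counter          # read
    time.sleep(0.01)            # widen the window in which the race occurs
    counter = observed + 1      # write: may clobber another thread's update

def safe_increment():
    # The lock makes the read-modify-write atomic, so no update is lost.
    global counter
    with lock:
        observed = counter
        time.sleep(0.01)
        counter = observed + 1

def run(worker, n_threads=5):
    # Reset the counter, run n_threads copies of `worker`, return the result.
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

racy_result = run(unsafe_increment)   # typically less than 5: updates lost
safe_result = run(safe_increment)     # always 5
```

The unsafe version usually ends well below 5 because every thread reads the counter before any of them writes it back, exactly the kind of timing-dependent bug that is invisible in testing and catastrophic in a device that meters radiation doses.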
Nuno
Again, can machines kill? Yes, machines can kill. Can machines kill without a human being there doing the killing? Yes, because the software was badly written, or because it had access to things it shouldn’t have had access to. So can AI kill? Yes, it can.
Nuno
It depends, again, on what you give it access to, what you allow it to do, and what data sets it is trained on. We don’t need AGI for AI to kill us, just to be clear, sadly enough. Self-driving cars could kill us tomorrow, and they use AI.
Nuno
Again, yes. It’s about risk mitigation. It’s about understanding whether the code is working properly or not. It’s about explainability. It’s about being very careful with what you give your AI agents access to, so that they can’t do whatever shit they want, sorry for my English.
Nuno
You have to have these things in place. Explainability of AI, for me, is a pretty critical field. If you can’t explain what the code is doing, we have problems. If you can’t explain to people in a simple manner what this code has access to, in terms, again, of actuation, what it can act upon as output, you have a problem. If you can’t explain what data sources this software is accessing, you have a problem.
Nuno
I don’t think that problem is necessarily new with AI. It gets amplified with AI, precisely because of what we’re saying: there’s code being written today where the person who wrote it doesn’t understand exactly everything the code then does. That’s the problem. If the code were 99.999% understood by whoever wrote it, along with whatever manifestations that code has in the future, we’d be fine.
Nuno
The problem with AI is that you don’t; definitionally, you don’t fully know what happens after you write it.
Bertrand Schmitt
I actually think that’s true of any complex code today. When you have code built by teams of thousands of people over years or sometimes decades, like Windows or the software inside a Boeing plane, even without any AI, no one can honestly say that they fully understand what has been written. Otherwise, you wouldn’t get a Windows update every three days to correct a bug.
Bertrand Schmitt
At the end of the day, even with automated systems to prevent bugs and security risks, and code checking and so on, there are still many issues. You talked about machines killing people before AI. Another good example was the Boeing 737 MAX. The onboard software managed to cause two crashes, killing, I believe, everyone on board. The software was supposed to assist pilots, by the way, except that in some situations, with failures of certain sensors, it would actually decide to crash the plane.
Bertrand Schmitt
Even before AI, we had life-and-death situations. Let’s not forget that software is already around us in a lot of situations, and we have still managed to find ways to control it. I think with AI, the same will happen. We will find, step by step, the way to manage it, to control it.
Bertrand Schmitt
I don’t think anyone will be crazy enough to put AI systems that they don’t understand, that they don’t control, in charge of advanced systems. I think it will lead to relatively safe development. Yann LeCun, the chief AI scientist at Meta, is famous for pointing out that we really don’t have intelligence yet, that we are not even at the level of intelligence of a cat or a little kid.
Bertrand Schmitt
For me, that’s always something I try to keep in mind: there is still a long road to AGI. You asked whether it will happen in our lifetime. I would think so. I could be wrong, but given where we are today, it feels like it’s years, if not a decade or two, away. That’s another piece to keep in mind. If there are risks, they might come with AGI. But we are so far from it that it’s not realistic to think too much about it above all else, when current AI systems pose no risk, or so little risk that it’s comparable to what we are used to with software.
Bertrand Schmitt
And against that risk, there is so much benefit. You talked briefly about Marc Andreessen and the article he published recently about AI saving the world. Maybe it’s a big title, a big claim, but I certainly agree that at this stage, there is way more benefit to AI than the other side.
Nuno
I think Marc’s article is very eloquent, very polished, and very clear on the positive side and what AI can be used for. But at the same time, his article, I wouldn’t say it’s intellectually dishonest, I don’t think Marc is an intellectually dishonest guy, but it doesn’t really address what we just talked about. Software can kill people, right?
Nuno
It’s not even about being in the wrong hands. It could just be badly coded. It could be a freak thing. With AI, that issue is amplified, right? He doesn’t really mention that. Obviously, you don’t want people asking, “Can AI just kill us all?” In principle, no, if we develop software systems with boundary conditions and figure stuff out, like what access they have, et cetera. Hopefully people will be thoughtful about how they do that. But there are some risks, even before AGI. It’s not just AGI that brings risks.
Bertrand Schmitt
In some ways, what’s interesting for me is that Elon Musk has been one of the most vocal opponents of moving toward AGI, one of those proposing a pause. At the same time, you could argue he’s the one who has invested the most in AI with self-driving, and he’s the only one combining AI and real machines on the road, two-ton machines that can kill people at scale. That, for me, is very odd. It’s actually crazy, to be frank, that he’s the one pushing back when he’s the one who has been pushing some of the most potentially dangerous machines onto the world.
Nuno
Let’s get this right. We’re talking about someone who is, I think, to be very honest, a little bit intellectually dishonest. Let’s pay attention: the guy owns probably the car company that’s been the most aggressive about using artificial intelligence, et cetera, with a view to replacing us as drivers, and it’s live, in production.
Nuno
He is in control of SpaceX, right? He’s in control of Twitter, which probably has one of the single largest data sets in the world of interactions between human beings, news, et cetera. And if all of that wasn’t enough, he was one of the first investors in OpenAI. All of a sudden, we’re all going to be killed by it? Then what is he investing in?
Nuno
Maybe you’ll troll me after this, but I don’t see him investing in a huge think tank that thinks through, for example, the philosophy of technology and whether everything we’re doing needs certain aggressive boundary conditions or not. I don’t see it. Where is it? Where’s that investment? At the end of the day, put your money where your mouth is. If that’s the key concern we have, let’s do stuff around that.
Bertrand Schmitt
But he has AI investments across all his companies.
Nuno
Yeah, let’s start looking into it.
Bertrand Schmitt
For me, it’s very surprising. It’s actually unbelievable. At the same time, it might be that he’s investing in these AI topics because he thinks it pushes things forward, pushes the envelope. Actually, I would say that at least at Tesla, it feels like they’re pushing too hard versus what the technology can actually do. Personally, I feel the marketing is, for sure, way too aggressive about how it’s so-called self-driving when today it’s not.
Nuno
Yeah, in some ways it’s actually pushing that boundary. It is not self-driving; it still does things that are not proper. But anyway, I’m also happy that he is investing that much money into artificial intelligence. Just to be clear, I don’t want to diss the guy. He puts his money where his mouth is in that sense. It’s just a little bit intellectually dishonest.
Bertrand Schmitt
In terms of building, not pausing.
Nuno
Yeah, in terms of building. But then it’s a little bit intellectually dishonest to say it’s going to kill us all. Okay, cool. But then why are you putting all this money into these things? Anyway.
— Conclusion —
Bertrand Schmitt
This concludes Episode 44, our second episode in a series of two, Episodes 43 and 44, on AI and generative AI. We went through what it is, what verticals it is developing in, what the big guys are doing, what startups and VCs are doing, where it stands in terms of open source versus closed source, what the evolution of the regulatory environment has been, and we talked somewhat about the future and where this can go. Is it going to save us all, or is it going to kill us all? It’s very interesting how extreme the different positions on that topic are, and how extreme they are so early in the process of developing AI and AGI. Thank you, Nuno.
Nuno
Thank you, Bertrand.