Is AI All Hype, No Substance? Are there real AI productivity gains in the Enterprise? How about SMBs? … and start-ups? Are the layoffs and freezes on new hires justified? Can we go to 4-day work weeks and all will be fine? What does AI after the hype look like? Who is getting value from AI? Why do most AI projects fail?

Our show:
Tech DECIPHERED brings you the Entrepreneur and Investor views on Big Tech, VC and Start-up news, opinion pieces and research. We decipher their meaning and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news.


Nuno Goncalves Pedro
Introduction


Welcome to Episode 71 of Tech DECIPHERED. Is AI all hype, no substance? The early days of ChatGPT are still fresh in our minds: the big shift, everyone using ChatGPT all the time. It seemed like we were on our way to a full AI world. Then came the disappointment: the curse of hallucinations, incorrect information, breaching of copyright, a lot of projects in the enterprise space that never seem to come to fruition. Is it all hype, or is there substance behind it?


Today we will be evaluating the AI hype hangover, talking about the transition from models to agents to ecosystems that we're currently in, and discussing winners and losers and who's actually benefiting from this AI revolution. Finally, we'll end with what's next: the productivity divide and the regulation that we foresee. Some of it is already apparent to us. Bertrand?

Bertrand Schmitt
AI Hype Hangover


It has been an exciting 3 years since the launch of ChatGPT to the general public. It has been a race for everyone, from NVIDIA, the chip designer, to TPU manufacturers, to hyperscalers like Microsoft and Amazon, and of course all the model companies, like OpenAI, Anthropic and others, to deliver AI to the masses. There has been an insane level of investment that is still ongoing. We are reaching the stage where electricity generation might become the blocking factor for more AI. It's an interesting time to take a step back and think about where we are and where we're going.

Nuno Goncalves Pedro
Obviously, there are a lot of exciting things that you mentioned around the infrastructure layer, the platform layer, foundational models, and what I would call AI applications, the layer that is really riding on top of existing platforms or new platforms as they're being developed, where there might be some algorithmic shift, a little bit like we had in the mobile app ecosystem, where we started having companies that were mobile-first.


Also in AI, we will have companies that are AI-first, that are using the underpinnings of AI and the platforms and infrastructure of AI, while not necessarily being deep AI companies. That explosion has been really rapid and fundamental. On the consumer side, we've obviously seen the advent and usage of ChatGPT, Claude, Gemini, Copilot, all sorts of tools out there that people are using all the time.


At the end of the day, is it really resulting in these tremendous productivity gains that we talked about and that we had foreseen? Is there a fundamental productivity gap? One of the interesting studies that has been published, I believe in August this year, in 2025, was the MIT study claiming that 95% of AI projects fail. This is obviously more within the enterprise remit of AI, but the question is, why are all these projects failing? Is this proportionate versus other IT projects failing within enterprises? Because we know IT projects fail all the time. Is that 95% for AI projects higher or lower than your classic IT project failure rate in the enterprise space? Does it tell us anything beyond the fact that we're early on and we just don't know how to adopt it? Does it tell us anything more profound about the ecosystem's development than that we're in the early innings and it's normal that a lot of things are failing?

Bertrand Schmitt
First I think it’s a very interesting study to talk about. What’s interesting here, you talk about IT projects failing in the past. That’s clear. That used to be a pretty big issue in the IT world of these IT projects failing. We were 50–65% range. You could argue that’s actually what helped propel the rise of SaaS, of Software as a Service, because suddenly Software as a Service simplifies the deployment of IT project, decreases the CapEx, you have to invest before seeing any return, so completely changed the model to make it easier to deploy and be successful when launching IT projects. I think the level of failure of IT projects dramatically fail thanks to the rise of Software as a Service. It’s interesting to go back to AI and to this question, “Do so many AI projects fail?”


If we look at this study from MIT, released end of August, early September, when they talk about this high level of failure, it's truly B2B projects focused on AI that we're talking about. In this study, we are not talking about you as a corporate user using your own LLM, for instance, to help you do your work better. We are truly talking about hardcore, large-scale AI projects deployed at scale, typically to replace some functions in the business or to try to dramatically increase the productivity of some function in the business.


There is an acknowledgement in that document that if we are talking about users using ChatGPT or Claude to help them do their work day to day and get support from the AI agent, there was clearly a benefit to these users. The issue was that it was not easily measurable to know exactly what the improvement was, because these projects were done one to one, one user, one agent, in a way not officially supervised by enterprise IT.

Nuno Goncalves Pedro
If I read the study correctly though, the headline says 95% of AI projects fail, but that's not how I read the study. The study, from what I can see, analysed 300-plus initiatives, 52 organisational interviews and 153 senior-leader surveys, and it's a funnel. Out of the 300-plus initiatives, 80% explored AI tools, is my understanding, 60% evaluated enterprise solutions, 20% launched pilots, and out of that, 5% reached production with measurable impact. Is that how you read it as well? If that's the case, it seems only 5% got to the end. I guess that's what it's saying. It's not like 100% launched pilots and only 5% of the pilots resulted in anything in production, right?
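As a rough illustration of the two readings being debated here, a minimal sketch in Python (the percentages are the ones quoted from the study above; treating them as a funnel over 300 initiatives, and the division by pilots, are our own arithmetic, not figures from the study):

```python
# Two readings of the MIT numbers quoted above. The percentages are the ones
# cited in the conversation; the arithmetic below is illustration only.
initiatives = 300                 # "300 plus initiatives" analysed

explored   = 0.80 * initiatives   # explored AI tools
evaluated  = 0.60 * initiatives   # evaluated enterprise solutions
piloted    = 0.20 * initiatives   # launched pilots
production = 0.05 * initiatives   # reached production with measurable impact

# Reading 1: "95% fail" measured against all initiatives analysed.
print(f"Production out of all initiatives: {production / initiatives:.0%}")  # 5%

# Reading 2: "95% fail" measured against launched pilots only.
print(f"Production out of launched pilots: {production / piloted:.0%}")      # 25%
```

Under the funnel reading, the 95% figure is relative to every initiative analysed; relative to launched pilots alone, the implied failure rate would be closer to 75%.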

Bertrand Schmitt
It really depends on how this is defined. I am not sure it's a funnel. Based on what's written, 5% of custom enterprise AI tools reach production, so basically get successfully implemented. Personally, that's how I read it. What we see, again, is that there is a stark contrast between these large AI tool projects and just using a general-purpose LLM directly. There we actually see a 40% success rate, 40%, four-zero, so a very, very stark difference.

Nuno Goncalves Pedro
What's the number? Is it really 95% of all pilots that fail? Is it 95% that didn't go through the funnel, with only 5% getting to the end and into production? Irrespective of that, it's a very big number. Why would this have happened? I think the causality of it is actually very simple in some ways. It's on both the supply and the demand side.


I think on the demand side, these enterprises literally have no clue. What I mean by no clue is that companies across industries, if we're not just looking at hardcore tech companies, if we're looking at companies outside of tech, their level of readiness for any implementation of IT tools in general is normally a little bit substandard. It's not like these companies are intrinsically digital in nature, even in their internal processes and their use of tools and systems. Now there's a new system that's magical. Or the new system that's magical is not magical at all; it's just a new system. Things like basic process reengineering, for example, how you would adapt your existing processes to the use of a new tool, are sort of an afterthought.


That's the core of this thing. If you're going to get gains out of using tools and changing platforms, and, to your point, having highly customised enterprise systems embedded into the organisation, how does this fit into a change of processes? How are people going to do stuff differently from what they do today? If there's no change in how they're going to do stuff, there's no adoption of the system. If it doesn't fit any process that is core, people are not going to use it. If they're not forced to use it, they won't use it. I think that's more of a demand issue.


Now there's also a supply issue, which is system readiness, product readiness. I know there have been a lot of developments, with OpenAI developing a lot of tooling. We are investors in You.com. You.com has been one of the interesting companies around the use of data pipes and everything around the API-fication of the enterprise to really serve AI. Obviously, at the end of the day, there's also still an emerging supply side that is not fully ready either. Is that how you see it as well, Bertrand? Or am I being too negative?

Bertrand Schmitt
You are totally right. A lot of companies are not tech-native, tech-first. Still, they decided to go all in on AI projects, notably because Wall Street, investors, everyone was saying, "You have to do it. If you don't do it, your stock is going to crash." To be clear, that's really what's happening in the market. There was this rush to push projects, some top-down approach of, "Hey, I need AI. I have to say something to the Street," to simplify. That does not help, because this has never been a recipe for success if you look at past internal IT launches.


I think the first thing is coming in with the wrong mindset. We are still at the very early beginning. I think we started to get good returns on SaaS projects because, step by step, organisations became more mature, more everything. AI is brand-new; it's not mature. You are suddenly moving full speed on something that's not ready, and not just not ready, but changing dramatically every 6 months. Whatever tool or vendor you picked, 6 months after you might discover it was the wrong decision. That's pretty tough.


Maybe the last piece is that a lot of companies, I think, focus on very shiny-looking types of AI projects, for instance focusing quite a lot on sales and marketing, instead of focusing more on what AI might actually be doing better: customer service, finance, procurement, compliance, so more back-office types of tasks. In a way, they focus on what's shiny versus where AI is truly able to make a difference. Obviously, some of these projects require change management and all of this, and maybe that was not there as well. I think we have multiple points that make it quite hard for traditional companies to deliver. Obviously, tech companies, native AI companies, are most certainly doing a much better job than your average S&P 500 company.

Nuno Goncalves Pedro
That then leads to the… If I'm part of the Microsoft ecosystem, guess what, I might as well adopt Copilot. We'll come back to that later, in the winners discussion. At some point in time, it's the path of least resistance: what AI product do we have in our suite of tools that we're probably already paying for, or that we might just have to pay a slight uplift for, that we can use to say, "We're AI-ready"? And nobody thinks through the basics: organizational design, process reengineering, how do I pilot stuff (it needs to be bottom-up, with highly empowered teams and some impact), how do I extrapolate learnings even if I go into a pilot. It's still a total, fundamental mess today.


A lot of interesting things out there. We have acknowledged it's a supply and demand issue, but let's see how we got here. We started with this whole notion of these models, with ChatGPT, with Claude, being available to consumers in general and obviously now also to the enterprise. We're going through an evolution of it. Everyone has probably heard by now about agentic AI, agents, autonomous agents and semi-autonomous agents. What do these agents do for us? Maybe some of you have also heard people talking about ecosystems and interoperability and big words like that. What does all of that actually mean? We seem to have gone from the LLM world, the large language model world, into agents and now into ecosystems. How can we make sense of all of this, Bertrand?

Bertrand Schmitt
It's not easy. It's a lot of constant change. It's pretty clear that just chatting with an LLM might not be enough for a lot of situations. To be clear as well, it can already deliver a lot of value. I think right now the sweet spot in terms of getting value is probably more difficult to measure because it's more one-to-one, by employee. There might be real value. I don't know about you, but me, I'm using LLMs more and more to work on stuff, to work as a sidekick. It's clear that I get huge improvements in terms of the quality of what I can do, in terms of the speed at which I can move, in terms of assurance that I'm doing the right thing. I think there is a lot of value in a lot of situations at this stage.


To think about a more autonomous company, building agents that can replace human beings, not just work with them, but replace them, you have to go through agents, and agents are still quite new. This is stuff that no one was on 18 months ago. Over the past 12 months, it has definitely… 2025 has probably been the year of agents and of models that do more thinking. That has been really the big difference.


I think with agents, what we also discover is that if you don't have a human in the loop like you have with a chatbot, it's very, very hardcore, because you have to expect that each step of the analysis done by the agent is right. If you have to go through 10 steps to deliver something with an agent, and you are only 90% confident at each step, guess what, by step 10 the stuff is pure garbage. No question about it, pure garbage.
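To make the compounding point concrete, here is a minimal sketch in Python; the 90% per-step figure is the illustrative assumption from the conversation, and the 95% and 99% rows are added for comparison:

```python
# Compounded reliability of a multi-step agent pipeline, assuming each step
# succeeds independently with the same per-step probability.

def pipeline_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in the pipeline is correct."""
    return per_step_accuracy ** steps

if __name__ == "__main__":
    for accuracy in (0.90, 0.95, 0.99):
        end_to_end = pipeline_success_rate(accuracy, steps=10)
        print(f"{accuracy:.0%} per step over 10 steps -> {end_to_end:.1%} end to end")
    # 90% per step -> ~34.9% end to end
    # 95% per step -> ~59.9% end to end
    # 99% per step -> ~90.4% end to end
```

At 90% per step, a 10-step chain is right only about a third of the time, which is the "pure garbage" outcome being described here.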


You need to have new processes to make sure that accuracy stays very high at every step of the way. I don't think we are there yet. I'm not saying we are not going to get there. I'm just saying that right now, except in some narrow cases, with companies that are able to do it because they typically focus on very narrow functions, very narrow industries, it's very hardcore to get to the level of quality that provides real value to a real business.

Nuno Goncalves Pedro
From models to agents to ecosystems


Let's step back a little bit, just to explain to maybe some of our less tech-savvy, AI-savvy listeners the differences between models, agents and ecosystems. I'm going to oversimplify it, and you're going to kick my… if I oversimplify too much. Let's start with the models. The models: for example, some of you are using ChatGPT. ChatGPT is putting together a bunch of information on something. It could be deep research, et cetera, but it's giving you an answer on a specific topic. The large language models are bringing this together for you.


When we say you're evolving to agents, either autonomous or semi-autonomous, there's some type of action. There's stuff happening. It might be that you're trying to book a restaurant based on some analysis that you've done. You might be saying, "I actually want a restaurant for tonight, and these are the parameters of what I'm looking for." An agent could book that restaurant on your behalf. Or you want someone to book an Uber for you, and the agent could book an Uber for you using APIs from Uber, in an ideal world. That's the world of agents, where agents are already taking actions. They're doing things out in the world.


Then ecosystems are the ultimate play, where there are several agents at play. There might be an agent that's actually trying to optimise your calendar and figure out if there are openings in it. That agent is an orchestrator for other agents that do other things, like an agent that books your Uber when it's time for you to go from one place to another, an agent that books your restaurant, and an agent that books whatever else.
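Purely as an illustrative sketch of the orchestration idea being described (none of these agent or function names come from the episode; the specialists are hypothetical stand-ins for real agents calling real APIs such as Uber's):

```python
# Toy "ecosystem": one orchestrator agent delegating tasks to specialised agents.
# All names below are hypothetical placeholders, not real products or APIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    kind: str      # e.g. "book_ride", "book_restaurant"
    details: dict  # parameters the specialised agent needs

def book_ride(details: dict) -> str:
    # In reality this would call a ride-hailing API on the user's behalf.
    return f"Ride booked to {details['destination']} at {details['time']}"

def book_restaurant(details: dict) -> str:
    return f"Table for {details['party_size']} booked at {details['time']}"

# The orchestrator maps task types to the specialists that can handle them.
SPECIALISTS: dict[str, Callable[[dict], str]] = {
    "book_ride": book_ride,
    "book_restaurant": book_restaurant,
}

def orchestrate(tasks: list[Task]) -> list[str]:
    """Route each task to the agent that knows how to handle it."""
    return [SPECIALISTS[task.kind](task.details) for task in tasks]

if __name__ == "__main__":
    plan = [
        Task("book_restaurant", {"party_size": 2, "time": "19:30"}),
        Task("book_ride", {"destination": "the restaurant", "time": "19:00"}),
    ]
    for result in orchestrate(plan):
        print(result)
```

The hard part, as discussed next, is that every specialist has to be reliable enough that the orchestrated plan as a whole still makes sense.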


We want to end up in a world of ecosystems that stands above all these different agents. I think what Bertrand is saying is, "Good luck with that," because even in the LLM world, even before we get to agents, even in the model world, when we're asking complex questions, the models get in their own head, for lack of a better expression, and they start hallucinating, and they start basically convoluting concepts. People have probably noticed that. Depending on how you're interacting with ChatGPT, for example, at some point you get answers and you're like, "This doesn't make any sense." The answer tree for the whole subject is getting too complex. You're getting stuff that is literally a hallucination.


Again, even before we move to agents, we should have all of these tiered pieces in mind, knowing that a lot of the answers being given are not fully accurate. When we move to a world of agents, where accuracy is really, really important, you don't want to be booking an Uber if you really don't need that Uber. That world needs to be a little bit more high-fidelity. It needs to be trustworthy, a lot more trustworthy than some of the stuff that we see today in the models.

Bertrand Schmitt
I think in some ways we are putting the cart before the horse, and we are doing that because there is so much money being put to work. There are so many parties that need to deliver new projects, technologies, business models to AI to keep fuelling that AI expansion that, in a way, it's something relatively natural that's happening. It's a race in every direction. Some racers are starting with definitely the wrong engine. Some racers might have the right engine, and some, because they do intense fine-tuning of the engine, manage to make it work.


The belief that today you can just combine a very generic LLM engine on top of a complex ecosystem, with your personal AI agent starting to rule your life: it's absolutely not ready. You might see the pieces. Someone can do a quick demo of, "Hey, I have an AI assistant that can do everything for you." In practical reality, it's still not working, but that's kind of the game. There has really been such a high level of investment; normally it would have gone slower, more step by step, but here everyone is racing at the same time and hoping that you can change the engine in the middle of the flight, basically. Many companies are just betting, "You know what, I know my stuff is barely working today, but I'm betting that in 2 years the underlying LLM, the underlying agent, will have improved so much that my product will be workable."

Nuno Goncalves Pedro
I think there's a lot of throwing stuff at the wall, as normally happens in these big waves. We're in a big wave, AI is a big wave, but because there's so much capital, it's a gigantic wave. There's more money to throw at the wall, at scale. A lot of people are saying, "I'm doing all this agentic work, et cetera." You're like, "Wait a minute, your models are so messed up, your stuff is still not working fully. How do we expand this?" At some point people are like, "Maybe something will work." And sometimes some of this stuff will actually work.


Now, obviously I don't want to undermine the fact that there's some exciting stuff happening in agentic AI in smaller-universe kinds of solution sets, like customer service, bookings and things like that. I don't want to diminish that. At the end of the day, if we're talking about more holistic solutions around the move from foundational models, LLMs, into the agentic world and then ultimately into an ecosystem kind of world, we still have a lot of stuff to nail. There's still a lot that actually needs to be grounded and working well.

Bertrand Schmitt
I think right now there are some solutions that are starting to work. An AI company called Otera, previously known as DeepOpinion, definitely has stuff that is working today in some very specific use cases for banking and insurance. They have some incredible products because they have been very focused on fine-tuning for their specific use cases and very focused on making sure that, step by step in the AI process, you keep very high quality, so that when you compound multiple steps, you don't end up with something very poor at the end of the day. If you have a ruthless focus on specific use cases, I think you can deliver some very high-quality products today.


But what we see often is people promising you the moon and promising to solve very generic, non-specific problems. That's usually where the issue lies. To be frank, we talked about people betting that in two years things will get better. It's quite insane how much things have improved over the past three years, and even just the past year, with more intelligence in inference, AI that does more thinking.


I mean, me, I can say that the hallucination problems, you still have them, but it's definitely not at the same scale as before. It has been a huge leap, a huge improvement. To think that some stuff will be dramatically different 18 months from now could, in many situations, be a very reasonable case to make when you develop a new product.

Nuno Goncalves Pedro
Absolutely. At the end of the day, we're not saying that everything is hype. Obviously, there's a lot of hype. That's a little bit of the framing we laid out. In particular, in the enterprise space, it's still early days, there are demand and supply issues, as we mentioned before, and this path from models to agents to ecosystems is going to happen, maybe not at the speed that everyone thinks it's going to happen. A lot of pieces are still in motion. But we have to acknowledge that there has been a lot of development, and that today we can ask these tools to do a lot of things for us.


I always give this stupid example. I'm not sure if it's that I'm lazy, but I have my own prepared material: I often speak in public and I have my keynotes, things that I talk about. There are a couple of topics that I normally talk about in public, around product management or venture capital, or rejection and adversity as a path to growth, et cetera. I have those things ready to go, so when someone asks me to do a keynote that doesn't fit into the couple of things that I've already developed over time, I'm normally a little bit reticent. I'd prefer to do maybe a panel or a fireside chat rather than a keynote per se.


Honestly, nowadays, there's no reason why I wouldn't. You just go to ChatGPT, go to Deep Research, ask for stuff, dig into it. Honestly, it can generate a mini speech, a presentation, et cetera, for you. Now, to be honest, I need to deliver it. ChatGPT can't deliver it for me. Whatever comes out of ChatGPT, maybe it's directionally correct, and it gives me a structure and a thought line and an angle into the speech and maybe 70% of the supporting evidence, but there's still work that needs to happen after that. But it gets me over the hump of, "I don't have one or two days in my life to just spend preparing my next speech." Just to give a very practical example of how these tools today affect how we work, how we do things, and how we actually interact.

Bertrand Schmitt
Yes, that’s totally right.

Nuno Goncalves Pedro
Winners and Losers


Maybe let's talk about winners and losers, right? Obviously, there are winners and losers, so who's benefiting out of this? And shockingly enough, in this case, some of the incumbents are benefiting, like Microsoft.

Bertrand Schmitt
Yeah, I mean, Microsoft, as we know, has a deep partnership with OpenAI. Being a hyperscaler, they also support multiple models, not just OpenAI's. But Copilot specifically, for instance, which has been the main AI-centric product for Microsoft, has been powered by OpenAI's models.


Microsoft, as everyone knows, is in the IT of most companies these days. They're in a great position to push their products, but also to already be connected to the data sets of their clients, so I think that's really helpful. Obviously, we have the AWSes of the world that are adding AI capabilities. It's pretty clear that in infrastructure, it's pretty hard to beat some of these guys, because they already have all this infrastructure and these systems in place.

Nuno Goncalves Pedro
Yeah, it's interesting because Microsoft's been playing the same playbook for a couple of decades now. It seems to work. Let's go bundle stuff. It's always: we upsell, we cross-sell, or we do a loss-leader freebie. And it just works. It's just an incredible strategy. We've just heard about the new structure of OpenAI on the for-profit side, where the single largest owner of the for-profit entity, 27% we've been told, is going to be Microsoft. Obviously, there'll be other investors with ownership, 26% will be owned by the non-profit, and then 26%, as it's being discussed, will be owned by employees of OpenAI.


It's fascinating how Microsoft seems to get this stuff right systematically, and everyone's now supposedly working with or using Copilot, despite all the issues at the beginning that we discussed in prior episodes. But Microsoft's not the only one that's winning. We see Google, despite all the crap that they've gotten and the bad press on how they're behind on AI, et cetera, seeming to be doing relatively well out of this as well, and certainly playing well in the space.

Bertrand Schmitt
Yeah, Google has turned it around. I think they moved from a position where they were seen as being challenged pretty dramatically to being one of the companies leading the pack. In some ways, it's no surprise. These guys have been behind so many pieces of AI technology over the past 10, 15 years that if one company was able to not miss this wave of LLMs, it should have been Google. In some ways, I would say they are coming back to, you could argue, their rightful place.

Nuno Goncalves Pedro
We've been talking about Google being undermined and underestimated, and I think it's now coming to a point where they seem to be hitting their stride, and they're potentially one of the winners in the space as well. There are a couple of other incumbents, as you mentioned. Amazon has obviously thrown a lot of resources at this as well.

Bertrand Schmitt
Oracle is moving hard in AI.

Nuno Goncalves Pedro
Indeed, Oracle is moving hard. If we're talking about incumbents, we also have to talk about incumbents on the platform and infrastructure side. Nvidia is still doing very well, so across the stack, there are a lot of incumbents doing well.

Bertrand Schmitt
Everyone needs Nvidia. Nvidia is the reference today for AI, to be quite frank. Nothing comes close in terms of the scale and scope of what they can deliver. What's impressive to me about Nvidia is that it's not new for them. They have had this vision of AI for a very long time. They have built complete systems, not just a small piece of the puzzle, a GPU, but everything up to full supercomputers. If one company truly deserves where they are, for me, it's Nvidia, no question.

Nuno Goncalves Pedro
Of course, we have the new winners, like OpenAI: beyond the incredible success of ChatGPT, they are obviously launching new products now, like Sora and others.

Bertrand Schmitt
AgentKit, their new agent solution.

Nuno Goncalves Pedro
Their new agent solution. Anthropic is also expanding in B2B. Claude is still very much at the center of the game. There are a couple of players really at scale that have conquered not only consumer, with a bunch of killer apps, but are also going into the enterprise space. As I mentioned, we at Chameleon have invested in a couple of players that are definitely going after the enterprise part of the stack and also the platform side of the stack. A lot of new potential winners are emerging here as well. Then there are the big startups coming up behind. Would you like to highlight any, Bertrand?

Bertrand Schmitt
Again, yeah. It's tough to talk about OpenAI as a startup these days, but if we look at some smaller companies, definitely the Mistrals of the world, the Perplexitys of the world. There are some more focused on specific verticals in AI: in law, in finance, in customer support. There are definitely some new players in infrastructure. You have CoreWeave, which went IPO not too long ago. There is quite a lot happening.

Nuno Goncalves Pedro
Who are the potential losers? Obviously, there are the players that are trying to come up behind those who have raised a lot of money. We're talking about companies with tens of billions of dollars in valuation, or hundreds of billions, at play. They're not really startups anymore, to your point. The guys coming up behind and trying to attack a specific space, well, good luck to them. For example, one of the areas we're a little bit more skeptical about is new foundational models, new plays around foundational models. There are definitely losers in that space.

Bertrand Schmitt
That's tough. I would say another category that you can potentially qualify as losers is companies that didn't make the switch to AI or were not really AI-native, either startups or bigger companies. Take an Adobe that didn't manage to release good AI tools or AI improvements to their products. Take Salesforce: of course they claim to have done a lot, as usual, but if you go beyond the marketing hype, they are not seen as benefiting from AI. Actually, there is a lot of talk about, "Hey, you don't need Salesforce anymore. You can build your own CRM solution just for your needs so easily these days, thanks to AI and [inaudible 00:28:14]." That has been hurting how their core business is seen by the Street.


I've seen an interesting paper, actually, that shows that right now, for instance, if you take the typical SaaS business, valuations are crashing because there are no buyers anymore. If you are a pure, regular SaaS business, you don't get high multiples; everything goes to AI. If you are a company acquiring other companies, you want to add that AI flair, that AI technology, that AI new product; you are not going to waste time with the technology of yesterday. I would say that, in general, SaaS-focused companies that have not made a strong move to AI or were not AI-native are getting hurt pretty badly, from small players to very large incumbents.

Nuno Goncalves Pedro
At the end of the day, the play that we're seeing right now is around that: either you're legacy or you're not. We're seeing this notion of leapfrogging. If you've been developing your whole stack around a paradigm that's very different from the paradigm we're in, you're going to get outplayed. It's the innovator's dilemma all over again. You were an early innovator in the space, and then at some point you might not be, and you miss a big revolution, and then what happens next? Some players like Adobe and Salesforce still have significant market caps and a lot of cash, so they can go to the market and hopefully buy into some of the plays. But to your point, it seems like they've missed the boat a little bit. They'll have to catch up for sure.

Bertrand Schmitt
Yes, and it's clear that Wall Street is basically grading each company on how AI-native they are, and that has a big impact on their valuation. It might be okay if you're not AI-native, but at the very least, you need to be pretty deep on the AI bandwagon and be at a place where people really need your products when they do AI. The hyperscalers are a great example. They might not be AI-native, but they are critical to the deployment of AI infrastructure.


And I will say that just changing your .com to .ai…

Nuno Goncalves Pedro
Doesn’t make you an AI company.

Bertrand Schmitt
Exactly. Doing some slick marketing that shows how great you are at AI, I mean, clients, Wall Street, nearly everyone sees through that. You have to really do good stuff, deliver the goods, in order to be truly seen as benefiting from AI. Three years ago, two years ago, it was a different story. But now the customers, Wall Street, everyone is pretty mature and understands where you stand. Either you deliver and show the goods, or it won't fly. Marketing alone is not enough anymore.

Nuno Goncalves Pedro
It's not. For example, we get that question. We, as a firm and as a fund, use AI ourselves. I'm always very clear: we don't use AI for everything. We don't need AI for everything. But there are areas where only AI can be used, like the creation of synthetic factor analysis; the ability to do back-testing can only be done through machine learning, et cetera. It then becomes a nuanced conversation around what you need it for and what you don't. To your point, the opposite is also true. Some people just put .ai on, and all of a sudden they're AI companies when they're actually not.

Bertrand Schmitt
Maybe to finish on this: also, now, what you call AI might mean LLMs or agents. Everything that used to be considered AI before, like three years ago, these days might be considered, "Yeah, your old-school AI stuff, your machine learning, deep learning. Okay, sure, good." But what a lot of companies, investors, customers are now looking for is really that new breed of AI, that LLM-based AI, these agents. That's what the general public, as well as customers, are calling AI these days. It's a bit weird for people like us who have been in the space for a long time, how AI doesn't mean the same thing anymore.

Nuno Goncalves Pedro
Indeed. What's next? A couple of topics come to mind. The first one is the haves and have-nots on productivity, the productivity divide, so to speak: the fact that you'll have companies that will be extremely good at integrating AI, AI platforms and tools into their core processes, their organization, the way they do things, and you'll have a lot of players that, again, will be crap at doing it. It's the same way we've seen IT project and platform adoption play out across companies through the years: some have gone digitally native relatively quickly, or become highly digitally enabled; others have not.


And I think this will not be an exception. That's a little bit of what we're going to see. We're going to see corporations, small and medium businesses, enterprises, et cetera, that will be very, very good at doing this, and others that won't. Again, I think it's back to basics. I mentioned this at the beginning, but it's not rocket science: it's about processes, it's about operating models, it's about organizational design, it's about how you manage teams, how you lead teams and how you scale them. So it's back to basics again.

Bertrand Schmitt
Yeah. To give an example of how some of the biggest tech companies are managing that switch to AI, or that upgrade to AI, I was hearing some stories of managers being forced to reevaluate their teams in the context of: where can you use AI, how much AI can you use, can you replace some of your staff members with AI? And this is going up to senior manager and director level, and basically these team leaders will be ranked against the rest. Are you delivering enough AI-based improvement to your business or not? And if you are not delivering enough, and if you are not able to find people you can replace thanks to new AI tools and projects, then maybe you are the one getting fired.


My point is that, right now, the companies that want to move hard on AI in the US are going seriously hardcore. You are being stack-ranked as a manager to find the projects that are going to deliver the most AI in your team, and if you don't, then somebody else is going to find them and replace you. There are significant initiatives being put in place that are truly hardcore right now. I don't think it's happening everywhere. I think it's more the bigger tech companies in the US, where there is typically that dimension of hardcore competitiveness and of reimagining your business regularly.

Nuno Goncalves Pedro
So basically, productivity is an issue. There will definitely be a productivity divide. That's going to be one of our core issues going forward. Regulation: I know you're passionate about it, so I'll let you go. I'll let you have a go at the EU AI Act.

Bertrand Schmitt
To finish on the AI productivity divide, there was an interesting article, I think I shared it with you a few days ago, that was comparing the valuation and productivity of US versus European companies, and there was a very clear divide happening in terms of which ones were truly benefiting more from AI. If you don't do much about it, if you are not hardcore about it, things won't change, and you will be left behind while others are running.


In terms of regulation, in Europe it's pretty bad. It's just so bad. I'm not sure I have all the words to convey where it's going, but Europe has launched their AI Act, was it one or two years ago? It has been put in place this year. It's in different phases, so there will be new phases next year. It's huge trouble. I think most entrepreneurs I know in Europe, and in general, are really, truly not happy about this. They might not be able to access the latest models. A lot of constraints are being added to how they should deliver AI, and the EU, as usual with this type of work, is just going to help the incumbents and block innovation from startups.


So ultimately, instead of creating fertile ground for European-based AI startups, it will end up helping cement the position of the bigger US AI-centric companies. We'll see how it goes, and I hope that maybe they will listen to the voices of the startups that are asking them to slow down, simplify, delay or cancel some of these regulations. But where it is going is a pretty bad place.

Nuno Goncalves Pedro
Indeed. Regulation is not cool. And I think, more foundationally for the whole world, there's a little bit of a conversation that needs to be had around whether or not these AI ecosystems will emerge through, for example, M&A, and whether there are fundamental taxonomy shifts coming to the world. What I mean by that is whether the next ERPs, the next CRMs that will conquer the world will be new AI-first AI apps rather than the existing ones. You were alluding to Salesforce earlier on, and Adobe as well.


Are we going to have an AI app world that shifts the balance of power that we have today, with the big behemoths actually being challenged in areas where they thought they would not be challenged? I think there's a lot of exciting stuff happening. The ecosystem creation piece is definitely exciting. The AI app, AI-first piece is also exciting. That's certainly one area that we, as a firm, are spending a lot of time on. We think there is a fundamental shift happening at the app layer. Not everything, not all pieces of the taxonomy, but there are definitely a lot of exciting pieces that are up for grabs right now.

Bertrand Schmitt
The app layer, not just above the LLMs, but above agents, above systems of agents, where you have a very specific focus to solve a specific type of issue or function, or serve a specific industry. I think there are a lot of opportunities. What's also interesting is that we can see that some of the more general models, when you try to use them in industries where there is very little documentation publicly available, fail completely. Because at the end of the day, it's based on how much data they can collect that is relevant to an industry, and if this data is not publicly available, if you don't have books about the topics, or if the books are 20 years old and not representing the state of the art, the AI has no way to guess what it should be saying.


So I think that's something that really lends itself to companies that are focused on specific industries, that can build deep ties, surface relevant data to train their systems, or build up new, specific sets of data just to solve these specific problems. There will be opportunities. Will they be as large as a general commercial AI model? I don't know. But there is always that question of how you get the data, or how you build new data that doesn't exist in the first place or is just in people's brains.

Nuno Goncalves Pedro
Indeed. Very exciting times ahead of us. I feel we still have a ways to go for AI to be as pervasive as Andrew Ng has suggested, as the next electricity. But when it takes over, as Amara's Law says, it will probably be more substantive and more game-changing than we had anticipated in the first place.

Bertrand Schmitt
To go back to one point concerning regulation, it's not just the EU; different countries, and different states in the US, are looking into regulation. In the US, there is an effort from the federal government to try to simplify AI rules and regulation by pushing for federal-level AI initiatives, especially since, in the meantime, quite a few states, from California to New York to Illinois, are trying to push their own AI initiatives. And I think that would be pretty bad as well, because if suddenly you're a startup that has to deal with 50 different state-level regulations around AI, that's pretty big trouble. I hope it won't go there, or else, again, it will just be one more way to help the bigger guys establish themselves even more deeply, because, as usual, the smaller guys will have trouble dealing with 50 different regulations.


Let's be clear, it's not that regulations are always bad, but you have to limit how much you regulate. We are at a very early stage; there's still a lot of investment. One could argue that it's actually a time to reduce regulations, because if you look at the energy needs of AI, if we don't simplify and remove some regulations on energy production, we are going to hit a cliff pretty quickly, actually.


I think we truly need to find a better balance, and we have to be careful with that temptation to regulate, when, at the end of the day, if you look at the regulators in AI, it's really sad. These people have no clue. It's not as if it's full of smart engineers who have great perspective. It's usually fear-mongers, people who have very little clue about what's happening. Regulators, I mean state-level assemblymen or assemblywomen, who truly have no clue. I think we need to give time to the industry.


Also, we should use what we already have to regulate. There are a lot of laws and rules already in place in our countries that can definitely already be applied to AI. Let's make sure we apply what we have first and that we truly find issues before inventing issues that might be there one day. To conclude, I think that, as with every new big wave in technology, there is big hype and, sometimes, a disconnect between the marketing, the fundraising, and the reality.


I will say there is still a big difference if you compare it to the 2000s. In the 2000s, there was very little money made on the Internet. Everything was too slow, computers were too slow, there was no mobile, there were a lot of limitations around what was being built, and it took a very long time to get there in terms of revenues. Everyone still remembers Pets.com and others. Here, it's a bit of a different story, because we see real revenues. AWS's latest numbers are pretty insane, Microsoft is doing extremely well, OpenAI is maybe the fastest-growing startup ever in terms of revenues, and Nvidia is doing huge business. So it's not as if they are selling hot air and there is no revenue.


I think we're at a stage where there is some disconnect. We are pushing so many things that some of them might still fall short of expectations. At the same time, I believe that there has been strong progress, and I'm quite confident we are still on the road to bigger success, even if, again, there might be some risk with regulation, and there might be some risk with a lack of energy to truly deliver on some AI projects. Thank you, Nuno.

Nuno Goncalves Pedro
Thank you, Bertrand.
