The truth about Artificial Intelligence and Generative AI. This is the first of two episodes on AI.
Navigation:
- Intro (01:33)
- What is AI and AGI? Why now? (02:07)
- Setting the Record Straight (08:55)
- Verticals (20:30)
- Other AIs (28:48)
- The Big Guys (31:20)
- Conclusion (38:38)
- Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt
- Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro
Intro (01:34)
Bertrand Schmitt
Welcome to Tech DECIPHERED Episode 43. This is the first of a series of two episodes on AI, AGI, and generative AI. A lot has been happening in the past six months and we felt it was a great time to discuss it, even though not everything is clear yet. The fog of war is still intense, but there is probably a little more visibility into where things are going. It's with great pleasure that we'll talk about this deeply fascinating topic, for sure one of the most discussed topics in tech today.
What is AI? What is AGI? Why now? (02:07)
Bertrand Schmitt
Nuno, maybe we should start with trying to define what is AI, what is AGI, what is generative AI?
Nuno G. Pedro
Easy task. AI is what is in the name: artificial intelligence. It's typically seen as a branch of computer science that looks at creating mechanisms within machines that are, in some ways, similar to human intelligence or, practically speaking, to human intellect.
Nuno G. Pedro
Now, as we know, machines can't think. That's still true today. So they do this through very complex mathematical models that are normally implemented through combinations of software and hardware. Within artificial intelligence, there are then different subfields.
Nuno G. Pedro
In the good old days, people used to talk about weak AI versus strong AI, which is closer to general intelligence. Weak AI is normally focused on a specific field or solution set, while general or strong AI would eventually become our overlord and think better than us. Nowadays you will hear a lot of different things around artificial intelligence: machine learning, deep learning, natural language processing, computer vision, et cetera.
Nuno G. Pedro
All of these are fields of artificial intelligence that intend to emulate what we as human beings do. Computer vision, for example, looks at the automatic analysis of things we would otherwise process through vision: it could be video, it could be pictures.
Nuno G. Pedro
Natural language processing looks at the interaction of machines and computers with natural, human languages, the languages that we speak. Deep learning, I would allege, is a subfield of machine learning, although there's still a huge argument over whether deep learning is a different field or not.
Nuno G. Pedro
I normally see it as a subfield of machine learning, where deep learning typically uses things like neural networks—we'll talk about neural networks later on—which try to emulate how our brain structures thinking. In a nutshell, AI is a field of computer science, an evolution of computer science. Machines can't think for themselves, so they do this through very complex algorithms and techniques that normally use a lot of mathematics and quite a bit of data.
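For readers who want a concrete picture of the building block mentioned above, here is a minimal, illustrative sketch of a tiny feed-forward neural network layer in plain Python with NumPy. The shapes, weights, and input values are invented for illustration; real deep learning models stack many such layers and learn the weights from data rather than drawing them at random.

```python
import numpy as np

def relu(x):
    # Non-linearity applied element-wise; without it, stacked layers
    # would collapse into a single linear transformation.
    return np.maximum(0.0, x)

def dense_layer(x, weights, bias):
    # One fully connected layer: a matrix multiply plus a bias, then ReLU.
    return relu(x @ weights + bias)

# Toy "network": 4 input features -> 8 hidden units -> 2 output scores.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = np.array([0.2, -1.0, 0.5, 0.3])   # one input example
hidden = dense_layer(x, w1, b1)       # first layer activations
scores = hidden @ w2 + b2             # raw output scores (logits)
print(scores)
```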
Nuno G. Pedro
Although we'll also have a discussion today on how much data you really need. Are we past the times where you need massive amounts of data or not? In some cases, these techniques and algorithms also need to be trained. There needs to be some sort of training mechanism, potentially even a human in the loop, basically telling the machine whether it's classifying things appropriately or not.
Nuno G. Pedro
For example, in computer vision, if you're trying to classify a monkey, the question "is this really a monkey or not?" would be an example of that. But again, that's what artificial intelligence is in generic terms.
Bertrand Schmitt
Yes, and to build on what you just said, interestingly enough, the field of AI started probably at the same time as computer science itself, so in the 1940s. It's a space that's been alive, I can't say well all the time, but definitely alive and ticking for decades. Interestingly enough, it's a field that started, I don't want to say too early, but definitely earlier than we had the computing power to achieve what we were dreaming of.
Bertrand Schmitt
That has probably created a lot of AI winters. If you talk to people experienced in that field over the past decades, they have known some booms and some incredibly long periods of bust, 10 years, 15 years where no one would want to invest in anything remotely called AI, given past experiences of promising a lot and under-delivering.
Bertrand Schmitt
I think things changed around 10 years ago with the advent of the latest GPUs from Nvidia, with the advent of new coding paradigms like CUDA, also from Nvidia, that let you harness the power of GPUs for this type of task much more easily, and obviously with some new techniques in deep learning that let you train better and at greater scale.
Bertrand Schmitt
Of course, there is also the availability of digital data at scale, because since the 2010s we have the internet, we have books, we have content, we have audio, we have video, we have photos, we have everything online. Suddenly, accessing data that you can use to train models at scale finally became much easier than it used to be, a big, dramatic change. What we are mostly going to talk about today has roots 50, 70 years ago, but was really enabled in the past 10 years.
Nuno G. Pedro
To be clear, and just picking up on what you said, these have mathematical roots in things that have been around for many decades. Neural networks are not new. We're now talking about convolutional neural networks (CNNs), recurrent neural networks (RNNs), adversarial networks.
Nuno G. Pedro
We're going to talk about transformers. Transformers are probably more recent. It's an architecture that is actually credited to Google, which is funny because it's deeply used by OpenAI, but Google were the guys who came up with it. In general, though, if we look at the field, it's been around for a long time. The mathematical basis of the algorithms and techniques that we use in AI today has been around for decades.
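As a rough illustration of the core idea behind the transformer architecture mentioned above, here is a minimal sketch of scaled dot-product attention in NumPy. This is a simplification of the mechanism described in Google's "Attention Is All You Need" paper; real models add multiple heads, learned projections, and positional information, so treat this only as the skeleton of the idea.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each query attends to every key; scores are scaled by sqrt(d_k)
    # and turned into weights that mix the value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V

# Toy example: 3 tokens, each represented by a 4-dimensional vector.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (3, 4): one contextualised vector per token
```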
Nuno G. Pedro
To your point, what has fundamentally changed? If I had to synthesise and summarise it: computational power, obviously, with the advent of GPUs and now even ASICs, specific semiconductors that are very, very focused on the processing of certain AI techniques. Computational power has definitely changed. The availability of data at scale, and the ability to process that data at scale and access data pipes, has obviously changed a lot as well.
Nuno G. Pedro
I would say networks have changed as well. Latencies have come down, so whether you want to process stuff in the cloud or on your own device with your own processing power, that has obviously simplified the whole story.
Nuno G. Pedro
In some ways, it’s brute force. If we think about it, it’s like a lot of data, a lot of compute, and it’s brute force. Now we get AI. I think this is an important point because this will come back to why our AI agents or our AI overlords will not kill us immediately because it’s still brute force. They’re not really intelligent, they’re just doing stuff. We’ll come back to AGI, to general intelligence later on, but let’s leave that positive note for now. They’re hopefully not going to kill us anytime soon.
Bertrand Schmitt
Brute force is a good point, because ultimately a lot of researchers argue that we are still very, very early. In many ways, if you look at the way human babies or young animals are able to do stuff that AI still cannot do today, researchers always point to the fact that we can learn much faster, with much less data, about the world around us than the current way we train these machines. At least, that's one way to look at it.
Bertrand Schmitt
In a way, it is definitely a different type of intelligence we are building today with what we call AI. It's not human-level intelligence, and it's not trained the way you would train a human being. It's based on something very different. That's something to always keep in mind when you talk about AI.
Setting the Record Straight (08:55)
Nuno G. Pedro
What about generative AI? We've all been hearing about it. I'll open the hostilities and then you can tell us the real truth about generative AI. I'll start by saying that, in my opinion, generative AI is not generative at all. It's a very poor choice of words that someone at some point made about what generative means. I think it means generative within the context of the neural networks that it's running on.
Nuno G. Pedro
But it's not really generative. It's more of an aggregation of things. It's something that comes after something else because it makes sense for that thing to come after something else. That is true of images, it's true of text, it's true of a variety of other things. Sadly, that's even what GPT stands for: Generative Pre-trained Transformer. That's a cool name.
Nuno G. Pedro
I think that's the first thing I would like to debunk. Generative is not generative at all. These things are not creating things. We'll come back to that later on with regulation and a bunch of other IP issues. But what is generative AI, Bertrand, as we see it today?
Bertrand Schmitt
Yeah, that's a good point. I think it always goes back to how you train these models. Have they truly invented something from scratch, from zero? Or is it built on the near entirety of human knowledge? At the end of the day, it is built on the near entirety of human knowledge, and definitely way more knowledge than any human being could have gathered in their life, at least consciously or at such a scale.
Bertrand Schmitt
The idea here is that you learn a lot about specific topics. Take coding: you would scrape, at scale, different websites focused on coding. It could be GitHub, for instance. It could be Stack Overflow in terms of data, and we can talk later about how regulations and the terms of service of these websites are changing.
Bertrand Schmitt
But you scrape everything that is there, and when you receive requests for support, for help, when you are automatically trying to complete a coding task, you are in a way gathering that data at scale and getting it back out of the system adjusted to the local need, and that can be very powerful. Basically, it's transforming that existing knowledge at scale to a specific context and trying to find the best fit.
Bertrand Schmitt
As of today, there are a lot of shortcomings. One shortcoming, and in some cases it has actually been proven, is that it's very close to copy-pasting some examples, and that's obviously a big issue, especially if we think about IP concerns.
Bertrand Schmitt
Another piece of the puzzle when you are generating is that, obviously, in that context you're only as good as your source. If your source is extremely biased, for whatever reason, you might have some real issues in terms of output. One question is: are you as biased as your source, or are you even more biased? I have seen some interesting analyses where, in some cases, it seems the output was actually even more biased than the source.
Bertrand Schmitt
That's another interesting question. Do you want to unbias your source? Do you want to stay as biased as your source? That raises a lot of questions, including ethical questions.
Bertrand Schmitt
At the end of the day, when we go to what's maybe most exciting these days in terms of chat AI, one thing that is quite obvious in many situations is that it's not really trying to understand what you're asking. It's trying to guess the best word that should come after your question, and then the best word after that, and then the best word after that.
Bertrand Schmitt
In some situations it might look very appropriate, but in some situations, of course, it makes no sense. I was looking at one of the latest running jokes: ask it to spell a word in the opposite direction. No current chat program can reliably spell a word backwards. That shows how some very, very basic stuff, and obviously a lot of calculations that come out wrong, is still not achievable by such an approach.
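To make the "best next word" point concrete, here is a toy sketch of greedy next-token generation over a hand-written probability table. The vocabulary and probabilities are entirely invented for illustration; a real model computes these distributions over tens of thousands of learned sub-word tokens, which also helps explain why character-level tricks like spelling a word backwards can trip it up.

```python
# Toy next-token prediction: pick the most probable continuation at each step.
# The probability table below is entirely made up for illustration.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "sat": {"down": 0.8, "quietly": 0.2},
}

def generate(prompt: str, max_tokens: int = 3) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        last = tokens[-1]
        options = NEXT_TOKEN_PROBS.get(last)
        if not options:
            break  # the toy model has nothing to say after this token
        # Greedy decoding: always take the highest-probability next token.
        tokens.append(max(options, key=options.get))
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat down"
```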
Bertrand Schmitt
The other piece, maybe to finish, is when we see generative AI applied to chat, for instance. At the end of the day, it sort of averages the answer to your question over the environment it was scraped from. If it's scraping Wikipedia or the internet, it will try to give you the average answer you can find on the internet, on Wikipedia, on whatever. That means that if the outside world is biased, is wrong, or is very partisan, you will get some sort of average of what's out there.
Bertrand Schmitt
In some ways, power goes to the people who spread the most information on the internet. They might not be right, they might be wrong, but they will be the ones winning the war to indirectly influence some of these agents. That's something to really keep in mind. Right now there is no reasoning, there is no trying to make sense of things from basic axioms. It's all about simplifying and averaging, at scale, what they can find in their scraping.
Nuno G. Pedro
Yeah, this is mathematics, probability, statistics picking up on very large data sets. You've probably all heard about this. There are large photo data sets, large video data sets, and there are the large language data sets behind the famous LLMs, Large Language Models.
Nuno G. Pedro
It's picking up these big data sets and then trying to figure out, "this is probably what should come next in that sentence." That's the answer you're getting when you go on to ChatGPT. As you alluded, and quite well, that doesn't mean it's truthful. In areas of expertise that you might have, run searches on Bard or ChatGPT, whatever tool you want to use, and you'll notice this. You'll notice that in areas where you have deep expertise, the answers you get back are actually wrong, or they might be mostly right but with something in them that's wrong.
Nuno G. Pedro
There was a study done recently, I can't recall the name of the hospital, where they checked the advice that would be given for certain symptoms, the treatment of certain illnesses, of certain conditions. The setup was, by and large, that you communicate with the artificial intelligence agent through, let's say, a chatbot.
Nuno G. Pedro
The answers coming back from the chatbot were really very good, very empathetic, which was cool. They were actually, for the most part, spot-on correct. But once in a while, the process prescribed by the agent, and the agent in this case is acting as a doctor, the process prescribed for the treatment of that specific issue or condition had a step in it that would cause the death of the patient. That's a bit of a problem, right?
Nuno G. Pedro
You can't be right 97% of the time and then be wrong 3% of the time. On a lesser note, I did this search myself; I should say, if you go and do the same search now, it won't show up like this. This was still on GPT-3, not on GPT-3.5 or 4. If you asked, for example, how many venture capital firms there are in the US and how many there are in the world, you would immediately realise it was wrong. Those two answers couldn't both be right: the numbers I was getting back were both from 2021, and if you just did the math, they didn't make sense.
Nuno G. Pedro
But it was worse than that when I ran this on GPT-3. The numbers I was getting were sourced to a study from a specific publication, in this case, Crunchbase. The dangerous thing is that Crunchbase never published a report that year with the number of VC firms that exist globally. That's when it gets really scary: you're asking questions and getting answers that seem very assertive and palatable, but are basically just false. It's literally fake news, because they never published a report on that.
Nuno G. Pedro
My advice to everyone is: with the tools we have today, be it GPT-4, Bard, Claude, all the tools out there, just be cautious in how you use them. Triangulate the information, go back and search for it yourself. Can you find the report it's quoting to you? I see it as a directional augmentation layer. It's great. It tells you, for instance, this might be the size of the market, these might be the competitors in this space, this might be the number of actors and players, et cetera, but it's directionally correct, not fully correct.
Nuno G. Pedro
The same goes for, for example, using Stable Diffusion when you're generating art. Be cautious that you're not using something for your own purposes, in particular commercial purposes, that is reusing stuff that is actually copyrighted, that is coming from another source, et cetera. Again, it's a great augmentation layer. It's directionally interesting, and in most cases correct, in the majority of cases, not always, but be careful what you use it for. Don't make life-or-death decisions based on it. Don't generate your entire investment thesis, or your strategy as a start-up, based on it. Cross-check numbers, triangulate, et cetera. We're still far from having truly truthful information provided by these tools, by these platforms.
Bertrand Schmitt
Yeah, I think at this stage it's mostly untrustworthy. It depends, obviously, on your need. If you are trying to generate a poem, you probably don't care. If you are trying to generate some type of image, as long as the human beings look normal and you have something that looks good, you're happy. Obviously, there were a lot of issues, especially in the past, with showing the wrong number of fingers when generating hands, for instance, that sort of stuff. But it's getting better. For sure, though, there are a lot of instances where you are looking for specific information and it's just completely wrong and totally untrustworthy, at least at this stage of the game.
Bertrand Schmitt
We have been using it at Red River for some automatic competitive analysis. One thing we saw is that it was actually working quite well. Where it was not working well was when you asked for a top 10 or top 20 of competitors in a small market, in a small deal. Suddenly you notice that, yes, some are very good, and then some are completely off. You realise that it's trying to find you a top 20 of competitors, but if your market is too small, after it has found the four or five obvious ones, it starts making them up. It's just trying to find whatever it can.
Bertrand Schmitt
You have to be very careful about understanding where it stops working. Another basic example that anyone can check is to ask one of these AIs about yourself: "What do you know about me? What is my bio? What have I done in life?" I don't know about you, but for me, it was completely off. There would be some truth, and then half of it where there was nothing true. It was talking about companies I never built and didn't even know about. It's pretty bad. And obviously there is a lack of sources, and if it gives you a source that doesn't actually connect to what it's telling you, that's no better.
Nuno G. Pedro
Yeah, I just searched for mine right now, and it's still incorrect. It's directionally correct, but it's incorrect. It's saying stuff like I was an associate at McKinsey, when I was a Senior Expert at McKinsey. I've apparently been doing Chamaeleon since 2016, which, again, I have not; I've been doing it since 2021. Strive Capital since 2018; no, I'd been doing that since 2010. It's interesting.
Bertrand Schmitt
I mean, as we have seen, this stuff keeps improving over time.
Nuno G. Pedro
It’s directionally correct. It’s not totally, totally wrong, but it’s like, “Okay, cool.”
Bertrand Schmitt
I think it's more a question of not just criticising, but being realistic about what you can expect and using these products intelligently. For some use cases, it's perfectly good. The minute you want to be precise, the minute you have to depend on it, the minute the life of someone depends on it, then you have to think very carefully: am I using the right tool for the right job?
Nuno G. Pedro
At the end of the day, if you have expertise in the domain, really exercise a lot of critical thinking. If you have some expertise or some understanding of it, this is not to replace critical thinking. This is to increase productivity. You can go faster because you can get direction on stuff, you can double click on stuff quicker. To your point, it’s about horses for courses. It can really augment you, it can make you go faster but it’s not going to replace you exercising critical thinking, doing triangulation of data which is a good practice in any case, even in the good old days it was a good practice. That doesn’t change.
Verticals (20:30)
Nuno G. Pedro
Maybe we can go into the verticals we are seeing around the use of generative AI. We've talked about chat. Do we want to talk a little bit more about that?
Bertrand Schmitt
In terms of chat, it's obviously a good question. The dream for a while has been to have your own personal agent helping you with different tasks, in a way giving you your own personal assistant. That's the dream of many people, and obviously it can do some of this. There have been some very successful examples of ChatGPT helping you build your dream trip to a specific location: finding restaurants, finding hotels, finding stuff to do. There are some use cases where it can be incredibly useful.
Bertrand Schmitt
Again, I don't think anyone would want it to do everything by itself. Chat is definitely a big part of the game. Interestingly, it's also like a new UI. If you remember the chat discussions five years ago, chat as a UI to replace apps, for instance, it didn't work at the time. The ability to communicate with a computer was definitely very limited. Now you can see that there is a much better ability to communicate with a computer. Again, the questions are: communicate about what, and how can you double-check what is being said to you? I am a firm believer that there are some use cases where there is a real, true benefit, and others where it's more limited.
Bertrand Schmitt
One benefit for me, for instance, would be to generate long-form text from bullet points in a specific tone of voice, so you don't need a ghostwriter anymore; you have your own personal automated ghostwriter. Or the other way around: summarising text, turning it into bullet points. I can see some use cases where it can have real value. I've also seen some startups trying to automatically generate slides. For instance, you talk about a topic and it can generate the slides for you, from the bullet points to the images to a specific agenda for the topic.
Bertrand Schmitt
There are some use cases where it can help you start with a template, in some ways, that you can then adjust. Here it's all about saving time. If you save 50% of your time by doing that, that's great. That's already a win in my book.
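As a minimal sketch of the summarisation use case Bertrand describes, here is roughly what a call to a hosted chat model could look like using the OpenAI Python client. The model name and prompt are placeholders and you would need your own API key; treat it as an illustration of the pattern rather than a recommended setup.

```python
# Illustrative only: summarising long-form text into bullet points
# via the OpenAI Python client (pip install openai). Requires the
# OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

long_text = "…paste the long-form text you want summarised here…"

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system",
         "content": "You summarise text into concise bullet points."},
        {"role": "user",
         "content": f"Summarise the following into 5 bullet points:\n\n{long_text}"},
    ],
)

print(response.choices[0].message.content)
```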
Nuno G. Pedro
Yeah, and maybe another obvious vertical out of this is coding. We've seen developers and coders tap into it to figure out pieces of code they could reuse, et cetera. There's some complexity to this. Some pieces of code might actually have been used somewhere else, and it does require expertise: the person getting the code should understand what the hell the code is doing. To your point, it's about reducing time too. It's about, "Okay, can I reduce my time to develop this specific module, this specific piece of code?" It's really about productivity at the end of it.
Bertrand Schmitt
Yeah. You also have fantastic examples around documentation, if you want to document code, for instance. Apparently some of these models have been really well trained in terms of commenting code. That's definitely one use case. I have definitely heard reports of some developers finding themselves 2x, 3x more productive thanks to these tools. When you think about it, we have been used to autocomplete for a long time; that's all about saving time. Here you can do even more than autocomplete and fully fill in some working code.
Bertrand Schmitt
If you ask it to review its work again, it automatically detects some bugs and corrects itself, and you do a few iterations and it gets better. That's just amazing. As you say, you still have to review it, and you still have to apply additional best practices in terms of security and coding style. Another good example, actually, is applying a coding style uniformly. That's much easier with such a tool.
Bertrand Schmitt
Another interesting use was converting from one language to another. This one for me was particularly interesting because there are a lot of projects where you need to upgrade your environment, change OS, change the development language, for instance, or upgrade to a new version of that language.
Bertrand Schmitt
If you have the ability to accelerate that task by 9x, 10x, it can dramatically change how you see some of these projects. Stuff that was impossible to contemplate in terms of upgrading suddenly becomes doable, and that could change how you architect complex systems and how you manage them over their lifetime.
Nuno G. Pedro
I think about this in a very simple manner. I try and bring it to first principles. For me, what this is allowing is an extra layer of augmentation and productivity in the same way that, to be very honest, software brought a layer of augmentation in its early days. In the same way that mobile phones and smartphones having these little computers in our hands created a new wave of us having access to information. This is creating yet another level of access to information and productivity.
Nuno G. Pedro
We'll talk about other items like venture capital firms and startups and what's happening in this space, et cetera. But for me this is like a platform on which we get another degree of augmentation, and like any platform, it has great positive things and it also has some negative pieces; as we were saying, it's not fully trustworthy. It will probably propagate quite a lot of lies and incorrect things in the next few years, so fact-checking will become more important, and all of that. But it is just another layer of productivity that we're adding to our stack in this world of technology development that we've been undergoing over the last few decades, the last century, I would say.
Bertrand Schmitt
One interesting comment I saw from some people was: as long as you consider it, at best, an intern, it's a great tool. Basically, what they mean is that you cannot fully trust the work. You have to give it a very limited scope, review carefully, potentially rewrite some stuff, and have lower expectations. But if you think it's truly a replacement for a full, high-quality software developer, you are, at this stage today, probably totally wrong.
Bertrand Schmitt
But if you think that you can empower and improve the efficiency of your existing developers by providing them with a near-free intern, then that's a big difference. That's something that was not practical before.
Nuno G. Pedro
It is, at the end of the day, exactly that. It's sort of a first draft; that's how I think about it. First-draft text, first-draft code, ideas, increasing productivity. To talk about other verticals: a first draft of music, if you're a musician or want to go into music; a first draft of an image or design that you want to use for your new app or your new website. It's a first draft. It shouldn't be the end of it. It shouldn't be, "Okay, I'm done. I created a new logo for my company. It's cool."
Bertrand Schmitt
Yeah.
Nuno G. Pedro
Then, magically, you notice, and we'll get back to that later, that there's a little bit of a watermark in there from Getty Images or something.
Bertrand Schmitt
Yeah, there were some examples where the images were coming out with the original watermark. Image generation is very interesting because it can be used, I think, pretty well to apply, for instance, a consistent style. You might have an original set of images and want to say, "You know what? I want to apply this style to all these images." In normal times, that would have required a lot of work from a graphic designer to go over those images and change the style: "Okay, you want a Picasso style on all of this? I can do it, but it will take me days." Here you have it in seconds.
Bertrand Schmitt
I think there are definitely some brand-new opportunities where it's truly zero to one, meaning you could not imagine doing it at scale before, and even situations where what you get is good enough. If you need a picture for an article to convey a message, you don't need to go too crazy. Some easy AI generation, trying three different prompts and correcting it a bit, might be totally enough versus trying to find something for free that doesn't really match, or spending money on a real graphic designer, but who can afford that?
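As a rough sketch of the "apply a style to an existing image" workflow described above, here is what an image-to-image call could look like with Stable Diffusion via the Hugging Face diffusers library. The model ID, prompt, and strength value are illustrative, and this assumes a GPU with the usual torch/diffusers installation.

```python
# Illustrative sketch: restyle an existing image with Stable Diffusion
# image-to-image (pip install diffusers transformers torch pillow).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("original.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="the same scene repainted in the style of Picasso",
    image=init_image,
    strength=0.6,        # how far to move away from the original image
    guidance_scale=7.5,  # how strongly to follow the prompt
).images[0]

result.save("restyled.png")
```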
Nuno G. Pedro
Again, horses for courses, as always. But it's a great moment that we're living in, despite all the negatives we just mentioned. A couple of shout-outs in terms of platforms out there. We've talked about Stable Diffusion from Stability AI; they're having their own issues as well, but it's obviously massively used for the creation of images. We've talked about DALL·E, part of OpenAI, obviously. We've talked about ChatGPT, and then GPT-3, GPT-3.5, and GPT-4 from OpenAI. We've talked about Bard from Google.
Nuno G. Pedro
There's a bunch of tools out there. It's also important, and we'll come back to the whole notion of open source versus not open source, to specify that even OpenAI, although it has "Open" in the name, is actually seen today as a proprietary platform, not an open platform.
Bertrand Schmitt
Indeed.
Other AIs (28:48)
Nuno G. Pedro
We'll talk about some of the movements that happened around OpenAI later on, because it was initially a not-for-profit and now it's unclear what it actually is, maybe it's a for-profit. But there are a lot of great platforms out there that can really enhance your work. A lot of great things have also happened in other fields of artificial intelligence: the work that DeepMind initially did around gaming, and that DeepMind has done around biotech. We'll come back to DeepMind, obviously. This is a company that was acquired by Google, I believe in 2014, and that has now been merged.
Nuno G. Pedro
We’ll talk about some of the movements that Google has been doing around that. A lot of really interesting things happening beyond the more generalist use cases of this, like generating text, generating code, generating images, things that are very, very specific to specific fields and verticals that have created quite a lot of value.
Bertrand Schmitt
That's if we don't just talk about generative AI, though to be fair, the topic today is mostly around generative AI. There are non-generative AIs that have been quite transformational. We have seen DeepMind in biotech and in gaming providing some incredible advances in the field. Obviously, we have AI around self-driving; Tesla is probably leading the charge. It's still probably not yet full self-driving, whatever people say today, but it's definitely improving over time and is definitely leaps beyond what we were used to even three or five years ago.
Nuno G. Pedro
Yes, exciting days. I'm a computer engineer by background. I remember developing what we would call Mickey Mouse-type AI applications and pieces of software. I remember developing this game, which I think was called Connections. A very simple game, where you have to get from one side of the board to the other. I was shocked that at some point the game was beating me 10 out of 10. I just couldn't beat it anymore.
Nuno G. Pedro
It was just sheer brute force: functional programming using heuristics, functions that figure out whether you are closer to the end goal or not. Basically, the computational power of the machine was beating me because it was going down all the branches and saying, "No, this is the branch I need to go through, and I'm going to kick his ass." I'm pretty sure the machine didn't think "kick his ass," because machines don't think, as we said at the beginning. But it's a little bit scary sometimes when you see it in action, even if you were the one who coded it in the first place.
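For anyone curious what "functional programming using heuristics" can look like in practice, here is a minimal, hypothetical sketch of greedy move selection on a grid: the program scores every legal move with a distance-to-goal heuristic and picks the best one. It is not the original game, just an illustration of the brute-force idea.

```python
# Minimal heuristic move selection on a grid: reach the right-hand edge.
# Purely illustrative, not the original Connections game.
GRID_SIZE = 5
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, down, up

def heuristic(position):
    # Lower is better: how many columns remain to the goal edge.
    x, _ = position
    return (GRID_SIZE - 1) - x

def legal_moves(position, blocked):
    x, y = position
    for dx, dy in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_SIZE and 0 <= ny < GRID_SIZE and (nx, ny) not in blocked:
            yield (nx, ny)

def best_move(position, blocked):
    # Greedy choice: evaluate every legal move and take the one
    # that the heuristic says is closest to the goal.
    candidates = list(legal_moves(position, blocked))
    return min(candidates, key=heuristic) if candidates else position

position, blocked = (0, 2), {(2, 2), (3, 1)}
for _ in range(10):
    position = best_move(position, blocked)
    if heuristic(position) == 0:
        break
print("reached:", position)
```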
Nuno G. Pedro
I think part of the challenge of a lot of these AI platforms and this new revolution of AI that we’re seeing is something like explainability. Can we explain what the actual algorithms are doing? At some point they become so opaque that even the developers might not fully understand what the code is doing. That’s when we might get into trouble. But before we get into trouble, let’s talk about the big guys. Who are the big guys right now in the market?
The Big Guys (31:20)
Bertrand Schmitt
Good question. I guess for that we can use some leaderboards, for instance the LMSYS leaderboard for chatbots. It's ranked by Elo rating, and as of May 25th, the latest ranking, we have ChatGPT from OpenAI as number 1. Then we have another startup like OpenAI, a very well-funded startup actually built by ex-OpenAI people, Anthropic, with their Claude models, Claude v1 and Claude Instant v1.
Bertrand Schmitt
Then we have GPT-3.5 by OpenAI, and then we have others: Vicuna, PaLM from Google, another Vicuna, Koala, and quite a few more. I want to note another one that was recently released, an LLM from the UAE, and we'll talk more about it in terms of open source versus closed source: an LLM called Falcon 40B, meaning 40 billion parameters, which is doing very well. There is a lot of action from big startups, big enterprises, and actually government-funded labs.
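For reference, the Elo rating used by that leaderboard comes from chess: each "match" is a head-to-head comparison between two models, and ratings are nudged after every outcome. Here is a small sketch of the standard update rule; the K-factor and starting ratings are illustrative.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    # Probability that A beats B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    # score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    exp_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Example: a 1200-rated model beats a 1300-rated one in a blind comparison.
print(update_elo(1200, 1300, score_a=1.0))  # the underdog gains more points
```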
Nuno G. Pedro
Very, very exciting times. If we look at the usual suspects, we've already talked about OpenAI, with ChatGPT at the forefront of it, obviously now at release 4, which you can get access to if you subscribe to the monthly premium plan. The Microsoft partnership that was announced was initially reported as a potential 49% stake, positioned as an investment by Microsoft of around 10 billion. I'm not sure we know the exact amounts yet, but clearly Microsoft is putting a lot of its capital behind it and really using it across the board: for Bing, for Microsoft Office, and a variety of other tools.
Nuno G. Pedro
There's the whole notion that OpenAI at its inception was more of a not-for-profit organisation and is now really becoming more of a for-profit organisation. That in and of itself creates an arms race, obviously with Google, because Google are the AI guys. I mean, they came up with the whole transformer technique. It's only fair that they have a stake in it. I'm being facetious, of course. We've talked about Google in previous episodes. They probably have some of the top talent globally in the artificial intelligence field. They've been pioneers in the creation of languages, techniques, et cetera.
Bertrand Schmitt
One point is that Google probably has the most at stake. Google has the most to lose because, in some ways, a chat interface might be seen as a perfect replacement for a search box. Instead of having lots of answers and lots of ads, you just have a clear, straightforward answer, as long as you can trust it, obviously. Google is in the riskiest position of being disrupted. It's their whole business, basically, because around 80% of Google's revenues and profits come from search. They had better be good and react fast on this, or they are really at risk of losing their cash cow.
Nuno G. Pedro
We saw how susceptible they are; I think the point you made, Bertrand, is spot on. How susceptible they are in terms of value: they rushed Bard to market, and they lost 100 billion in market cap that day, because Bard made factual errors and stuff like that. By the way, every one of these tools makes factual errors, just to be clear. I've actually started using Bard side by side with GPT-4.
Nuno G. Pedro
I actually find Bard is sometimes more accurate and more interesting, so maybe it's catching up, I'm not sure. There have been a lot of moves. They've put AI at the centre of everything. There's even the meme of the summary of Sundar's statements at Google I/O, where it's AI, AI, AI; I think he said AI many times. Probably one move that shouldn't be neglected is the formal merger of DeepMind with Google Brain.
Nuno G. Pedro
DeepMind was an acquisition and was run at arm's length, while Google Brain was effectively run by Jeff Dean, the incomparable Jeff Dean, and was more focused on internal things and next-level things for Google. They've now merged, and the new entity is led by the former CEO of DeepMind.
Nuno G. Pedro
It's a bit unclear what Jeff Dean's role is. He's Chief Scientist, and it seems like he still reports directly to Sundar, so it looks like a two-in-a-box structure, although he's not formally the CEO. But Jeff does what Jeff does, I guess. We'll see what comes out of this. Over the years, I kept hearing stories that there was conflict between Brain and DeepMind, that they just had different philosophies.
Nuno G. Pedro
They competed for resources, they competed for a variety of things. I think Sundar said, "No, you guys are just going to have to align, work together, and figure this out, because this is it." But it's a big deal that they've merged those two entities, two very significant entities.
Bertrand Schmitt
Microsoft, for sure. We talked about their partnership with OpenAI, and as we saw at the last Build conference, Satya Nadella is also all in on AI. It's AI everywhere. It's ChatGPT integration with every Microsoft product: not just Bing but Office, and not just Office but Windows. It's really an integration of AI into every piece of every product. It's significant investment from the biggest players.
Bertrand Schmitt
One we didn't talk much about yet, but I think we should, is Meta. They have an incredible and incredibly strong AI team and department led by [inaudible 00:34:55]. They have applied this at scale, obviously for their advertising work, but also for their headsets, the Oculus headsets, and to great success, to do a lot of things. One interesting thing, and we'll talk more about it, is that they have in some ways, maybe surprisingly, been leading some of the open-source efforts in AI, so taking a very different approach.
Bertrand Schmitt
Maybe one actor we obviously have to talk about is Apple. Even at the last WWDC, I cannot say we saw a lot. I think Apple's approach is: first, nothing is open source; two, barely talk about AI. Actually, it looks like they made a special effort not to pronounce the word AI once. They would talk about deep learning and, more importantly, they would just talk about how some features are better: better predictive text input, for instance, and that's it.
Bertrand Schmitt
Or, for the new headset, the Vision Pro, they would use AI, obviously, to recognise your fingers or your eye movements more efficiently and very precisely. It's more AI behind the scenes that is making some new experiences possible. But it's really a behind-the-scenes approach: let's not share too much with anyone.
Bertrand Schmitt
Obviously, one interesting piece is that most Apple products have very big AI capacity in terms of learning or inference. In terms of hardware, they are, you could argue, leading the charge in how much AI-optimised and AI-capable hardware they have put in the hands of everyone.
Nuno G. Pedro
Last but definitely not least, let's not forget the Chinese. Baidu has been doing things around AI for many years. There isn't a huge amount of noise yet, but we'll see at some point whether there will be. Tencent, given their gaming background, et cetera, I'm sure has done significant things around it. And then there's probably the most used AI-enabled app in the world, or one of the most used: TikTok from ByteDance.
Nuno G. Pedro
It will be interesting to see what's going to happen around that. It's not necessarily generative AI in the sense we are discussing in this episode, but by the time we launch these episodes there might be some announcements from some of our Chinese brethren.
Conclusion (38:38)
Nuno G. Pedro
This concludes Episode 43, our first episode on artificial intelligence and, in particular, generative AI. In this episode, we touched upon the definitions of artificial intelligence and AGI, why it's meaningful now, and why AI didn't take off 10, 15 years ago. We also set the record straight on what's happening with generative AI, discussed key verticals where AI has had a significant impact, and ended by talking about what the big guys, the Googles, OpenAIs, and Microsofts of the world, are doing.
Nuno G. Pedro
The next episode will conclude our two-part series on artificial intelligence and generative AI. Thank you, Bertrand.
Bertrand Schmitt
Thank you, Nuno.