[DataFramed AI Series #4] Building AI Products with ChatGPT
Joaquin Marques started working with AI back in 1983, so he has more experience with AI than almost anyone on the planet. His prestigious resume includes putting his AI skills into practice at IBM, Verizon, and Oracle, and now he runs the father-son AI consultancy Kanayma, where he builds AI products.
Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
One of the problems everybody has is having enough examples of data to be able to train the neural networks for specific tasks not related to ChatGPT. So let's say, for example, one of my questions is: how many different ways can you ask ChatGPT what was the most profitable month of the year? For a specific month, how many different ways can you ask that? And instead of me spending days or weeks, you know, writing down everything that occurs to me, I have ChatGPT generate that data for me. Then I take all those 100, 200 questions, whatever, which are all asking the same thing, and I go and use them in my chatbot to make sure that it comes up with the same right SQL query every time. Usually it doesn't, so then I need to take care of it. Essentially you can generate hundreds of different prompt variations in order to make sure that regardless of whatever a user inputs into your chatbot, you're going to get a consistent, correct answer.
Currently, I have a strong interest in XAI, AI explainability. In the case of explainability, the algorithms that work best at explaining themselves are things like decision trees, forests, that type of thing. But if you think about the complexity of certain problems, and we already have examples of those types of problems in math, there are some math problems that have been solved with computers that no human understands in detail. We just take it for granted because people have checked specific portions of the code. But nobody will ever understand them because they involve thousands of steps. And the same occurs with a decision tree or a forest, in that it may have thousands of decision points to give you an answer. ‘Find this particular variable between this range and that range’, and then you repeat that a hundred times; who's going to understand, if you have a hundred variables?
Key Takeaways
Use AI products such as ChatGPT to help you test your own chatbot solutions. If you’re building a product that answers a question that could be asked many different ways, ask ChatGPT for all of the different ways your question might be asked. Test the output questions to spot potential holes in your own code.
In order to take full advantage of ChatGPT when prompting, you have to speak its language. Use it for what it was intended to be used for, and approach the chatbot like you would a human, as that's what it's been trained to imitate. Set up your prompt with the right context, avoid ambiguous terminology, and don't make it too complicated.
When approaching building AI products yourself, take the time to figure out whether you have all the appropriate pieces in order to build a solution. Consider questions like, do you have all the components? Are all of the intermediate problems solved already? What components will actually solve the problem? How might you need to string them together?
Transcript
Richie Cotton: Welcome to part four of the DataFramed AI series. This is Richie. Perhaps the biggest development in AI over the last year is the explosion of tools that make it easier to develop applications with AI. That means that building AI into products is now feasible for almost any company. Today's guest is Joaquin Marques.
He started working with AI back in 1983, so he has more experience with AI than almost anyone on the planet. His prestigious resume includes putting his AI skills into practice at IBM, Verizon, and Oracle, and now he runs the father-son AI consultancy Kanayma, where he builds AI products. In today's episode, we'll cover ideas on what to build with AI.
The details of how to build AI products, and how ChatGPT is making chatbots better. I'm keen to dig into Joaquin's 40 years of experience. Hi, Joaquin, thank you for joining us on DataFramed.
Joaquin Marques: Thank you for inviting me. It's a pleasure to be here.
Richie Cotton: Yeah, really, I'd like to dive straight in. So, let's talk a little bit about your work creating generative AI applications. Can you tell me what are the most common use cases your customers have for generative AI?
Joaquin Marques: Yes, certainly. Right now I'm devoted to chatbots because of ChatGPT. It represents a unique opportunity because the chatbots that we built before ChatGPT did not use AI. We created different scenarios for dialogue and special content for each scenario. So it was quite involved. That whole task has become much easier now and more creative, and we are not pigeonholed into specific scenarios and specific sets of questions and dialogues. So now it's much more natural. But we have other challenges. So, chatbots most recently.
And then before that, I used generative AI to generate SQL queries to answer plain English questions, and I'm using that in my chatbots now. Basically you ask a question, it translates it into SQL, goes to the database, gets the answer, and submits it to the chatbot to surround it with some nice context before answering back.
And one other project that I was involved in used computer vision. What I was doing was basically taking video from cameras where older people lived in their houses, and if, for example, one of them fell, detecting the fall and being able to react and have the software call for help.
That's one where the generative AI was basically generating video, because video is very expensive and you cannot have older people falling on purpose; it's hard to get actors who will go for that. So you can generate some specific scenarios and then have the software learn what a fall looks like. It's very useful because then you can train the neural network to recognize the thing.
Richie Cotton: All those three examples, the chatbots, the natural language interfaces to SQL, and then the computer vision, they're all fascinating examples of using AI. Maybe we'll start with the chatbots. So, you said that using ChatGPT has made it easier to create chatbots because you don't need to worry about the scenarios. Does it also have a benefit for the end users of the chatbots as well?
Joaquin Marques: Yes. It is much more natural for them. For example, one of the big challenges, and it combines two of the scenarios that I was talking about: the data that the user is interested in is generally not the generic data that was used to train ChatGPT. It's data specific to their business, and therefore you cannot ask ChatGPT for it. On the other hand, you cannot feed your data into ChatGPT because of privacy concerns. So you have to have a local database that you can query. But then the trick is, how do you do it? Do you have ChatGPT make the call for you? Then you have to supply ChatGPT with the context of the question so that it understands where you are coming from and what it is that you refer to with certain specific words. That helps it to generate the SQL query, including the specific points that you want to make. For example, you could ask, give me the sales for March. So where March 2019 goes in the query, it needs to make that linking, and ChatGPT is fantastic at doing that. It's a very difficult problem to solve otherwise, and it's solved for you. It's just a matter of using your imagination, and you may need two or three questions back and forth, a small piece of dialogue with ChatGPT, before you get the answers that you need. And therein lies the secret of getting these chatbots to work the way that you expect them to work.
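As a rough sketch of the kind of context preamble being described here, the snippet below uses the OpenAI Python client to resolve an ambiguous month reference before any SQL is written. The business facts, table and column names, and model name are illustrative assumptions, not his actual setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical business context that resolves ambiguous wording like "March"
# to a concrete filter before the model writes any SQL.
context = (
    "You translate questions into SQL for the table sales(order_date, amount). "
    "The business operates on calendar year 2019, so a bare month name such as "
    "'March' means March 2019. Reply with a single SQL query and nothing else."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",          # placeholder model name
    temperature=0,
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": "Give me the sales for March."},
    ],
)
print(response.choices[0].message.content)  # e.g. a query filtering order_date to 2019-03
```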
Richie Cotton: That's absolutely fascinating, and I'd love to get into the details of how you've been dealing with these privacy issues and using local data to get around that. Maybe we'll talk about that later, but for now, can you tell me a bit more about the different technologies you are using to build these chatbots? You mentioned GPT, but are there any other technologies that you think are important here?
Joaquin Marques: I'm currently using LangChain, which allows you to create basically a sequence for the back and forth and get several pieces of software involved, not just ChatGPT. So that, for example, if you don't want any bad feedback and you are supposed not to use bad words, then you could filter the bad words upfront and also instruct ChatGPT not to use any bad language. So the involvement of these other tools has to do with filtering, with correction, with preparing the context to make the query. And then ChatGPT comes back with how you need to make that query, and then you make it on your local database, not ChatGPT. So ChatGPT ends up putting together the question exactly the right way that a programmer, for example, or a SQL specialist would do. So that's one. One other interesting aspect is that I have to do a lot of reading. So I've put together a system by which I take PDFs, especially technical articles, and feed them into a vector database together with the original text. And then I can ask, okay, in the last three years, give me the list of all the articles on a specific topic, and it comes back with it. I have some background on this because in the nineties I was the chief architect for Info, which was an IBM product, the world's first customized news and scientific article type of search. You would give it the topics you're interested in and the context, and it would come back with articles and news items that concern that topic and that specific focus that you may have. So it's the same idea, but now we can do it in a much more sophisticated way. And let's say, for example, that I'm investigating something. For real, I'm now into XAI, explainability. There are hundreds of articles already. So to create a survey of them, I feed them in and then I ask about specific topics, and it uses those articles as the source, and it's outside of ChatGPT. So it's called an embeddings or vector database.
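The article-search workflow can be sketched, in a very simplified form, with an embeddings model and an in-memory index; a production system would use a real vector database such as Pinecone. The file names, chunking strategy, and model name below are assumptions for illustration.

```python
import numpy as np
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()

def embed(texts):
    """Embed a list of strings; the model name is a placeholder."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Index: one entry per PDF page (a real system would chunk more carefully).
docs = []
for path in ["xai_survey_2023.pdf", "shap_paper.pdf"]:   # hypothetical files
    for page in PdfReader(path).pages:
        text = page.extract_text() or ""
        if text.strip():
            docs.append({"source": path, "text": text})

index = embed([d["text"] for d in docs])

def search(question, k=3):
    """Return the sources of the k most similar pages by cosine similarity."""
    q = embed([question])[0]
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [docs[i]["source"] for i in np.argsort(sims)[::-1][:k]]

print(search("Which articles from the last three years cover XAI for tree ensembles?"))
```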
Richie Cotton: That's very cool stuff. I like the idea that you are using AI to do research for you in order to build more AI applications. That's a good sort of virtuous circle there. So it seems like LangChain and vector databases are the big technologies that are important here.
Joaquin Marques: ChatGPT has its own, and I've also used Pinecone. They're all fine pieces of software.
Richie Cotton: Excellent. I think a lot of companies at the moment really want to get involved in generative AI, but they're not sure whether they need to use these other pieces of software that other companies are making or whether they should build things themselves. So I guess in terms of your work, why do organizations come to you as a consultant rather than building things in-house?
Joaquin Marques: When they realize that it's not just having the university training, but also the experience. Thankfully our universities are graduating thousands of data scientists, but it's hard to find somebody with more than maybe 10 years of experience.
I've been lucky enough that I started doing this in the eighties, so I've learned through experience and I've been exposed to all of these technologies and absorbed them on my own as opposed to having that experience at a university and having that many years experience gives me perspective because, for example, what we were doing that was state of the art in the eighties and nineties has been completely superseded.
Some of the ideas are still there, but they're much easier to implement than back then. Back then we had to write everything from scratch, including vector databases and expert systems. The neural networks back then, for example, we knew about them; we had a good idea of how to put them together, not the modern models that have been discovered in the past half a dozen years, but some simpler models. And they were impossible to run due to memory limitations of the hardware at the time, and they were too slow.
Richie Cotton: So I'm glad things have moved on from the 1980s. It's good progress. Absolutely.
Joaquin Marques: But one of the things that clients realize is that they may try one or two projects. Many AI projects fail because you are asked to do something that is very hard to do, and if you don't have the experience to know that, then you go ahead and try it.
And AI is not ready to solve that particular problem that particular way. But if you have the experience, then you can, and I have, tell the clients that's not possible to do; it needs research. It may be possible to do in six months, but currently it's not. So you have to know the current technology.
You have to know what the hurdles are for each type of technology as well. What can be done, what cannot be done.
Richie Cotton: Just related to that, how do you go about deciding what makes a good AI project? Like, how do you plan this? How do you decide whether this is high priority, whether it's feasible?
Joaquin Marques: I would say a combination of knowledge and experience. The clients hang on to everything that they read in the news, and then people have a tendency to think that they can get away with a lot more than is possible given the technology. So, for example, there are people that want ChatGPT to do some reasoning for them.
Well, it turns out GPT does not do any reasoning. What it does is predict, as they say, the next word. But not only that, it predicts within the context, and if you give it some context before you create the prompt, it helps it quite a bit to constrain the possibilities. And this context becomes quite complicated depending on how you ask the questions, the sequencing, all of that. So for example, if you ask a question in one context and then you go and ask another question in a completely different context and you don't set it up before changing context, then you get the wrong answer, because it assumes a history as part of the dialogue you go through. So if you want to change the way that you are thinking about something, you need to help it. And that's part of the prompt engineering challenge that we have currently. And since ChatGPT, in a way, is a black box, even though we understand the mechanism, the transformers, it is a black box because by the time you feed those billions of documents into it to create a specialized model that contains all of that information, you really don't have a very good idea of what could come up.
In the past we used neural networks like LSTMs, long short-term memory networks, that were capable of memorizing what had just come before. So in a sentence, if you were parsing one word, it knew what had gone three or four or five words before. But that's not enough, because you could continue in a crazy way that was completely out of context. In some science fiction novels or comedies or whatever, you can never predict what comes afterwards. But with transformers, you read not just a whole sentence; you could be reading, in parallel, a whole paragraph. So you get both the context of what happened before and the context of what's going to happen later. It uses this attention mechanism to do that, and it makes it so much more powerful, because it's like having a second reading of the same text. You've already been there, now you go through it again, but doing it all at once, and you get much better results. And that's part of the improvement.
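For readers curious about the mechanism being described, here is a minimal numpy sketch of scaled dot-product self-attention. It is illustrative only; real transformer layers add learned projections, multiple heads, and positional information.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention: every position looks at every other position at once."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax: how much each word attends to the others
    return weights @ V                                  # blend of values, weighted by attention

# Toy example: 4 "words", each represented by a 3-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
out = scaled_dot_product_attention(X, X, X)             # self-attention over the whole sequence
print(out.shape)  # (4, 3): every position now carries context from the entire sequence
```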
And one thing that we've observed is this idea, it was just an idea, that there would be emergent behavior if you fed it enough data. It was not obvious that would occur. The mechanism itself, the transformer, is easy to understand. But what's difficult to understand is that if you feed it enough data, at one point it starts coming back with responses that surprise you. So that makes it incredibly interesting, because all of a sudden you realize, oh, I thought, for example, that it would get that question wrong, and it turns out it gets it right; or maybe you didn't ask it correctly, so you need to work on your own prompting.
Richie Cotton: This is really interesting, the idea that first of all you need to worry about your prompts, but also that sometimes the responses you get are going to be surprising.
So, in general, how can you evaluate the quality of responses from a text generation model?
Joaquin Marques: Well, for example, let's say a client has a specific type of business and you want to supply a chatbot for the managers of the stores for that client, who have, let's say, a high school education. So they usually don't have a course in statistics to understand the confidence intervals in the data that they are getting back. I tried to solve that, and partially solved it, a few years ago by doing visual statistics, creating curves and showing where they were in the set of curves, so that they could say, okay, we are above the median but below the 75th percentile, or we are above the 75th percentile but below the 95th percentile. But if you are anywhere between 2.5% and 97.5%, you are within the usual high-confidence interval. So you can see on your curve where you are today, and visually you say, I'm doing great if you are above a certain level, and you don't have to understand the details. Well, it turns out a chatbot is a lot better.
You ask the question in English and it gets back to you. It tells you how you are doing, how your business is doing at a moment in time. And if it's not doing well, one of the things that we do is feed back a predictive answer, for example, tomorrow is going to be lousy, and also a prescriptive answer, which is: what you have to do is start selling coupons right now.
Or put out an announcement, or call a nearby business. Let's say, if you're a parking lot and you know you're not going to do enough business tomorrow, you call some nearby places that will attract customers and split the parking with them.
Richie Cotton: Okay, that's fascinating, the way you think about how you interact and provide more information.
So you're saying it's better to just give it more information rather than give it a specific task to do.
Joaquin Marques: Yeah, exactly. You feed it that specific context information, unbeknownst to the user. So from the user's question you interpret and say, oh, okay, he's talking about this particular context, using some keywords, therefore I need to give ChatGPT this preamble with the information that it needs. For example, in the case of the SQL databases, we don't feed it the data. We only feed it the schema for the tables, and then it figures out how to do the SQL from the schema.
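A minimal sketch of that schema-only pattern with SQLite and the OpenAI client: only the CREATE TABLE statements are sent to the model, and the query it returns is executed against the local database, so no rows ever leave your environment. The database file and model name are placeholders.

```python
import sqlite3
from openai import OpenAI

client = OpenAI()
conn = sqlite3.connect("sales.db")  # hypothetical local database

# 1. Pull only the schema, never the data.
schema = "\n".join(
    row[0] for row in conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table' AND sql IS NOT NULL"
    )
)

def question_to_sql(question: str) -> str:
    """Ask the model for a query, given nothing but the CREATE TABLE statements."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": f"Write one SQLite query answering the question.\nSchema:\n{schema}\n"
                        "Return only SQL, no explanation."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip().strip("`")

# 2. Execute the generated query locally; the model never sees the rows.
sql = question_to_sql("What was the most profitable month of the year?")
print(sql)
print(conn.execute(sql).fetchall())
```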
Richie Cotton: So you're not providing any actual data to the AI; you just provide the structure of the databases, what's in each table, and then it can figure things out from there.
That's very interesting. So does it make a difference when you're building these AI products whether you're building a complete AI product from scratch or adding AI features to an existing product?
Joaquin Marques: Yeah, it makes quite a difference. The product that I designed for IBM in the nineties took two years to design and code and all of that.
We had to do everything, including the networking software, and design the databases, the vector databases, to do this and create new technology. I got three NLP patents out of it. It was quite an effort. Nowadays, I don't need to do that. A lot of the products are open source. I can just use them.
My only criterion is, can I use them in a commercial product, or do I have to ask the client to buy or license the product? That's the only thing. So nowadays things are much easier. The problems we have to solve are at a different level. And as I said, it's a question of prompting, but also architectural design, because you cannot ask ChatGPT for the details of your business; you need to get ChatGPT to figure out how to ask your local database for the details.
Richie Cotton: So that's really interesting. You're saying prompting is obviously a bit of a challenge, but also figuring out the architectural design. Are there any other big problems that you need to consider when you're trying to devise an AI project?
Joaquin Marques: Yes. First I have to think about it for a day or two. Then I need to figure out: are there components? Are all the intermediate problems solved already? And what components will solve it? And how do I need to string them together? If I can figure that out, then I tell the client, yes, go ahead, and I believe this is the effort. If I see that there is a gap somewhere and it needs research, then I tell the client, right now we need to cover this gap.
But I tell them upfront so that they know. Some of them decide, okay, we'll wait six months and see what happens. Others say, okay, we'll pay you to do the research and put it together, in which case it becomes their property.
Richie Cotton: Alright, that's interesting. So, can you give me some examples of success stories where you've built an AI product for a customer and it's had a big impact for them?
Joaquin Marques: Yes. When I was head of consulting and chief data scientist at Oracle Latin America, we were challenged with some problems. It's not that we were not selling; we didn't know what we were selling and in what quantities, because we were selling so much.
So things got lost in the details. This was a while ago. And basically what I did was travel to several places, talk to the salespeople, talk to the marketing people, and take a look at the last four years of sales. Then, using a technique called SVM, support vector machines, I was basically looking at what happens when we sell a set of products, because Oracle usually sells sets of products to a corporation, not just one. And for example, databases are sold with almost everything because they're required. And we were targeting, okay, what's going on with the sales of databases?
The salespeople were taking quite a while to prepare the materials they needed to approach the potential customer. Let's say they used four and a half days to prepare and half a day to go and present the proposal to the customer. I was able to reduce that by identifying patterns.
For example, what products sold in combination, and what products, if you sold one, you never sold the other. Those types of patterns. I was able to decrease the preparation time to basically zero because it was based on previous sales experience, and I could tell them, through the AI, do you need to bring a technical team with you for this particular customer and combination of products?
Or can you go on your own as a salesperson, or do you need backup of some sort? And then, what is the best way to engage the customer if they decide to go ahead? Do you bring a team for a series of meetings before a final commitment, or not? And we were able to basically reduce the time that it took to prepare to visit a customer, down to only half a day to visit the customer.
And the AI just gave them a list of what had been sold, what we were selling, and what else we could offer if they went for it, and also the strategy that you needed to use, and pointers to previous sales so you could contact the salespeople and get their wisdom about this particular customer.
So we had a predictive part, we predicted which customers would be more receptive, and the prescriptive part, which is how you should approach the customer. That was quite a success.
Richie Cotton: I really love that story, because when you started, I thought you were just going to say you'd built a recommendation engine, but actually it was improving business processes, so speeding up the operations and also finding more effective ways for you to sell things as well.
So that sounds like it had an impact on many different levels.
Joaquin Marques: Yes, it did, and it was quite successful.
Richie Cotton: Excellent. Just on the flip side though, have you ever had any cases where you've had a customer that's been very excited about generative AI, and then after talking you've decided that's actually not the right technology for their problem?
Joaquin Marques: Yes, mostly in the past. I worked at Cognitive Systems, which was an AI company offshoot of Yale's artificial intelligence department, and we were doing natural language processing back in the eighties. We created systems that would, for example, at a bank, receive bills of lading, which are documents that ships arrive with saying, this is the cargo that we bring, these are the permissions you need to get, and these are the charges for bringing the cargo and extracting the cargo from the port.
But there were many different types of bills. So we created a system that would categorize them and put them in specific bins to be read and either accepted or rejected by specific people that had particular knowledge of them. So that was an interesting one.
And it was challenging because, for example, we had to have several thousand documents, and we used techniques that are called case-based reasoning. That was at the time; it's been superseded completely. A more recent aspect is, for example, using BERT, which is a type of neural network, to do redaction.
ChatGPT can also do it nowadays. And a lot of people, for example, do redaction by just erasing the words that you want to keep private. And I discovered that instead of doing that, if you use special characters surrounding a label, you could say, a social security ID goes here.
As opposed to blanking it out. And then if you feed the output that has those labels into a vector database, or even ChatGPT itself if you can fine-tune it, then it starts looking at the patterns in terms of, okay, social security IDs could go in this position or that position, not just blanks. So it would start seeing behaviors of where people tend to use social security IDs, for example, in their correspondence.
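A toy version of that label-based redaction using Python's re module. The patterns are deliberately simplified, and real PII detection needs far more care, but they show the difference between blanking a value and replacing it with a typed label.

```python
import re

# Simplified patterns; real PII detection would be much more thorough.
PATTERNS = {
    "<SSN>":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "<PHONE>": re.compile(r"\(?\b\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact_with_labels(text: str) -> str:
    """Replace sensitive values with typed labels instead of blanks."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

message = "Please verify 123-45-6789 and call me back at (212) 555-0147."
print(redact_with_labels(message))
# "Please verify <SSN> and call me back at <PHONE>."
```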
Richie Cotton: Okay. And then from there, what was the use of that? Was this just to provide better redaction, or was there something more to it?
Joaquin Marques: Yes, and then to be able to predict, when you are not sure whether a sequence of numbers is some international ID or a social security number. If you know the patterns in which a social security number pops up, you may be able to make a prediction: okay, this is not a social security number, it is something else, and tag it accordingly.
Richie Cotton: Okay. Oh, so it might be like a phone number instead, but you need the context.
Joaquin Marques: We humans attach metadata when we see something and we know. For example, if you redact by hand and you cross it out, but you know it was a social security number, that will be useful later on.
If you do this day in and day out, you'll recognize the pattern. And that's like a meta level of knowledge that one would feed.
Richie Cotton: So I'd like to talk a little bit about how generative AI can democratize access to data. I guess one of the big potential benefits of this technology is that some tasks that are usually performed by a data team can now be performed by anyone, even if they don't have a data background.
So have you seen any examples of this?
Joaquin Marques: Well, I've heard and read about some examples, but they are limited to particular contexts, because everybody's trying to sell these tools that supposedly don't need any code. They're fantastic if you apply them to use cases that everybody has solved, because that's what the tools are for. But if you have anything even slightly complicated, then you need to program; you need to add your own code into it to make it do what you want it to do, and not what somebody thought you would always want to do. So I have not seen anything that's impressive from this point of view.
Now, with the democratizing of the data, there are several different layers, because, for example, if you are talking about personally identifiable information, PII, then you need to determine, okay, you can look at your own personal data, but you cannot look at anybody else's. Or if you are high up on the totem pole, let's say at a bank, you may be able to use somebody else's personal data, but it depends on your rank and your particular function within the organization.
And all of those permissions need to be set up even before you can make the query. So what you get is mainly due to the security process you go through rather than the chatbot itself. The chatbot makes no decisions there; it assumes that if you ask a question, you already have the right to ask it. So by the time you get to it, you have gone through all the security layers in order to make the query. Now, the democratization of data in the sense that, for example, you could go and completely obliterate all the data that's PII, no matter what, and then you feed it in, or you put in those labels, like I was saying, social security number, personal phone number or cell number, name of a company without mentioning the company, and so on. That's democratization. Anybody can read it; they're not damaging anyone. Some people could guess who you are talking about from the context, but that's one aspect where human intelligence still exceeds anything that ChatGPT can do.
So in that sense, yes, you can democratize data. You can actually exchange information without giving away any secrets.
Richie Cotton: Well, related to this, I was wondering, you've been working on natural language interfaces for SQL queries; has that helped provide access to data for people that couldn't normally get access to those databases?
Joaquin Marques: Yes, because otherwise they would need to be experts in SQL, have the access rights to go into the database to start with, and then know how to write the queries, and some of these queries get very complex. So yes. Now, as far as further democratization, I think that very soon we'll be able to exchange information between companies, between people, in such a way that we are basically guaranteeing that no PII of any type is being revealed, or, for example, that the exchange fully complies with HIPAA or some other standards. So I believe these things are going to be possible very shortly.
Richie Cotton: That's really interesting, the idea that you can safely exchange data from one company to the next and not have to worry about those data privacy issues.
In general where do you think organizations should make use of generative AI just to improve their data capabilities?
Joaquin Marques: Well, one of the problems everybody has is having enough examples of data to be able to train the neural network for specific tasks not related to ChatGPT. So, let's say, for example, one of my questions is: how many different ways can you ask ChatGPT what was the most profitable month of the year? For a specific month, how many different ways can you ask that? And instead of me spending days or weeks writing down everything that occurs to me, I have ChatGPT generate that data for me. Then I take all those 100, 200 questions, whatever, which are all asking the same thing, and I go and use them in my chatbot and make sure that it comes up with the same right SQL query every time. And usually it doesn't, so then I need to take care of it, because language, most of the time, is not specific enough. It's ambiguous, and it depends on how you ask the question. That ambiguity is critical; if you don't get rid of it, you don't get the right query.
Richie Cotton: Ah, that's interesting. So if I've understood this correctly, you're generating hundreds of different prompt variations in order to make sure that regardless of whatever a user inputs into your chatbot, you're going to get a consistent, correct answer.
Joaquin Marques: Yes. And if there are ways of asking that always get it wrong, then one of the things that I do is check, let's say using a Pinecone vector database: are you asking this type of question? And if the answer is yes, then instead of going to ChatGPT, it comes back saying, could you be more precise?
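Putting those ideas together, here is a rough sketch of the consistency check described above: generate paraphrases of one question, push each through the same question-to-SQL step, and flag any phrasing that yields a different query. The question_to_sql callable is assumed to be your own NL-to-SQL step (for example, the schema-prompt helper sketched earlier); the model name is a placeholder.

```python
import re
from collections import Counter
from openai import OpenAI

client = OpenAI()

def paraphrases(question: str, n: int = 20) -> list[str]:
    """Ask the model for n different ways of asking the same question."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Give {n} different phrasings of this question, one per line:\n{question}",
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [re.sub(r"^\s*[\d.)\-]*\s*", "", line) for line in lines if line.strip()]

def normalize(sql: str) -> str:
    """Collapse whitespace and case so trivially different queries compare equal."""
    return re.sub(r"\s+", " ", sql.strip().lower())

def consistency_report(question: str, question_to_sql) -> Counter:
    """Run every paraphrase through the chatbot's SQL step and count distinct queries."""
    return Counter(normalize(question_to_sql(p)) for p in paraphrases(question))

# Usage (question_to_sql being your own NL-to-SQL step):
# report = consistency_report("What was the most profitable month of the year?", question_to_sql)
# If len(report) > 1, some phrasings produce a different query and need handling.
```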
Richie Cotton: And related to this, do you have to do any testing of prompt quality? Like, do you do A/B testing on your prompts or anything like that?
Joaquin Marques: Yeah, depending on the needs of the client, yes. So we prepare a whole test suite that we can apply. But again, on the labeling and the data generation: I had a client, an insurance company in Los Angeles, that wanted to be able to get the readings from the dongles in the limousines and taxis that they insured, so that they could see, first, whether the drivers were behaving themselves while driving.
And second, if there was an accident, we would be able to detect it. But after more than 10,000 messages, we only found 13 of them that were true accidents. So every time I tried to train the neural network, it would say, well, it's just 13 of them, let's assume they don't exist. So it would never classify anything as an accident.
So this was a huge problem, and it's very hard to fake data from an accident. Slamming your car door, the signal is right there, and it's the same as, for example, a bike bumping against the door. So how can you tell the difference? And if you can't, how can you produce more accidents?
So I went to the government website where they test the cars and got their accident data, transformed it into the right format as if it came from the devices, and then fed it in. But that added only a few dozen examples, not enough. So in that case I was lucky, because I used a mathematical technique called the fast Fourier transform, which is used in physics, and the fast Fourier transform changed the data from acceleration versus time to acceleration versus frequency. And the frequency gave a telltale sign of the accident: there were some peaks at certain particular places that only occurred in accidents. Then I fed that result from the fast Fourier transform into the neural network, and it could easily tell one from the other. I was lucky, though, because nowadays we could use generative AI to actually produce more examples of the accident patterns without having to be lucky and find a transformation that gives you that.
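The frequency-domain trick can be sketched with numpy's FFT: the accelerometer trace is mapped from acceleration versus time to magnitude versus frequency, and those magnitudes become the features a classifier sees. The signal below is synthetic and purely illustrative.

```python
import numpy as np

def frequency_features(accel: np.ndarray, sample_rate_hz: float):
    """Map an acceleration-vs-time trace to magnitude-vs-frequency via the FFT."""
    spectrum = np.abs(np.fft.rfft(accel))                 # magnitude per frequency bin
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / sample_rate_hz)
    return freqs, spectrum                                # feed `spectrum` to the classifier

# Synthetic 1-second trace at 100 Hz: low-frequency driving vibration
# plus a sharp impact-like spike halfway through.
rate = 100
t = np.arange(rate) / rate
trace = 0.2 * np.sin(2 * np.pi * 2 * t)
trace[50:53] += np.array([3.0, -2.5, 1.5])                # crude stand-in for an impact

freqs, spectrum = frequency_features(trace, rate)
print(freqs[np.argsort(spectrum)[-3:]])                   # frequencies with the largest peaks
```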
Richie Cotton: Yes, absolutely. So that used the fast Fourier transform to detect accident data. That does sound like quite a novel sort of leap of the imagination for how to solve that. So, more generally, it does seem like generative AI is really good for creating synthetic data.
And I do like the idea that you can use that if you do have a problem with class imbalance, where you're trying to detect rare risks.
Joaquin Marques: Absolutely. That's one of the big pluses of it.
Richie Cotton: Yeah. Does it change the kind of skills that you need to work with data now, the fact that you can do different things with AI?
Joaquin Marques: Yes, because you are working at a different level. In order to take full advantage of the engine, let's say ChatGPT, you have to speak its language and use it for what it was intended to be used for. It's not like there are a lot of APIs you can go through; it's really how you set up your prompt with the right context.
Avoiding certain terminologies and not making it too complicated. One other thing that I've noticed is, for example, if the database has too many tables, then it either takes too long or it gets it wrong, because there are a lot of joins to do. That takes time by itself even if you create the query yourself, and it takes time to set up correctly.
So ChatGPT is not as good there. So you would make the tables simpler, more of a data analytics type, where you convert a relational database into a data warehouse style with lots of attributes together in a single fact table, with as few tables as possible, and you get better results. So you have to make accommodations for the limitations of the chatbot.
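One way to make that accommodation, sketched with SQLite: collapse the joins into a single wide view up front, so generated queries only ever touch one object. The table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stores   (store_id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE products (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE sales    (sale_id INTEGER PRIMARY KEY, store_id INTEGER,
                           product_id INTEGER, sale_date TEXT, amount REAL);

    -- One wide, denormalized view: the model only ever has to query this,
    -- so it never has to work out the joins itself.
    CREATE VIEW sales_flat AS
    SELECT s.sale_id, s.sale_date, s.amount, st.city, p.category
    FROM sales s
    JOIN stores st  ON st.store_id = s.store_id
    JOIN products p ON p.product_id = s.product_id;
""")

# The schema handed to the model can now be just this single view.
print(conn.execute("SELECT sql FROM sqlite_master WHERE name = 'sales_flat'").fetchone()[0])
```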
Richie Cotton: Okay, I think that's a useful tip to know: if you're trying to generate these SQL queries with lots of joins, then it's not going to work, at least at the moment.
Joaquin Marques: You may get lucky and it works if it has a very close example in its memory. But if it doesn't, and chances are it will not, then it may easily get them wrong, or be more sensitive to changes in the wording, the English wording behind the query.
Richie Cotton: Ah, yes. Okay. So the language used in the database matters, perhaps because GPT in general is better with English rather than other languages. Is that right?
Joaquin Marques: Yes, correct. For example, if you are asking something about a specific record in the database, then I found that using the word 'single' is crucial.
If you say single, it knows it's one record. If you say something else, or avoid the word single, it may give you the sum of a particular column.
Richie Cotton: Because things are moving pretty fast and there are so many developments going on right now, it's quite hard to keep track of everything. So are there any generative AI projects or tools that you are particularly excited about at the moment?
Joaquin Marques: Well, as I said, right now I'm into ChatGPT and LangChain. But I'm also looking at other aspects. I have a strong interest, as I said, in XAI, explainability. And in the case of explainability, the algorithms that work best at explaining themselves are things like decision trees, forests, that type of thing.
But if you think about the complexity of certain problems, and we already have examples of those types of problems in math, there are some math problems that have been solved with computers that no human understands in detail. We just take it for granted because people have checked specific portions of the code, but nobody will ever understand them because they involve thousands of steps.
And the same occurs with a decision tree or a forest, in that it may have thousands of decision points to give you an answer. And the best that we can do is, well, find this particular variable between this range and that range, and then you repeat that a hundred times. Who's going to understand, if you have a hundred variables?
Richie Cotton: Absolutely. It can become incredibly difficult to explain what's going on in particularly complicated models.
Joaquin Marques: Yes. And we may have to settle for more generic answers if the only way that we can explain it, even with the most explainable algorithms, is by providing hundreds or thousands of decision points.
Richie Cotton: And so, related to this, I get the feeling that over the next few months there are going to be a lot of things that claim to be exciting AI but maybe aren't. So do you have a way of deciding, or any heuristics for deciding, what is a good-quality AI tool or company versus what's just, I don't know, cashing in on the hype?
Joaquin Marques: Well, usually I read the scientific papers on arXiv rather than the press. I mean, I also read the press to see, but I don't believe it until I see the background to the idea: somebody has done certain experimentation, research, and they have a certain level of confidence that this will work. And then it's a question of reproducing it.
And if the results jive with the claims in the articles, then I'll try it. And as I said, we use as much open source as possible; open source has been fantastic for everyone. But occasionally there are those gaps where you need to have a client that's interested in paying for the research. And as I said, the advantage for them is that it stays their secret, because they paid for it.
Richie Cotton: I do find that interesting, that you mentioned having to read all these journal papers, because sometimes, if you're not involved in academia, you think, well, it's just something that happens in the background at universities.
But actually it's a really important part of your research for building business products, then.
Joaquin Marques: Yes, and also I read articles on other types of products and the level of confidence that they have. But I do tend to read the literature. For example, when I was at IBM's Watson labs, they had all the papers there. I researched as much as there was before I started inventing things with my team. That's the way to do it. You don't want to reinvent the wheel. You want to, as Newton said, stand on the shoulders of giants.
Richie Cotton: Absolutely. Alright, so is there anything you are working on right now that you are excited about?
Joaquin Marques: Well, as far as explainability, it is a big concern of mine because it's going to come up very quickly with ChatGPT. ChatGPT in general can explain, when you ask it, what the last answer is or is meant to do. But checking it is a different kind of piece, because you cannot use ChatGPT to check it.
Many times you can make it hallucinate, though not as often as it might otherwise, by restraining it and making sure that you design your queries so that it's well constrained. For example, there is one parameter called temperature that, if you set it to zero, will only give you answers that it's 100% sure are correct; in other words, that answer is somewhere in one of the texts that it has absorbed. That's still not a guarantee, but we live with it. A lot of scientific papers are not of the best quality, and that has been proven. So we live with uncertainty, and we may need to check multiple sources here and there.
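For reference, temperature is just a request parameter. A minimal example with the OpenAI client (model name is a placeholder); setting it to zero makes the sampling as deterministic as the API allows, which, as noted above, still isn't a guarantee of correctness on its own.

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    temperature=0,         # least random sampling: the model sticks to its most likely answer
    messages=[{"role": "user", "content": "In one sentence, what does a SQL GROUP BY clause do?"}],
)
print(response.choices[0].message.content)
```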
GPT-4 can give you sources, so you can go and check and make sure that, yes, indeed, that answer jives with what people already know. But in order to create guardrails, to keep generative AI in general, not just ChatGPT but DALL-E and others, from going off the rails, you need two things.
You need a set of rules, policies, that keep you within the boundaries, and you have it act as an expert outside that reads the answer. Now, this means it cannot give you the answer word by word, because it needs to check the whole answer before giving it to you to make sure it is correct. So this will add additional time.
So that's a consideration: you don't get your answer right away, you may want to go for coffee and come back. But you have an independent check on the answer to make sure that it abides by all the guardrails that you've set up.
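A highly simplified sketch of that outside-expert pattern: the full answer is drafted first, a separate rule-based check reads the whole thing, and only answers that pass are released. The policy rules and model name are illustrative assumptions, not a real guardrail product.

```python
from openai import OpenAI

client = OpenAI()

BANNED_TOPICS = ["internal pricing", "employee salaries"]   # example policy rules

def violates_policy(answer: str) -> bool:
    """Stand-in for the outside expert: a rule-based read of the complete answer."""
    return any(topic in answer.lower() for topic in BANNED_TOPICS)

def guarded_answer(question: str) -> str:
    draft = client.chat.completions.create(
        model="gpt-4o-mini",       # placeholder model name
        temperature=0,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    # The whole draft is checked before anything is shown to the user,
    # which is why this pattern cannot stream the answer word by word.
    if violates_policy(draft):
        return "I can't share that. Please rephrase your question."
    return draft

print(guarded_answer("How do I summarize monthly sales by store?"))
```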
And the second piece is a planner. What do I mean by a planner? Basically something that will say, okay, we have these facts and they want to know if we can meet these goals starting from those facts, and it creates a plan and makes sure every step of the plan abides by all the rules. Let me give you a small example: the cannibals and missionaries problem, crossing the river.
Richie Cotton: I don't know. Oh, cannibals and missionaries, I think I might have heard it with different people crossing the river, but go on, tell the story.
Joaquin Marques: It's a famous problem. It never happened, but there are three missionaries and three cannibals on one bank of the river, with the boat on that bank.
And basically, if at any time you have, on either bank or in the boat itself, more cannibals than missionaries, the missionaries will be gone after a while. So you have to solve the problem by always keeping them in equal numbers, both cannibals and missionaries, or more missionaries than cannibals.
And imagine that you were to draw all the different ways in which this can happen. You start with three and three on one bank; in other words, the six people on one bank, with an empty boat on that bank. One possibility is that the boat is on the other bank, so game over, they can't do anything.
But other possibilities lead to situations where you have more cannibals than missionaries. So imagine you have all the possibilities, and you cross out every one where the rule is violated. So you prune the tree, and you use that tree to make the plan. You are guaranteed that you always succeed, because all of the dead ends have been cut out of the plan.
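For the curious, the pruned-tree plan described here fits in a few lines of Python: enumerate the possible boat trips, discard every state that breaks the rule, and search what is left. This is a sketch of the classic puzzle itself, not of any particular chatbot system.

```python
from collections import deque

# State: (missionaries_on_left, cannibals_on_left, boat_on_left)
START, GOAL = (3, 3, True), (0, 0, False)
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # people carried across by the boat

def is_safe(m, c):
    # A bank is safe if it has no missionaries or at least as many missionaries as cannibals.
    return (m == 0 or m >= c) and ((3 - m) == 0 or (3 - m) >= (3 - c))

def neighbors(state):
    m, c, boat_left = state
    sign = -1 if boat_left else 1          # the boat moves people away from its current bank
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and is_safe(nm, nc):
            yield (nm, nc, not boat_left)  # unsafe states are pruned here, never expanded

def plan():
    # Breadth-first search over the pruned state tree.
    frontier, parents = deque([START]), {START: None}
    while frontier:
        state = frontier.popleft()
        if state == GOAL:
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parents:
                parents[nxt] = state
                frontier.append(nxt)

print(plan())  # sequence of safe states from (3, 3, boat on left) to (0, 0, boat on right)
```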
Now, this is a very simple problem to solve, but what happens if the possibilities are basically endless and you do not know how it's going to progress? Like in a game of chess, there are so many possibilities. It's inconceivable that you could account for them all and prune all the ones that are wrong.
So you need to set up a dynamic policy that calculates everything, maybe two or three moves ahead from any point, and then you proceed according to the answer from that policy. In other words, you prune just a small part of the tree, and then you prune again at the next step, and so on.
We would need to do something like this in ChatGPT to keep it from hallucinating and going in the wrong direction. You also need a causal engine or model that, for example, says, okay, certain things are impossible because of natural laws. You can also use that as a policy criterion, because you need to check for breaking natural laws when you do things.
Of course, that would be terrible if you are interpreting, for example, a science fiction novel or a fantasy novel, because it would violate them. But you could have your own rules in an imaginary world, saying this is valid and this is not valid, and then have it interpret accordingly. That's one of the big advantages.
But if we were to implement a combination of these, I believe that we could keep ChatGPT within reasonable boundaries, and it will not take you off on a wild chase that leads nowhere.
Richie Cotton: That was a lot to take in. Let me make sure I've understood this. So, you're saying that if we use, say, ChatGPT or another large language model as part of a bigger AI system, where you would have some sort of chess-engine-type thing where you are pruning decisions in order to limit the scope, and also maybe have some kind of factual engine, I don't know, maybe something based on data like Wolfram Alpha, or even just some sort of fact-checking thing, then that would provide a better AI experience. Is that correct?
Joaquin Marques: Yes.
Richie Cotton: Alright, brilliant. Okay, I think you've just solved the problem of AI. That's brilliant.
Joaquin Marques: Well, no, it's going to be quite a task. This is no easy solution to come up with, but it's worth it. There have been attempts at creating models of cause and effect to interpret the world, but it's too complex to capture in full. You might be able to do it for specific domains, though, and it's worth it, because then the feedback you get from an engine like ChatGPT will surely be reasonable and possible, and it will obey the rules that you set up. That would be great. Or it will tell you it's impossible, and it will explain why.
Richie Cotton: I think this leads back to chatbots, because in a chatbot situation, you really want it to be constrained in the answers it's giving you.
And so having less freedom is often better in a business situation than having a really broad AI that can say anything.
Joaquin Marques: Yes, because there are also people, even people working with chatbots, saying, oh, if we only throw more facts into the equation to train engines like ChatGPT, another emergent property will all of a sudden appear and it will gain reasoning. If you feed it crap, it will learn it, and it will take it for granted, and they will say, well, something marvelous happened, and all of a sudden you were at A and you are at B, where you wanted to be, because of this magic.
That makes no sense. So depending on the problem, you need to constrain it to the right domain and then come up with the rules, the cause and effect rules and the planning engine that will allow you to go from A to B within that world, that context.
Richie Cotton: Fantastic. Before we wrap up, do you have any final advice for organizations wanting to adopt generative AI?
Joaquin Marques: Well, they should get as much experience upfront as possible, as much advice as possible, to avoid going on a wild chase. The researchers nowadays are well trained, but they don't have that level of real-life experience. They haven't had multiple failures, like we all have, and learned from them more than from the successes.
And to know what to avoid and what to go for, and to tell the companies they work for, this may lead you nowhere, it's not a guarantee, and to be listened to as well. So you can tell a client, no, I've done that three times, against my advice, and it hasn't worked. Not that I don't make mistakes; I still do, but I make different mistakes.
Richie Cotton: Thank you very much for coming on the show, Joaquin! I hope you enjoyed the experience. Thank you.