
The Art of Prompt Engineering with Alex Banks, Founder and Educator, Sunday Signal

Alex and Adel cover Alex’s journey into AI and what led him to create Sunday Signal, the potential of AI, prompt engineering at its most basic level, chain of thought prompting, the future of LLMs and much more.
Updated Apr 2024

Guest
Alex Banks

Alex Banks has been building and scaling AI products since 2021. He writes Sunday Signal, a newsletter offering a blend of AI advancements and broader thought-provoking insights. He also shares his expertise on X/Twitter and LinkedIn, where he educates a diverse audience on leveraging AI to enhance productivity and transform daily life.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

I see prompt engineering as a foundational skill that everyone should take time to learn so that they become discerning users of language models that can be super, super powerful and ultimately supercharge the work that they're doing today.

AI literacy fundamentally starts with getting the most out of the current systems on the market. And there's no better place to start than understanding prompt engineering. Getting the most out of the systems, your input is only as good as your output. That is the foundational truth that must be built up from. The next, I think, is a more proactive approach, which is recognizing the current landscape of generalized tools such as ChatGPT, image tools such as Midjourney and audio tools such as say ElevenLabs. Being proactive and playing a core part to help build and determine your own organization's AI strategy and AI roadmap can be a really useful thing to not only get ahead but stay ahead in this wonderful time that we are going through today. What I mean by that is thinking about it from a specific use case lens. What key problems are my business facing right now or am I facing as an individual right now? And what tools can I think of that could help me go from A to B in a far quicker and more effective fashion than if I were to go about it alone? And a simple Google search query or even asking ChatGPT can often uncover non-obvious insights and tools, techniques that you can use to infuse inside of you or your organization's strategy to ultimately go and achieve something great.

Key Takeaways

1. The effectiveness of AI outputs heavily relies on the quality of the prompts given. Investing time in learning and practicing prompt engineering can drastically improve the utilization of AI tools in various professional tasks.

2. For intricate queries or problems, employ chain-of-thought prompting. This technique, which guides AI step by step, can yield more accurate and detailed answers, especially in areas requiring deep reasoning or exploration.

3. Apply the LARF (Logical consistency, Accuracy, Relevance, Factual correctness) framework to assess the quality of AI responses. This approach ensures that the information you use or share is reliable and appropriate for the context.

Transcript

Adel Nehme: Hello everyone, I'm Adel, Data Evangelist and Educator at DataCamp, and if you're new here, DataFramed is a weekly podcast in which we explore how individuals and organizations can succeed with data and AI. Since the launch of ChatGPT, probably one of the single most trending terms in the generative AI space outside of ChatGPT has been prompt engineering.

The act of tuning your instructions to get the best possible response from ChatGPT is treated like alchemy by some and science by others. So what makes the most effective prompt for ChatGPT? Enter Alex Banks. Alex Banks has been building and scaling AI products since 2021. He writes Sunday Signal, a newsletter offering a blend of AI advancements and broader thought provoking insights which we have linked below.

His latest course on DataCamp is Understanding Prompt Engineering, which we have also linked below. And he consistently shares his expertise on LinkedIn and X. Throughout the episode, we spoke about strategies for building more effective ChatGPT prompts, why certain prompts fail and others succeed, how to best approach measuring the effectiveness of a prompt, what he thinks the future of prompt engineering will look like, and a lot more. If you enjoyed this episode, make sure to let us know in the comments or on social. And now, on to today's episode.

Alex Banks, great to have you on DataFramed.

Alex Banks: Adel, it's a real pleasure.

Adel Nehme: Thank you so much for coming on. So you teach the Understanding Prompt Engineering course on DataCamp and you write the Sunday Signal, which is a great AI newsletter that I highly recommend everyone subscribe to. And you're also super active in the AI community.

So maybe before we get into discussing prompt engineering, walk me through how you got into AI and what led you to create the Sunday Signal.

Alex Banks: Sure, well, thank you very much for having me, Adel. To give you a bit of an idea of my background and how I got involved in the AI space, I started creating content at the beginning of January 2022, and I was immediately taken aback by the opportunity of AI, which is essentially on-demand intelligence.

And what I mean by that is it's way cheaper and way faster than what you can get from humans. And that fundamentally sparked my curiosity. What it led me to realize was that, if you look at the explosion of stuff people are building right now, it ultimately seems like AI is the platform everyone's been waiting for. I'm a firm believer that there will never be a technological advancement in our lifetimes that diffuses as fast as AI, and the beauty now is that we have tools that can augment human potential and allow anyone to become a storyteller.

And for me, that storytelling component is just such a foundational one, in anything from business to leadership to writing. I'm a big believer that one of the most important traits of an entrepreneur is being someone who can tell a story. And many people don't think, well, at least for me, I didn't think I could be a good storyteller.

I was always quite reserved as a child, but AI allowed me to realize that potential to the nth degree, whereby there are now tools that you, I, anyone else can use, like ChatGPT, video tools such as RunwayML, image tools like Midjourney, and coding tools like Cursor, that dramatically reduce the barriers to create something meaningful.

And that really excites me. And how that led me to get started writing Sunday Signal, Adel, was that I was at the beginning of writing on what was previously Twitter, now X; I'm still trying to get past that naming friction point. And I was starting to look for nuggets of wisdom that I could infuse into my writing.

And luckily, Twitter's great in that it forces you to put your writing into 280-character chunks to really distill, in essence, the meaning of what it is you want to convey. And what I realized was that my Twitter feed was just getting noisier and noisier in this sea of chatter, and it always felt difficult to cut through the noise.

And one way I overcame that, Adel, was by creating these things called Twitter lists, which are a great way to build a curated feed that I could digest a lot more easily and take my learnings from. And the questions that helped inform that were: who do I trust? Who do I respect? And how do I deploy this in my own writing so that I'm not only telling stories, but also delivering insights at the same time?

And what that led me to realize was that, look, if I'm going to write something meaningful, I may as well do exactly what it says on the tin. And Sunday Signal is exactly that: it gives you signal in a sea of noise every Sunday with my favorite AI highlights. One article, one idea, one quote, and one question to ponder every Sunday, straight to your inbox.

And what I love about that is it fuses both of my curiosities: number one, the cutting edge of AI, and number two, you've got the timely and the timeless ideas, the timely being the AI highlights, and the timeless being an article by Paul Graham or an idea that can stand the test of time for centuries to come.

And I think for me that is such a beautiful fusion, following the barbell approach to ideas. And I think where that leads me to now is really a position where I'm using these AI tools, or at least trying to be a more discerning user of them, as I keep on infusing what I learn, whether that be off Twitter or a subreddit, into my writing, and ultimately distilling the signal from the noise.

Adel Nehme: Yeah, and I definitely agree there on the signal from the noise. I highly recommend, as I mentioned, Sunday Signal: jam-packed with information and value amidst a lot of the noise out there in the chatter. And, you know, you mentioned generative AI tools being able to reduce the barrier to entry for many tasks that we previously thought required a lot of skill or had a high barrier to entry, as you mentioned: coding, writing, creating images, creating videos. And, you know, the key to creating these really high-quality outputs from AI systems rests with effective prompt engineering, which is the meat of today's conversation.

So maybe to deep dive into that, can you give me an example or a trait of what makes a good prompt in ChatGPT?

Alex Banks: Yeah, look, Adel, the number one piece of advice I've picked up for writing an effective prompt in ChatGPT is that your output can only be as good as your input. And I think that resonates deeply with me, where if I were to ask, "Write me an essay on Formula 1," it's going to produce something pretty vague.

It's going to be okay, but it's going to be super, super vague. When I start to highlight my interests, ideas, and preferences, a really clever thing starts to happen, whereby you and the language model start to make new connections across domains, and it learns from you iteratively to get your desired response.

And perhaps if I could distill that into some of my favorite strategies that I like to use to ultimately get the best results from prompting ChatGPT, I think it would be useful to highlight some of these. So straight off the bat, number one: clarity. It's absolutely vital to include the relevant context to reduce ambiguity.

For example, if I'm a teacher wanting to create a lesson plan, I can quite clearly state: look, I am an eighth-grade math teacher preparing to teach trigonometry. And what that does is immediately give the language model a clear picture of who you are and what it is you're wanting to achieve. Following that, number two: specificity.

So the more specific you are, the closer you get to your desired answer. For example, following on from that trigonometry lesson: can you also recommend some creative real-world examples to make this lesson on trigonometry more engaging for my students? So, all of a sudden, it has the context of who you are, and it has some specificity: the length of time the lesson is going to run for and also the number of students. Number three, and this is something that I think is quite non-obvious, is sometimes keeping the prompt open-ended, Adel. Allowing ChatGPT to think outside the box can often yield richer results that were non-obvious from the outset.

So look: can you also recommend some creative real-world examples so that when I'm teaching this lesson on trigonometry, it can be more engaging for my students? And that's really wonderful, because it's all about sparring back and forth with the language model and uncovering ideas that you would never otherwise have arrived at.
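The three strategies above (clarity, specificity, and an open-ended ask) can be sketched as a simple prompt builder. This is an illustrative sketch only; the lesson length and class size are made-up details, not figures from the conversation.

```python
# Sketch: layering clarity, specificity, and an open-ended ask into one prompt.
# The 45-minute / 30-student details are illustrative assumptions.

def build_prompt(context: str, specifics: str, open_ended: str) -> str:
    """Combine context (who you are), specifics, and an open-ended ask."""
    return " ".join(part.strip() for part in (context, specifics, open_ended))

prompt = build_prompt(
    context="I am an eighth-grade math teacher preparing to teach trigonometry.",
    specifics="The lesson runs for 45 minutes with a class of 30 students.",
    open_ended=("Can you recommend some creative real-world examples to make "
                "this lesson more engaging for my students?"),
)
print(prompt)
```

Each layer narrows the model's search space, which is exactly the "your output is only as good as your input" point above.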

Adel Nehme: That's great. And, you know, you mentioned a couple of things here, especially having an open-ended prompt. Maybe walk me through that concept in a little more detail; I've never seen someone conceptualize the open-endedness of a prompt. Walk me through examples of what that looks like in a bit more detail.

Alex Banks: I think perhaps the best example here, Adel, would be to use one of the frameworks that I really enjoy and actually use quite frequently, and it touches on that open-endedness idea quite nicely. The framework I like to use is the persona-problem-solution framework. And to give you a bit of context for this: Dave Kline, who is also a prolific creator, runs the MGMT Accelerator.

He teaches management and leadership to a whole host of individuals, and he's also an ex-Bridgewater colleague, so he worked under the reins of Ray Dalio, the prolific Wall Street investor. Anyway, he asked me: Alex, I'm wanting to create a great prompt for my leaders. They've got a host of leadership problems they want help solving.

What prompt can I use to help them answer their questions? And I spent some time scratching my head, and I thought, why not let the language model, ChatGPT, step into the shoes of the prolific investor himself, Ray Dalio? So I go: look, ChatGPT, you are an advisor to Ray Dalio. You're an expert problem solver for leadership tasks

and a renowned detailed prompt writer for large language models. And here's this list of problems that I'm wanting to solve, and I go: here are all the problems. And then the really neat bit is this, Adel. So I've defined the persona, I've specified the problems that I'm wanting to solve.

And now I come in for the third and final part, which is the solution. And I go: look, for each of these X problems, please provide, number one, three relevant online resources to solve the problem; number two, a mental model or framework to deal with the problem at hand, including an explanation of how the problem is solved using that mental model or framework; number three, a reference to Ray Dalio's book Principles with respect to the problem at hand; and then the fourth and final part, which highlights that open-endedness you mentioned earlier, Adel.

Provide a detailed prompt for a large language model to solve the problem at hand. And what we're doing here is using ChatGPT to essentially prompt itself and to think: okay, what effective technique can I use here to get the most out of my autoregressive nature of next-word prediction?

And this is me, in essence, trying to bridge the chasm between the simplistic nature of how these large language models think and something a little bit more involved and a little bit more beautiful, given the current constraints of the data and how these models compute right now.

And what I really like about that is we get to define exactly what the solution looks like. From what I specified earlier, you know, "write me an essay on Formula One," we've gone to almost the opposite end of the spectrum, where we're getting the model to self-reflect, look inside itself, and, by being inherently specific, we're really getting the most out of ChatGPT's capability.
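The persona-problem-solution framework Alex walks through can be assembled programmatically. A minimal sketch, where the persona, the problem list, and the output spec are hypothetical placeholders following the structure he describes:

```python
# Sketch of the persona-problem-solution framework described above. The
# persona, problem list, and output spec are illustrative placeholders.

def build_pps_prompt(persona: str, problems: list[str], solution_spec: list[str]) -> str:
    """Assemble persona, numbered problems, and a required output format."""
    problem_lines = "\n".join(f"{i}. {p}" for i, p in enumerate(problems, 1))
    spec_lines = "\n".join(f"- {s}" for s in solution_spec)
    return (
        f"{persona}\n\n"
        f"Here are the problems I want to solve:\n{problem_lines}\n\n"
        f"For each problem, please provide:\n{spec_lines}"
    )

prompt = build_pps_prompt(
    persona=("You are an advisor to Ray Dalio, an expert problem solver for "
             "leadership tasks, and a renowned detailed prompt writer for "
             "large language models."),
    problems=["My team avoids giving direct feedback.",
              "Meetings run long without clear decisions."],
    solution_spec=[
        "three relevant online resources to solve the problem",
        "a mental model or framework, with an explanation of how it applies",
        "a reference to Ray Dalio's book Principles",
        "a detailed prompt for a large language model to solve the problem",
    ],
)
print(prompt)
```

The last spec item is the open-ended part: asking the model to write its own follow-up prompt.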

Adel Nehme: Yeah, it's interesting you mention the self-reflection part, because that's what I wanted to touch upon. You know, there are also a few techniques that you share in the course, which we're going to expand upon throughout our conversation, that lead to that kind of self-reflection. And it's interesting seeing how that provides better results; even phrases such as "take a deep breath" or "think step by step about your actions" create an interesting dynamic within the language model that lets it self-reflect and produce better output.

So maybe walk me through the importance of that self-reflection element and what you've seen while prompting tools like ChatGPT.

Alex Banks: Yeah, it's really interesting, and you've probably seen some viral examples pop up: number one, "take a deep breath," or number two, "I'll pay you $100 if you create a great output," and the list goes on and on, Adel. And it's really, really interesting to see, and something that I think is still very hard to find a clear-cut explanation for.

All we're doing right now is simply learning how these models behave more and more, and as a result, we get more information to determine how to prompt the best output. Now, phrases like this, saying "I'll tip you" or "take a deep breath," are giving the language model more space and more breathing room.

And from the outset, that seems quite obvious. But when you really think about it: why would I, as a next-word prediction tool, produce a better answer because there's a hundred-dollar tip at stake? Right now, that is the non-obvious bit, and we still don't have an answer for it, quite honestly, Adel.

But we are gathering more and more information, more and more data, as to what makes these systems perform well. And that part really excites me.

Adel Nehme: Yeah, definitely. And, you know, we're talking here about a few different ways to optimize your prompts. You mentioned the persona-problem-solution framework; you also mentioned open-endedness, and clarity to reduce ambiguity. I think one big challenge for me when prompting tools like ChatGPT is that I don't know how to evaluate whether a prompt is effective or not. What are ways that you can evaluate the responses of ChatGPT and evaluate the effectiveness of a prompt, somewhat systematically and from a more scientific perspective than just eyeballing how the output changes?

Alex Banks: Yeah, I think that's such a fantastic question, Adel. And there's a really simple acronym here that I like to use to effectively evaluate responses from ChatGPT, and that is LARF. Now, it isn't as humorous as it sounds; the acronym is L-A-R-F. And whilst I go into a lot more detail in the Understanding Prompt Engineering course, I think it would be useful to give a high-level overview of how this can make you a more discerning user of ChatGPT who can effectively evaluate

your responses. So starting with L, which stands for logical consistency. Why I like to start here: if I'm asking ChatGPT, look, what are the benefits and drawbacks of the drag reduction system on a Formula One car? As you can see, Formula One is quite a recurring theme throughout this podcast; it happens to be one of my favorite pastimes.

And it states this list, and on the benefits it says it makes the car's top speed higher, but then on the drawbacks it also says it makes the car's top speed higher. All of a sudden you've got this contradicting statement saying that the same thing is both a benefit and a drawback.

Whilst it's quite a simple example, it highlights that models are fallible; they do make mistakes, and using a discerning human eye to review the output and check for that coherence is, I think, super, super valuable. Moving on, you have accuracy, and this tendency for models to hallucinate. What I mean by hallucination is that ChatGPT can often state an answer.

It can often confidently state an incorrect answer. So if you ask, look, who was the first person to walk on the moon, and it says, oh, it was Buzz Aldrin; obviously, the correct answer is Neil Armstrong, with Buzz Aldrin being the second person. So it's super, super useful to cross-reference these answers with alternative resources.

For example, you can add things such as the browsing capability, or even use plugins or GPTs that can reference papers and resources, ultimately infusing the output with factual data that can lead to a better response. R stands for relevance, which is essentially meeting the context.

And what I mean by that, Adel, is you're essentially ensuring that the response aligns with the context and with what you actually wanted to get out of the answer when you were writing the prompt. So if you're asking for, you know, a list of great restaurant recommendations in London, and it says, oh, here's this recommendation, but it's in New York City, all of a sudden it isn't meeting the context of what you wanted to achieve.

Now, I think similarly, tools can be a fantastic way of overcoming these limitations, and I'm sure we'll get on to those a little bit later. And then the final part of this acronym: F, factual correctness. As we're all aware, these models have a cutoff date, and when you ask a question without the context of online browsing, it's unable to tell you what happened in, say, January of 2024.

Now, it might answer, but it would be hallucinating, which we talked about earlier, confidently stating an incorrect answer; for example, who won the World Cup, or other great sporting events or happenings that took place past this cutoff date. Now, why I think it's really important to understand these is that it equips you with where ChatGPT's strengths but also its limitations lie, and by understanding both, it really allows you to get the most out of the model and enables you to achieve whatever task or action it is that you set out to do.
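As a rough illustration, the L in LARF can be partially automated: the DRS benefits-and-drawbacks contradiction above is just a set intersection. The other three dimensions resist naive checks and need human review or cross-referencing, so they are left as review questions. This is a toy sketch with illustrative lists, not a real coherence checker.

```python
# Toy LARF helper: the logical-consistency check mirrors the DRS example above,
# flagging a point that appears as both a benefit and a drawback. This is a
# naive string comparison, not a real coherence checker.

def logically_consistent(benefits: list[str], drawbacks: list[str]) -> bool:
    """Return False if any point is listed as both a benefit and a drawback."""
    return not ({b.lower() for b in benefits} & {d.lower() for d in drawbacks})

# The remaining LARF dimensions, as review questions rather than code:
LARF_QUESTIONS = {
    "Accuracy": "Do the stated facts survive cross-referencing against trusted sources?",
    "Relevance": "Does the response actually meet the context of the prompt?",
    "Factual correctness": "Could the answer be stale given the model's training cutoff?",
}

benefits = ["higher top speed on straights", "more overtaking"]
drawbacks = ["higher top speed on straights", "less reliant on driver skill"]
print(logically_consistent(benefits, drawbacks))  # the contradiction is flagged
```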

Adel Nehme: And you know, one thing you mentioned that I'd like to latch on to and expand a bit more on is the aspect of hallucination with large language models. We've definitely seen both pretty funny, high-profile cases of large language models hallucinating in public, but also a bit of a dark side to it: models like ChatGPT, and especially image and video generation models, tend to have biased outputs.

You know, maybe that's been fixed now, but a year ago, if you had asked Midjourney for a picture of five doctors, those five doctors would most likely be of one particular demographic rather than another. Maybe walk me through ways you can leverage prompt engineering to minimize bias and these types of harmful aspects of the outputs of large language models and AI generation tools in general.

Alex Banks: Yeah, I think it would perhaps be useful to tie this to an example, Adel, and ways you can overcome it. The most prolific example I've seen recently is the reversal curse, which was first highlighted to me in Andrej Karpathy's Introduction to Large Language Models video. And the way this reversal curse works is: if you ask ChatGPT who Tom Cruise's mother is, it will respond that it's Mary Lee Pfeiffer, which is correct.

But if you ask who Mary Lee Pfeiffer's son is, ChatGPT will respond, "I don't know; as a large language model..." and that usual spiel that it provides. And that's really interesting, because it knows who the mother is, but it doesn't know who the son is.

And what does that show? It shows that ChatGPT's knowledge is very one-dimensional. What I mean by that is you have to ask questions from certain angles, in certain ways, to peer in and find the answer. And this is, you know, very unlike other feats of engineering, from both a software and a hardware lens, because we still don't know exactly how these models work.

And what that shows is, number one, an inherent flaw in its knowledge and understanding, but number two, how that leads into biases, which are so often a mirror of society and of the quality of the data set these models have been trained on. Given that it's ingesting an enormous quantity of information, typically web documents, text, and so on, when you ingest that, it actually amplifies the biases present inside the data, and these include, you know, stereotypes and misinformation.

Asking a question as simple as who typically cooks in the household, and getting a gendered answer back, quite clearly represents a bias that might have been absorbed from historical or cultural data.

And the best way I've found to overcome this is to use tools that can address this factual-correctness issue. So I like to use web browsing, because I like to get up-to-date information, and I like to use tools that reference an archive of papers, because then I get to fact-check the answers that I'm receiving.

And what I like about that is it allows me to overcome these shortcomings and avoid any invention of details or facts, which can so often lead to incorrect outputs that, if used in sensitive settings, can ultimately be quite devastating.

So it's really important that we use the appropriate tools in the appropriate setting to get the answers that we want.

Adel Nehme: Yeah, and you mentioned tools here, specifically using ChatGPT's browsing capabilities, or maybe using other GPTs to fact-check. What are some non-obvious tools that you've worked with that have helped you improve the quality of your prompting?

Alex Banks: Yeah, such a great question. There are some that come to mind. If I'm looking at, say, the GPT Store, a tool like Scholar AI, for example, essentially acts as an AI scientist: it searches across 200 million-plus peer-reviewed articles, and you're able to create, save, and summarize citations.

And the beauty of doing that is you get to extract figures and tables from lots of peer-reviewed articles. And why is that good? It's good because ChatGPT is famous for hallucinating, and that's a serious problem when you're wanting to write and develop source material. So by being able to directly

query relevant peer-reviewed studies that link directly to the data, all of a sudden we start to enter this new era of value creation with AI, where now I can create something really meaningful, backed by data and factual correctness, by being able to tap into this wonderful resource of human knowledge and intuition on demand.

Adel Nehme: And, you know, let's take a bit of a step forward here on our prompting journey, because one of the more advanced prompt engineering techniques covered in the course is chain-of-thought prompting. I've seen you write about this, and I've seen the community write about this quite a lot.

Maybe walk us through what chain-of-thought prompting is in a bit more detail, and share some examples of why chain-of-thought prompting is so effective at producing good outputs from tools like ChatGPT.

Alex Banks: Yeah, absolutely. It's such a great tool to add to your arsenal when you're wanting to create great outputs using tools like ChatGPT, Adel. So, what is chain-of-thought prompting? Chain-of-thought prompting is a fairly advanced technique whereby you're not just giving ChatGPT examples, but actually providing a roadmap of how to arrive at the answer.

And what I love is that you get to be almost a guiding hand, steering the model exactly where you want it to go. So if I'm solving a homework problem, or if I'm traveling, or if I'm doing something unique and specialized that is typically outside ChatGPT's knowledge base, I can get the model to think step by step.

So, there are different ways we can break this down, Adel. You've got zero-shot chain of thought, where you're going: look, here's this scenario, just think step by step and go for it. You're almost throwing ChatGPT in the deep end, where you're getting it to reason and think through the problem, but you don't give it a predefined set of thoughts to reason through.

Now, that can be useful: you get to peer inside the model's thought process and verify and trust its conclusions. But I feel there are better ways of going about this, which involve one-shot and few-shot chain-of-thought prompting. One-shot really just means one example.

So: how can I provide an example of solving this problem that ChatGPT can learn from, to ultimately inform the output it's going to respond with? Few-shot just means a few examples: I'm going to give a few different scenarios, with some nuances and subtleties, for how to reason through this answer.

And these techniques are really, really great because they ultimately help shape the answer that you want to achieve. You're very much, as I said earlier, handholding the model and directing ChatGPT to get exactly, or at least as close as possible, to the answer you'd like, as I'm sure you'll explore inside the Understanding Prompt Engineering course.

For example, if we use one of my favorites: I'm an astronaut in space. I met some aliens. I avoided two, I met three, I said goodbye to four. How many aliens did I meet? And then the reasoning would be: okay, well, does avoiding an alien mean that I met them? No. So therefore don't include it in my answer.

Little subtleties that the model may or may not have assumed can now be easily verified and confirmed as the right or wrong thing to include when reasoning through the problem. And that's the beauty of chain of thought.
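The zero-shot versus one-shot distinction above can be made concrete as chat-style message lists, using the common role/content convention. No API call is made here, and the worked bird example is hypothetical, chosen to mirror the astronaut-and-aliens reasoning:

```python
# Sketch: zero-shot vs one-shot chain-of-thought prompts as chat messages.
# The bird example is a hypothetical worked example for the model to imitate.

question = ("I'm an astronaut in space. I met some aliens: I avoided two, "
            "met three, and said goodbye to four. How many aliens did I meet?")

# Zero-shot: no worked example, just an instruction to reason step by step.
zero_shot = [{"role": "user", "content": question + " Let's think step by step."}]

# One-shot: a worked example first, showing the reasoning we want copied.
one_shot = [
    {"role": "user", "content":
        "I saw five birds. I scared off two and fed three. "
        "How many birds did I feed? Let's think step by step."},
    {"role": "assistant", "content":
        "Scaring off a bird is not feeding it, so only the three I fed count. "
        "Answer: 3."},
    {"role": "user", "content": question + " Let's think step by step."},
]
```

Few-shot would simply prepend more worked user/assistant pairs before the final question.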

Adel Nehme: Yeah, and I've experienced that beauty as well, you know, especially for writing tasks, because you're able to show the model examples of your own writing, and it's able to emulate your voice and tone, your structure and phrasing. So being able to provide examples is very useful for these types of use cases when working with ChatGPT.

Alex Banks: Yeah, examples can be super, super helpful, and I often use them when I'm crafting emails. I like to write emails in my own voice, in my own tone. And if I'm wanting to provide examples and ultimately get a response for something quite complex, I can put in some unstructured thoughts, some unstructured ideas, Adel, and present some examples of how I typically write my emails.

And all of a sudden, ChatGPT will sound quite close to how I like to write, which is a really beautiful thing.
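This email workflow is few-shot prompting for style rather than reasoning, and it can be sketched as a message list where past emails act as the examples. The system instruction and the sample email below are hypothetical placeholders; in practice you would paste your own real emails.

```python
# Sketch: few-shot style transfer for emails. The sample email and system
# instruction are hypothetical placeholders.

def style_messages(examples: list[str], request: str) -> list[dict]:
    """Build a message list that shows past emails before the new request."""
    shots = "\n\n".join(f"Example email {i}:\n{e}" for i, e in enumerate(examples, 1))
    return [
        {"role": "system",
         "content": "Write emails in the same voice and tone as the examples."},
        {"role": "user",
         "content": f"{shots}\n\nNow draft an email that: {request}"},
    ]

messages = style_messages(
    examples=["Hi Sam, quick one: can we push our call to Thursday? Cheers, Alex"],
    request="thanks a client for their feedback and proposes next steps",
)
```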

Adel Nehme: Now, there's one thing I'd be remiss not to ask you about, given that we're having this discussion, which is something that I, and quite a few people in the ChatGPT and AI communities, have picked up on in the past couple of months: ChatGPT seems to be becoming more and more lazy.

I'm not sure if you've noticed that as well, where ChatGPT's outputs are becoming more laconic, more terse, and a bit less useful or helpful than before. I've even seen the OpenAI leadership team address this in the past. One: have you seen this in your own experience of using ChatGPT?

And maybe, if so, what are ways we can leverage prompt engineering to reduce some of that downside?

Alex Banks: It's really interesting, and I definitely saw a bit of a drop-off from when GPT-4 was released to some of the more recent outputs I was receiving. Now, whether or not that's a compute constraint, with hundreds of millions of users querying these models and different or perhaps scaled-down versions being used to provide answers, remains an open question.

But what we can think about here, in terms of getting better responses, is to use some of these techniques that we highlighted previously. So using chain of thought, using better examples, because ultimately that final stage of training these models before they are put in the hands of consumers is reinforcement learning from human feedback.

That's really the best way: choosing the best responses to these answers and getting the model to ultimately understand what constitutes a great answer and how it can act as humanly as possible. So using examples can be such a great way to help inform and steer the model, which can otherwise so often go off on a tangent and provide irrelevant and generalized responses that aren't specific to the problems that you're wanting to solve.
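The two fixes above can be sketched as a small prompt wrapper: an explicit chain-of-thought instruction plus a worked example that shows the depth of answer you expect. The function, question, and example are all hypothetical.

```python
# Wrap a question with a step-by-step instruction and a worked example,
# two common ways to push a model toward fuller, less terse answers.

def add_depth(question, worked_example):
    """Prefix a chain-of-thought instruction and an example answer."""
    return (
        "Answer the question below. Think step by step and show your "
        "reasoning before the final answer. Match the depth of this example.\n\n"
        f"Example:\n{worked_example}\n\n"
        f"Question: {question}"
    )

worked_example = (
    "Q: Is 91 prime?\n"
    "Reasoning: 91 = 7 * 13, so it has divisors other than 1 and itself.\n"
    "Answer: No."
)
print(add_depth("Is 97 prime?", worked_example))
```

The example answer does double duty: it anchors both the reasoning style and the expected level of detail.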

Adel Nehme: Yeah, brilliant. So maybe let's switch gears here a bit, Alex, and instead of talking about best practices in prompt engineering, let's take a higher-level look at prompt engineering as a skill set, or maybe as a career path. A big part of the conversation last year was that prompt engineering will become more and more of a career path.

And, you know, we started seeing at certain points in time roles like prompt engineer pop up in the lexicon, but also in certain job postings. Do you think that prompt engineering will become a viable career path in the future, or a skill that just everyone needs to learn, like Googling, for example?

So I'd love to hear where you sit on that debate.

Alex Banks: I see prompt engineering as a foundational skill that everyone should take time to learn so that they become discerning users of language models that can be super, super powerful and ultimately supercharge the work that they're doing today, Adel. We see this through the job opportunities that are advertised today, where if you look at companies such as Hebbia, which is a company for analyzing and searching across your documents, they're paying up to $250K for prompt engineers to join their organization and write these great system prompts that can ultimately help steer the model to be really effective

in providing great outputs to the thoughts, the reasoning, and the tasks that it is programmed to do. And clearly that has a lot of value attributed to it right now. Paying a quarter of a million to be a great writer, specifically for language models, I think is fantastic. And it really highlights the value of the opportunity that is attributed to this right now.

I think over the next, you know, half a decade, it is going to be such a core competency that everyone pays attention to. If you don't, you will very likely fall behind. You will be unable to extract the best from these models to ultimately retrieve and receive outputs that are far greater than those of the mass of your peers.

So for me it is absolutely a core skill that I'm paying a lot of attention to. And at least in the conversations that I have with my peers as well, Adel, being a great prompt engineer is something that is definitely worth your time.

Adel Nehme: And do you think we'll actually see more jobs open up like Prompt Engineer, where the core function or core responsibility of that job is writing effective prompts, or do you think it will disappear into the background of almost most roles today?

Alex Banks: Yeah. So in terms of building products on the application layer, ultimately the system prompt is really the core piece of value-add that you provide to your users, right? Where you're taking this generalized LLM and you're steering it to hyper-specific use cases. So for anything operating on that layer, I see it as absolutely not only a core competency, but also a core role that is optimized for.

As we think about future states and extrapolating across them, Adel, with the emergence of tools like GPT-5 and as we trend closer and closer towards AGI, thinking about prompt engineering starts to take a little bit of a different shape, where all of a sudden these systems are getting more and more intelligent and, as we highlighted earlier, being able to prompt themselves all of a sudden starts to become a very scary, but very real, situation.

And so I think that will then get generalized into more and more of a competency, as if you were to use Microsoft Excel, right? It's a tool that has now so many different spin-offs and run-offs that it just seems second nature. That's how I see prompt engineering going, where it is such an acute skill to learn right now, but as we learn and as we grow and as these systems scale, it will definitely become more of a common and well-understood practice that isn't valued as highly as it is right now.

Adel Nehme: And, you know, you mentioned something earlier on, system prompts for building applications, right, which I think segues to my next question really well. Now, there are mainly two profiles, I think, that need to learn prompt engineering. You have developers working with AI models, building AI applications, that need to write system-level prompts.

And then you have everyone else that needs to learn prompt engineering for working with consumer tools like ChatGPT. Maybe focusing on the developer persona a bit more deeply, what are the nuances between prompt engineering for building AI applications and prompt engineering for using tools like ChatGPT?

Alex Banks: Yeah, it's such a great question, and I think there's a lot to consider here, Adel. So, when you think about developers, you know, these guys are writing super, super long prompts; often the context window is significantly longer than "write me a poem on racing". And you're working through a very long series of persona setting, ultimately being able to determine a great poem.

We'll use the system prompt as an example, you know, using delimiters to separate the prompt into easily digestible sections that the language model can use and ultimately infuse into the output that it is creating. And so when we think about comparing and contrasting those two, the former definitely takes a greater depth.

There is definitely more rigor, but also more complexity that is added to prompts of this nature. Versus getting started on prompt engineering, you definitely don't have to be an expert, and prompt engineering for working as a developer is really just an extension of asking ChatGPT, you know, to think about drafting an outline for my next essay that I'm going to write. It all starts from the simple, which then becomes the complex. And when you start to understand that and start to see, okay, yeah, all complex systems start as simple systems, then it becomes a lot clearer what you're actually writing. And so, when we do start contrasting these together, it's simply an extension of what a really simple prompt used to be.

And all of a sudden, you're using all those techniques that we highlighted previously, Adel, such as, you know, providing examples, the output format, the context, the style, the audience, the length, being specific, being clear. You're just bundling all of those different formulas together to create something that is significantly more powerful than writing something quite generalized.

And both examples I think are really great, because everyone starts from ground zero. Especially when, you know, when I look back to when I started using ChatGPT, as I'm sure you were, Adel, you know, we were sort of bewildered and shocked at how capable these systems were, especially in an interface that is so native to us as humans, that is chat.

So, when you go from that to all of a sudden using tools, using techniques that can be used to craft detailed, exceptional prompts, that's where these great developer prompts lie. And I think that's a really beautiful thing to understand.
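The bundling Alex describes can be sketched as a delimiter-structured system prompt. This is a hypothetical example: the section names, text, and helper function are all made up, not any product's actual system prompt.

```python
# Bundle the ingredients mentioned above (context, style, audience,
# length, output format) into delimited sections a model can parse.

SECTIONS = {
    "context": "You help a SaaS support team draft replies to customer tickets.",
    "style": "Friendly, concise, no jargon.",
    "audience": "Non-technical customers.",
    "length": "Under 120 words.",
    "output_format": "A greeting, a one-paragraph answer, and a sign-off.",
}

def build_system_prompt(sections):
    """Wrap each named section in its own delimiter block."""
    return "\n".join(
        f"<{name}>\n{text}\n</{name}>" for name, text in sections.items()
    )

system_prompt = build_system_prompt(SECTIONS)
print(system_prompt)
```

Delimiters like these keep each instruction in an easily digestible section, which is exactly the contrast with the one-line consumer prompt.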

Adel Nehme: Yeah, definitely. And maybe a couple of final questions from my side before we wrap up today's episode. We've talked about prompt engineering quite a lot as a skill set, but if we want to take a step back and look at, you know, your average organization and the different profiles it has, from developers to non-developers, what do you think are going to be the must-have AI skills professionals need outside of prompt engineering? And, you know, if you were to define what general AI literacy looks like, what would the skills that make it up look like?

Alex Banks: I think a base AI literacy fundamentally starts with getting the most out of the current systems on the market. And there's no better place to start than understanding prompt engineering. Getting the most out of these systems, your output is only as good as your input. As we started, Adel, that is the foundational truth that must be built up from.

The next, I think, is a more proactive approach, which is recognizing the current landscape of generalized tools such as ChatGPT, image tools such as Midjourney, and audio tools such as, say, ElevenLabs. Being proactive and playing a core part to help build and determine your own organization's AI strategy and AI roadmap can be a really useful thing to not only get ahead but stay ahead in this wonderful time that we are going through today.

What I mean by that is thinking about it from a specific use-case lens. What key problems is my business facing right now, or am I facing as an individual right now, and what tools can I think of that could help me go from A to B in a far quicker and more effective fashion than if I were to go about it alone?

And a simple Google search query or even asking ChatGPT can often uncover non-obvious insights, tools, and techniques that you can use to infuse into your or your organization's strategy to ultimately go and achieve something great. And that could be: oh, okay, I want to create a promotional video for my tool.

I might use Synthesia to create a generative AI video or an instructional video for my users. Or it could be: okay, I'm a voiceover artist, but I'm constrained by the number of voiceovers I can do in one day. Why don't I just train ElevenLabs to understand my voice really, really well so that I can create 100 voiceovers in an hour?

And all of a sudden you're leveraging not only yourself, but your time. And that gets super, super powerful, because when you look at the other side of the fence, where individuals haven't heard of ChatGPT or haven't heard of generative AI, all of a sudden you feel superhuman.

And that is a really exciting thought.

Adel Nehme: Definitely. That is really exciting. And as we close out our episode, Alex, I'd be remiss not to ask you: this year is going to be pivotal for the generative AI space, right? GPT-5 is most likely going to be released. Mark Zuckerberg mentioned that Llama 3 is being trained. What do you think the next generation of large language models holds for us, right?

And what type of use cases will they unlock?

Alex Banks: Yeah, I think we're definitely trending towards a highly generalist intelligence system as we get closer and closer to artificial general intelligence, which is the ability to access immediate knowledge work on demand. Okay, Alex number two, I'm wanting you to write out these reports, respond to these emails, go.

And all of a sudden, it can do it in my voice, in my tone, quicker than me, because it doesn't have to sleep, it doesn't need sick leave, it doesn't have to do anything like that. And what does that mean? And where does that take us? I don't know. Well, I think there is a lot of opportunity there with respect to intelligence at the edge.

So I see, as we are scaling towards AGI, the proliferation of smaller models at the edge that are, through an organizational or individual lens, hyper-specific and hyper-tailored to you: the way you work, the way you write, your internal knowledge, organizational understanding. That is where a lot of value and a lot of alpha is yet to be exploited. And that part I'm really, really excited about, which I don't think a lot of people are paying too much attention to. But I think specific, refined models tailored to you or your organization that are super quick, super

Adel Nehme: It's going to be a game changer. I agree. Okay,

Alex Banks: be absolutely phenomenal.

So that's something that I'm really looking forward to.

Adel Nehme: That is awesome. Now, Alex, as we wrap up today's episode, do you have any final notes or call to action before we end our chat?

Alex Banks: Other than feel free to go check out the course on understanding prompt engineering, nothing else to add, Adel. You know, it's been an absolute pleasure speaking to you today. If there's one thing that I can stress, it would be: if there's anything that you think about and you're wanting to get your feet wet inside of AI or generative AI,

you will have a far easier time getting the output you desire by understanding how to control the inputs and write a great input. And the way you do that is by understanding prompt engineering, which I believe is a foundational skill for the next five, ten years. So pay a lot of attention to it, respect it, and go and move fast.

Adel Nehme: And whoever's listening, make sure to subscribe to the Sunday Signal. And with that, thank you so much, Alex, for coming on DataFramed.
