
Aligning AI with Enterprise Strategy with Leon Gordon, CEO at Onyx Data

Adel and Leon explore aligning AI with business strategy, enterprise AI agents, AI and data governance, data-driven decision making, key skills for cross-functional teams, AI for automation and augmentation, privacy and AI, and much more.
Sep 26, 2024

Guest
Leon Gordon

Leon Gordon is a leader in data analytics and AI: a Microsoft Data Platform MVP based in the UK and the founder of Onyx Data. Over the last decade, he has helped organizations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence and big data. Leon is an Executive Contributor to Brainz Magazine, a Thought Leader in Data Science for the Global AI Hub, and chair of the Microsoft Power BI – UK community group and the DataDNA data visualization community, as well as an international speaker and advisor.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp, where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations, and about the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

Many of these projects just go nowhere. The use cases have no apparent reason and it's just another tick box for an AI driven project. For us, it's really key to understand how the project's progressing, putting in place key metrics and KPIs to really monitor the project, being able to understand if we are driving value, if the project is on time, in full with no errors, and obviously understand some of the risks associated with these projects as well. This really allows us to keep everybody engaged in terms of where we are, where we're going, and how we're performing in terms of that journey.

There's a massive gray area at the moment in terms of how generative AI and these large language models and even small language models can be utilized in organizations, and that they're not just a magic box that you give your input, and what you get is 100% accurate and it's something just to run away with.

Key Takeaways

1

Before implementing AI, ensure that every use case is tied directly to business goals such as increasing efficiency, profitability, or improving market reach. This alignment helps secure stakeholder buy-in and drives measurable value.

2

To validate AI use cases, begin with small proof-of-concept projects that can demonstrate value before scaling. Focus on low-risk, high-impact areas like financial forecasting or automation of repetitive processes.

3

Virtual agents powered by generative AI can significantly improve efficiency by automating access to large document repositories, but it’s crucial to ensure accuracy and compliance, especially when dealing with multilingual and sensitive data.

Transcript

Adel Nehme: Leon Gordon, it's great to have you on the show. You are the founder of Onyx Data, a data strategy consulting firm and training provider. Maybe first to set the stage, let's talk about, you know, aligning AI with business strategy. Why is it important to align AI with your enterprise strategy?

Leon Gordon: I think in this day and age, AI cannot be overlooked. Lots of organizations at the moment don't know how to utilize it, how to leverage it, or how to implement it in their organizations. But I think it's now going to become a cornerstone alongside data, just as data literacy and becoming a data-driven organization have become the cornerstone for the large majority of organizations out there.

I think this is going to be the logical next step, in terms of how we can capitalize on it and how we can do this while still keeping an eye on ethics, the biases that can creep in, and then governance as well. So I definitely think this is going to be the next logical step for most organizations.

Adel Nehme: Couldn't agree more here. A big part of the AI hype cycle that we see today is that a lot of organizations tend to be prioritizing use cases that don't necessarily make a lot of sense; they just want to experiment with AI. Maybe, how can you make sure that you're building AI use cases that align with the business strategy and are actually driving value, especially in such a nascent field today?

Leon Gordon: In our approach, what we tend to do is try to align to business objectives. Now, the overarching business objectives for an organization could be something like, I don't know, growing their profit by 5 percent, becoming more efficient, changing the go-to-market, et cetera. Now, by looking at projects and use cases that align with the direction that the business is going in from a strategic perspective, you can then start to drive value.

Small pilots or proofs of concept as projects can be absolutely fundamental in really trying to prove out (a) the use case and (b) the value. And then once you've actually achieved that, then obviously scaling that across the organization.

Adel Nehme: Can you paint some examples of some useful applications that you've seen that fit that criteria, from companies you've worked with or companies you've observed?

Leon Gordon: From a financial perspective, we all know that financial departments are littered with spreadsheets; forecasting and budgeting is an absolutely time-consuming process and very difficult to get right. Now, by utilizing machine learning technologies, for example, a branch of artificial intelligence, we can start to look at forecasting solutions for organizations, really understanding where they are from their data perspective.

Maybe, like I said, moving along from this very spreadsheet-driven kind of process to then looking at, okay, how can we structure this data to begin with? Automation of the data movements, which we would either call ETL or ELT, the pipelines to orchestrate that into a governed environment that can then be utilized by some of these machine learning algorithms, to look at forecasting as a specific use case there. Outside of that, I mean, we tend to work predominantly in the Microsoft sphere. Now what we're understanding is that there's a big use case for virtual agents. Now, lots of organizations are very paper-driven as well. So I can cite one of our clients based out in Dubai, currently with 000 documents that they use for their field team.

Now, trying to get through and sift through all of this information, which is rich for the organization. It holds years and years of experiences, learnings, and training material as well. Being able to condense that, digitize it, and then present it back in a friendly environment, usually a chatbot that can be interacted with, and pick out the nuggets of information from that documentation, again, has become a massively efficient process.

And also, it opens up that knowledge across the organization, in the palm of your hand, whether it's a mobile device, tablet, laptop, desktop, et cetera.
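The ingest, index, and retrieve pattern described here can be sketched as a minimal keyword index over a toy document set. This is purely illustrative: a production virtual agent would use embeddings, a vector store, and access controls, and the document names and contents below are made up.

```python
from collections import defaultdict

# Toy corpus standing in for a digitized document repository.
documents = {
    "safety-manual": "Field teams must wear protective equipment on site.",
    "training-guide": "New staff complete onboarding training in week one.",
    "hr-policy": "Leave requests are submitted through the HR portal.",
}

def build_index(docs):
    """Map each lowercase word to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word.strip(".,")].add(doc_id)
    return index

def search(index, query):
    """Return doc ids matching every query word, so answers can cite sources."""
    results = None
    for word in query.lower().split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return sorted(results or [])

index = build_index(documents)
print(search(index, "training"))   # → ['training-guide']
```

Intersecting the per-word match sets means every returned document mentions all query terms, which keeps the toy retrieval precise at the cost of recall.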

Adel Nehme: That's great. And there's a couple of things I want to tease out from what you just laid out. You mentioned proof of concepts there. I think finding the low-hanging fruit within an organization for these AI use cases, whether generative AI, as you mentioned with virtual agents, or more, let's say, traditional, quote-unquote, machine learning, is actually extremely important if you want to be able to drive value over the long haul, because you want to get people excited about these use cases. Maybe, what's your framework for identifying this low-hanging fruit, and how would you advise leaders listening in to identify it and best implement proof of concepts?

And the second question I'll have is on virtual agents, but I'll come back to that. So maybe let's first talk about low-hanging fruit.

Leon Gordon: Yeah, no problem at all. So this is great timing. I've just recently released a LinkedIn Learning course titled How to Build and Execute a Successful Data Strategy, and we literally lay out this framework throughout the course: how you can identify the use cases, and the framework for then understanding which are going to be the most beneficial to the organization.

And I use a weighting matrix to do this; I'll go into it in a bit more detail shortly. And then the key thing is how you can continue to measure these activities and understand whether these projects are actually delivering, or if they're not delivering and you need to pivot early. So for us, as I mentioned before, we really start with the strategic objectives of the organization.

Now, we start at the executive level. Generally, in 99 percent of organizations across the globe, if a business has a strategic objective or a series of strategic objectives, they've been agreed at the executive level and are being rolled out across the organization. So when you're actually looking at projects that can support these already-agreed business objectives, it's much easier to actually get buy-in to then move forward with them.

Obviously, buy-in, and then being able to look at the costings, for example, the resources required, the risks. Now, all of these different categorizations have weightings that we apply to them. Let's just call it a scale of 1 to 10, for example, 1 being the least and 10 being the most. Now, within our matrix, once we have all of these weightings captured from the executives and department heads, et cetera,

we then look to apply a scoring based on these criteria. Now, generally, we end up with a few projects which have higher scores or lower scores, depending on the way you want to weight it. And it becomes fairly obvious, for the organization, which ones potentially will drive the most amount of value.
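The weighting matrix Leon describes can be sketched as a weighted-sum score per candidate project. The criteria names, weights, and per-project scores below are hypothetical stand-ins for what executives and department heads would actually supply:

```python
# Criteria weights on a 1-10 scale, as agreed with executives.
# For simplicity, every criterion is scored so that higher is better
# (e.g. a low-cost, low-risk project gets high cost/risk scores).
weights = {"strategic_alignment": 10, "cost": 6, "resources": 5, "risk": 7}

# Each candidate project scored 1-10 per criterion by department heads.
projects = {
    "financial-forecasting": {"strategic_alignment": 9, "cost": 7, "resources": 6, "risk": 8},
    "virtual-agent":         {"strategic_alignment": 8, "cost": 5, "resources": 7, "risk": 6},
    "ai-tickbox-project":    {"strategic_alignment": 3, "cost": 4, "resources": 5, "risk": 3},
}

def weighted_score(scores, weights):
    """Weighted sum of criterion scores; higher means more expected value."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

ranked = sorted(projects, key=lambda p: weighted_score(projects[p], weights), reverse=True)
print(ranked[0])   # → financial-forecasting
```

Sorting by the weighted sum makes the "tick box" project fall to the bottom, which is exactly the filtering effect the matrix is meant to produce.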

Then we go back to the executive team, get agreement on the route forward with these projects, and then obviously put these projects in motion. Now, as I've mentioned, the key here really is not just to have a project. You've mentioned it yourself at the beginning of the podcast: so many of these projects just go nowhere.

The use cases have no apparent reason, and it's just another tick box for an AI-driven project. Now, for us, it's really key to understand how the project's progressing: putting in place key metrics and KPIs to really monitor the project, being able to understand if we are driving value, if the project is on time, in full, with no errors, and obviously understanding some of the risks associated with these projects as well.

This really allows us to keep everybody engaged in terms of where we are, where we're going, and how we're performing on that journey. Now, for us again, like I say, this has proved to be the most efficient way and the best course of action, actually ensuring that we deliver tangible projects that see the light of day, or that we're able to pivot early and move on to something that potentially can drive more value as well.

Adel Nehme: There's something that you mentioned on, you know, that entire journey of building an AI use case that keeps everyone excited. We had Eric Siegel on the podcast a while ago, and he mentioned how, even if you do have a machine learning use case that is aligned with business value, if you don't get the stakeholders involved early enough so that you have insight into how to deploy it, a lot of machine learning projects tend to be doomed.

Do you want to comment on that a bit more, and on how to make sure that whatever use case or proof of concept you're building actually gets deployed and drives business value? What are the hurdles to making that happen?

Leon Gordon: So for us, stakeholder engagement is key. Generally, we look at getting it from the executive level, and then it kind of feeds down throughout the organization. As I've mentioned, something that really, again, is key for us is aligning this with business objectives that are already agreed, objectives the company is already moving towards and that potentially need a bit more of a kickstart or direction. Now, with the stakeholder engagement, it also trickles down to the heads of department, and then through the rest of the hierarchy. So again, you ensure that you have the business on board in terms of the transformation that you're moving towards.

Because for a lot of organizations that we work with, these are transformational projects, and there's obviously some resistance that can come with that. And the key to this as well is being transparent, really, across the organization: what are we looking to achieve? Why are we looking to achieve it?

And what does this mean for each of these different positions moving forward, keeping them updated with progress against this? And this is really why I think that our metrics, our weighting system, and the KPIs that we introduce ensure that we can really show that this is tangible throughout the whole process.

We track against our metrics and show the value that the project has delivered. And then for us, this is really key in terms of getting these solutions deployed, right? Because we have a really transparent story. We can see the value, we can see the change that it brings to the organization, and how this will benefit them in the long term.

So really, this whole framework for us is key to getting these solutions deployed, from start to finish.

Adel Nehme: , you spoke about virtual agents bit ago about talking about, you know, how, great use case, especially of generative AI, is to, build virtual agents that, automate you can have, you know, a knowledge graph within the organization through a chat bot.

 so, walk us through the nuances of building such use cases, especially with so much hype around, generative AI today. and is there any unique challenges associated with building these use cases or maybe how would you best approach building generative AI use cases as like any of the meta question here? 

Leon Gordon: Actually, let's begin at the beginning. A good place to start. So for us again, identifying the use case: what are we looking to achieve here, and how can we achieve it? Is a virtual agent actually required, first and foremost? Then, again, really being transparent with the organization.

There's a massive gray area, in my opinion, at the moment, in terms of how generative AI and these large language models, and even small language models, can be utilized in organizations. They're not just a Pandora's box, or a magic box, should I say, where you put something in your prompt or your input and what you get is 100 percent accurate.

And it's not something just to run away with. So we identify across an organization where they are on this maturity curve in terms of understanding these types of technologies and solutions, and then again, being very transparent, we show them and walk them through what is to be expected, some of the fallacies and nuances to be aware of, and what this potentially can mean for their business.

And the other point to look at is from a legislation perspective as well: the risks that can potentially come down the line as we continue to legislate these types of technologies. Now, we find that foundational layer really sets the scene for what can be achieved and how we can go ahead and do that.

Then we look to have an MVP. Okay, what I mean by this is a minimum viable product, for us to be able to go out to the client and potentially start with a small subset of documents. We've also been able to look at documents in different languages, and to actually prompt and receive a response in different languages as well.

So, for those of you that might not be aware, we do a lot of work in the Middle East, and in the Middle East, Arabic is one of the most predominant languages. So for us, it's really key to be able to (a) ingest documents, index them, and reference them directly, and (b) prompt and receive responses back multilingually as well. So again, we have to go through thorough testing from our perspective that all of this information actually matches what's expected to be retrieved from these documents. And a second hurdle to overcome is that some of these answers are actually set in stone. They should be static, right? So, for example, if it's company legislation or specific training material, then we shouldn't actually be giving the LLM or the Gen AI the opportunity to infer information over this response.

So again, that's a key decision for an organization: when you are retrieving this information, do you expect it to be a static response, exactly as you have it in the documentation? Or would you like the generative AI to have the opportunity to go wider and infer and enhance the response as well?
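The static-versus-generative decision can be sketched as a routing layer: questions with a mandated answer return the stored text verbatim, and everything else falls through to the model. `call_llm` and the stored policy text below are hypothetical stand-ins, not a real API:

```python
# Toy store of answers that must be returned verbatim (e.g. company
# legislation or training material), never paraphrased by the model.
static_answers = {
    "how do i submit a leave request?": "Submit leave requests through the HR portal.",
}

def call_llm(question):
    """Stand-in for a generative model, which is free to infer and enhance."""
    return f"[generated answer for: {question}]"

def answer(question):
    key = question.strip().lower()
    if key in static_answers:
        return static_answers[key]   # static: set in stone
    return call_llm(question)        # generative: may go wider

print(answer("How do I submit a leave request?"))
# → Submit leave requests through the HR portal.
```

A real system would match questions by semantic similarity rather than exact lowercase strings, but the routing decision, static store first, model as fallback, is the same.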

So a lot of this is where we see ourselves on this journey. The other aspect to look at is that this technology is changing on a rapid basis, so it's not really set-and-forget. Organizations have to be aware that, from where they're starting, there could be new features or changes that they may need to accommodate as we move forward with these types of projects as well.

Adel Nehme: One thing I want to make sure that we discuss as well: you mentioned a lot about how the technology is evolving, and I think, as the technology evolves, there's a big conversation happening about how AI will change productivity in general. And you mentioned here the use case of a virtual assistant that kind of augments work. Maybe, how do you see the nuance or divide between using generative tools to either completely automate a business process or augment a business process? What have you seen to be the dominant paradigm?

What would be your recommendation?

Leon Gordon: For me, the first question is: are we just using Gen AI because we have to? Let's not forget that there's a big use case out there for robotic process automation, which has been around a long time. So do we need to use a hammer when we can use a drill? That's the first starting point there.

Then really looking at these processes to understand where they can be more efficient. So really analyzing the processes in detail, documenting those processes, because you'll be surprised how many organizations don't have fully documented processes. And then again, really looking at, okay, what's the goal here?

So, for example, is it, I don't know, that we have somebody who's spending 20 hours a month doing one part of this process? Okay, let's look to decrease that by 75 percent. What are the options available to fine-tune this process? What do we need to be aware of in terms of the value metrics, the accuracy required? All of these types of conversations need to happen before actually going into a technical solution, and then again, being able to understand, if it's been successful, how it can be scaled.

And then, obviously, how can this be deployed? So I guess to answer this, we wouldn't always just fall directly to Gen AI or something similar. Again, it really depends on the process.
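The target Leon sketches, 20 hours a month on one process step, reduced by 75 percent, is straightforward to quantify before choosing a technical solution. The hourly rate below is an assumption purely for illustration:

```python
def monthly_hours_saved(current_hours, reduction):
    """Hours recovered per month for a fractional reduction in effort."""
    return current_hours * reduction

# Leon's example: 20 hours/month on one process step, cut by 75%.
saved = monthly_hours_saved(20, 0.75)   # 15.0 hours/month
hourly_rate = 45                        # assumed fully loaded cost per hour
annual_value = saved * 12 * hourly_rate
print(saved, annual_value)              # → 15.0 8100.0
```

Putting even a rough number like this against a candidate automation makes the later "is it delivering?" KPI conversation concrete.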

Adel Nehme: Yeah, that's great. And then you mentioned earlier in our discussion how data literacy and data-driven decision making are becoming really a cornerstone for organizations to succeed, something that we truly believe in as well. Segueing here to discuss the skills agenda in general: as data and AI technologies become more prevalent within organizations, what do you think are the skills people need to make effective use of data and AI?

Leon Gordon: I think this is a really interesting moment, especially from my standpoint, because, as I mentioned, there's a maturity curve, and I believe there is currently, across the globe, a huge gray area, okay? And it's really between data and AI. If we imagine the curve, we have data here, AI here, and there's a massive maturity gap in between.

And I think a lot of that has been driven by the mass adoption, commercialization, and investment in Gen AI technologies. Let's just take OpenAI or Claude, for example. Really, what was key for me was the interface that was presented. OpenAI obviously used the chat functionality to let people interact with this technology, and that really opened it up to the masses.

Okay, and I think that's been really key here. I remember conversations, especially when ChatGPT was first launched, where financial directors, I kid you not, have asked me, how can I query my financial data directly using this? And obviously the ramifications are absolutely mind-boggling in terms of putting that sensitive data directly into that tool.

What it meant is that organizations wanted to go leaps and bounds up the maturity curve when their foundational layer was not correct. And so what it became, for the likes of myself and other organizations out there, is very much a consultative journey to understand: okay, where are we currently from a data maturity standpoint?

So let's look at data quality and governance. Any automated pipelines, for example? Where's your data actually coming from? Who has access to that data? All of these foundational questions really help the organization understand the value of these requirements, to then be able to drive forward as new technology, like Gen AI, et cetera, becomes available.

Now for us, really, this also meant upskilling. It's not just about the executives; it's across the organization. And again, really being transparent in terms of, okay, what AI actually is. Too often we hear: AI is going to take over and rule the world, it's going to replace my job, do we get universal income, et cetera.

And so really demystifying a lot of these conversations across the organization. But again, really laying out that framework and that roadmap for this is where we are on the journey. These are the key milestones we're looking to achieve and how we're going to get there.

Adel Nehme: And you mentioned here the maturity journey. If you're really early on the maturity journey (we kind of come back to low-hanging fruit here), what can you do to advance on the maturity journey? What are those first steps you need to take?

Leon Gordon: I think, depending on the organization's size, it's probably actually a really good position to be in, because you can be a bit more agile and you can test. You can really test proof of concepts and pilots that drive value across the organization. So again, something key here: if you already have governed data sets and data quality in place, across even a small subset of data, that could be your starting point.

Okay, then start to feel out these technologies. Is it machine learning? Is it Gen AI, LLMs, et cetera, where we can really drive value across our organization? And value doesn't always have to be monetary; there are many other ways to drive value in an organization. Prove that concept out, get stakeholder engagement and buy-in, and then, I always call it the pebble in the water: the ripple effect. Once you start to see success, it spans out across the organization. Investment becomes easier, and backing for these projects becomes more fruitful as well.

Adel Nehme: That's wonderful. And then, when we talk about the skills agenda as well, one thing that we think about at DataCamp quite a bit is: what are the different skill paths different personas need to have within the organization? If you want to create an upskilling program for the entire organization, different people need different types of upskilling paths. So maybe, in your view, how do the data and AI skills that you need to build within your organization differ by role, or team, or even maybe industry?

Leon Gordon: Agreed. I think it's a very deep conversation, and a very wide one to go into, so let me summarize it at a high level. Where my mind is at the moment is that, for the vast majority of organizations we work with, individuals don't really wear one hat, especially with a lot of the tools that people interact with as well.

So tools like Power BI, et cetera, combine multiple personas into individual tools as well. So I think you have to understand where your business is looking to go, be it now or in the future, and build out a team structure that can deliver that. So, for example, I think a business analyst is key, to sit between the technical and the non-technical aspects of the organization; then look at the other skills you need.

So do you need, for example, somebody that can move the data, transform the data (a data engineer), and then model the data? And then, do you need to be looking at AI-driven solutions that require bespoke, deep technical expertise? Or can you leverage existing tools that potentially give you some exposure to that type of technology to, again, drive value, and then look at widening the team?

So I would start off, depending on the size of your organization, by confirming what you're trying to achieve, how you're trying to achieve it, and the key personas required to achieve that, again as an MVP, and then how you can grow. What does the next phase of that team look like, and then the next phase? Is it multi-skilled? Is it individually skilled? I think that's the right way forward. For individuals trying to upskill currently, and I'm involved in the industry, it's really key to hone in on what you enjoy doing. The great thing from a data perspective at the moment is that there are different directions you can go in. You have lots of options, so go and try something.

We have technology available to us very freely now. See what lands and then carve out your career from there.

Adel Nehme: You mentioned the gray zone between data and AI on the spectrum of data maturity. I think that marks actually a great segue to talk about the state of organizations being data-driven today. We've definitely seen a big push over the past decade for organizations to be very data-driven, and to make use of data for decision making. You know, you've been in the industry for a while. How have you seen, from a bird's-eye view, the progress of organizations using data for decision making?

Leon Gordon: I think it's been really interesting, because some organizations haven't moved at all. I think there's always going to be room for legacy; I don't think that's going anywhere, which is, I don't know if it's been surprising or not, a kind of foundational aspect. And you're always going to get bleeding-edge organizations as well that, as soon as new technology becomes available, are utilizing it, seeing how they can leverage it. And I think that, with the influx and growth of social media, and also the changes to infrastructure driven by COVID, this has probably become more apparent now.

So we get exposed more to the technology that's available to the organizations that are bleeding edge, and I think that we probably get a little bit of FOMO, the fear of missing out, which is what a lot of these projects and initiatives are being driven by, and then obviously, ultimately, fading. I think it's key to keep one foot in the future, the bleeding edge, but really understand what this means for your organization. And I know I keep coming back to it, Adel, but how do you drive value using these technologies? I think it's great for awareness: organizations are now more aware that they need to become more data-driven.

And they're more aware of concepts like data quality, governance, security, et cetera, that were probably only held by technical audiences in organizations previously. So I think that's definitely a win, but it's really about coupling it together, understanding what this means for your organization now and tomorrow, and potentially not trying to keep up with the Joneses, so to speak.

Adel Nehme: As AI evolves, do you think AI has the potential to lower the barrier for data-driven decision making within organizations? I've seen quite a few useful tools like Power BI Copilot or Tableau Copilot, even coding tools, right? They really lower the barrier for non-technical folks. So do you see AI playing a role in galvanizing data-driven decision making within the organization?

Leon Gordon: Yes and no; I'd say it depends. So I think what we've seen, if we look at the data available, is that experienced technical people really shine when they use these tools and copilots; non-experienced developers, et cetera, don't get as much of an efficiency enhancement. So I still think that humanity has a big place to play in these types of technologies. And that's why Copilot is such a great name, right? It's something that's going to enhance what you're already doing, as opposed to taking away, or taking away a lot of those more junior roles. And again, we have to really be conscious of the fact that it's not always accurate. Okay, so you can try to codify a solution using a copilot, and somebody that's less experienced may just run with the answer, not do due diligence and check to ensure that it's actually correct. Especially with a language quite complex to pick up, like DAX, somebody very new to that type of scripting language is going to take what's given to them as gospel.

And this is why you really need an experienced pair of hands leveraging the copilot.

Adel Nehme: Yeah, that's right. I couldn't agree more that data literacy and knowledge of data, and having those strong foundations, will be very useful in pushing back against the machine in a lot of ways and making sure that the machine is guiding you to the right decision. Maybe, to get your take as well: you've been consulting with a lot of companies, Leon, helping them on their data-driven journey.

What do you think are the most common blockers today for businesses becoming data driven?

Leon Gordon: Gosh, where do we start? So I think stakeholder buy-in is generally a pretty big one, and then after stakeholder buy-in is generally organizational buy-in, to be honest. So again, traversing that hierarchy and getting everybody on the ship, so to speak. What we've noticed, especially in Europe, with some of the economic disruptions over the last couple of years, has been budget; budget has definitely played a part and taken its toll on some of these projects.

And, like you've mentioned, there's the historic downfall of some of these projects. I think McDonald's may have been one of the biggest, or most high-profile, stories to come out recently, in terms of their relationship with IBM and the AI-driven drive-through tools that they were working on for the last five years, and that are now not seeing the light of day. So I think some of the historic failures have also been a hurdle to overcome in these discussions.

Adel Nehme: And definitely, I've seen quite a lot here as well that stakeholder engagement is a big one, because, you know, budgets can always be recovered if there's will, right? And how do you get stakeholders engaged when it comes to becoming a data-driven organization? Like, how do you get them from skeptics to believers? I think that's one of the biggest challenges organizations face today.

Leon Gordon: Absolutely. And as I mentioned, value. So again: what are we trying to achieve? What does this mean to the business? How can we quantify that? And how can we verify that it's actually being achieved? Those are the four key questions that we answer for organizations. And then, like I say, showcase this through the work.

Being very transparent about the delivery: are we achieving what we set out to do, yes or no? There's nothing worse for an organization than getting close to the end of a project and then realizing we've burned a lot of budget and have nothing to show for it. So we look at how we can mitigate all of those risks and fundamentally ensure that these data-driven and AI-driven solutions and projects drive some form of value.

Again, not always commercial value for an organization.

Adel Nehme: One thing you alluded to earlier in our discussion is the importance of data governance and data quality, especially when you're trying to build data use cases. If you don't have your foundations right, you're essentially building your use cases on a house of sand.

So data quality always seems to be a perennial issue. What advice do you have for improving it, and how have you worked with organizations on improving data quality?

Leon Gordon: Understanding where you are is the best foundational starting point. So really taking an audit across your data estate, cataloging, as I mentioned previously, what data you have and the sources of that data. Is it paid? Is it proprietary data?

Is it in-house data, for example? And really laying this out and documenting it. This then gives us the foundation to put weightings and metrics against the actual quality of the data: how it can be improved, whether it has to be improved internally, and what the value of that improvement is.

Is it required? And what's it going to drive in the future? So we partner with organizations on this, and we would recommend it: it's quite a laborious task, so if you don't have an in-house team to do it, partner with a third-party organization. Really set out your stall and that foundation, because, as I think you mentioned there, Adel, it doesn't have to be a foundation of sand.

The stronger your foundation, the bigger the castle you can build on top of it. And it's a laborious task, there's no two ways around it, but it really is the key to becoming an industry leader and data-driven, as opposed to being inaccurate and making decisions that potentially hold high ramifications for your organization.
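The audit Leon describes, cataloging sources and then putting metrics against their quality, can be sketched in a few lines of pandas. The dataset and column names here are invented; the two metrics (completeness and uniqueness) are just common starting points, not his prescribed list:

```python
import pandas as pd

# Hypothetical extract from one cataloged source.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "email": ["a@x.com", None, "c@x.com", None],
    "country": ["UK", "UK", None, "US"],
})

# Per-column quality metrics: share of non-null values, and
# distinct values as a fraction of rows. Weightings against
# these scores can then drive which fixes to prioritize.
quality = pd.DataFrame({
    "completeness": customers.notna().mean(),
    "uniqueness": customers.nunique() / len(customers),
})
print(quality.round(2))
```

Run per source across the estate, a table like this is the "weightings and metrics" layer that turns a catalog into a prioritized improvement plan.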

Adel Nehme: Yeah, and you touched upon this being a really laborious task. If I'm a leader within an organization, I want to move fast on our data projects, our use cases, our proofs of concept. There's a tension between prioritizing data quality and delivering business value with data, right?

It's hard to measure the immediate improvement in business value from just improving data quality without operationalizing a use case on top of it. So how do you manage that when you want to improve data quality but also deliver value as quickly as possible?

How do you manage that trade off?

Leon Gordon: Absolutely. So we don't treat it as a trade-off. We tend to deliver parallel work streams. With this, we're able to either use sample data or take a small set of data where the quality has already been put in place, and then build out, like I said, smaller-scale proofs of concept that really showcase that value, in parallel with the data quality work.

Now, what we find is that this gives the organization something tangible, to see what the potential outcomes will be and the value that can be driven, but also something to take around the organization, showcase, and get feedback on, which we can then feed back into the project. So we look to, like I say, drive value at scale, but also ensure that we're delivering the foundational aspects as well.

And this translates into how we deliver data-driven or analytics-driven solutions as well. We tend to find it's very similar: on the engineering side, moving, transforming, and loading data becomes a very laborious task that doesn't always show value directly to the end user. So for us, again, it's really about how we can get those visualizations and that analysis in front of the end user, in parallel with some of those more laborious tasks.

Adel Nehme: I think that's a really great way of balancing that trade-off, or, as you mentioned, it's not a trade-off, but balancing those two work streams. With many organizations, you mentioned you've also been talking about data privacy, alongside data quality and other aspects.

And especially related to generative AI, there's a big conversation now about how you should leverage your data when it comes to feeding it into a generative AI API, or even a chatbot or a tool like ChatGPT. What have those conversations with leaders been like?

What's the position that you take? How do you advise leaders to maneuver the data privacy landscape when it comes to generative AI?

Leon Gordon: I think one of the most difficult aspects has been getting those conversations to happen transparently across an organization. Again, I think Samsung were probably one of the biggest stories regarding this, where they disciplined some of their workers for uploading information directly into a tool like ChatGPT.

Now, what we're finding in organizations is that if you try to restrict end users, they'll find another way of doing it, maybe using another laptop, et cetera. So again, really being transparent, identifying what the risks are to the organization, how to mitigate those risks, and how to communicate this across the organization well, is a key starting point.

Secondly, we don't really recommend that organizations use commercial tools like ChatGPT, for example. We would prefer them to use a cloud environment like Azure, and use OpenAI APIs in that type of environment to build out secure environments and solutions that the company can then use.

We also don't tend to work with a lot of organizations to build their own large language models. We tend to find that off-the-shelf foundation models, or even smaller open-source models like Phi, for example, can be fine-tuned and trained to fit the need, but you have to do so in secure environments. Then it's about understanding how the organization can build those secure environments, whether they need to be cloud-based or on-prem, et cetera, and really, again, walking them through the risks associated with all of these approaches and looking to mitigate them together.
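A minimal sketch of the Azure-hosted route Leon prefers, using the `openai` Python SDK's `AzureOpenAI` client. The endpoint, API version, deployment name, and system prompt below are placeholders, and the actual call is guarded behind a credential check so nothing leaves the machine without one:

```python
import os

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble a chat payload; the system prompt is illustrative."""
    return [
        {"role": "system", "content": "You are an internal assistant."},
        {"role": "user", "content": user_prompt},
    ]

if os.getenv("AZURE_OPENAI_API_KEY"):  # only call when credentials exist
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint="https://example.openai.azure.com",  # placeholder
        api_version="2024-02-01",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
    )
    response = client.chat.completions.create(
        model="my-gpt4-deployment",  # your Azure deployment name
        messages=build_messages("Summarise this quarter's sales notes."),
    )
    print(response.choices[0].message.content)
```

The practical difference from consumer ChatGPT is that traffic stays inside the organization's Azure tenancy, under its own security and compliance controls, which is the point Leon makes next.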

Adel Nehme: You mentioned something here about building out your own cloud environment with an OpenAI API if you want to use a tool like ChatGPT internally. We've definitely seen tensions as well with organizations where the workforce wants to use a tool like ChatGPT at work, right?

It helps out in a lot of different contexts, but the IT policy tends to be that it's a large risk for the organization. Do you find that route helps ease those tensions and actually lets individuals use a tool like ChatGPT safely? Or are there still risks even then? Maybe walk us through the potential risks even if you use a cloud environment like Azure.


Leon Gordon: Generally, it comes down to the endpoint. So for us, again, Azure is, fairly confidently, one of the most secure cloud environments available. And so with organizations this becomes a much easier conversation to have, because we're leveraging Microsoft's security, compliance, and all the certifications associated with that.

Now again, what we tend to find is that if this isn't secure enough, or there are any issues with the client's policies, then, as I mentioned, we can look at open-source models as well. These models don't have to be available online; they can be deployed locally, with interfaces obviously put on top of them.

So it really depends on the organization themselves. But with gen AI at the moment, we would suggest that there's always a solution. But with those solutions do come risks as well, right? So even from an open-source perspective, the features and enhancements you can expect are generally really localized to a small subset of devs, as opposed to a wider organization like your Microsofts, et cetera.

So it just comes down to the organization: how risk-averse they are, what they're looking to achieve, and the best environment to do this in.

Adel Nehme: As we wrap up our conversation, Leon, I'd be remiss not to ask: your career path is a fairly unconventional one. How did you end up transitioning from being a professional football player to working in data and AI? I'd love to learn about that journey.

Leon Gordon: Oh, absolutely. You're taking me back quite a while now. So I guess the best way to answer this is that I've always had an interest in technology. I was building computers from a young age, trying to learn C back in the Borland days, et cetera, and football just took precedence for me from that perspective.

So becoming a footballer after leaving school meant that I didn't go down the traditional route of university, et cetera. I went in to play football, which unfortunately didn't work out, or I wouldn't be sat here today. Now, what this meant is that I'd built up a lot of personal qualities, like determination, self-training, upskilling, competitiveness, et cetera, that I was able to translate into the data industry.

So I took a temp role, starting out doing data entry, believe it or not, and just worked my way up through the hierarchies. Again, upskilling outside of work, taking courses, going out to meet people, networking, reading books, doing it the hard-graft way, until I got to a point where my experience was able to take me up through the levels. In terms of the route we took:

We did SQL first and foremost, to understand reporting structures, which then led to the data engineering aspect, transforming and loading the data, which then led to understanding data warehouse solutions, so the Kimball methodology specifically for me at that point, modeling the data using SSAS-based cubes, and then obviously moving on to data visualization.
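The warehousing step in that path, Kimball-style dimensional modeling, can be sketched with the Python standard library's sqlite3. The star schema below (one fact table keyed to one dimension) is a deliberately tiny invented example, not anything from Leon's projects:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# A minimal Kimball-style star schema: a fact table of measures
# referencing a dimension table via a surrogate key.
cur.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    amount REAL
);
INSERT INTO dim_product VALUES (1, 'Widget'), (2, 'Gadget');
INSERT INTO fact_sales VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# The reporting pattern the schema exists for: aggregate the fact,
# sliced by a dimension attribute.
rows = cur.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)
```

Cubes (SSAS in the Microsoft stack) essentially pre-compute aggregations like this final query across many dimensions at once.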

And then from that, you get quite a well-rounded understanding of the data sphere. And as I mentioned earlier, for people involved in data now, you can branch off to wherever you find that you fit, or wherever you find your enjoyment. Being fairly entrepreneurial, for me that meant starting my own organization and never looking back.

Adel Nehme: That's really such a wonderful story, and I think there's a lot to learn from it. Maybe, what's your biggest advice for folks looking to transition into data and AI who come from, you know, not necessarily the most related fields? Probably the most unrelated field to data and AI is football, so you're in a great position to answer that.

Leon Gordon: Absolutely. So, people have a lot of tools available now that we just didn't have: lots of free learning, the likes of DataCamp, for example, YouTube, et cetera. Beyond being able to get that information, they also have access to professionals via networks like LinkedIn, to be able to ask for help, look at what other people are doing, and obviously find roles.

Now, what I find really key is delivering projects. One of the big stumbling blocks for anybody trying to get into the industry at the moment is experience, and the best way to get experience is to work on your own projects. Focus on the value you can bring to organizations. There's plenty of dummy data available, and at Onyx Data we actually offer a challenge called the DataDNA dataset challenge, where we make data freely available on a monthly basis as well. Put that together as part of your portfolio and just go for it. Learn new skills. Spend the time honing your craft, spend the time meeting like-minded people, and spend the time delivering projects. And without a doubt, you will be able to land yourself a role.

And finally, don't be afraid to hear no. I've heard no plenty of times when I've gone for roles. I've been disappointed, but ultimately it hasn't held me back. I've understood why the no was there, what I needed to go away and work on, and then obviously worked on that and built on it as the next step.

And I know that's supposed to be the last point, but one more thought comes to mind: don't be afraid to put yourself out there, regardless of what social media looks like currently. Not everybody is an expert, there's always something new to learn, and there's always new tech coming out. So just go for it.

Adel Nehme: That's really wonderful advice, and I can really appreciate that, Leon. Maybe one final piece of advice as well, since we're talking about how to become data-driven and aligning AI with enterprise strategy: what would be your final advice for organizations wanting to make better use of data and AI?

Leon Gordon: It's quite similar to the last piece of advice. Really, I think it's a fundamental step forward for any organization, regardless of how far along the maturity curve you are at the moment. Like I say, understand where you are currently, that's really the key. Understand where you're going to get the most ROI or value across the organization; quantify any risks associated with it, whether budgetary, legislative, or transformational; and then move forward with smaller proofs of concept that don't cost as much but still show value. Then get buy-in across the organization to go and scale.

But again, the key really is to get started, and to get started by understanding where you are currently.

Adel Nehme: That is really great advice. Leon, it was great having you on DataFramed.

Leon Gordon: Adel, my pleasure. Thank you.

Adel Nehme: Awesome. Thank you so much. 
