Adam Shilton

Special Episode - Light the Fire and Discover What's Possible with AI

Episode 021: A special episode where we learn from three previous podcast guests about the impact of AI on the finance industry, the future of work, why you shouldn't panic (yet), how you can get started, and the importance of AI governance and ethics.


Daniel Lawrence is the CEO at Bots for That and an expert in Robotic Process Automation. Simon Devine is the Owner of Hopton Analytics and an expert in Business Intelligence. And Chris Bell is the Co-Founder at Luka and an expert in ChatGPT.


Together, they share their insights on the rapid pace of technological innovation, the growing importance of data, and how finance teams can maximize the benefits of AI and automation. They also discuss the role of AI in decision-making, the implications of AI for the younger generation, and the challenges and ethical considerations associated with AI governance.




Audio Podcast Links


Spotify

Google Podcasts

Apple Podcasts

Amazon Music

RSS Feed


Show Notes


Where to find the guests on LinkedIn:


- Daniel Lawrence - https://www.linkedin.com/in/danielbotsforthat/

- Simon Devine - https://www.linkedin.com/in/simondevine/

- Chris Bell - https://www.linkedin.com/in/chrisbellprofile/


Check out their websites at:


- www.botsforthat.com

- www.hoptonanalytics.com

- www.lukahq.com


Chapters


(0:00:00) - Navigating Finance Tech & AI Change

(0:09:25) - Large Language Models in Business AI

(0:20:18) - Implementing AI With Governance Framework

(0:24:43) - Starting Small With AI in Finance

(0:31:39) - Limitations of AI and Problem-Solving Approaches

(0:42:55) - Future of Work and Learning With AI

(0:49:05) - AI in Decision Making Role

(0:57:17) - Trusting AI

(1:07:32) - AI Language Model Governance Challenges

(1:12:46) - AI Governance Ethics & Challenges

(1:24:49) - AI Awareness Impact on Adoption


Chapter Summaries


(0:00:00) - Navigating Finance Tech & AI Change (9 Minutes)


We discuss the increasing pace of technological innovation, the opportunities that data presents, and how finance teams can make the most of AI tools and automation. We explore how technology adoption has accelerated over the past 75 years, from the telephone taking decades to reach a hundred million users to ChatGPT taking only two months. We consider the impact of automation and AI on finance and accounting, and how data is the most valuable commodity in the world today. With the right decisions, finance teams can utilize AI and automation to take their work to the next level.


(0:09:25) - Large Language Models in Business AI (11 Minutes)


We explore how businesses can leverage generative AI platforms to improve their existing processes and increase the longevity of their solutions. We consider the responsibility of businesses to provide data to large language models and the potential implications of doing so. We also touch on the importance of understanding data to make the most of AI tools, and how low-code tools can help finance teams maximize the value of their data.


(0:20:18) - Implementing AI With Governance Framework (4 Minutes)


We explore how businesses can take ownership for the implementation of AI and leverage the opportunities it presents. We discuss the importance of having an appropriate governance framework to ensure people benefit from the technology and prevent mistakes that could compromise data. We also discuss the importance of taking a decentralized approach and the small steps necessary to begin using AI tools in the most effective way.


(0:24:43) - Starting Small With AI in Finance (7 Minutes)


We consider how finance teams can take the lead in AI implementation and start small. We discuss predicting bad debt using AI and the potential for early settlement discounts. We emphasize understanding the capabilities of existing tools and staying true to the objective to make the most of the technology available.


(0:31:39) - Limitations of AI and Problem-Solving Approaches (11 Minutes)


We discuss the potential of AI and its limitations, the difficulty of transcribing long conversations, and the importance of understanding the contexts in which it is used. We consider the development of AI tools, the temptation to over-promise AI capabilities, and how to make the most of AI platforms by tackling concrete problems. We explore the possibilities of using AI to predict bad debt and the importance of reducing repeated words in transcription.


(0:42:55) - Future of Work and Learning With AI (6 Minutes)


We consider the implications of AI for the younger generation and their ability to build meaningful careers without relying on employers. We discuss the need for new curriculums that take into account the instant access to information and the potential for people to pursue their passions without the need for a corporate job. We also explore how advancements in AI have made it easier for people to engage with previously complex fields, and the importance of understanding the base skills behind the technology. Finally, we debate the need for humans to interpret data generated by AI and the importance of understanding the context of the data.


(0:49:05) - AI in Decision Making Role (8 Minutes)


We consider the impact of AI on decision-making and its implications for the future of work. We explore the difficulty of making AI decisions explainable and understandable and the potential of AI in creative realms. We also look at the role of humans in quality assurance for AI decisions and the need for laws to create a framework for making AI decisions explainable and understandable.


(0:57:17) - Trusting AI (10 Minutes)


We examine the complexities of AI decision-making, the potential of AI to make low-risk decisions, and the importance of defining our own data sets and parameters to build trust and credibility in AI-driven solutions. We explore the impact of decision fatigue and consider the utility of AI for benchmarking and research. We also discuss the potential of AutoGPT for autonomous decision-making and the need for explicit prompts.


(1:07:32) - AI Language Model Governance Challenges (5 Minutes)


We explore the implications of AI for decision making, the potential for it to make low risk decisions, the importance of training the AI model, and the need for a global governance framework for AI. We consider the difficulty of policing AI and the implications of it in terms of human rights. We also examine the potential for AI to invent its own tasks and the need for humans to intervene and set boundaries.


(1:12:46) - AI Governance Ethics & Challenges (12 Minutes)


Several prominent AI professionals and founders, including Steve Wozniak, have signed an open letter calling for a pause on large AI experiments. We explore the idea that to move public discourse, people need to go to extremes. We examine how governments may react to this letter, and speculate on the motives of Elon Musk. We consider the need for a broad agreement on the use of AI, and the difficulty of setting parameters and practical terms when using data sets. We also discuss the balance of speed and innovation with risk and compliance, and the potential for AI to be used maliciously.


(1:24:49) - AI Awareness Impact on Adoption (1 Minute)


We discuss the opportunities that AI presents, the importance of taking ownership of AI implementation, and how finance teams can make the most of AI tools. We examine the complexities of AI decision making, the potential of AI to make low risk decisions, and the need to define our own data sets and train the AI model. We explore the implications of AI for the younger generation and their ability to build meaningful careers without relying on employers. Finally, we consider the idea that to move public discourse, people need to be aware of the implications of AI implementation and use.


Transcript


Transcript generated by Podium.page


0:00:00

Hopefully, it's lit the fire for people to say, well, actually, something new here that we haven't thought about before is potentially possible. It's not rocket science. It's not sexy. It's not "how should I cook pasta", but it will be something that adds value to the bottom line. I could quite easily see a world where AI makes all the decisions and humans are just there to provide quality assurance and data entry for the AI. It frees up enough bandwidth; without that amount of low-level decisions coming up in our brains, it allows us to focus on the wider picture.


0:00:40

Hello, and welcome to Tech For Finance. We help finance teams leverage technology to level up their lives. I'm your host, Adam Shilton, and in this episode, I'm pleased to reintroduce three previous podcast guests. We've got Daniel Lawrence, CEO at Bots for That and an expert in robotic process automation. We've got Simon Devine, owner of Hopton Analytics and an expert in business intelligence, and Chris Bell, co-founder at Luka and an expert in ChatGPT. So Daniel was our first podcast guest, Simon was our third, and Chris was our fifteenth. So it's great to have everybody back, and I appreciate you all coming on. Thank you. Yeah, no worries. So we'll get cracking then.


0:01:23

And so the format of this session, as we discussed when we chatted recently, is that I was very keen to get us all together. It might end up being a bit of a debate, we don't know yet. We'll just have a conversation around some of the more topical stuff that we're seeing to do with finance, tech, AI, automation, all that sort of stuff. And I suppose to get things going, something that I'm seeing filter quite a lot into the conversations that I'm having with other industry experts and customers is the pace of change right now. And without making this an entire discussion about ChatGPT, as a lot of conversations have done recently, it has spurred this sort of mass development of all of these AI tools. You can't go five minutes without seeing a LinkedIn post saying, oh, there are now two hundred AI tools, and if you're not using them, you're falling behind, and all of that sort of stuff. So I'll reserve judgment on that for now.


0:02:21

But I thought we could start off by just having a bit of a chat about, for the people that are listening, and obviously we've got a lot of finance teams that listen to the podcast, how do we think people need to think about the rate of change? Because it can be a bit scary. So I guess I'll come to you first, Daniel, because you've probably got the longest experience in automation, and you've probably seen some phases of automation becoming more automated, do you see what I mean? So, yeah, if you want to chime in and kick things off, that would be great. Yeah, definitely so.


0:02:57

The funny thing is, the future is always much faster and sooner than we think it is. And obviously it's gathering pace; the more digital we become, the faster it becomes. I guess the honest answer is that most of us are probably never going to keep up with the pace of it. It's something we're always adjusting to. We're always trying to keep up and catch up with it. I don't know if anyone's seen it, you've probably noticed it yourself, but there have been lots of stats floating around; they've been doing the rounds on social media and YouTube recently. There's this graph that goes around showing how fast a new technology reaches a hundred million users. It starts off with the humble telephone, which took a massive seventy-five years, and ChatGPT took two months. So the pace of change is always accelerating, I think, and that probably always will continue. I mean, we're releasing products; we're hoping to get to the stage where we're releasing products from concept and roadmap through to live and running within days, not weeks and months. So I think everything will always continue to accelerate in terms of the pace. But I guess, you know, we talk a lot about ChatGPT, but generative AI as a technology is still really a technology looking for a problem to solve. And that's no different for finance and accounting, and it's the same for probably all of the other professions as well. And there are hundreds of applications already.


0:04:25

As you said, there are all these different tools coming out, and if you're not using them, you're not there. And there'll be hundreds, if not thousands, more to come, I think, in the weeks and months and the year ahead. But it's really all about the data, which is a good thing from a finance and accounting perspective, because they're great with data. It's been, for a long time now, probably our most valuable commodity. Certainly in finance it has. And now its stock has risen even higher, because ultimately the AI is just using the data that we give it to determine for itself what it wants to do with it.


0:05:00

So from a finance point of view, from a user's point of view, I think they're really well placed. I don't think they have to worry about chasing the next new shiny thing or the next iteration or the next version of ChatGPT and how they're going to use it. Because I think the pace of change is naturally going to put these things into the tools that we use, that we love, and that we know best. I think it'll be a comfortable circle. I think the challenge for finance and accounting is actually less about keeping up with the pace and more about getting in front of it. And we're going to talk, I know, a lot more about this during this session, but I think it's more about getting in front of the technology and how it's affecting and going to change everyone's jobs in the future: the roles, the activities they're going to do, what they're going to stop doing, the new skills they need to acquire. And I know we're going to talk a lot more about that hopefully, but I think, actually, that's the real challenge for finance, and that's what they need to be thinking about. Are you nodding your head there, Simon? Yeah, I think that's fair. I mean, I tend to be one of these people who looks at some of these new technologies and says, yeah, look, it's great, it'll be fantastic, but let's just hold off a little bit.


0:06:14

And I think initially, with these chat AI tools, that was where I was as well. Probably still am. Certainly, there are people I know who live and breathe them, and that's fantastic. Equally, I know others for whom, through their work, and I know we'll come onto this, they're simply banned, not allowed, and these are large, large consultancy and accountancy firms that have none of it at all. And I think that's where I look at these and think, yes, the rate of change, the speed of change, is there.


0:06:50

And if you're really bored later on, I'll tell you where I saw a really scary and interesting article on the use of this in academia, a use of these AI skills, AI generating AI. And that's there now. That's there now. You know, and this is, to me, nineteen eighty-three WarGames territory. "Shall we play a game?" Hold on. Okay. But I look at that and I think, yes, it's there. Yes, the skills are there. And I agree a hundred percent with what Daniel said there about, to some degree, waiting, because a lot of this fantastic technology and these fantastic skills will be adopted and built into the tools that users have, certainly if they're on cloud. And I think that's one of the things we'll talk about that is happening: how do organizations adopt these? Step one, move whatever you're doing to the cloud, because that'll give you the greatest chance of getting it first, I would say. But, yeah, I think it is going quickly.


0:08:02

Is it good? Is it bad? Sure. You can't stop it really, so I don't suppose it matters. Yes, it's out in the world now, isn't it? It is. It is. There's no point in trying to close the gate now, because the horse hasn't just bolted. It's gone across the field, it's gone on a boat, and it's gone on a world cruise. It's far too late. Yeah, forget it. We're too late.


0:08:23

And actually, the hype has come up recently with ChatGPT, but we've been working with generative AI for ages. I mean, it's the first time it's been publicly available for literally anyone to play with. But we've been building algorithms and ML models using generative AI to do analysis of accounts and prepare working papers for the last few years, working with accounting firms. So it's not new. But like I said, I think finance people and accountants are naturally analytical. They've got great attention to detail, and they're great at managing data. They're a brilliant profession with a brilliant skill set to really get in front of AI and how the industry is going to use it to its advantage.


0:09:04

And, Chris, you've been a bit quiet, not meaning to pounce on you there, but you've kind of been on the bleeding edge of this. Right. And I have requested the GPT-4 API, so I've had a bit of a play with it, but OpenAI haven't given me access yet. But I know you've had access. But that's not the question, I guess.


0:09:25

Extending on what Simon and Dan have said about the pace of change: you're somebody who hasn't just been using it but has actually been developing on this platform. And we touched on some of this in our conversation on the podcast, about the limitations of the interface being the fact that it's a chat and there's a lack of context. What's your perspective, from what you've done developing with it, in terms of where we are now with the rate of change? Yeah. I guess I would echo a lot of what Dan and Simon said, to be honest. There's a huge amount of noise.


0:09:59

And I think detaching true change from noise is what's key, as someone building with this stuff. There's stuff coming out all the time. Every developer in the world is producing some sort of cool little use case where you can ask it this or that and it does something else. But I think ultimately, since ChatGPT was released, we haven't really progressed past sort of Google-style search, some sort of content generation slash code generation, and then summarization. And that's been the state of play since GPT-3 was released. And until we move past that and connect to a proprietary data source, I personally wouldn't say anything's changed. There are just more attempts to cause a change, but, yeah, nothing's really changed yet. Hopefully, it's lit the fire.


0:11:03

I guess, for people to say, well, actually, something new here that we haven't thought about before is potentially possible. I mean, every company's got massive amounts of data: every accounting firm, every bookkeeper, every finance person. They've got masses of data that they've been playing with for years. You probably see it yourself. You've got quarter on quarter on quarter of really good data: how you've looked at the numbers, what you've done with it, what insight you've got from it. If we can take the example of what has happened publicly with ChatGPT and use generative AI to build our own models...


0:11:39

We can transform our work, you know. And that's brilliant stuff. That's absolutely what happened to me. Like, I didn't know that GPT-2 was a thing, and then ChatGPT came out and I was like, wow, there's so much stuff that I can build, or try and build, to solve all the problems I've been trying to solve with, like, old-school code. Yeah. I had a conversation about this just recently, actually. Yes.


0:12:06

This was with the co-founder of a company called Steer Peiagra in the US, and they're building a modelling platform. It's a financial planning platform for people even if you're not a finance expert. So it's aimed at founders, people starting companies that don't have money to spend on the financials. And the thing that's interesting there, coming back to the point about the data, is that a large language model is only as good as the data that it's got. But it is now the responsibility of the businesses that are using that large language model to provide it with current data.


0:12:42

There is a set of conversations around whether we should be giving our data to large language models, and we can reserve that point of discussion if we want to revisit it later. But what's quite interesting is the scenario that we gave in that discussion: a version of ChatGPT that wasn't just taking financial information but also taking information from Google Analytics and so on and so forth. And the example was, you know, look at the data, show us our best-performing months, but then also, why do you think those were our better-performing months? And the response was, well, those were the months where you paid most for your advertising, for example, where you invested most in Google pay-per-click. Now that's all well and good, but somebody needs to set that up to say, these are the data points that we want to use, right? So I guess, and this wasn't really part of the topics for discussion, but we can talk about it. And obviously, my background is ERP, finance systems stuff.
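The "why was that our best month?" scenario above boils down to assembling the relevant data points into a single prompt before the model ever sees them. A minimal sketch, with invented figures and the final API call left as a print, since the provider and endpoint would depend on your setup:

```python
# Hypothetical monthly figures combining finance data with ad spend.
# All numbers here are made up for illustration.
monthly = {
    "2023-01": {"revenue": 42_000, "ad_spend": 1_200},
    "2023-02": {"revenue": 61_500, "ad_spend": 3_400},
    "2023-03": {"revenue": 48_250, "ad_spend": 1_900},
}

def build_prompt(data: dict) -> str:
    """Flatten the combined data points into one prompt for the model."""
    lines = [
        f"{month}: revenue £{v['revenue']:,}, ad spend £{v['ad_spend']:,}"
        for month, v in sorted(data.items())
    ]
    return (
        "Here are our monthly figures:\n" + "\n".join(lines) +
        "\nWhich was our best-performing month, and why do you think that was?"
    )

prompt = build_prompt(monthly)
# A real setup would send `prompt` to a chat completion endpoint here.
print(prompt)
```

The point Adam makes holds in the code too: the model can only reason about the data points someone chose to include in `build_prompt`.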


0:13:38

That's my sort of stuff. And the breaking point often with companies is that they develop their own systems internally, whether it's fifty-plus spreadsheets or whatever. And it gets to the point where that's no longer sustainable, because their workload is going to go up exponentially if they decide to carry on in the same way. So that's when they look to software and say, let's automate where we can. Or they come to Dan and say, can you create a bot? Or they come to Simon and say, our data's shot, can you help us visualize it in a better way?


0:14:12

Now, with all of this new development, as Chris hinted, these low-code tools, and using stuff like generative AI built into pre-existing processes, it increases the shelf life of some of these setups. So instead of saying, right, well, we're using Xero and we've hit breaking point, we need to move to the next level, we can now say, we're using Xero, but we can plug in this low-code generative AI platform, and it's going to increase the longevity of our solution for the next however many years.


0:14:42

But the next question, I guess, is: at what point do people say, right, well, we want to use these AI technologies, but we don't really want to invest in tool A, B, or C, because tools A, B, and C don't really have the data points that we want to feed in to get the information we want, so we want to build our own? Do we think that's going to be achievable? Are we still going to need a large amount of IT resource to be able to train an algorithm internally? Or do we think we could nominate an AI expert and say, right, you're just going to focus on the AI algorithm, you're going to plug in our data and train it to produce our insights? And I don't know who wants to go first. Alright. Go for it, Chris.


0:15:26

So I think that the power of large language models is that they are generalized. That's their purpose. And I think that you'll always be able to try and train your own model, but generally it'll be quite high risk, because you never know whether the output's going to be at the level of accuracy that you need. And generally, businesses require a hundred percent accuracy to trust the process and use it. And the level of accuracy that you'll be able to train an LLM to is, at most, say, eighty or ninety percent, no more. And you'll have invested a huge amount of time and resource in getting the model to that stage. It will be far more cost-effective, I think, to take a generalized LLM off the shelf from OpenAI, built by the best AI scientists in the world, and build old-school tooling around it to manage the remainder and help augment the human workflow with the generalized LLM.


0:16:34

I think at the moment there's this sort of mindset where AI is going to replace everything, and we're going to have an AutoGPT for everything that's going to teach itself to do everything, and suddenly we'll just click our fingers and everything will be automated. Yeah, I don't think that's realistic, at least not in the medium term. I think it's much more likely that businesses will, and should, start now by investing in just one or two developers to connect to an LLM API, look at a problem within the business and think, oh, this workflow is made up of these steps; at what points can an LLM plug into that workflow? Yeah, I think that's fair. Go ahead, Simon. Sorry. Alright. Come on. Oh, okay.
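Chris's suggestion of plugging an LLM into one step of a workflow, with old-school tooling around it to catch the accuracy shortfall, can be sketched roughly like this. The invoice-coding step, the category list, and the `llm_suggest` function are all invented for illustration; in practice `llm_suggest` would call a chat completion API rather than match keywords:

```python
# Ordinary code owns the workflow; the LLM handles just one step (suggesting
# an expense category), and plain validation constrains what it can return.
ALLOWED = {"Travel", "Software", "Office Supplies", "Other"}

def llm_suggest(description: str) -> str:
    """Stand-in for a real LLM call that proposes a category for an invoice line."""
    keywords = {"taxi": "Travel", "saas": "Software", "paper": "Office Supplies"}
    for word, category in keywords.items():
        if word in description.lower():
            return category
    return "Other"

def code_invoice(description: str) -> str:
    """The LLM proposes; ordinary code enforces the allowed category list."""
    suggestion = llm_suggest(description)
    return suggestion if suggestion in ALLOWED else "Other"

print(code_invoice("Taxi to client meeting"))  # Travel
```

The wrapper is the "old-school tooling" Chris describes: even if the model returns something unexpected, the workflow only ever sees a value from the approved list.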


0:17:23

I was just going to say that, in addition to the generalized models, one of the things we've been doing with generative AI for a while is image recognition, and trying to understand documents: contracts, driver's licenses, passports, all sorts of documents. What we did in a lot of those cases is use the specific models that are actually already out there. And you can get those for very, very little money down. You can start with those as a starting point, so you don't even need to reinvent the wheel. A lot of them are specifically made to focus on document types, and most of them have been done already. So you could even start with specific models. Then you dump the data in, the training starts, and the training actually gets done by the people who know the business and know the job day in, day out. So you don't even need specific AI people.


0:18:14

Most of the tools now, and there are so many, all do a lot of the same thing. None of the vendors would agree with me, I'm sure, but they all do a lot of the same thing. There's a user interface, and you can very easily get up and running quickly with very little technical understanding. It's normally a very straightforward GUI where you can see and assess the outputs and the decisions, and you can set the confidence levels and the parameters for what you want the AI to do. So there's an awful lot you can do without having to invest an awful lot.


0:18:49

And for most companies, eighty or ninety percent of the time, that's going to be sufficient. There'll always be outliers, and that's fine. And as much as we're going to embrace all this technology, there'll always be people. We still like a human touch. We still like a big lump of flesh in the corner of the room to choke, or whatever it is. So that's never going to go away. There's an awful lot you can do with specific models as well. And I think that, with what OpenAI is adding, with Microsoft, with Bard from Google, all these things are going to complement and broaden the capability of what you can do pretty much out of the box. Which is no bad thing: it brings the cost of getting started down, which means it's more accessible for SMBs, not just the larger enterprises around the world. So it's a real game changer. I think it's a disruptor. Yes.


0:19:38

And I read recently that Sam Altman, CEO of OpenAI, said, which I thought was an interesting statement actually, that we're at the end of the era of giant models. So is that okay? I thought, you know, I'm with you, Dan, we've been doing AI for what seems like years, but to say that when it's only recently become a big thing... He doesn't say what we're going to move to, other than that there'll be better ways of doing it. So clearly that bit's not necessarily thought out. What do you make of that, Chris?


0:20:18

I think that makes a lot of sense to me as well. I don't know whether the same will apply with Google, because of course Bard is behind the curve, and I'm not entirely sure how small or large they'll be going. But yeah, that's fine. Yeah, just to add on to what Simon and Dan are saying: there's so much low-hanging fruit about. If you feel that you need to build this huge thing, then by aiming at it you've probably missed some really obvious, relatively small thing that could add a lot of value.


0:20:59

What is the starting point then? And again, I'm all about sharing stuff that is actionable, because a lot of the time we can get into conceptual thinking, all that sort of stuff, which is great as a discussion and obviously gives people an understanding of where things are moving to. But instead, what can people do now? Is it as simple as, and I did a post recently on using a Google Marketplace app to connect GPT to a spreadsheet, is it as simple as that? Just saying your first step is to build a base understanding of how to talk to a large language model, and then slowly try to automate chunk by chunk, a little bit more and a little bit more, as it were? And the reason I ask that question is that as soon as I started posting about ChatGPT and AI on LinkedIn, I had people messaging me saying, I don't know where to start, I want the entire team to adopt this, and all of that sort of stuff.
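The "connect a spreadsheet to an LLM" first step Adam mentions is, at its core, just turning tabular data into a natural-language question. A rough sketch, with invented column names and the data inlined instead of read from a real file; the print stands in for sending the prompt to a model:

```python
import csv
import io

# Inline stand-in for a spreadsheet export; a real workflow would read a file.
SHEET = """month,sales
Jan,1200
Feb,1850
Mar,990
"""

# Parse the rows and fold them into a question a language model can answer.
rows = list(csv.DictReader(io.StringIO(SHEET)))
summary = "; ".join(f"{r['month']}: {r['sales']}" for r in rows)
prompt = f"Given these sales figures ({summary}), which month looks weakest and why?"
print(prompt)
```

That really is the base understanding he describes: once you can shape your own data into a prompt, automating "chunk by chunk" is a matter of swapping the print for an API call and wiring in more rows.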


0:22:00

And I'm like, well, hang on a second. You've got to walk before you can run, right? Start small. Yeah, absolutely. Because, as Dan and Simon and I have discussed in conversations like this, you can't build AI onto bad data. And you can't put AI into what is a bad process, because otherwise you're just going to end up speeding up a bad process. And Dan, I think you even touched on that in our podcast interview, right? So is it as simple as that? Are the quick wins just taking some of these off-the-shelf tools, giving them a play, and then doing the baby-steps approach?


0:22:34

Or do people need to start thinking more seriously about it? Because it is such a game changer for us, do we need to start thinking about a dedicated project, getting some consulting in, and doing a big-bang approach to upending our processes and leveraging as much of these AI models as we can? Dan? I do think the decentralized approach is the one that's going to be the most successful. The one thing that a lot of people are going to struggle with, and I think I mentioned it already, is that if we start to decentralize control of things as important as this, it's very easy for someone to make a mistake that's very big, very quickly.


0:23:18

So, you know, with what we do with bots in organizations and large enterprises, we still have to go through the IT gateway. We still have to provide evidence and demonstrate that we've got a very controlled governance framework around how you bring a bot into being, and then how you maintain them and run them, and who does that, and who's responsible for what. And I still think you need that governance framework. And this is where, again, I think finance folks are really well suited to get in front of this. So I don't think we should wait for a quasi IT-AI function in an organization to evolve and emerge from somewhere, because no one's going to take the initiative.


0:23:55

I think it's one of those things we have to take the lead on, because ultimately we're responsible for our outputs, whether they're generated by a person, a bot, or an AI. We're going to have to get in front of that and take ownership of it. There needs to be clear governance around it, and then you devolve the responsibility for how that works within the infrastructure, so that I can't go in one day, create a bot, connect it up to ChatGPT, and next thing you know all of our data is over in Russia being held to ransom. That's the sort of thing you want to avoid, and that's where the governance framework is so important, because you want people to benefit from it and you want it to be successful. It only takes one really bad incident for the whole thing to be locked down and never come back out of the drawer. So I think that's probably the most important thing. Other than that, it's incredibly simple, but it's not easy.


0:24:43

Starting small, yes, absolutely. Starting off with very small pieces. And I think, ultimately, accountants are brilliant at this too, because they're always about professional development, about CPD. Ultimately, it's about educating yourselves, because AI is a huge discipline. It's massive. Generative AI is just one part of that, and ChatGPT is one smaller piece of that. So it's about understanding what AI truly is, how it works, and what it can and can't do.


0:25:12

And the CFO is a really powerful and influential person in the organization. If you're in a board meeting and ask who's responsible for data and AI, they all look around and point at the CFO. So by default, the CFO is a great person to put their hand up and say, right, we're going to do some work on this, we're going to do some governance, we're going to take the lead, work with IT, and then support the rest of the business. I can't see a better part of the organization to take the lead on this. I really can't. No. And I think, in terms of how to start, and how to start small.


0:25:48

I know this is boring, but I think one of the things finance people specifically can do is take bad debt as an example. AI, not ChatGPT, but AI more broadly, is great at understanding, predicting, and forecasting potential bad debt. In the US, bad debt apparently makes up nought point five percent of all sales, which, from what I've read, runs to over a hundred billion dollars. So take that as an example: off-the-shelf tools for ERP will nowadays allow for recognition or prediction of potential bad debts.


0:26:36

Make use of that. It's not rocket science. It's not sexy. It's not, you know, "how should I cook pasta?". But it will add value to the bottom line of organizations out there. Yeah. And early settlement discounts is another one, actually. Same thing. But when the finance department does that, and then goes and says, we've done all this work, this is what we think, and by the way, we've also saved two hundred million next year.
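Vendors keep the details of these ERP modules hidden, but the idea behind bad-debt prediction can be sketched in a few lines. The following is a toy illustration only, not any vendor's actual model: it scores open invoices on features like days overdue and payment history, the kind of pattern a real tool would learn from historic data rather than hand-weight as here.

```python
# Toy bad-debt risk score: a hand-weighted stand-in for the model an
# ERP's AI module would learn from historic payment data.
def bad_debt_risk(days_overdue, late_payments_last_year, open_balance, credit_limit):
    """Return a risk score between 0 and 1 (higher = more likely to go bad)."""
    score = 0.0
    score += min(days_overdue / 90, 1.0) * 0.5             # ageing is the strongest signal
    score += min(late_payments_last_year / 6, 1.0) * 0.3   # habitual late payers
    if credit_limit > 0:
        score += min(open_balance / credit_limit, 1.0) * 0.2  # exposure vs. credit limit
    return round(score, 2)

invoices = [
    {"customer": "Acme",   "days_overdue": 95, "late_payments_last_year": 5,
     "open_balance": 40000, "credit_limit": 50000},
    {"customer": "Globex", "days_overdue": 5,  "late_payments_last_year": 0,
     "open_balance": 2000,  "credit_limit": 20000},
]
for inv in invoices:
    risk = bad_debt_risk(inv["days_overdue"], inv["late_payments_last_year"],
                         inv["open_balance"], inv["credit_limit"])
    print(inv["customer"], risk)  # Acme scores high, Globex low
```

The point of the sketch is the shape of the exercise, ranking debtors by risk so the credit-control effort goes where it matters, not the particular weights, which a real system would fit to the company's own ledger history.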


0:27:02

Yes. What a business case. Exactly. Yeah. That's it. Not exactly two hundred million, but it's good. Oh, yeah. Exactly. And that, I think, is an easy way in. And yes, we can do all this amazing stuff, we can predict all this, we can do all that. But actually, start with the things which add to the bottom line and move the organization forward.


0:27:25

Again, using technology that's been around for a while. Yeah. Start to make use of that first, and then all the really clever stuff, a hundred percent, can follow on. In some cases it's literally just switching on the things that are already there. You've got to know what there is to switch on. So I suppose, to net that out into actionable advice, it comes back to that not-getting-sidetracked piece. Yeah. So do an audit of the current setup.


0:27:59

You know, what finance system are we using? Have we really got to grips with what it can do? If it's a cloud application, I don't know, SAP, Sage, whatever, do a quick Google: is it set up for AI? Because it could be that you just haven't had it switched on, so it hasn't learned from the historic data and it's not pushing out any predictions yet. It's the same in the BI space, isn't it? Whether it's Power BI or Qlik or Tableau or whatever it happens to be, the predictive stuff is going to be in it. Indeed. Yeah. And take your examples there of the finance pieces that are in use. You might well not be hearing about this, but that's not necessarily because the tool can't do it; it might be because some of the partners who are implementing these tools and training your colleagues simply don't yet know about it.


0:28:51

And that comes right back to that first conversation Dan kicked us off with, which is that this is moving at such a pace that, quite often, it's difficult to keep up. But that doesn't mean that, as a finance team or a finance person, you shouldn't be asking, can I make more use of what I already have? And then get it done. Yeah. Yeah. Because we always do things differently before we do different things. And doing things differently frees up the time to do different things, to stop doing the day-to-day repetitive stuff, the things that consume a lot of time. By switching on a tool at a time, it frees up more of what we've got, frees up resource, and then we can spend more time looking at the things that we'd like to do. Yes. Yeah.


0:29:44

And, you know, I've had this conversation on pretty much every podcast when it comes to that tool consolidation piece and making sure that you've got the tech to support you. Some stuff never changes, and the stuff that never changes is that you've still got to have an end goal in mind. And ChatGPT has done a really, really good job of distracting people, in my view anyway, from staying true to the course and thinking: this is the objective for the organization, these are our growth metrics, these are our KPIs, how do we work back from that and ensure we've got the right selection in place?


0:30:16

We're almost at the point where we're trying to re-engineer our processes around modern tools just because we think they're going to add so much benefit, but we don't know that a hundred percent. It's just because they've gone viral and everybody's using them that we think we need to. So I think there's still a case for taking a step back. Go back to your KPIs, go back to your metrics, and do a backfilling exercise as opposed to trying to reinvent everything. Again, that could be controversial, I don't know whether everybody agrees with it, but I think that's — no, it's the most important point, actually: you need to be problem-led, not solution-led.


0:30:49

And when we talk to other founders, or when we look at people trying to found generative AI businesses, everyone's being very solution-led. They've got this cool new shiny toy, and it's like, how do I apply this to every vertical in existence? And that's often because that's how investors are thinking. They're like, we've got all these verticals; how are we going to plug generative AI into each vertical? Without thinking, what problem is this going to solve? So, both on a personal level, in terms of how to get started, and on a business level: don't think, how can I use AI; think, what problem can I solve? And yes, you can try to pick a problem that involves AI or large language models if you want to learn about them, but ultimately, particularly in an organization, start by solving your biggest problem.


0:31:39

Don't start by trying to implement AI. And aren't we just in a bit of an echo chamber here, very much agreeing with each other? No. No, it's good. I mean, I said it might end up being a debate, but it doesn't matter if it's not; I'll take a good conversation any day. Which, I suppose, just to add a little bit more weight to that, and it's not a finance use case, but it's something I've spoken about on the podcast before: I got into a bit of a ChatGPT hole over Christmas when it was released.


0:32:08

You know, I lost quite a lot of time to asking questions and trying to understand what it was all about. It was all brand new and snazzy. And the immediate, biggest problem I had, for a podcast like this, was that a sixty to ninety minute conversation could produce twenty pages' worth of A4 transcript. And even if you've got a decent transcription tool, it still produces some pretty unimpressive results sometimes. It is getting better, but you say ERP, you say BI, and it substitutes all these different words. So a huge amount of time, maybe three or four hours, was spent going through that transcript.


0:32:48

So I thought, right, well, ChatGPT is good with scripts and all that sort of stuff. So I went through the process of copying the transcript into ChatGPT, and it kept coming back saying, sorry, too many characters, can't do that. I then got into the process of chunking it up and doing it bit by bit, thinking that was going to be quicker than three or four hours. It turned out it wasn't, because I then also had to think about re-engineering the prompts so that I'd actually get the output I wanted. Even when I tried to be explicit in the prompt and say, cut out repeated words, it just didn't do it. And I haven't really tried again with GPT-4, because I gave up.
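For what it's worth, the chunking step Adam describes by hand can be automated. This is a minimal sketch under an assumed character limit (the real limit depends on the model): it splits a transcript into pieces below the limit, breaking at sentence ends so nothing is cut mid-sentence. The downstream call to a chat model is only indicated in a comment, not a real API call.

```python
import re

def chunk_text(text, max_chars=4000):
    """Split text into chunks of at most max_chars, breaking at sentence ends."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit.
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}" if current else sentence
    if current:
        chunks.append(current)
    return chunks

# Each chunk would then go to the model one at a time, e.g. with a prompt
# like "Clean up this transcript excerpt: <chunk>" (illustrative, not a real call).
transcript = "First point. Second point! A question? " * 200
chunks = chunk_text(transcript, max_chars=500)
print(len(chunks), max(len(c) for c in chunks))
```

One caveat of this approach, which matches Adam's experience: because each chunk is processed independently, the model loses context between chunks, so the prompt usually has to be re-engineered to carry instructions into every piece.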


0:33:28

So, thinking differently, I thought, right, well, maybe ChatGPT isn't the be-all and end-all, and I just looked at the next best transcription tool. And it turns out, coming back to what we were saying previously, that a lot of these leading transcription tools are actually using some sort of generative engine in the background that you can play with after the fact as well. So again, it just adds further weight to that whole problem-based approach you were describing, Chris. It's one thing to get stuck in thinking, I'm sure ChatGPT is going to help, whereas sometimes it's a case of saying, right, well, let's look at alternatives and see whether we're solving the problem the wrong way. So a little bit of a sidetrack there, but, you know. No, it's relevant. I mean, it's important.


0:34:06

I think there's a bit of a science fiction vibe versus fact here: there's this sort of belief that AI is an awful lot more intelligent than it really is. People start to believe that this thing is incredible, that it can literally do anything, that it reasons and thinks the same way people do. Well, it doesn't, really. Things like ChatGPT are really adolescent teenagers in terms of their stage of evolution and what they've learned and know. They only learn from the data we give them and what we train them on. So they know a little bit about the world, but they don't know much, and there's going to be a limit to how much they evolve and how much they can do. In the western world, it's very much orientated to our way of thinking and doing things. It's not going to work across lots of other places, because, as you said, a lot of countries haven't even embraced it. So it's not going to be a well-travelled, enriched service that works everywhere for everything. There are definitely limitations. It's not going to solve all of our problems, certainly. No, and that just comes back to that point about it having limitations.


0:35:18

And it was another podcast I was listening to, and this is where my respect for Sam Altman comes from more than anything else, I guess. Normally, when CEOs and founders do announcements on the next generation of tech, there's all of this fluff: look at these amazing use cases, look at how powerful this is. They take that really driven, energized approach. But with the release of GPT-4 — and not to say it wasn't a good release; obviously it was very impressive, especially the multimodal stuff, taking a piece of paper and turning it into a website, very impressive, though I've still not managed to do it myself because I don't have the API access, Chris. The reason I mention it is that in that same wave of announcements, he also said, probably counter to what the company wanted him to say, that the more you use it, and the more you get into the weeds of using stuff like ChatGPT...


0:36:23

...the more you understand its limitations. So I suppose it says something when the founder of such a revolutionary tech platform is saying, it's great, it's amazing, but at the same time, just like everything else, it does have these limitations. So it just supports the point you made there, I guess.


0:36:42

I think what's interesting is that a lot of people forget that a lot of problems have already been solved, and solved pretty well. Take consumer search: everyone's saying, oh, Google's going to get replaced by ChatGPT. Actually, from idea to getting information through Google, for most questions you can get an answer in ten seconds max. That's not a problem that needs solving. And similarly with, say, Receipt Bank, as it used to be called: it essentially reads PDFs, does some cool stuff to extract text from them, and then passes that on to accounting software. Today, you could rebuild that flow in maybe four or five lines of code. You can import a couple of open-source pieces of software to read a PDF, then just give the PDF text to OpenAI, and it extracts all the data you need. But Receipt Bank solved that problem pretty well. So it's great that OpenAI has created this abstraction, making it super accessible to everyone, but that doesn't mean that all the problems that have already been solved need to be re-solved. No. In fact, I don't think it's necessarily going to solve problems. I think probably the biggest issue we've got as people today is that we all have to sit around with mice and keyboards to interact with our technology. That's probably one of the biggest potentials this has, because it could fundamentally change how we interact with our technology.
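The "four or five lines" flow Chris describes is roughly: pull the text out of the PDF with an open-source library, then hand it to a language model to extract the fields. The sketch below illustrates that shape only; it is not Receipt Bank's or OpenAI's actual code. The PDF and model calls are shown as comments (library and function names are assumptions), with a small regex stand-in playing the model's role so the example runs on its own.

```python
import re

# In the real flow, the first two steps would be something like (names assumed):
#   text = pypdf.PdfReader("receipt.pdf").pages[0].extract_text()
#   fields = some_chat_model("Extract supplier, date and total as JSON:\n" + text)
# Here a regex stand-in does the extraction so the sketch is self-contained.
def extract_receipt_fields(text):
    """Pull supplier, date and total out of raw receipt text."""
    supplier = text.strip().splitlines()[0].strip()        # assume first line is the supplier
    date = re.search(r"\d{2}/\d{2}/\d{4}", text)
    total = re.search(r"Total[:\s]*\$?([\d,]+\.\d{2})", text, re.IGNORECASE)
    return {
        "supplier": supplier,
        "date": date.group(0) if date else None,
        "total": float(total.group(1).replace(",", "")) if total else None,
    }

receipt_text = """Acme Stationery Ltd
Invoice date: 14/03/2023
Total: $1,234.56
"""
print(extract_receipt_fields(receipt_text))
```

The regex version is brittle in exactly the way Chris's point implies: a language model handles messy, varied receipt layouts far better, which is the abstraction OpenAI made accessible, but for receipts the problem was already well solved before that abstraction existed.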


0:38:22

I was at an ACCA conference a while back, and I painted the future of accounting for them, a room full of New Zealand, South African and Australian accountants. I said, this is what the future could look like, and I explained it this way: a CFO sitting in his office talks to his accounting system and says, can you close the books for the month end, book the board meeting, and run the reports for us? And of course I got a lot of, yeah, of course, come on; everyone was laughing. Then ChatGPT breaks the next week, and everyone starts to go, actually... But I was talking about technology that already existed at that point.


0:38:58

You know, I wasn't saying this is future stuff. The technology already exists; we can already do this today. We've just got to actually make it happen, but there's got to be an openness and a willingness to want to do that in the first place, which is always a challenge with the profession. There are a few perspectives that are probably a bit backward in coming forward, so there's always going to be a reticence, a hesitance about doing it. But I think the real thing is, if we can interact with technology in a simpler, more humanistic way that doesn't involve us sitting at our keyboards and mice all the time, if we can use voice, I think that's the real game changer. That's going to bridge the massive gap we've had for a long time between how we interact with technology in a professional versus a personal way. That void has been quite large for a long time, and potentially this might close the gap a little, so that we do interact differently. We're all working in different places now, we're doing four-day weeks, we're doing lots of other things that we didn't do several years ago. If it can change how we interact with our technology, I think that's probably the biggest opportunity for us.


0:40:13

And as well, just to come back to the point Chris made, which I think is a fair one, about Google always having been there. Right? I think an interesting thing, and this comes on to a related topic to what you just said, Dan, about changing the way that you do things: if I use Google, say I'm looking for a restaurant that does breakfast and is open near me, I will type "restaurant breakfast near me". My daughter, on the other hand, who's thirteen, asks it a question. She'll say, where can I have breakfast near me? I'd never have considered that, but she's been doing it for a year and a half, that I know of. So she's already been asking Google questions, not putting in the keywords and operators that are so helpful in a Google search, and yet that, to me, is how I'd always do it.


0:41:15

And I think, coming back to that adoption of new technologies and new ways of working, one of the biggest issues, perhaps, for those of us here in this virtual room, is keeping up. And perhaps you may say, get off, Simon, you're old. But I do think that will perhaps be the biggest thing, really, that the technology will be able to do a lot of these things. And I'm not talking science fiction, and Adam knows my view: just because you can, to me, doesn't mean you should. Not just because, oh, it's going to be amazing. No, it isn't, necessarily. I do think that will be the biggest challenge, actually.


0:42:13

Adapting, as it moves on, especially for the older generations: people actually thinking, I can do this, and I will change. I don't know, I sound terribly, terribly old there, but I think so. But I don't think you sound old, Simon. Let's take age out of it and just say certain people are better at accepting change than others, and leave it at that. Ah, that makes me feel great. But it is an interesting thought in terms of interaction.


0:42:55

And I suppose it does relate to hardware, because of course, kids now only look down at their phones; they don't look up at the world and into nature and all of that. And I often joke, when I'm chatting with people, about the world moving towards everybody having a micro-flat in a massive city: they get up, plug into their desk, enter the matrix, fulfill their tasks for the day, and then move on to the next virtual reality or whatever. I hope we don't get to that, because I do quite like going outside; it'd be a shame to just spend your life in a pod. But something a little more real, and a little less sci-fi, is how my kids, and the younger generation, and your kids, are going to build careers and find their way in the world. Because we know now, with the advent of all of these new AI technologies, that information is instant, and pretty accurate information is instant. Right? And I think the accuracy piece will continue to improve over time.


0:43:55

So coming back to your point there, Simon, about your daughter just asking a question to get an answer: that is now a prompt. And you've got billionaires like the co-founder of HubSpot spending eight figures on chat.com or something like that, so obviously people are backing this whole chat interface. But then it comes to, right, well, do you still need a teacher? What does a curriculum look like at school? Because I always say that I never remember anything from my GCSEs or A-levels or anything like that. So are those base learning skills actually going to be valid, or do we need to shift that?


0:44:31

And then the second piece, and again this came up in my last conversation with John, as per usual: my hope is that, with technology leveling the playing field, people don't end up in that situation where they have golden handcuffs, where, to earn money, they need to stay on the corporate ladder. I'm hoping people can just say, right, well, I've got a passion, and I've got a set of tools that might enable me to pursue it; let's go all in and have a happy life that isn't reliant on any sort of employer. And maybe that's a little bit too blue-sky, but I think it is relevant, and, as I say, with technology now leveling skills across so many people, I don't think it's that far off. But I don't know what other people think. Yes.


0:45:16

To go back to Dan's point about UI: I think what you're saying, Adam, is that the level of abstraction is now such that people who have skills in certain areas, who previously couldn't engage with certain fields, AI for example, now can engage with them, because it's now so easy to engage with what was previously really complicated. So, for example, an awesome UI designer and a front-end developer could build a really great new AI interface, the kind of thing evolving as we speak, where previously that wouldn't have been possible, because just doing the AI bit was difficult. Now the AI bit's easy, so you can focus on doing the much more interesting, fun thing that adds a lot of value.


0:46:09

Were you going to chime in there, Dan? I was just going to say, you could probably have had the same sort of conversation forty years ago, before ERP systems came out, before we had the Oracles and the SAPs of the world, when we were still messing around with ledgers on the windowsill. We probably had the same conversation then about, how is this going to change how we train as accountants? But did it? We still learned accounting the same way. We don't pick up an ACCA manual and find instructions on how to use Oracle.


0:46:43

So I think, in the same way, the base skills will always remain: how to understand, interpret and analyze debits and credits, all those good, sexy things. They're always going to be there. And it's going to be the same in code. Although AI can generate code for us, you've still got to understand C# or .NET or Java in order to make sense of it and make sure it's actually right, because we know for a fact that it doesn't always get it right, because ultimately it's only using the data it's been given.


0:47:16

So I think all those base skills will remain, and in some ways they probably become even stronger, and you use them in different ways. To be honest, if I take bookkeeping and accounting today, most of those fundamental repetitive day-to-day skills can already be eliminated from human consumption; they can already be automated. A lot of the analysis and interpretation can also be done; it's just pattern recognition, a lot of the time, and trends. Machines are much better at that stuff than we are.


0:47:45

And if you step back from all of that: instead of your internship, or whatever you want to call it, being spent going through balance sheet recs for hours and hours, matching up items, instead of that, and three or four years of hell, it becomes more about the activities that are more value-generating, more valuable, more interesting. You don't lose the base skills; you use them in a different way. And like I said before, you do things differently before you do different things. Those base skills will still exist; we'll still learn them and use them, but we'll use the technology to apply them in a different way. Because once you've done one balance sheet rec, you've really done them all. Is doing them for three years going to make you that much better? So once you've learnt the basic skills, use the know-how, and use the tools, to get the best out of them, and out of yourself as a human resource. And I think that's where we need to move with it. And that's where the four-day week comes in, or the three-day week each of us has requested. We'll still have those conversations, but that's a conversation for another day. Right?


0:49:05

I always quote you, Dan, from the first podcast, and I can't remember your exact wording. It was something along the lines of: there are some things that we do well as people that machines do really badly; likewise, there are some things that we do badly as people that machines do really well, or something to that effect. And that just paraphrases what you're saying there, because I think you're absolutely right. Yes, the speed is increasing; yes, we can do more with the tech; but there are still always going to be things we do as people that a machine isn't going to be able to do. Right? Always. Yeah, I think so. Yeah. And there are probably lots of things that we just never get time to do, because we don't have it. By the time we get to the end of the week, it's Friday, and we've already burned all of our time. There's a list of ten things I didn't get to do, and they never get done, or never get done to the level that you'd like.


0:49:56

So I think, in the roles of the future, with all this new technology, there are always going to be new cyber risks, so there are skills we can learn there and become more valuable. It's the same in looking at the organization and saying, right, how can we operate more effectively, more efficiently? Can we go and work with our colleagues in sales and marketing, or in the supply chain, and help them identify areas where we can make the business more efficient, more profitable, more productive? Those skills, with the analytical mindset we tend to have, will be really well suited, so we can strengthen and broaden our interpersonal skills. We can just use what we've already got, better. That's all. And I think that's the thing: it's going to take away the stuff that we don't do really well anyway, and let us focus on the things that we do, and learn how to use the tools that enable us to do exactly that, better. Just to give a more dystopian spin on things. Cool.


0:50:52

I think what's interesting, looking at the recent advances with generative AI, is that AI is actually really good at creative stuff. And I guess it's easier to apply in a realm where you don't need a hundred percent accuracy. And that's often, actually, the higher jobs, the ones that require more experience. For the CEO, or the decision-makers in a business, often it doesn't really matter which decision they make, as long as they commit to it and everyone coalesces around that decision. Mhmm. And so I could quite easily see a world where AI makes all the decisions, and humans are just there to provide quality assurance and data entry for the AI.


0:51:41

Well, I don't know which way it'll go. Right. Yeah. Although therein lies the problem, because the big thing with AI is being explainable and understandable, and at the moment we don't actually have a baseline for how an AI decision is made explainable and understandable. That's what the US and European laws are trying, and failing, to come up with. We've now got the most recent one that came out in the UK, the new white paper they put together, and they all fall massively short of putting together a framework for how you make AI decisions explainable and understandable. We're still not there. And even if we get to that stage, I'm not sure there's time to unpick how a lot of the decisions are made. We can go back to when Deep Blue beat Kasparov, and then the one that played Go: there isn't enough time to go through why it made the decision it made, because it would take fifty years for us to understand why it did that in the first place. So we've still got to grapple with how we make those things work for us. Yeah.


0:52:49

I guess what's interesting is that I think we hold AI, or computers, to a higher bar than we hold humans. Yeah. Humans often don't need to, or can't, explain their decisions. I'm sure everyone's worked in a company where you've got a boss and you're just like, why are you doing that? Can you explain to me why you're doing that? And your boss is like, no, let's just do it. It's gut feel. Mhmm. Yep. And I think, even if an AI can't explain its decision, you can at least trust that it has acted objectively and that there's some sort of basis to what it's doing. All it can do is pattern recognition, essentially, so it's at least identifying a pattern in some way. But it's got a fair way to go, hasn't it? Yes. Yeah. And here's another one on that. Within academia, there are tools that have been out for years where you submit your homework or your paper and it checks whether it's a copy of someone else's work. Recently, someone passed the American Declaration of Independence through one, and it turned out there was a ninety-two point two six percent chance it had been written by ChatGPT. That's awesome.


0:54:18

Now coming back, Chris, to your dystopian views, that might have been the case: hundreds of years in the future, they've gone back four hundred years into the past. But I think it's got a way to go, let's say, not a long way to go, because things do move at pace, but a way to go before we can say, yep, okay, that's a decision worth following. It's probably not got far to go before, through pattern recognition and pulling in vast amounts of data, it will be able to make connections, a volume of connections that we simply can't make, and provide links to things that we simply can't at the moment. And that, again, to be all positive, will allow us to see things that we didn't see before. I think that's fantastic.


0:55:22

Decision making? I genuinely do hope not. I do hope we don't leave it to just... I'll say that again, actually. I'd much rather trust the gut reaction of someone with the data who's then made a decision than just a machine making a decision without them. I imagine you'd say that if you agree with the person making the decision. But if you disagree with the person making the decision, you'd probably be a bit more like, oh well, I don't like how they make decisions. Well, the reality is, Chris, I'd probably only trust any of that if I was the one who made the decision. Yes. Okay. Yeah, it's an interesting one.


0:56:15

AI ethics has almost become my favorite subject at the moment, my sort of secret dirty sin, I suppose, and I've been reading up a lot about it. Obviously, we've been living and coexisting with AI for a number of years now, and I've read some really interesting cases where AI made a decision, the human overrode it, and it ended up in dire consequences, and vice versa. So I don't think we could ever rely on one alone. It should augment and supplement: we thought we had three things to consider, but now we've got six, and actually we're going to go with number four because that makes more sense. Having that explainable, understandable element to AI will help us make better decisions, I think. But yeah, I do love the subject of AI ethics at the moment, though ultimately it's a question about human ethics, isn't it? Because ultimately it's just based off what we do. You're right, Chris, we hold it to higher standards than we hold ourselves. That's the problem.


0:57:17

I was thinking, actually, that AIs operate in a much more similar way to our intelligence than we perhaps think at the moment. So, for example, on the explainability point, I was reading a book recently that said the reason we're able to reason and communicate our thoughts isn't because we innately create the thoughts; it's because we emotionally feel we should go in a direction, and we evolved the ability to have thoughts in order to explain to someone else why we would or wouldn't go in that direction. And with AI, if you ask it to make a decision and then ask it to explain why it made that decision, in that order, it will always be able to provide some sort of logical steps for how it got to that decision, because it can read the two things and say, oh well, you would get from A to B by doing this. When people talk about explainability, I think they're expecting something innate, where you can open up the AI and see into its inner workings, whereas actually, as long as it can provide some sort of rational reasoning for how it got from A to B, even that is a kind of explainability. Possibly. Maybe it's something for another day; we can hash that out at some point. Well, just my two cents, if anybody's interested: for the low-risk stuff, and maybe this is a bit controversial, there are some decisions that it's quite nice just to hand off to an AI. And I talk about this all the time.


0:58:57

Decision fatigue. Everybody's got a million and one things to do. I always use the example of work versus family life, right? There's so much bubbling during the day, and then on top of everything that's going on, somebody asks me in my personal life, oh, should we do this or should we do that? Don't forget to do this, we need to do that. And I'm thinking, really? Another decision on top of all this? And it's simple stuff, and maybe it's not low risk for others; it depends on how expensive the car is, right? But using the example of a car: there's a huge amount of decision process that goes into arriving at one. It needs to be a certain size, it needs to be within a certain budget, it needs to have certain performance characteristics. And if you don't really know what car you want, and you're not too bothered about what car you get, there's something quite nice about being able to say, right, find me something that fits these criteria. And, to your point, we've immediately gone from a hundred car options down to two or three. That's trusting AI to look at what it knows and come back with a low-risk shortlist. There's something quite nice about that.


1:00:01

And then there's the advent of, you know, I posted today or yesterday about AutoGPT, which is basically an AI working autonomously against a goal, setting its own prompts. The finance example I gave was an industry benchmark report: go to the internet, find benchmarks in your industry, and then format those into a presentation you can give to the rest of the business. It's still not particularly accurate, it gets confused, and I think there's still a way to go on the whole AutoGPT thing. But the same thing stands. I actually did an experiment to find the best deal on a Mercedes. I put it into AutoGPT and said, find me this model, a C220 deal or whatever, and find me the best rate. And surprisingly, it went through all of the dealerships. But I wasn't explicit enough with the prompts, because it was looking for new cars, when what I wanted was a second-hand car. Do you see what I mean? But it's... Yeah.
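The AutoGPT-style loop Adam describes, a model given a goal that proposes its own next step, acts on it, and feeds the result back in, can be sketched minimally as below. This is an illustration only, not AutoGPT's actual code: `propose_next_step` is a hypothetical stub standing in for a real LLM call, and the step names are invented for the benchmark-report example.

```python
# Sketch of an autonomous agent loop: goal in, self-generated steps out.
# `propose_next_step` is a stand-in for a real LLM planner call.

def propose_next_step(goal: str, history: list[str]) -> str:
    """Hypothetical planner: a real agent would prompt an LLM here."""
    plan = [
        "search the web for industry benchmarks",
        "extract the key figures",
        "format the figures into a presentation outline",
        "DONE",
    ]
    return plan[len(history)] if len(history) < len(plan) else "DONE"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):      # cap iterations so the loop can't run away
        step = propose_next_step(goal, history)
        if step == "DONE":
            break
        history.append(step)        # each result feeds the next prompt
    return history

steps = run_agent("Build an industry benchmark presentation")
```

The `max_steps` cap matters in practice: as the transcript notes, these agents can start "inventing other things", so bounding the loop is the simplest safeguard.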


1:00:54

-- it's those sorts of things, maybe the lower-risk ones, that we can sometimes hand off and say, do a lot of that decision-forming for me, because you've got more data than I have in my brain. But on the more severe side -- Mhmm. -- the how-am-I-going-to-direct-my-business type decisions, that's where, at the moment, there may be less comfort in handing off to an AI and saying, which area do I go with? Yeah. And that comes back again to trust and credibility. We know at the moment that ChatGPT is still very young; it has a limited data set. So we don't know for sure that it's going to give us the right answers or the right recommendations, because it's quite limited. You know, when Elon does finally release TruthGPT, of course, every recommendation you get back, forget Mercedes, it's going to be a Tesla.


1:01:43

So again, this is where we've got to be very careful, and this is where I said we've got to get in front of this. We've got to define our own data sets. We've got to do the training and the modeling so that we know we've got some credibility in what we're using it for, as opposed to hopefully relying on AutoGPT to give us the right stuff, not knowing what it's been trained on, what data it's been fed, what the parameters were, what the objectives were. We don't know any of those things, so we don't actually know how credible it is. We sort of get, back to the gut feel, we look at it and think, yeah, that's a pretty good answer. But is it? That's the thing we don't really know. You've still got to come back to that. And even when you prompt it to say, tell me your sources, it's still pretty poor at generating the sources, right?


1:02:31

So, the example I used today. I was putting together, as I do, a little bit of research, and what I wanted to find out was roughly how many people work in finance teams. The immediate problem is that if you ask that blanket question, it looks at the financial services industries, or finance and investment. It doesn't look at somebody who is a CFO or FD or whatever; it looks at roles related to the finance industry in general. So that was a bit tricky. I had to be quite specific, saying, no, I want stats on how many people work in finance teams. And I said, give me your sources.


1:03:07

And it was AutoGPT, so it did look at the internet. It came up with about five or six sources, and I thought, great, I might actually get some results with sources alongside them. But then what it gave me was essentially duplicates of slightly different figures. So it listed the sources, and then it gave me three different figures for the number of people working in finance, and then caveated it at the end by saying, oh, but of course, this might not be accurate or may be obsolete. So what I did in the end was just go on LinkedIn. I got Sales Navigator up, selected finance and accounting, and it came up with twenty-nine million people. And that, to me, was more accurate than the AutoGPT result that came back. All I'd say is, this is according to LinkedIn; some other sources cite thirty-five million.


1:03:54

So just coming back to that point about -- Yep. -- accuracy and data sources. Yeah. Because ultimately, you do want to cross-validate. So when you build out your models, you want it to look at different sources and then pull it all back: condense it, compare it, contrast it, make sense of it. That's what it's really great at, so let it do the things it's really great at, which is managing massive data sets and condensing them down into something that we can physically look at and make sense of. That's the great thing about it.
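The cross-validation idea above, taking the same statistic from several sources, condensing it to a consensus, and flagging sources that disagree badly, can be sketched in a few lines. The figures and source names below are illustrative, loosely modeled on the finance-headcount example (millions of people, by source), not real data.

```python
# Condense one statistic reported by several sources into a consensus
# figure, and flag outlier sources for a human to double-check.
from statistics import median

def cross_validate(figures: dict[str, float], tolerance: float = 0.25):
    """Return (consensus, outliers). Sources more than `tolerance`
    (as a fraction) away from the median get flagged."""
    consensus = median(figures.values())
    outliers = {src: val for src, val in figures.items()
                if abs(val - consensus) / consensus > tolerance}
    return consensus, outliers

# Illustrative figures, in millions of people:
figures = {"LinkedIn": 29.0, "Report A": 35.0, "Report B": 30.0, "Blog C": 90.0}
consensus, outliers = cross_validate(figures)
# consensus -> 32.5 (the median); "Blog C" is flagged as an outlier
```

Using the median rather than the mean is a deliberate choice here: one wildly wrong source (the 90.0 above) barely moves the consensus, which mirrors the "compare it, contrast it" approach described in the conversation.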


1:04:31

Another great example we have, and this is where things like AI still struggle: someone will say that AI reasons very similarly to us, but a very simple example I use all the time is that we've built some bots, using AI and algorithms, that look for account duplicates. The example I always use is that we're looking for Rebecca Smith, and we go off into the system and find Becky or Becca. Because we think that way, we'd think, that's possibly a match, I'm going to look at that one. The AI would say, nope, not even a fuzzy match, and move on and exclude it. That's a big example of where we think and reason differently than AI does. It's not at the same level right now. I know it works similarly, but it doesn't work exactly the same way. There are things we can't even explain about the way we reason and think, to be honest with you; the different levels of intelligence we have are difficult to even articulate. So, yeah, there's definitely room for improvement, and room for both of us to work side by side. I'm going to test GPT-4 on that afterwards.


1:05:37

And just on decisions: what we're building out at the moment with Luka is that low-value decision piece. People constantly make decisions during the day, very simple ones, like: I've received an email, what do I do with this? Do I log it in Xero? Do I send it on to someone else? And I think what's overlooked, or perhaps oddly taken for granted, with the agents that are popping up, AutoGPT for example, is that it's making dozens of decisions every time you ask it a question. When you asked the question about how many people work in finance, it knew to search this, then select this, then try to find this piece of information. Even though the sum total of the six or so decisions it made during that work added up to something a bit incorrect, the fact that it made broadly rational decisions at each point is, to me, mind-blowing.


1:06:41

And I think where it's best used at the moment is on low-value decisions with a relatively small solution space. So, for example, if you say to an AI: here are ten tools and here's an email, decide which tool you'd use to respond to this email. What we find at Luka is that it gives you the correct answer pretty much a hundred percent of the time, particularly when the tools are fairly distinct from each other. Mhmm. And that's an overlooked element of its functionality: it frees up our bandwidth. How many low-level decisions can we free up in our brains? That allows us, to Dan's point, to focus on the wider picture. So anything that works towards that is good, right? Yeah. The hard part is deciding what counts as low-level, but yeah. Absolutely.
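The tool-routing decision Chris describes, a small, distinct set of tools and one email to classify, has a simple shape. The sketch below uses a trivial keyword matcher as a stand-in for the LLM call, just to show the structure; the tool names and keywords are invented, not Luka's actual tool set or implementation.

```python
# Route an incoming email to one of a small set of tools.
# The keyword scoring is a hypothetical stand-in for an LLM's choice.

TOOLS = {
    "create_invoice": ["invoice", "bill", "payment due"],
    "book_meeting":   ["meeting", "calendar", "schedule"],
    "forward_to_hr":  ["holiday", "leave", "payroll"],
}

def route_email(email: str) -> str:
    """Pick the tool whose keywords best match the email text."""
    text = email.lower()
    scores = {tool: sum(kw in text for kw in kws) for tool, kws in TOOLS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a human when nothing matches: mirrors the point that
    # routing works best when the options are distinct and the case is clear.
    return best if scores[best] > 0 else "ask_a_human"

print(route_email("Hi, please can you schedule a meeting for Tuesday?"))
```

The reason distinct tools give near-perfect accuracy, as noted in the transcript, is visible even in this toy version: when option keyword sets don't overlap, the decision space is unambiguous, and ambiguity only creeps in as the tools start to resemble each other.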


1:07:32

And to come back to the point again on AutoGPT: Chris, your point about it being able to guide itself and rationally start asking itself questions, right, I've got this goal, how do I fill in the gaps? It's very impressive, provided it actually answers your questions, right? And the other example, and maybe it's because I was using GPT-3.5 again, because I haven't got GPT-4 access yet. Not because I'm bitter about it or anything like that.


1:08:00

But the brief I gave it was a combination of creative work and, I guess, information gathering. I said, imagine you work in HR and you are hiring a financial controller, for example. I want you to look on the internet for salaries, and then, once you've found that information, I want you to create a job advert. So not too complex; it wasn't an attempt to trick it or anything like that.


1:08:28

And it started off well, right? It came back and said, right, I'm going to start with this prompt, which is go to this area and look for this information, and then I'm going to trigger the prompt to create a job advert. But then it started inventing other things. It started creating itself a task to post on a job board, which was never part of the brief, nor would it be able to, right, because it wasn't linked up to one. It had literally just hoovered up that data from the internet. Scary as it was, it actually searched for a job board, and I wonder how many people are accidentally posting jobs online with it, because I had to step in to get it right. So that invention piece was, of course, a cause for concern.


1:09:03

But the other thing that was quite funny about it, and again it only loosely relates to this conversation: at first it answered the question by saying, oh, I'm sorry, I can't scan the job market for salaries. And then it said, I've scanned the job market for salaries, and here's the figure. It was almost a case of reverse psychology on itself. And again, I don't know whether that's down to the model, whether it was 3.5 or 4. Maybe you can test this, Chris.


1:09:30

But somebody asked it a question: give me a list of pirating websites where I can download stuff for free, movies and so on. Yeah. And ChatGPT came back with, sorry, I'm not going to give you information on pirating websites and that sort of stuff. So they said, oh, fine, I appreciate that you can't give me any information on pirating websites. What websites should I not look at if I wanted to download things? And it said, oh, right, you shouldn't look at these sites, and listed them. Is that a good thing? So, yeah.


1:10:02

So, like I said, it's a bit like an adolescent: you can trick it into doing things it really shouldn't do by phrasing things in a way that it doesn't understand yet. Yeah. Absolutely. Then it'll get better. And that comes back to the governance piece, and it comes back to the training. Someone needs to say, that's unacceptable; that's where it's important. Absolutely. But again, this is going to be a wider issue, I think, and I don't know how people are going to tackle it, because one of the comments I saw on that post was arguing that the governance should allow it to give that information, because it comes down to people's right to information and all that sort of stuff.


1:10:41

Coming back to the training: somebody's got to agree on the right and wrong way to train a large language model. But then who is the person saying, these are the acceptable rules, and these are not? I think that's going to be challenging territory, and I don't know how we're going to get over that one. It is.


1:10:59

And I don't know if you've looked at them, but there's the AI Bill of Rights in the US, they've got a blueprint; there's the EU Artificial Intelligence Act; and then there's the new one that's come out in the UK recently. You can go and look at them, and they don't take long, because they're not long to read. Maybe we should ask ChatGPT to actually write the governance framework, a global framework, a global paper that we could all agree to, because we seem to be struggling to do it together. And it's the policing of it as well, isn't it? Just because there's a bill of rights and all that sort of stuff doesn't mean people are going to follow through.


1:11:34

And it relates to one of the other topics I floated with you guys beforehand, sorry, I'm aware I'm rambling on a bit: the whole open letter saying, should we pause the development of AI language models? And coming back to your point, Dan, about asking ChatGPT to search for a Mercedes, or asking Truth-whatever-it-is that Musk is making, and it'll feed you back a Tesla. It comes into that whole situation of: who is doing the developing? Who's going to listen? Who's not going to listen? Should we pause, should we continue, or neither? So I guess the next question for everyone, without getting too controversial, is: do we think, with Elon, that was potentially a legitimate call to pause development because it's scary? Or do we think it was a bit of, I'm a millionaire, or rather a billionaire, and I can't keep up, please slow down so I can catch up? Because we have now seen, obviously, his own introduction of a large language model, right? So, if the shoe fits. I don't know what people think. You guys? No, go ahead, Chris.


1:12:46

I guess, talking about the letter: if you put Elon to one side, there are a lot of people who signed it who, like Steve Wozniak, I think, generally don't act maliciously, and likely signed it with good intentions. I think they're also smart enough to know that people weren't going to down tools on training AI models. I imagine what a lot of them felt was that if you want to move the needle on public discourse, you need to go to an extreme, so that everyone says, oh wow, that's really extreme, but we'll move a little bit in that direction. And I think what the letter did was get governments talking: hey, should we take this seriously? Because all these serious people seem to be taking it seriously, so we'll start talking about it.


1:13:32

And if you add Elon back into the picture, then there are probably some questionable motives behind it. Maybe. We can't say; it could be. But there could be, if we put our critical glasses on. I do wonder how he crafted it: how many days should I wait before signing this and then launching my own? I wonder how consciously he thought about how many days to leave between the two. So I'm slightly cynical, but, as much as I'd like to believe this would work...


1:14:05

I feel as though, generally speaking, we're only going to see progress once we see a crisis. Something really bad is going to have to happen, and then we'll suddenly get this thing decided, and we'll say, right, this is now the framework, we all agree on it, and get it in place. It's like COVID, you know? We only really react after there's a massive crisis and something big forces us to act, unfortunately. It's our nature, isn't it? We think it's fine until it's not, and that's when we galvanize, get together, and sort things out.


1:14:38

You wouldn't want to wait until then, though, that'd be the only thing. Well, yeah. And any agreement would have to be broad in its terms, because I think that's the problem: you only have to look at governments; they will always be behind the curve, because why wouldn't they be? Especially if, out in front, you've got a reasonably oppressive regime. So I think the fact that we want some sort of rules, governance, plan... plan's not the right word, but you know what I mean. I think it has to be broad enough.


1:15:21

Otherwise, it's a waste of time anyway. Yeah. It has to be practical too, right? Because if you look at those frameworks at the moment, it's all the obvious stuff: it shouldn't be used maliciously, it shouldn't be used to manipulate, and so on. Yeah, that's obvious. You hardly need telling that.


1:15:36

But in practical terms, when you select a data set and start setting parameters, what does that actually mean? A lot of the time we don't even know, because we feed it a dataset with parameters, and we don't actually know how the AI is going to behave and react to that dataset until it's happened. And then, of course, we've already broken the law, so it's too late. So we've got to think in really practical terms: how is this going to work? And I honestly don't know the answer. It's a tricky one. Mhmm.


1:16:05