EP 007 - Securing Your Company's Future: AI Compliance with Michael Silva

Show notes

In this engaging episode, we had the privilege of speaking with Michael Silva, founder and CEO of Emily AI, diving into the intricacies of artificial intelligence, focusing on trust, data accuracy, and AI compliance, especially in highly regulated industries.

Show transcript

00:00:02: Hello and welcome to episode number seven.

00:00:08: Lucky number seven.

00:00:16: which is in, Michael.

00:00:20: Today it's artificial intelligence.

00:00:22: There's other stuff I've moved on past.

00:00:23: IT I'm still involved in, but AI is my current focus.

00:00:29: Michael, introduce yourself.

00:00:33: So my name is Michael Silva.

00:00:34: I am founder and CEO of Emily AI.

00:00:37: We have an AI system that sits in Microsoft Azure to deliver secure

00:00:41: information in real time in a very highly regulated way.

00:00:45: I also run an IT organization.

00:00:47: I've had that for 14 years.

00:00:48: That's Archon One.

00:00:49: We focus on cybersecurity compliance for law firms in New York City.

00:00:53: And I've got four boys under eight years old, and live out in New York.

00:00:57: and got other interests that don't have anything to do with AI, so we can save

00:01:00: those for the end of the podcast.

00:01:02: But we're here for AI, so we'll stick to that for now.

00:01:05: awesome.

00:01:06: Busy, busy home.

00:01:12: Hello.

00:01:13: Yeah, we were just, we actually had a false start.

00:01:16: Oh, I want to say two weeks ago.

00:01:18: So we got to meet each other and we've done these intros.

00:01:20: But fun fact, Mike has four kids.

00:01:23: I have three and then Edgar has two.

00:01:26: So we have a good diversity here of kid representations and busyness.

00:01:35: One would say diversity, one would say laziness on my side, right?

00:01:41: But yeah.

00:01:43: So, Mike, can you tell us a little bit more?

00:01:47: So one of the things that I think as it relates to AI, just a very highly talked

00:01:54: about topic is building trust with AI.

00:01:58: And I think AI compliance and building trust, you know, kind of go hand in

00:02:03: hand.

00:02:04: One of the things that I want to kind of ask, since you work with a lot of

00:02:08: clients in the highly regulated industry of

00:02:11: kind of, you know, legal, and not even just regulated, but I would say where trust is

00:02:17: key in delivering the kind of outputs behind these LLMs.

00:02:23: So how do you go about kind of addressing and helping roll out or helping implement

00:02:30: some of these LLM systems while building trust with your legal customers?

00:02:36: And I'd love for you to dive into any use cases you could speak about.

00:02:42: sure.

00:02:43: So trust is earned in general, right?

00:02:45: That's between people and machines.

00:02:48: And I think that one of the easiest ways to ensure that you're getting a trusted

00:02:52: answer is to start with accurate data, right?

00:02:54: This is commonly known in the

00:02:56: industry as RAG, but that's a technical term.

00:02:59: If you're not in the industry, you wouldn't know what that is and that's

00:03:01: fine.

00:03:01: Basically what it means is that you're asking AI to give you an answer from data

00:03:05: that you know is true, right?

00:03:07: Also known as garbage in, garbage out.

00:03:09: Okay, so there's a couple of ways you can go about that, but at the end of the day,

00:03:13: it really depends on what you're storing and also how you're asking the question.
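
For readers who want to see what this looks like in code: below is a minimal sketch of the RAG pattern Michael is describing, where the model is only allowed to answer from documents you supply and is told it may say "I don't know." All names here (the toy retriever, the Doc class, the ask_llm placeholder) are illustrative assumptions, not Emily AI's or any vendor's actual implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch. All names are placeholders.
from dataclasses import dataclass

@dataclass
class Doc:
    source: str   # where the text came from, so answers can cite it
    text: str

def retrieve(question: str, docs: list[Doc], top_k: int = 3) -> list[Doc]:
    """Toy retriever: rank documents by word overlap with the question.
    A real system would use embeddings and a vector index instead."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.text.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, context: list[Doc]) -> str:
    """Constrain the model to the supplied context and allow 'I don't know'."""
    sources = "\n\n".join(f"[{d.source}]\n{d.text}" for d in context)
    return (
        "Answer ONLY from the sources below. "
        "If the answer is not in them, say 'I don't know.'\n\n"
        f"{sources}\n\nQuestion: {question}\nAnswer (cite sources):"
    )

# ask_llm(build_prompt(...)) would call whatever chat-completion API you actually use.
```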

00:03:17: So the nature of AI systems today, and I'm not talking about like the 1980s, 1990s,

00:03:23: because yes, AI did exist back then.

00:03:25: I can give you guys a couple of examples

00:03:26: if you like.

00:03:27: The way that we know AI today is generative AI.

00:03:30: So a lot of people are focused on GPT-3.5 and GPT-4 and nobody tends to think about

00:03:36: GPT-2 or GPT-1 or GPT beta or anything else.

00:03:39: It's just the starting point is 3.5, right?

00:03:41: That's how everybody mentally thinks about it.

00:03:43: And the model that everyone was exposed to in the last year and a half is a

00:03:47: generative model, which means that it's taking a statistically probabilistic

00:03:51: outcome from a data set and repeating back a version of that to you based on

00:03:57: So where this can go right is if you have data that you know you trust, you can get

00:04:02: back a relatively accurate answer.

00:04:04: When I say relatively, because it's generating it, right?
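
For readers unfamiliar with what a "statistically probabilistic outcome" means in practice, here is a toy illustration of the sampling step generative models repeat token by token; the three candidate words and their probabilities are made up for illustration, and real models score tens of thousands of tokens with a neural network.

```python
import random

# Toy next-token step: the model assigns probabilities to candidate
# continuations and samples one, so the same prompt can yield different,
# plausible-but-not-guaranteed-true text each time.
candidates = {"white": 0.6, "blue": 0.3, "salmon": 0.1}  # made-up probabilities
token = random.choices(list(candidates), weights=candidates.values(), k=1)[0]
print("The shirt is", token)
```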

00:04:07: Where it can go wrong is if you are asking it to answer a question and you don't

00:04:11: understand where the data is.

00:04:15: It's a system that does not have the capability to say no if it doesn't understand the

00:04:19: answer.

00:04:19: So an acceptable answer from a human in building trust is, I don't know, but I'll

00:04:25: find out.

00:04:26: That's something that you wanna hear, particularly if you have a job interview

00:04:30: with somebody.

00:04:31: What you don't wanna hear is the wrong answer, or a fake answer, or a BS answer.

00:04:35: That's the biggest "we're not gonna continue this interview" type of answer.

00:04:40: So unfortunately with machines, there's the double standard of we're very

00:04:43: forgiving of human beings when they make mistakes.

00:04:45: We are extremely unforgiving of machines.

00:04:48: So the odds are already stacked against the machine to begin with, right?

00:04:52: So the question of building trust is kind of a two -part thing, right?

00:04:54: Just to close up on the thought, I mean, the first part is data, of course, right?

00:04:58: That's a very important part of the strategy.

00:05:00: But the second part of it is the human interaction with it and the expectation

00:05:03: setting and really how you're using the tool in as much as the tool and the data.

00:05:09: think expect to...

00:05:12: top of it?

00:05:12: Because I know you talk about data and the importance of having a data strategy a lot

00:05:17: on your LinkedIn posts.

00:05:19: And as you mentioned, garbage in, garbage out.

00:05:21: So how do you ensure you're putting in the right data into the system?

00:05:26: Because if you're putting in the wrong documents and you're prompting the system,

00:05:30: so does the AI know the difference?

00:05:33: So if you've loaded that data, does it just query everything that's possible?

00:05:37: Is it

00:05:38: on the humans to kind of clean up the data and to embed trustworthy data into the AI

00:05:46: system?

00:05:46: Or can you just throw all of the data that you have basically and then let AI figure

00:05:52: it out?

00:05:53: What are kind of some of the best practices?

00:05:56: Okay, so this is a really loaded question.

00:05:59: So AI is not going to know the correct answer, right?

00:06:02: It can't.

00:06:03: What AI can do at best is give you an accurate estimation of a correct answer

00:06:08: based on data that you have.

00:06:10: Now, if you're looking for a degree of confidence, again, to go into the

00:06:14: technical end of things, if you're building it, there's a system called

00:06:17: RAGAS, R-A-G-A-S, that you can look up that will give you a statistical

00:06:20: probability for how close the answer should be to being accurate.
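
As a rough sketch of what that evaluation workflow looks like with the open-source ragas package: you assemble question/answer/context triples and ask ragas to score them. Exact imports, column names, and the evaluate signature vary between ragas versions, and the metrics themselves call out to an LLM judge behind the scenes, so treat this as the shape of the workflow rather than a copy-paste recipe.

```python
# Hedged sketch of scoring RAG answers with the open-source `ragas` package.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

samples = Dataset.from_dict({
    "question": ["What does our retention policy require?"],
    "answer":   ["Emails must be retained for seven years."],            # model output
    "contexts": [["Retention policy v3: retain emails for 7 years."]],   # retrieved chunks
})

# evaluate() uses an LLM under the hood, so credentials for a judge model are required.
scores = evaluate(samples, metrics=[faithfulness, answer_relevancy])
print(scores)  # e.g. faithfulness near 1.0 means the answer is grounded in the contexts
```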

00:06:24: However,

00:06:25: like talking to a person, unless you just trust that person to know what they're

00:06:29: talking about, you cannot assume something is factually true.

00:06:33: And assuming that what seems factually true is actually true is like a

00:06:37: philosophical question, right?

00:06:39: Because it really is by consensus.

00:06:41: That's what we consider to be true, right?

00:06:43: If you look at like the philosophical definition of truth, it's consensus.

00:06:47: The way we use words, which is how LLMs work, to describe a concept

00:06:51: is the same way that we use words to describe things that we see in real life

00:06:55: that we consider to be true.

00:06:56: Right?

00:06:56: So my shirt's white today, right?

00:06:58: Yesterday it was salmon, which is one of my other favorite colors, right?

00:07:01: But today it's white.

00:07:03: If everybody started calling this a different color with a different set of

00:07:06: words, we would say it's a different color, right?

00:07:09: So the question is really what is true and it's what's commonly accepted as true.

00:07:13: So LLMs aren't any different in that regard, right?

00:07:16: What they're doing is they're taking data that it sees as being true.

00:07:20: and it's returning back an answer that it thinks is true because of what it's

00:07:23: seeing.

00:07:24: So the question is how do you define truth?

00:07:26: That's really the question that you're asking.

00:07:28: And the answer to this question is how are you managing the data behind the scenes?

00:07:32: There isn't an easy way today for a machine to understand what is true and

00:07:38: what is not simply by looking at the data that's behind it.

00:07:41: It just doesn't exist, right?

00:07:43: Because of the reasons I mentioned before.

00:07:45: it's also, if you have this concept of system one and system two

00:07:50: thinking, and system one means basically intuition and intuitive thinking, and

00:07:55: system two is basically thinking about stuff a bit longer and taking your time

00:07:58: and just using more energy to get the answer basically, and like

00:08:08: modern systems, they aren't technically capable of system two.

00:08:13: still out there.

00:08:17: That's why your intuition is not always right.

00:08:29: Mm -hmm.

00:08:31: They have a similar problem in that they can only rely on intuition if you don't give

00:08:37: them any knowledge.

00:08:40: Like you said, it's a lot about statistics.

00:08:44: From what I learned, it's also you don't have to have 100%.

00:08:50: We already had a complete episode about this.

00:08:54: for it to be useful, but of course you have fields like you're somewhere in the

00:09:00: legal space where it's more than crucial to be precise and to not...

00:09:08: I think there were even lawyers using ChatGPT for their defense and quoting some random

00:09:19: case that never went by.

00:09:22: So that...

00:09:23: That kind of, that's kind of serious.

00:09:27: Yeah, well, the funny thing that you mentioned there is that specifically the

00:09:30: law and that use case, the attorneys have a very clear directive.

00:09:36: They're responsible for it, right?

00:09:38: In the fine print, and actually the New Jersey Supreme Court Commission on

00:09:43: Artificial Intelligence covers this quite nicely in one of the publications they

00:09:46: produced.

00:09:46: On the first page, it basically says two things.

00:09:49: The first thing it says is, if you're not using AI, you're going to get out competed

00:09:53: by another law firm that is using AI.

00:09:54: That actually says it.

00:09:55: I have it highlighted in a LinkedIn post.

00:09:57: The second thing it says is just like a junior paralegal that doesn't know what

00:10:01: they're doing, you're responsible for it.

00:10:04: Right?

00:10:04: So like in the eyes of the law, they see very little difference, right?

00:10:08: It doesn't matter if it's poor note taking, if it's an intern, if it's

00:10:13: ChatGPT making the mistake, it doesn't matter, right?

00:10:15: You're still responsible for it.

00:10:16: So this is where I think there's a little, you know, overly embellished concern

00:10:22: around AI taking people's jobs.

00:10:25: because AI, it's not going to.

00:10:28: I mean, some things it might come close to, right?

00:10:31: But these are like really basic routine tasks that are just forcing evolution,

00:10:35: right?

00:10:35: It's not like some threat that's gonna come in and destroy humanity, you know,

00:10:40: unless it's being misused.

00:10:43: Yeah, of course.

00:10:44: Have you heard about the case?

00:10:46: So there is a news story that came out June of last year where a lawyer was

00:10:54: basically representing a man against an airline, you know, in some personal

00:10:59: injury suit.

00:11:00: And he used ChatGPT output,

00:11:03: I think verbatim, probably from its output.

00:11:09: And it cited fake

00:11:12: cases as defense too and things like that.

00:11:15: So how do you, so do you feel that, is this kind of what you were talking about?

00:11:21: It's still the problem that the lawyer is really behind whatever the output that

00:11:27: they're putting in front of the judges.

00:11:30: So they're responsible, not chat GPT.

00:11:32: Is that kind of a good?

00:11:35: defined by the law, right?

00:11:36: I mean, the law is not gonna hold OpenAI or ChatGPT responsible, but let's look at

00:11:41: that exact question that you asked.

00:11:43: So you said that ChatGPT was used to cite a case, okay?

00:11:48: Now going back to what I said earlier about where the data comes from, where

00:11:52: would ChatGPT be getting that data from?

00:11:57: I mean, it could hallucinate as well, right?

00:11:59: So if it's citing fake cases, it could either retrieve information that it has

00:12:04: been trained on or hallucinate if it doesn't know the answer.

00:12:07: even do one more.

00:12:10: It most likely hallucinates if it doesn't do any research beforehand, like a Google

00:12:17: search or anything that extends the context of the question.

00:12:20: Right.

00:12:23: Svetlana, you're correct.

00:12:24: But Svetlana, you hit on, right in the middle of your description, you hit on the

00:12:27: real reason.

00:12:28: You said what it's trained on.

00:12:30: So again, the AI systems are drawing from data that they know about they're trained

00:12:35: on.

00:12:35: Do you know what information ChatGPT 3.5, as an example, was trained on?

00:12:41: Of course not, because it doesn't have cited sources.

00:12:43: It's a closed model.

00:12:45: There's just no transparency behind it.

00:12:49: We know for sure that ChatGPT originally was trained on Reddit data.

00:12:55: Okay, so think about the average Reddit user as described on Reddit.

00:13:01: Just go onto Reddit and look at how people describe the average Reddit user or the

00:13:05: average Reddit mod, whatever.

00:13:07: They make fun of each other all the time.

00:13:09: Think about the average demographic.

00:13:11: Okay, now you're asking ChatGPT a question, and it was trained in

00:13:17: large part on that data set.

00:13:20: So does it really surprise you that the information's not accurate?

00:13:23: Like is it really a hallucination?

00:13:25: Like I would argue that you're using the wrong tool, right?

00:13:28: And that the data is coming from the wrong place because that's originally where it

00:13:31: came from.

00:13:32: So about a month ago, $60 million was invested by some nameless company into

00:13:39: Reddit, right?

00:13:40: A company that didn't turn a profit forever, and this unnamed company wanted to get

00:13:47: access to the data.

00:13:48: It turned out to be Google.

00:13:51: So why are people suddenly interested in all this public data?

00:13:54: I mean, it's not rocket science, right?

00:13:56: I believe most likely Google is using it to see how people interact versus the

00:14:00: quality of the data.

00:14:01: But even when Sam Altman came out and talked about the release of GPT-3.5, he

00:14:06: said it's not a finished product, it's a proof of concept, it's an example, it's

00:14:09: not really meant to be public.

00:14:10: And everyone was blown away when they looked in the mirror of their own data and

00:14:13: said, oh my God, it's alive, and it isn't.

00:14:17: And this is the biggest problem, is that people do not understand fundamentally.

00:14:21: how the answers are being generated.

00:14:23: They don't understand where the data is coming from.

00:14:25: They don't understand that things are missing.

00:14:28: It's like the fourth quadrant, the unknown unknowns.

00:14:30: If you don't know to look for a cited source, how can you justify whether or not

00:14:33: the information is accurate based on a cited source?

00:14:35: If you don't know the source is supposed to be there to begin with and nobody's

00:14:38: even thinking about it.

00:14:40: So that's another huge problem, right?

00:14:42: So there's just so many issues with using this as the main interface to answer your

00:14:45: questions. That approach is destined to fail, right?

00:14:49: That's why...

00:14:50: there's this fallacy.

00:14:51: That's why these attorneys are getting in trouble because they're starting without

00:14:55: any degree of education.

00:14:56: They're starting depending on accuracy and they really don't understand how the

00:15:00: questions are being answered.

00:15:01: It's like, would you hire a paralegal intern off the street to do this work for

00:15:04: you?

00:15:05: No, but you're doing it with ChatGPT

00:15:07: and somehow you're expecting magic.

00:15:09: I mean, that's where the problems are.

00:15:11: I think in general, I have talked to people expecting the AI solution to work out of

00:15:20: the box with their specific use case.

00:15:23: Why?

00:15:23: Why should it?

00:15:25: It's not trained on anything you've ever done, so why should it just work?

00:15:32: And yeah, that's a general understanding.

00:15:35: But...

00:15:36: I would love to emphasize one point because today is about data privacy and

00:15:41: security.

00:15:43: Public data is one thing, but I've also seen in a lot of your posts you're actually

00:15:50: warning people that there is potential misuse of their data if they are not

00:15:57: careful where they use ChatGPT or AI solutions in general.

00:16:01: Would you like to elaborate on that?

00:16:04: Yeah, I mean, this is unfortunately buried in the privacy terms and conditions of

00:16:07: every AI system that you interact with.

00:16:10: So if you've ever upgraded iOS, this is one of the things I always say:

00:16:15: think about the biggest lie that you've ever told, right?

00:16:18: No one ever wants to admit it.

00:16:19: The biggest lie that everyone who owns an iPhone has ever told is when you get the

00:16:23: iOS update and it says this is for security reasons, did you read the terms

00:16:27: and conditions?

00:16:27: I agree.

00:16:30: Okay, I mean, show me honestly, anybody that's actually read it.

00:16:34: So we fool ourselves in our ambition to deploy the latest and greatest and

00:16:39: innovate.

00:16:40: And truly, experimentation is a big part of that.

00:16:43: But unfortunately, when you're looking at the terms and conditions of a lot of the

00:16:47: major AI systems, all of them have frameworks in place, whether it's for

00:16:54: insight or data collection or just outright privacy violations.

00:16:59: We see it.

00:16:59: So you've got to be really careful.

00:17:01: I'm not an attorney.

00:17:02: Okay, so this is where I'll draw my line.

00:17:04: I'm not gonna tell you what AI algorithms you should use or shouldn't use.

00:17:08: I will tell you that if you're getting involved with any system, you need to

00:17:12: understand the algorithms that are used behind it.

00:17:15: And then as a CEO, you should be looking at, okay, what does this mean for us in

00:17:20: terms of privacy?

00:17:21: Now there's some level of implicit trust that you have to have, right?

00:17:25: The minute that data is exfiltrated from your environment, it's somewhere else,

00:17:30: okay?

00:17:31: So you give it to Microsoft, Microsoft's got it.

00:17:33: They're gonna go through and handle it with their terms of service.

00:17:35: You give it to ChatGPT and OpenAI, they're gonna handle it through their terms of

00:17:39: service.

00:17:40: So you have to be comfortable with that.

00:17:42: It doesn't necessarily mean that there's risk, but when you have systems like

00:17:45: ChatGPT where on the front homepage it says, we are training other people with

00:17:49: your data, you gotta know that, right?

00:17:52: Would that be the same?

00:17:56: Would that be the same consideration for people who subscribe to the Enterprise

00:18:02: plan?

00:18:03: according to them, no, but you gotta remember the enterprise plan is different.

00:18:09: It's a different use case, right?

00:18:11: So if you wanted to use it for that, then yes.

00:18:13: But this depends on what you're using it for.

00:18:15: If you're just going in and you just wanna use it for that, then I'm sure it's

00:18:19: probably not bad, right?

00:18:20: You'd have to read the terms and conditions.

00:18:22: I'm not 100% sure.

00:18:24: My understanding of the enterprise is that they go to great lengths to keep that data

00:18:27: private.

00:18:28: It's still another system though.

00:18:29: You're still transmitting your data off premises, off site.

00:18:33: All right, so if you're looking at like GDPR and all the rest or like any of the

00:18:36: other major compliance requirements that require you to have data processed in your

00:18:40: country on systems that you maintain and control, any third party API call that's

00:18:45: not gonna work.

00:18:46: So how do you tackle this at Emily?

00:18:49: So if let's say, you know, for organizations that really, really take it

00:18:54: to heart, and they don't wanna use, you know, the regular $20 subscriptions for

00:19:01: ChatGPT, and they're not comfortable with the enterprise plans that ChatGPT offers.

00:19:05: So I'm curious, how do you address that with your work at Emily?

00:19:10: It's so simple you wouldn't believe.

00:19:12: You ready?

00:19:14: It never leaves Microsoft.

00:19:17: That's it.

00:19:18: That's the answer.
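
To make "it never leaves Microsoft" concrete for readers, here is a hedged sketch of calling an Azure OpenAI deployment provisioned inside your own Azure tenant rather than a public consumer endpoint; the resource name, deployment name, and API version are placeholders, and this is not a description of Emily AI's internals.

```python
import os
from openai import AzureOpenAI  # same SDK, pointed at your own Azure resource

# The endpoint and deployment belong to a resource you provision in your tenant,
# so prompts and completions are processed under your Azure agreement rather than
# a consumer ChatGPT account. All values below are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

resp = client.chat.completions.create(
    model="your-gpt-deployment",  # the deployment name you created, not a public model
    messages=[{"role": "user", "content": "Summarize our data-retention policy."}],
)
print(resp.choices[0].message.content)
```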

00:19:20: It's to

00:19:21: say, if you trust Microsoft and Microsoft's cloud, then you can also trust their AI

00:19:26: implemented on it.

00:19:28: And a lot of people trust all their business data to Microsoft 365.

00:19:34: one of, if not the most valuable company in the world.

00:19:38: Okay?

00:19:39: This question is almost a fallacy, right?

00:19:43: Because you say, well, you know, I don't trust them because they're so big and they

00:19:46: can do whatever they want.

00:19:46: Okay, so you go with a smaller guy.

00:19:48: But the smaller guy's not gonna have the resources that Microsoft has to provide

00:19:54: the level of protection that Microsoft can.

00:19:54: So do you want a company that's got a greater likelihood of surface area of

00:19:57: attack?

00:19:57: Do you want to take an open LLM, like a truly open LLM, and run it locally?

00:20:02: but then you don't really understand what's going on in that code, right?

00:20:05: Because you don't monitor and deploy and install code for a living.

00:20:08: So you don't really know what that code is doing.

00:20:10: But hey, it's running locally.

00:20:12: My data is not leaving my premises.

00:20:14: Where is it?

00:20:15: You don't know, right?

00:20:17: So this is the problem, right?

00:20:18: Unless you are the guy who's doing it, it's the same problem you have with

00:20:21: encryption, right?

00:20:22: On the security side of things, you know, is it better to have an algorithm that

00:20:25: nobody knows how it works, but you know, once somebody figures that out once, it's

00:20:28: completely cracked.

00:20:29: Or is it better if everybody knows how it works and you depend on the security of

00:20:32: the key?

00:20:33: Most people would argue the latter.

00:20:36: It's the security counterpart to the AI argument about data privacy, if you're

00:20:40: really wanting to do it.

00:20:41: So it really is a chain of custody of trust, right?

00:20:44: Where do you want your data to be?

00:20:45: Who do you trust it with?

00:20:46: And then can you put reasonable security controls in place to prevent exfiltration

00:20:51: and manipulation?

00:20:52: So you hit the confidentiality, integrity, and availability that's required.

00:20:56: That's the CIA triad, right, for security of your data.

00:20:59: And of course that's going to involve a third party unless you're the guy writing

00:21:01: your own LLM.

00:21:03: But good luck with that.

00:21:06: Even Sequoia Capital won't get behind you.

00:21:08: One of the largest Silicon Valley investors in the world will not get behind

00:21:11: a brand new AI engine algorithm.

00:21:14: They're getting behind companies that are deploying AI.

00:21:17: By the way, we have no investors for the record.

00:21:19: We're not looking for any.

00:21:20: But they're not doing it because the battlefield is just too competitive and

00:21:25: people are getting slaughtered left and right.

00:21:27: And at the end of the day, it's going to be the people that are able to use AI

00:21:32: who are going to be the guys that win.

00:21:34: Yeah, and it is a resource issue, you're right.

00:21:38: That's why I personally look more positively on Facebook being in the open LLM

00:21:45: space because at least they have the resources to provide proper compute.

00:21:51: But yeah, it's definitely interesting.

00:21:54: From a privacy perspective, if I want to deploy my solution, like I have an

00:22:00: internal use case I want to solve.

00:22:05: Having it on Microsoft is one...

00:22:07: but more in general, what are the things I...

00:22:13: to be sure about my data.

00:22:18: Are there some key points you would suggest?

00:22:22: Since you brought up Facebook, let's just take a moment of silence to remember that

00:22:27: Mark Zuckerberg started his company by saying, they trust me dumb schmucks.

00:22:32: Okay.

00:22:32: But he didn't say schmucks.

00:22:34: That's the beginning of his company.

00:22:35: Okay.

00:22:35: If you don't know that, now you know.

00:22:37: So go back and look.

00:22:39: Now they've got a ton of money by collecting people's data and they are now

00:22:43: selling an AI system.

00:22:45: Okay.

00:22:45: Might be open, but hey, we trust them.

00:22:50: Okay, so I'm not huge into companies that start like that.

00:22:55: It's no surprise, I've talked about this at length.

00:22:57: Now, the question that you're asking, Ed, in terms of the use cases: the biggest

00:23:03: problem that I've seen with deploying AI systems, and the easiest thing to get

00:23:09: around in terms of a successful deployment in this regard, is that to succeed, you have to

00:23:15: start with the use case and the data, not the tool.

00:23:18: And I've said this a few times, but...

00:23:20: it's worth clarifying.

00:23:22: Starting with the product first or tool first approach will fail.

00:23:27: You can't come in and say, I've got AI, I'm going to deploy it into the

00:23:31: environment.

00:23:32: There's this picture I'll have to get to you guys.

00:23:33: It's a cartoon where it's got a guy in the back of a boat with a little martini glass

00:23:38: and he's talking to a girl and there's a flag flying off the back and there's like

00:23:41: 10 people on the front of the boat and the girl says, man, how'd you get money for

00:23:44: this project?

00:23:45: And the guy goes,

00:23:46: I don't know, it was the same thing I was going to do except I just called it AI and

00:23:49: the back of the boat says AI.

00:23:51: You know?

00:23:53: So, I mean, everybody wants to figure out how to do it because they're enamored with

00:23:57: the potential but nobody really understands what they're doing unless

00:24:00: they've thought it through.

00:24:02: So the number one thing is the use case.

00:24:03: What problem are you trying to solve?

00:24:05: Because AI is not the solution to every problem.

00:24:08: You know, it's the old hammer and nail question.

00:24:09: You're running around with the hammer and everything looks like a nail, you

00:24:12: know?

00:24:12: Like it's not, it's just not...

00:24:14: it's not always the best solution.

00:24:16: So I think the use cases are really key and you mentioned those before.

00:24:19: So to get into some of those that you asked about at the very beginning of the

00:24:23: chat, what are some of these use cases?

00:24:25: Anything that's generative that you need creative feedback on is a great use for

00:24:30: AI.

00:24:31: Anything where you need to summarize content is a great use for AI.

00:24:36: Anytime that you need to extract out bullet points, rephrase, correct your

00:24:41: grammar, these are good uses for AI.

00:24:44: Today, without a properly trained data set, asking it for advice?

00:24:49: Bad.

00:24:50: Asking it to do math for you.

00:24:52: Bad.

00:24:53: Asking it for advice that involves math.

00:24:55: Probably the worst.
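
To make the contrast concrete for readers, here is an illustrative sketch of the "good fit" uses just listed (summarizing, extracting bullet points, rephrasing) as opposed to asking for advice or math; ask_llm is a hypothetical placeholder for whichever chat-completion call you actually use, not a specific product's API.

```python
# Illustrative prompts for the uses described as good fits: summarizing and
# extracting structure from text you provide yourself.
MEETING_NOTES = """Discussed Q3 security review. Pen test scheduled for August.
Need outside counsel to confirm retention rules before migration."""

summarize_prompt = f"Summarize in two sentences:\n\n{MEETING_NOTES}"
bullets_prompt = f"Extract the action items as a bulleted list:\n\n{MEETING_NOTES}"

# ask_llm(summarize_prompt); ask_llm(bullets_prompt)  # supply your own client here
```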

00:24:56: Last week I posted up a question, if you guys saw about the towel drying.

00:25:00: It was a very simple question.

00:25:02: I said, this is a very simple question.

00:25:04: If it takes 15 minutes to dry, or sorry, if it takes one hour, pardon me, to dry 15

00:25:11: towels in the sun,

00:25:13: so it's one hour to dry 15 towels.

00:25:15: How long does it take to dry 20 towels?

00:25:21: What do you guys think?

00:25:22: Like an hour 15?

00:25:23: No, actually, no, it's an hour.

00:25:25: It's an hour.

00:25:25: I'm sorry.

00:25:25: Why am I being stupid?

00:25:28: Yeah.

00:25:28: What do you think, Ed?

00:25:29: No, I think that's debatable, because that's one that goes around a lot

00:25:35: of times, often with shirts: to dry five shirts, how long does it take for 20? It

00:25:40: depends. Do we have enough space to do it in parallel?

00:25:43: Then it's still one hour. Do you have it sequential? Then it takes longer.

00:25:47: So yeah, I think that's an odd example, but yeah, in general, enumerations,

00:25:53: stuff like that,

00:25:55: AI is not that big in that.

00:25:56: It gets better, but sometimes you give it a more complex math problem and it gives

00:26:02: you the right steps and the right thought process, but gives you the wrong result.

00:26:06: So yeah, that's definitely not a strong suit.

00:26:09: I always see it like a rule of thumb, like: does it involve language understanding?

00:26:17: Is your task improved with automated language understanding, like automatable

00:26:22: basically?

00:26:24: And could it bring you a benefit in understanding unstructured data?

00:26:30: And extracting something structured from it which you can work with.

00:26:33: So that's basically how I always viewed it.

00:26:37: yeah.

00:26:38: So I think you're right, and I think that the fact is that there is a clear answer

00:26:43: if you make some assumptions.

00:26:44: The sun is the same, you're not putting the extra five towels in the shade, blah,

00:26:48: blah, blah, right?

00:26:48: All that stuff, the conditions don't change.

00:26:50: So that's where you will get back what I call the confidently incorrect answer,

00:26:55: right?

00:26:56: because you get back the math formula, so Svetlana, you're correct.

00:26:58: It's exactly how the AI started.

00:26:59: It was your first thought.

00:27:00: It started writing out the math formula, except it didn't know any better, right?

00:27:05: So it just kept giving out this incredibly authoritative looking answer.

00:27:08: And this is where it can kind of fall apart.

00:27:09: So you want to talk about the not so great use of AI if you don't know what's going

00:27:14: on.

00:27:14: And this is where a lot of people get confused.

00:27:17: You ask AI a question.

00:27:18: When I say AI, I mean most major AI LLM systems like Copilot and Perplexity and

00:27:23: others.

00:27:25: the answer they give you back looks so authoritative, particularly if it's not a

00:27:28: topic that you understand, you assume it must be true.

00:27:31: And that's kind of like what Mark Twain said, right?

00:27:33: It's easier to fool somebody than to convince them that they've been fooled.

00:27:36: That's actually not exactly what he said.

00:27:38: It was part of his book, but I have it written out in my Elephant in the Room

00:27:42: post from earlier.

00:27:43: But it's the general sentiment, right?

00:27:45: So the idea is that if you're not comfortable with the subject matter that

00:27:49: you're asking it about, and you get back the answer that looks correct,

00:27:54: you're misleading yourself and you don't even know it.

00:27:57: And I think that's the biggest challenge because once you lock that information in

00:28:01: you go, this is the way it is, you start making other decisions based on that.

00:28:06: And it's hard, it's really, really, really hard for people in general to go back and

00:28:10: to remember where they learned things from.

00:28:13: It's a big challenge.

00:28:16: We kind of talked about one of the things, you know, with AI, is that it doesn't know

00:28:23: any better because you've prompted the system to provide an answer and whether or

00:28:29: not it has information to back that information up, it's going to respond

00:28:33: because you asked it and it's going to come up with an answer.

00:28:38: And one example that we've provided before is unlike children,

00:28:42: So if you ask them, you'd be like, hey, Tommy, what do you think the subject is?

00:28:46: And they'll look at you weird and say, I don't know, and run away.

00:28:50: With AI systems, they don't do that.

00:28:54: They don't respond by default to say, oh, this is not data that I currently have.

00:28:59: They just take something that's maybe potentially related and become very

00:29:04: creative and sound authoritative, as you've mentioned.

00:29:08: So it's hard to decipher whether

00:29:10: the information, the output, was in fact based on information that it was trained on

00:29:15: versus completely, like, made up.

00:29:19: But one of the things I think we've kind of talked about before is RAG and citing

00:29:24: references and citing documents.

00:29:26: Is that a potential way to again, build, like build trust, but also validate or

00:29:35: kind of understand, okay, well, should I be trusting this output?

00:29:38: Is this something that

00:29:40: does look reasonable as an output.

00:29:43: So do you think that that's a good alternative or do you think that we still

00:29:46: have significant work to do beyond source citations and the RAG paradigms?

00:29:54: Fantastic question.

00:29:56: You're not gonna like my answer, it's both.

00:29:57: I'll give you an easy example, a very easy example.

00:30:01: Retrieval augmented generation, RAG, where you're pulling data from a document and

00:30:05: you're getting a cited source, helps a lot.

00:30:08: But here's where it falls apart.

00:30:10: You have two versions of the same document with similar content.

00:30:15: One's a month older, one's a month newer.

00:30:17: Okay, so now your mind starts racing.

00:30:18: Well, I know how to solve that problem, I'll take the newer document.

00:30:20: What if the newer document hasn't

00:30:22: been validated, what if somebody uploaded it and they weren't supposed to?

00:30:26: And the changes there aren't supposed to be reflected.

00:30:28: You need to use the old information.

00:30:31: Okay, well let me go through the change control process.

00:30:33: Okay, well what's the change control process?

00:30:35: Again, now you're getting to data.

00:30:37: Okay, so now the question is, is it really RAG?

00:30:39: And this is why a tool -first approach fails.

00:30:41: Because you can't go in and say, I'm putting a tool in and it's gonna solve my

00:30:44: problem, because it won't.

00:30:46: Right, the problem you're dealing with is a data problem.

00:30:48: So until you can solve for that human element of data qualification, the

00:30:53: business process optimization, right, behind the scenes, you're not going to fix

00:30:59: anything with AI.

00:31:00: In fact, you may complicate it.

00:31:02: And the worst part is that humans in general, like trusting things, you know,

00:31:06: it's like glass.

00:31:08: Once it's broken, it's very hard to put back together again.

00:31:11: So you start looking at an AI system and it starts giving you, you know,

00:31:14: incredulous answers.

00:31:16: And suddenly you say,

00:31:17: I can no longer trust this thing for anything, right?

00:31:20: Even if it's not the AI's fault, even if it's bad data that you're putting into it,

00:31:24: you know?

00:31:24: So I think that's really the challenge.

00:31:26: So, you know, unfortunately, I would love to tell you it's one or the other.

00:31:29: It's both, for those reasons.
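
As a small sketch of the version problem Michael describes: even with cited sources, the retrieval layer has to know which version of a document is authoritative. The approved flag and version metadata below are illustrative assumptions; in practice they come from whatever change-control process the business actually runs, not from the model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Chunk:
    doc_id: str
    version: int
    approved: bool   # set by your change-control process, not by the model
    updated: date
    text: str

def eligible(chunks: list[Chunk]) -> list[Chunk]:
    """Only retrieve from the newest *approved* version of each document.
    'Newest upload wins' is exactly the failure mode described above."""
    latest: dict[str, Chunk] = {}
    for c in chunks:
        if not c.approved:
            continue
        current = latest.get(c.doc_id)
        if current is None or c.version > current.version:
            latest[c.doc_id] = c
    return list(latest.values())

# Answers can then cite doc_id plus version, so a reader can check the source.
```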

00:31:31: So what would be the best practices?

00:31:33: So I know that, as you mentioned, so you do need to have, again, a strategy for

00:31:38: managing data, managing versions, accuracy, probably taking care of bias and

00:31:44: things like that.

00:31:45: So what are some best practices for

00:31:47: doing it right?

00:31:48: So should you have a data team that's really strictly focused, before even

00:31:55: implementing AI or even thinking about the solution, really focused on the data to

00:32:00: figure out what's relevant for my use case?

00:32:04: Is this data relevant to the problem that we're trying to solve?

00:32:08: And how do you ensure quality of the data?

00:32:10: And then is this a team that you maintain over time that kind of maintains and

00:32:15: oversees the quality of the data?

00:32:17: Is there a way to automate kind of that process with some, you know, monitoring

00:32:22: tools and things like that?

00:32:23: So just curious, what are some best practices that you recommend to clients

00:32:27: that you work with?

00:32:29: that's a really good question.

00:32:30: I know we only have limited time, so I'll try to condense this down.

00:32:33: The number one thing is you have to talk to somebody who knows what they're talking

00:32:37: about.

00:32:38: I've mentioned this before, but there are a lot of people out there that claim that

00:32:42: they're AI experts who don't know what machine learning is.

00:32:45: who don't know what a knowledge graph is, who don't understand that the fundamentals

00:32:47: of AI existed prior to the 1990s.

00:32:49: They see GPT and they go, oh my God, I can connect it to Zapier and it works, and,

00:32:53: take a look at this, it's magic, like chat with your PDF, and this is magic.

00:32:58: Those are not the guys you wanna talk to.

00:33:01: You also don't need to talk to somebody that's super far in the weeds that's like

00:33:05: a complete LLM optimization gear head.

00:33:07: I love those guys to death.

00:33:08: I have several of them on my team, but they're not the people that you wanna be

00:33:11: leading the charge when it comes to business requirements.

00:33:14: So you need to find the right person or the right team to talk to who understands

00:33:18: what AI can do and what AI cannot do.

00:33:23: What AI is good at and what AI is probably not going to be that good at.

00:33:28: And then that same person needs to understand what problem are you trying to

00:33:32: solve and is AI a good fit for it at all?

00:33:36: So several organizations are hiring chief AI officers.

00:33:39: They're giving part-time AI officer roles.

00:33:42: Some folks are hiring.

00:33:43: you know, part-time AI consultants, that's all wonderful.

00:33:46: I would just stress leading with the business process, leading with the

00:33:50: strategy that you're trying to solve for over a period of time, which is not gonna

00:33:54: be a week or two, okay?

00:33:56: I've been in the room with so many executives over the course of my career.

00:34:00: You come in and say, oh, we have this really important problem we need to solve,

00:34:03: and they start talking in weeks or months, and you're like, there's no way.

00:34:07: There's no way that a problem that's this big, with this many horrible tentacles all

00:34:11: through your organization,

00:34:12: is going to be solved in three weeks.

00:34:14: So right off the bat, as an executive consultant, you need to understand that

00:34:18: that's a flag, that's a red flag.

00:34:21: So these are the types of pre-qualification questions, and the right

00:34:24: person will ask them, everything I mentioned, and there's a lot more to it

00:34:28: behind the scenes.

00:34:29: But it starts with the right relationships.

00:34:31: So I would say that's probably the number one thing.

00:34:34: Unfortunately, we already chewed through half an hour, so it was kind of fast.

00:34:43: Michael, really thank you very very much.

00:34:48: you

00:34:53: actually works with that stuff.

00:34:56: Thank you very much.

00:34:57: We, you know, we don't see, maybe on my side, kind of the ins and outs, the

00:35:03: details that you kind of provided, especially on the data.

00:35:06: I think I've learned a lot.

00:35:09: So I appreciate your time and working with us because again, we had the false start

00:35:13: before and we figured it out, but really appreciate your insights.

00:35:19: I think that our audience will definitely benefit from your advice and kind of the

00:35:24: stories that you've told us

00:35:26: and the advice that you've shared.

00:35:29: So truly appreciate you and we really enjoyed having you on the show.

00:35:34: it was a pleasure.

00:35:34: Thanks for having me guys.

00:35:36: Thank you.

00:35:37: Bye bye.
