Your AI projects are bound to fail - address these common mistakes - EP 018

Show notes

FREE AI Risk Assessment Guide (Download): https://www.sparkchange.ai/navigating-ai-risk-guide

📍 Join Our Live AI Workshop (May 22): https://www.sparkchange.ai/join-our-live-ai-workshop-may-22-how-to-maximize-roi-reduce-costs-and-derisk-ai-projects
“How to Maximize ROI, Reduce Costs, and De-Risk AI Projects”

📆 Book a Free Discovery Call: https://www.sparkchange.ai/appointments

Most AI projects fail—and it’s not because of bad models. It’s the risks you didn’t see coming that sabotage success. In this episode, we’ll show you how to uncover them early and build AI systems that actually deliver.

In this Episode, What You’ll Learn:
- The 7 risk zones that quietly derail AI adoption
- How to spot AI hype masquerading as strategy
- When feasibility, compliance, or user trust become project killers
- Why user resistance—not just technical error—is your real enemy
- How to use the Clear7 Risk Grid to prioritize mitigation efforts

---------------------

Website: www.sparkchange.ai

Show transcript

00:00:01: Edgar: Hello and welcome to a new year and a new season and a new

00:00:06: episode of the AI Boardroom.

00:00:09: And welcome back.

00:00:10: Thanks for tuning in.

00:00:11: Of course with me, my wonderful co-host, Svetlana.

00:00:16: Svetlana: Hello.

00:00:16: Welcome back.

00:00:17: Uh, happy to be here for season two.

00:00:20: Uh, of our podcast series.

00:00:22: Yeah.

00:00:22: So

00:00:23: Edgar: yeah, we, we tried, um, we tried to improve.

00:00:26: We have some, some new stuff for, for you in the pipeline, um, starting today with

00:00:35: a very special topic that's near and dear to your heart, Svetlana, as far as I know.

00:00:35: Svetlana: Yes.

00:00:35: Uh, AI risks.

00:00:37: I feel like that's one of the hottest topics right now as more and more

00:00:41: companies are implementing it.

00:00:42: And AI, um, is a bit riskier, um, to deal with, uh, which we'll talk about

00:00:49: seven key sources of risks, um, that any company should really pay attention

00:00:54: to as they're kind of taking on AI projects into their organization.

00:00:58: So.

00:01:00: Yeah, this is, uh, a really, I think, timely topic as, as more

00:01:04: organizations want that positive ROI, but they also wanna mitigate risks.

00:01:09: Edgar: So, um, but it's not, it's not only like talking about the risks, it's,

00:01:12: this episode is about risk assessment.

00:01:14: Right.

00:01:15: Svetlana: We have a guide, which we'll get to.

00:01:17: Exactly.

00:01:18: Well, I think we'll talk about the seven topics and how do you actually... what seven

00:01:23: areas of risk could AI be introducing? And

00:01:26: those ultimately are sources of, or points of, failure in the system.

00:01:30: Right.

00:01:31: So, and we do have a systematic way of kind of laying out the lay of

00:01:36: the land of, of those seven risks.

00:01:38: And I think there's flexibility to add additional ones and

00:01:41: really understand which

00:01:43: risks you tackle first, second, last, and then how critical those are to

00:01:48: the success of your AI implementation.

00:01:50: So we do have a systematic approach.

00:01:52: There is a grid, there's a system mm-hmm.

00:01:54: Behind that as well.

00:01:55: But I feel like we need to get into the seven.

00:01:58: Yes, yes, yes.

00:01:58: Areas first.

00:01:59: Edgar: Yeah.

00:01:59: Just, uh, just wanted, uh, everyone to know when you finish this episode,

00:02:03: you should be more aware about

00:02:05: what to look for in the first place.

00:02:07: Svetlana: Okay, let's get going.

00:02:08: Um, so I think it's important.

00:02:10: So, why, um, I think to kick things off, why are

00:02:15: AI risks different from traditional software development?

00:02:19: Um, one of the key areas that I, I feel like, um,

00:02:24: it comes down to data.

00:02:25: So in traditional software development, you are kind of, um, creating rules

00:02:31: or rule-based logics, and you're kind of validating those algorithms.

00:02:34: You have kind of checks, checks and balances along the way before you

00:02:37: actually deploy things into production.

00:02:39: But with AI, you're dealing with data.

00:02:42: And so not always do you get the clean data.

00:02:45: Um.

00:02:46: How your algorithms are actually getting the data,

00:02:49: and then what, especially, large language models do

00:02:52: with that data, is also, um,

00:02:55: you know, could lead to different consequences, right?

00:02:57: So such as hallucinations, maybe retrieving the wrong

00:03:00: information and things like that.

00:03:01: So you can't always control for all kinds of use cases and things that

00:03:06: could, could potentially go wrong.

00:03:07: So, um, AI is just different than the way that you would manage those projects, but

00:03:12: also how you are kind of evaluating risks.

00:03:14: 'cause they are different from,

00:03:17: um, traditional software development.

00:03:19: You also, um, kind of, there's a, a source of, I would say, like AI

00:03:24: hype, um, with, with AI projects.

00:03:26: So sometimes people tackle projects or introduce unnecessary risk with

00:03:30: AI into solutions that, uh, should not be solved with artificial

00:03:34: intelligence in the first place.

00:03:36: Um, they're just too simple of tasks.

00:03:38: Um, that could have been solved more simply with traditional programming

00:03:42: than AI.

00:03:43: So it's just, you know, misfits sometimes, you know, you're

00:03:48: introducing new risk because of, uh, wrongly applied, you know, uh,

00:03:53: techniques to the problems. But then also, you know, the types of models

00:03:57: and how they're dealing with data and then how that data is transmitted.

00:04:00: You're also dealing with tools or API calls, you know, dealing

00:04:04: over the public internet.

00:04:05: You know, you're, you're talking about, you know, these large language

00:04:08: foundational models being used as, you know, APIs and things like that.

00:04:12: So.

00:04:13: For larger enterprises who are in especially large, um, highly

00:04:17: regulated spaces, this becomes also, um, a key consideration.

00:04:22: And I, I would say one of the reasons why they take a little

00:04:25: bit longer to, to deploy some of their solutions because they are

00:04:29: checking all of these potential new

00:04:31: introductions of risk.

00:04:33: Um, and some of them I feel like I, I have spoken to actually one executive in

00:04:37: the past from a big, you know, fortune 500 company, and he said, you know, it's,

00:04:41: it's not the risks that I'm afraid of.

00:04:43: It's the risks that I don't know about.

00:04:45: Mm-hmm.

00:04:46: Uh, that is the biggest fear that I have in my mind.

00:04:50: So I,

00:04:50: Edgar: I I will, I would love to ask a question.

00:04:52: So,

00:04:53: Svetlana: yeah.

00:04:55: Edgar: Before we talk about the specifics, like mm-hmm.

00:04:58: Who, who is this for?

00:04:59: Like, who might I be for this to be relevant to me?

00:05:06: Svetlana: Yeah, I think that's a great question.

00:05:08: So if you are an executive, um, in a tech company who is undertaking,

00:05:13: um, you know, technical projects with artificial intelligence, so

00:05:17: if you are, or you've attempted,

00:05:20: Uh, artificial intelligence before, maybe you've, you've done chat

00:05:23: bots and it's just not sticking, you're not really sure why.

00:05:28: Um, or how, and what points of failure actually, like, led to your,

00:05:32: um, kind of, the failing or the stalling of those projects, and how

00:05:36: you can proactively mitigate those?

00:05:38: I would say it's for technical

00:05:39: founders, their teams, or executives that are still exploring, um, kind

00:05:44: of the artificial intelligence space.

00:05:45: And they want to understand proactively what things do they need to look out

00:05:50: for when they take on these AI projects.

00:05:52: And do we have the tools under our belt?

00:05:55: Do we have the skill sets to cover for those risks?

00:05:57: Or is, are these areas that I need help with?

00:06:00: Um, I feel like more and more companies in the technical realm,

00:06:04: you know, even if you're dealing mostly with traditional software

00:06:06: development, are taking on, you know, um, these AI agents or introducing

00:06:12: AI agents into their workflows.

00:06:13: So this is becoming more and more of a common, I would say, undertaking, but

00:06:19: again, it's like the you-don't-know-what-you-don't-know type of thing.

00:06:21: Um, and this is where we're hoping to close that gap for you to kind

00:06:25: of have a more structured approach for how you think about risk.

00:06:28: Edgar: Yeah, it's, it's, uh, especially when you think about, like, actual agent

00:06:33: applications or agentic applications, they grow in complexity, like,

00:06:38: quite fast.

00:06:39: Um, so, uh, yeah, there's a lot of, um, a lot of stuff to talk about and talk to.

00:06:44: I also have some, uh, uh, had some anecdotes in my mind when

00:06:48: I first read the, the topics.

00:06:50: Uh, so yeah.

00:06:52: Um, I would say let's, uh, let's dive in.

00:06:54: Could you give us an overview, um, what we are looking at?

00:06:58: Svetlana: Yes.

00:06:59: So, um, in the framework that we're gonna go over, there are seven core

00:07:04: areas of risk.

00:07:05: And this comes from, you know, uh, experience, uh, research and, you

00:07:10: know, lots of, kind of, big hours being actually in the field of implementing

00:07:14: AI projects, that we've kind of consolidated into seven areas.

00:07:19: Think of it as like a framework of seven different areas.

00:07:21: What could be,

00:07:23: again, uh, a point of failure in your AI implementation.

00:07:27: So we call it the Clear7 AI Risk Framework.

00:07:32: Um, and it covers, again, seven areas of different kind of parts

00:07:36: of your AI implementation that you need to pay, you know, kind of,

00:07:40: pay closer attention to, um, and we'll get into it.

00:07:43: I don't wanna kind of, uh, allude or give away too much.

00:07:46: Um, but I think as we kind of walk through it, I think you'll

00:07:49: find it very, very interesting.

00:07:51: And I think hopefully it'll start resonating in, in the idea, or the hope

00:07:55: is, is that, you know, as we kind of go through it, you'll kind of get the ah,

00:07:59: oh, was that what, what was happening?

00:08:01: Um, I've, I've, I've seen that.

00:08:03: So, um.

00:08:05: And we'll have some mitigation strategies.

00:08:07: Yeah.

00:08:07: Let's get, let's get started.

00:08:10: So the first, um, area of risk is, uh, feasibility.

00:08:14: So, and the gist of this question, or the gist of this kind of, uh,

00:08:19: risk, uh, area is can we actually build what we're undertaking?

00:08:24: So is there sufficient data to actually take on this project?

00:08:29: Do we have the infrastructure, do we have the actual, you know, the necessary

00:08:33: compute in our cloud environment to actually support the, these models?

00:08:39: And then, um, are we just kind of

00:08:42: ready to, to take on this project?

00:08:44: Do we know the tools in the market to actually orchestrate

00:08:48: the relevant solution for the objective that we're hoping to solve?

00:08:52: Edgar: Yeah.

00:08:54: This also has, like, different depths to it, but at the basis is, like,

00:08:59: data.

00:09:00: I think it's the biggest point of that.

00:09:02: Infrastructure, um, tech, uh, and technology of course is one too, especially technology, like,

00:09:07: are the models that we have access to capable of doing enough, or good enough?

00:09:14: So they don't have to be perfect, but they have to be good enough.

00:09:17: Um, and of course, data is like the main point.

00:09:21: I always give this, um, idea of like, I want to build a marketing tool.

00:09:26: Um, mm-hmm.

00:09:26: And this marketing tool should, uh, like, give me ad

00:09:30: copy.

00:09:31: Yeah.

00:09:31: If I don't have proper ad copy to feed in as an example of good ad copy, it

00:09:37: will be generic stuff that's unusable.

00:09:39: So, um, yeah, do I have the data?

00:09:42: If not, maybe I don't build a marketing tool.
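
[Editor's sketch] A minimal illustration of what Edgar's "feed in examples of good ad copy" point can look like in practice, using few-shot prompting. This is a hedged sketch, not the hosts' implementation: the OpenAI Python SDK, the model name, and the example ads are all assumptions made for illustration.

```python
# Hypothetical sketch: few-shot prompting with your own ad copy as the "data".
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and the example ads are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

# A handful of real, high-performing ads from your own campaigns.
# Without examples like these, the model falls back on generic marketing speak.
good_ads = [
    "Ship invoices in 30 seconds, not 30 minutes. Try Acme Billing free.",
    "Your CRM, minus the busywork. Acme syncs contacts while you sleep.",
]

examples = "\n".join(f"- {ad}" for ad in good_ads)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You write short ad copy in the same voice as the examples provided."},
        {"role": "user",
         "content": f"Here are ads that performed well for us:\n{examples}\n\n"
                    "Write three new ad variants for our reporting feature."},
    ],
)

print(response.choices[0].message.content)
```

If you have no past ad copy to put in `good_ads`, the output quality question Edgar raises answers itself.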

00:09:46: Svetlana: And what I've seen also happen is that, you know, um,

00:09:48: executives might have these moonshot ideas.

00:09:51: So I think part of, like, also the engineering side is that, yes, I have

00:09:54: the data, but it's a moonshot idea.

00:09:57: You're looking for something that is, like, completely autonomous, agentic; you're

00:10:00: thinking AGI-level type of stuff.

00:10:03: It's just not fricking feasible.

00:10:04: Um, yeah.

00:10:05: Or you will have to, you know, for the type of use case, the return on

00:10:11: investment is just gonna be so minimal that it's like, why would we actually

00:10:15: do this? But we'll get into the, um, kind of that, that side of

00:10:20: it, the viability part of it.

00:10:21: But sometimes it's just, like, a moonshot idea that I don't think the

00:10:25: technology of today can actually solve.

00:10:27: So that's part of the feasibility and can we actually build

00:10:32: what we're aiming to build?

00:10:34: Edgar: Yeah.

00:10:34: Another thing is, um, we also have, like, trade-offs, and these trade-

00:10:38: offs, like, sometimes you have to, um,

00:10:42: like, lose some speed for more precision.

00:10:45: Mm-hmm.

00:10:46: Um, you could also find that, yeah, just from my rough calculation, the

00:10:50: costs are already like, barely feasible.

00:10:55: Um, if, if I, if I go even deeper and, and if you also like just

00:11:00: add some buffer, you might end up in a place where you lose money.

00:11:04: So, um, and that's, it's like, for example, um.

00:11:08: AI coding is like big.

00:11:10: There's a lot of costs.

00:11:11: Like I pay several hundred bucks a month for the inference for AI

00:11:15: coding, which, for me, is worth it.

00:11:19: Mm-hmm.

00:11:19: But it's just like, there can be substantial costs, even

00:11:22: on a small scale already.

00:11:24: Uh, you have to, you have to consider that.
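
[Editor's sketch] Edgar's point about inference costs adding up is easy to sanity-check with a back-of-the-envelope calculation before committing to a use case. The function below is a hedged illustration; every number in the example call is a made-up placeholder, and you would substitute your provider's actual per-token prices and your own traffic assumptions.

```python
# Back-of-the-envelope inference cost estimate (all numbers are illustrative
# placeholders; plug in your provider's real per-token prices and your own
# traffic assumptions before drawing any conclusions).

def monthly_cost(requests_per_day: float,
                 input_tokens: float,
                 output_tokens: float,
                 price_in_per_1k: float,
                 price_out_per_1k: float,
                 buffer: float = 1.3) -> float:
    """Estimated monthly spend in dollars, with a safety buffer for retries,
    longer-than-expected prompts, and evaluation runs."""
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * 30 * buffer

# Example: 2,000 requests/day, ~3k input + 500 output tokens each,
# at hypothetical prices of $0.005 / $0.015 per 1k tokens.
print(f"${monthly_cost(2000, 3000, 500, 0.005, 0.015):,.2f} per month")
```

If that rough number already looks "barely feasible", as Edgar puts it, the viability question deserves attention before the build starts.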

00:11:26: Svetlana: Yes.

00:11:27: And I would say also, I think, we'll, we'll get to it, but also part of the

00:11:31: feasibility is that, um, especially with reasoning use cases that require

00:11:37: multi-steps to actually, you know, get to the output that you're actually desiring.

00:11:41: Um, you're trading latency

00:11:44: of the responses in order to get higher quality of output,

00:11:47: right? There is a trade-off.

00:11:48: So it's a bit unrealistic.

00:11:50: Again, can we actually fulfill, is it feasible for us to do

00:11:53: sub-second response times?

00:11:54: Sometimes it is not.

00:11:56: Um, just because of the, it's not a dependency on how quickly

00:12:00: your infrastructure moves or how quickly, um, or how

00:12:04: skilled your internal team is.

00:12:06: It's just that the inference of these

00:12:08: models is slower, because in order for them to optimize for higher accuracy

00:12:13: of output, you are sacrificing latency.

00:12:16: So again, those are the things that you have to look for.

00:12:19: Edgar: Just, just in case you don't know:

00:12:22: inference is the process of, like, actually, um, getting output from the model.

00:12:27: Svetlana: Yes.

00:12:27: And latency is, is how long does it take to actually, um, get the output or

00:12:32: get the response from, from the system.

00:12:34: So yeah.

00:12:35: Okay.

00:12:36: Edgar: Thank you.

00:12:37: That's feasibility.

00:12:37: What, what's, what's the next one on the list?

00:12:40: Svetlana: Yeah.

00:12:40: The next area of risk is desirability risk.

00:12:43: Right.

00:12:44: So will they actually use it if we build it?

00:12:46: So if, um, I. I would say if, if you've built something, you know, it's, it's a

00:12:52: shiny object syndrome and everyone else in the industry has it and you're like,

00:12:57: oh, I think it'll be really cool if we introduced it into our organization.

00:13:01: Um, and the use case you, you know, that you saw was in, in the marketing

00:13:05: agency and you work in the industrial area, and that's not the main.

00:13:10: Business of you.

00:13:12: Yes.

00:13:12: It's very cool.

00:13:13: And it works for them.

00:13:14: It doesn't mean that it actually works for your organization.

00:13:16: So just because it's feasible and then you're able to actually build

00:13:20: it, the, it doesn't mean that it's gonna be actually adopted or, uh,

00:13:24: it's desirable to your end users.

00:13:26: And so when you're actually prioritizing use cases, it's, um, desirability

00:13:31: needs to be high. And the reason why it's important is because,

00:13:35: um, you want more iterations, you want more use of these AI systems for

00:13:40: the, for them to improve over time.

00:13:42: So if you're building something that doesn't actually get used, you're

00:13:47: kind of wasting away, um, kind of the investment that you've put into bringing

00:13:52: that solution in the first place.

00:13:53: So desirability needs to be there in order for, um, for you to align

00:13:59: to the people who are screaming, we need this tool. Because if you

00:14:02: have hungry consumers, um, and you give them the tool, you're, you have

00:14:07: higher chances they'll actually use it.

00:14:09: Edgar: As well as consumers, also, like, if you have users internally, if you wanna

00:14:12: build up an internal tool or something, or automate something, uh, like, if the

00:14:16: people don't have trust, which is a big part of acceptance and usability, um,

00:14:22: then you have to think about how to build trust, how to build understanding

00:14:25: for this, um, how to also deliver the, um, user experience as a whole.

00:14:31: Uh, of which AI is just a small part.

00:14:34: Mm-hmm.

00:14:34: If you build an AI solution, like the integration around it is

00:14:37: basically 95% of what you have to do and what you spend your time with.

00:14:42: And then you have this intelligent engine in the middle, like triggering

00:14:47: stuff, but you still have to do all the integrations, which is just like,

00:14:51: I, I had a workshop with a company; they built a complete solution

00:14:56: without any RAG or anything, because they spent most of

00:14:59: their time integrating properly.

00:15:01: And then I ca I came in, helped them with the proper advanced AI techniques,

00:15:06: and then they were able to, uh, quickly integrate them because they

00:15:09: did a lot of work beforehand, so.

00:15:11: Hmm.

00:15:12: Um, so, yeah.

00:15:13: Um, uh, yeah, that's like UX integration is a big part.

00:15:18: Therefore, only invest your time if you really think you

00:15:20: can get a lot of value from it.

00:15:22: Svetlana: Yes.

00:15:23: And I think, um, we'll probably have an episode some sometime in the future for

00:15:27: how do you develop and what are different ways of integrating AI into workflows.

00:15:32: And I, it's a deep passion of mine to talk about the project,

00:15:36: uh, this, this type of topic.

00:15:37: So, uh, would love to kind of dive into that as a, almost

00:15:41: like a standalone episode.

00:15:42: But one of the things that I did wanna share, 'cause I think we'll,

00:15:46: we'll continue down the list, but.

00:15:47: This resource that of the Clear seven framework is gonna

00:15:51: be available to our audience.

00:15:53: I wanna make sure that, you know, for folks who are kind of new to ai, um,

00:15:58: kind of risk assessment, monitoring, just their undertaking this for the first

00:16:02: time, we've created a 22-page or 23-page document, um, that covers this at length.

00:16:09: So everything that we're covering in today's episode and more, um.

00:16:12: And I just wanna make sure that we disseminate that knowledge.

00:16:16: We've created it for you to accompany the, uh, the episode.

00:16:20: So we'll link it in the description on the YouTube channel.

00:16:23: And how would they find it on, on the... for the podcast, for people listening?

00:16:27: Uh,

00:16:28: Edgar: You either, um,

00:16:30: either have to visit SparkChange, or you have to, um, look in the description.

00:16:35: I, I'll try to put the link into the description.

00:16:37: Otherwise, um, um, I would just, uh, um, say, yeah, check out the YouTube channel.

00:16:43: Uh, we will still, we're still working on the, on the podcast side, but, um,

00:16:47: yeah, if you're listening on Spotify or Apple, um, uh, podcast or something,

00:16:51: we'll figure something out for you.

00:16:52: Uh, I'll put this in the description.

00:16:54: Whatever we figure out.

00:16:56: Svetlana: Yeah.

00:16:57: Maybe if you could message us, we'll, we'll give you, we'll, we'll

00:17:00: drop you a link, uh, for sure.

00:17:02: Of course.

00:17:02: So, um, but yeah, let's, let's keep going.

00:17:05: Maybe, um, yeah, number number three.

00:17:08: Edgar: So we, we had feasibility, desirability, and now we are at...

00:17:12: Svetlana: Viability.

00:17:13: So this is, and I think hopefully, uh, folks who are, who've been

00:17:17: in the, the tech industry, this, the first three are not new.

00:17:21: I think we'll get into the unique, unique ones, but it's the DVF framework.

00:17:25: So the third one is, um, viability.

00:17:28: So can we actually afford it?

00:17:30: So part of being able to

00:17:33: stand up AI solutions is not just the initial take. And a lot of vendors

00:17:38: offer two week or sometimes months of demo or like, you know, kind of

00:17:42: free credits for you to use it, and then they get it into production.

00:17:46: And they realized how expensive it it is to run these solutions, and then they

00:17:50: continued to scale it with more users and those hidden costs, um, basically start

00:17:56: to rob them of the ROI they were hoping for.

00:17:59: So, um, part of it is like, again, viability is the cost

00:18:03: effectiveness, but also.

00:18:05: Compliance and regulation, right?

00:18:07: So how do you... is it even compliant for us to, um, take on a use case, uh, with

00:18:13: artificial intelligence? One use case:

00:18:15: 'cause I, I do work in, um, you know, work on, on in the highly regulated spaces,

00:18:21: but can you actually give, give advice, uh, in, let's say, critical settings,

00:18:27: and then delegate that task to an LLM, like, specific to your industry?

00:18:31: So if you think construction, I don't know, you're frontline worker.

00:18:34: Maybe it's military, I don't know.

00:18:36: But there are gonna be certain guidelines and regulations and compliance, uh,

00:18:40: things that you have to hit before you can actually hit production on these systems.

00:18:44: And so sometimes it gets overlooked.

00:18:46: There's lawsuits and all kinds of things, or, you know, there's certain,

00:18:50: uh, guardrails that are not put in place before things get launched

00:18:54: and things like that, that, you know, can we actually sustain this?

00:18:57: Or is this gonna be a very short,

00:18:59: um, kind of undertaking?

00:19:00: Also related to guardrails is bias and ethics.

00:19:03: Um, so I think a good use case, or maybe example, of this was Grok.

00:19:08: 'cause they, I think, raced, and correct me if I'm wrong, I know that Edgar,

00:19:12: you're good on staying maybe with the, the latest kind of the timelines.

00:19:16: But I think Grok, the, you know, the one by Elon,

00:19:19: was created within, like, a two or three week timeframe, so they didn't put all

00:19:23: of the appropriate guardrails in place.

00:19:25: And so when they launched it, you know, people were trying to

00:19:28: take advantage of the system.

00:19:29: And of course, like it gave biased outputs and then it was

00:19:33: giving them unethical responses.

00:19:35: And so before

00:19:37: they actually, like, checked all of those things...

00:19:39: Of course, they, they didn't take the system down, but yeah.

00:19:43: Um, not everyone can afford to do it.

00:19:45: Edgar: Grok is also kind of, like, grinding on that part, that they're kind of open.

00:19:50: Mm. Um, um, yeah.

00:19:52: But um, besides that, yeah, ethics and biases, like the

00:19:56: thing is trained on human data.

00:19:58: Human data is biased.

00:20:00: By nature.

00:20:01: Mm-hmm.

00:20:02: And, um, so you have to, you have to deal with it.

00:20:06: You can deal with it.

00:20:07: So you have to put in guardrails, uh, we talked about that.

00:20:09: You have to test it, uh, like, and you have to, when

00:20:13: you test it, try to break it.

00:20:16: Um, I, we, we, this is about risk assessment today, so we won't go into

00:20:20: all the details, but we will have upcoming episodes talking about that.

00:20:23: Um, but yeah, that's something if you, for example, I had, I had this

00:20:26: in the workshop with a customer.

00:20:28: Um, we had the case of, um, checking cvs and if you evaluate cvs, you are pretty

00:20:38: quickly in the space of like biases.

00:20:40: Mm-hmm.

00:20:41: And, um, and dealing with also hallucinations and

00:20:44: stuff, but especially biases.

00:20:46: Um, uh, like at least in the EU, um, you have stuff like the, uh, EU AI Act.

00:20:53: There are also other AI compliance stuff.

00:20:57: It is basically the, you, you, you are not allowed to, um, accept or decline

00:21:03: someone based on an automated evaluation.

00:21:06: There has to be a human involved.

00:21:07: So that's just the rule.

00:21:10: Um, and therefore, yeah, you have to be really careful.

00:21:13: If you have assistance for evaluation from an automated system like an AI

00:21:18: system, that's fine, and AI can do this.

00:21:22: And you can also set up your AI to be, to be capable of doing

00:21:25: this, but you have to be careful.
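
[Editor's sketch] One way to encode the "assist, don't decide" constraint Edgar describes for CV evaluation, combined with a simple bias guardrail. This is a hedged illustration only: the field names, the `score_fn` placeholder, and the redaction list are assumptions, and nothing here is legal advice on the EU AI Act.

```python
# Minimal sketch of an assistive CV-screening step: redact attributes that can
# introduce bias before the model sees the CV, and never let the system make
# the accept/decline decision on its own. Names and fields are hypothetical.
from dataclasses import dataclass

PROTECTED_FIELDS = {"name", "gender", "age", "nationality", "photo_url"}

@dataclass
class ScreeningResult:
    candidate_id: str
    ai_score: float                          # assistive signal only
    decision: str = "PENDING_HUMAN_REVIEW"   # never auto-accept / auto-reject

def redact(cv: dict) -> dict:
    """Drop attributes that could introduce bias before scoring."""
    return {k: v for k, v in cv.items() if k not in PROTECTED_FIELDS}

def screen(cv: dict, score_fn) -> ScreeningResult:
    """score_fn is your model call (placeholder); its output is advisory only."""
    score = score_fn(redact(cv))
    return ScreeningResult(candidate_id=cv["candidate_id"], ai_score=score)

# A human reviewer later sets result.decision; the system never does it alone.
```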

00:21:27: Svetlana: Yeah, because you, someone, some, um, agency might be after you to be,

00:21:32: to require you to take down your system.

00:21:35: There's multiple industry use cases where that has happened, so make sure that,

00:21:40: can we afford it, but is it compliant?

00:21:42: Is it, um, ethical?

00:21:45: Is it following, kind of, again, like, all of the

00:21:47: industry best kind of practices, especially where you are located in the world?

00:21:51: So there are gonna be some variabilities, but nonetheless, it is part of

00:21:55: keeping that solution viable for a long period of time so that you

00:21:59: can get your return on investment.

00:22:02: All right.

00:22:03: Um, moving on to, wait, wait, wait,

00:22:05: Edgar: wait, wait.

00:22:05: One, like, uh, last part is reputation, which also, like, it's also a viability thing.

00:22:11: Like if the thing you put out there is damaging your reputation, you might

00:22:16: consider, um, to at least assess the risk.

00:22:19: Uh.

00:22:20: I often get this from Google employees when they talk about like,

00:22:24: yeah, OpenAI is a startup.

00:22:25: They can just put out stuff that breaks.

00:22:27: You know, no one cares.

00:22:28: But if Google Fs up, that's, that's an issue.

00:22:33: Uh, and, and the reputation part is just

00:22:36: a lot bigger, um, in their case.

00:22:39: So, yeah.

00:22:40: Uh, just keep that in mind, but yeah, we should keep going.

00:22:44: Svetlana: No, that was, that was a, a good, uh, story, so hopefully it, it

00:22:47: helped, um, enhance your understanding and, and what viability is all about.

00:22:53: So next one is, next one is a

00:22:54: Edgar: cool one.

00:22:55: Yeah.

00:22:55: Svetlana: Strategic fit.

00:22:57: Yeah.

00:22:57: So this is where we're kind of getting into kind of the space.

00:22:59: What the heck is strategic fit?

00:23:02: So.

00:23:03: AI, and I've seen this multiple

00:23:06: times, and, even though I'm accustomed to it, it still

00:23:09: frustrates me, and I'm like, why can't folks get this, um, that you

00:23:14: can't implement, or you shouldn't implement, AI if it's not aligned to one

00:23:19: of your business needs, your customer needs, or your business objectives, right?

00:23:24: So if it, there's not strategic fit for you to take on this AI project,

00:23:29: it's a shiny object syndrome.

00:23:32: That it's basically, it's, it's shining and then you're trying to

00:23:34: chase the shiny object without kind of having the promise that it's

00:23:39: actually gonna impact your bottom line.

00:23:40: Right?

00:23:41: So you have to make sure that,

00:23:43: um, you tackle an AI initiative that you can then measure

00:23:49: and you can justify that investment.

00:23:50: How can you justify an investment for a shiny object that has no

00:23:54: promise of bringing or impacting any part of your business?

00:23:57: It's just cool because everyone else is doing it.

00:23:59: Edgar: So, and, and it also has to fit your vision in some way, like,

00:24:04: what is the company going to do, and how does it fit into

00:24:07: the overall strategy of our company, of our product?

00:24:11: Um, especially for your product teams out there.

00:24:13: Like you have to be certain that this is kind of like in character.

00:24:17: Like, if you are doing, like, hard number crunching and, and do a, like,

00:24:23: finance assistant, an image generator might not be the thing you have

00:24:27: to build, even if it's cool.

00:24:30: Svetlana: That's a good, good example I think.

00:24:32: Um, just, yeah, strategic fit where it's aligning, but I think that

00:24:35: there's also, again, frameworks and,

00:24:38: um, ways that you can identify, well, which use cases should I be tackling?

00:24:43: How can I future-proof my organization?

00:24:46: There's also, again, frameworks that in the future episodes, uh, will

00:24:50: cover for how do you think about short term use cases and how does

00:24:53: that align and impact my bottom line?

00:24:55: And give me the ROI I'm looking for in the short term.

00:24:58: What about

00:24:58: medium term, and what about these, again, longer-term projects? How can I

00:25:03: potentially even reimagine workflows?

00:25:05: How can I introduce new opportunities?

00:25:08: Revenue generating opportunities with ai?

00:25:10: And these tend to be a little bit longer, but you can start now, but

00:25:13: you have to be kind of goal oriented towards what you're building.

00:25:16: As, I think, Edgar, you said, towards the future.

00:25:19: What does that

00:25:19: future kind of look like?

00:25:20: And then you wanna make sure that you're kind of making, you're crawling

00:25:24: initially with your initial use cases.

00:25:25: Then you're walking, then you're running with these use cases because

00:25:28: you should be, it should be contributing to your end goal in, in the timeframe

00:25:33: that you're hoping to, to solve it.

00:25:35: So this, this is all about strategic fit.

00:25:37: Edgar: And also don't forget about the competition.

00:25:39: Like how, how does this AI feature that I wanna build and I wanna integrate?

00:25:44: How does this like position me also against my competition?

00:25:49: Um, because yeah, it's just, um, it's the, the core of every strategy decision.

00:25:56: It's kind of the same for AI plus all the other risks we are talking about.

00:26:01: Uh, but, but yeah.

00:26:02: Uh, it, it has to be

00:26:04: because the company can profit off of it, not because it sounds cool.

00:26:09: Yes.

00:26:10: Like don't do, don't do AI projects just for marketing purposes.

00:26:15: Svetlana: And then I think it'll come bite you in the rear.

00:26:18: I would say when, um, end-of-the-quarter or end-of-the-year kind of, um,

00:26:23: reconciliations of budgets happen, and where are we spending the money on?

00:26:25: Um, because typically projects are assessed on, uh, what, uh, value they're

00:26:31: generating back to your organization.

00:26:32: If, if it's not measured and you're spending money on, on these projects.

00:26:36: And if you can't kind of quantify the value that you are giving

00:26:40: back on these solutions, and if it's not, you know, again, some

00:26:44: of them can be softer metrics.

00:26:45: Some of them can be harder, like financial metrics, but there

00:26:47: should be some way it's contributing to

00:26:50: what you're doing and how you're trying to grow strategically as a company, right?

00:26:54: So the what-isn't-measured-doesn't-get-done type of thing,

00:26:58: um, would be really appropriate here.

00:27:00: Edgar: Yep.

00:27:02: Okay.

00:27:03: Um, next one.

00:27:06: Expertise.

00:27:07: Svetlana: Uh, this one's a good one.

00:27:09: I'm a big

00:27:10: stickler on this.

00:27:11: Um, so expertise risk is all about do we even know how to do this, right?

00:27:17: So this is becoming more and more, I would say, an apparent risk with

00:27:22: lots of organizations, again, who are newer to artificial intelligence.

00:27:25: They're taking it on.

00:27:27: They think, you know, well, there's these established frameworks.

00:27:30: There's, you know, open, um, source frameworks, there's,

00:27:33: you know, established patterns.

00:27:34: Everyone's been doing it.

00:27:35: We're just gonna figure it out.

00:27:37: Um.

00:27:39: Problem is, is like the failure rates of these AI projects.

00:27:42: And I think you've seen this; you've been approached by, you know, companies;

00:27:45: I've been approached multiple times.

00:27:47: It's like, we can't get these projects beyond pilots.

00:27:50: My first question is, who's working on these projects?

00:27:52: Do they have the, the expertise of actually tackling this or someone at

00:27:56: least on their team to show them how?

00:27:58: Right?

00:27:59: Like you need kind of the guide to show you how to actually,

00:28:03: uh, follow those best practices.

00:28:04: But if you are

00:28:05: tackling these projects, these very complex projects, which AI

00:28:08: tends to be for the first time.

00:28:11: I mean, it's probably gonna take a few repetitions and failures for you to learn

00:28:15: and really get good at at the craft.

00:28:17: But if you don't have

00:28:18: the patience to fail on the first attempt, then I think the next best

00:28:21: thing that you can do is to get a guide who will show you the expertise.

00:28:25: So the two things that,

00:28:26: Edgar: and also, and also failing might not, not lead to, to, to proper results.

00:28:30: I, I, like, I had to sometimes really dig deep into testing and trial-and-

00:28:35: erroring and, and just knowing some, like, tips and tricks, basically.

00:28:41: Um, the thing is, like, a lot of

00:28:43: information you find when you, like, do, like, um, learning by yourself

00:28:47: and, and just winging it basically.

00:28:50: Mm-hmm.

00:28:50: It's not, it's not that, that easy to get to deep information.

00:28:54: Like, there is a lot, like, you, you can find it

00:28:57: eventually, but you have to dig quite deep and, and quite a lot.

00:29:02: So it might be, might be better to get someone who already did the digging.

00:29:07: I always say I try to save you six to 12 months in pain.

00:29:10: So, uh, and, and I actually mean it like that because I had

00:29:14: that pain and it was painful.

00:29:16: Svetlana: Yeah.

00:29:17: We, we don't want you to go through it as well.

00:29:19: Um, so one of the things that, um,

00:29:23: kind of goes into this, uh, category of risk, uh, comes down to two things.

00:29:27: And usually I, again, I ask these, these things, do you have the experience

00:29:32: of actually tackling this a similar problem or working with this technology?

00:29:35: So this is the technical expertise, but the second piece is the domain knowledge.

00:29:39: So especially if you're, if you're working in specialized

00:29:42: areas, let's say it's marketing.

00:29:44: If you're working in healthcare, if you're working in finance and you're tackling

00:29:49: this, these use cases where you kind of have to understand how either data

00:29:53: flows through these systems, how does the appropriate responses look like?

00:29:57: To these audiences.

00:29:58: You have to have both.

00:29:59: You have to not just have the technical expertise.

00:30:02: Yeah.

00:30:02: 'cause between domains, they don't always translate as well.

00:30:05: So if you really want to have, I would say like, to know the secret sauce

00:30:10: behind addressing this, the expertise risk, and what is my guide typically,

00:30:16: uh, should be consisting of: it's technical expertise and

00:30:20: domain knowledge in your specific space.

00:30:23: Yeah.

00:30:23: Uh, would be the two key ingredients.

00:30:25: Edgar: I would want to add something that, because I, um, I presented this

00:30:29: to, to my participants in the workshop and they were like, yeah, what,

00:30:32: what, how, how, what does this mean?

00:30:34: Like.

00:30:35: We can't do it.

00:30:37: And that's not what this is about.

00:30:38: All this risk assessment stuff.

00:30:40: Um, and we get to some, some how to on how to assess it in the end.

00:30:45: But, but I just want to add, when we talk about risks, just like we want

00:30:48: to, uh, create awareness and if you, for example, um, find out after risk

00:30:54: assessment, we find there's a high risk that we run into problems with

00:30:58: our technical or domain expertise.

00:31:01: The solution is not like we don't do it.

00:31:03: The solution is like, we have to see how we can mitigate the risk so that we

00:31:07: don't run into it down the line.

00:31:10: So it's not about like do or not do, it's about like mm-hmm.

00:31:14: Assessing and then drawing the right conclusions.

00:31:18: Uh, one conclusion can be, we don't do it, don't get me wrong, but it does

00:31:23: not have to be, like, it's not 1 or 0; like, there is a little in between.

00:31:27: Mm-hmm.

00:31:27: And for example, for the expertise risk, you can either hire new people, you can

00:31:31: hire consultants, you can hire trainers to train your staff, um, or do all of it.

00:31:37: So, so, um, yeah, because, because you, you might also, um, find that you just

00:31:43: lack some resources, or, like, you have all the resources, you

00:31:46: also have domain expertise, and you have really good technical guys, but

00:31:50: you just don't have the experience... that's where you get a consultant, right?

00:31:53: So that's like, think about all this risk assessment, not about like do or not do.

00:31:57: Think about it, like to assess to, uh, to find out where we have weak, weak points,

00:32:03: which we have to, uh, tackle first.

00:32:07: And then we can start, or, or to, to ensure that we have a

00:32:11: good project in the first place.

00:32:12: Yeah,

00:32:13: Svetlana: and we show you how you could prioritize that risk as well.

00:32:17: So like, where do we think, um, we could, um, it would be the most impactful to

00:32:22: address first, second, third, where should we take a, you know, a bit of

00:32:26: chance, you know, and things like that.

00:32:28: So knowing, I would say is, um, the key, I think, emphasis here.

00:32:33: So I'm glad that you, um, emphasized that.

00:32:36: So.

00:32:38: Um, one thing that, uh, maybe a, a a quick plug here.

00:32:42: 'cause I think we're, um, we have two more risks, uh, to cover.

00:32:45: But one thing that we're going to start introducing 'cause we want to disseminate

00:32:50: our knowledge 'cause we've been learning again in the time that we've been away.

00:32:54: A lot.

00:32:54: Um, and we wanna do more workshops, uh, live interacting with you,

00:32:59: kind of having that face-to-face, um, you know, more regularly.

00:33:03: So we are gonna be hosting our kind of first, uh, uh, workshop next week,

00:33:09: which is the 22nd I believe we decided.

00:33:11: Um.

00:33:12: May 22nd, uh, between 12 Eastern and 1:00 PM Eastern to talk to

00:33:19: you about what it takes to stand up a, a successful AI project.

00:33:24: Um, how do, how do you actually achieve positive ROI for your investments?

00:33:30: How do you maximize that, that ROI?

00:33:34: How do you mitigate these risks?

00:33:36: Um.

00:33:36: Kind of strategically.

00:33:37: And then how do you control costs?

00:33:39: And that's another runaway factor that contributes to failure, uh, rates of ai.

00:33:43: So again, we've organized it into six different pillars that you have

00:33:49: to kind of get right in order to,

00:33:51: I would say, accelerate the path of your AI adoption.

00:33:55: And so we'd love to share and get your input on that.

00:33:58: So we'll be sharing that link in the chat here as well.

00:34:02: Edgar: Yeah.

00:34:02: In the description down below.

00:34:04: Um, um, yeah, we'd love you to join.

00:34:07: Uh, we'll be most likely on LinkedIn.

00:34:09: So, uh, yeah.

00:34:11: Um, cool.

00:34:13: See you.

00:34:13: See you then.

00:34:15: Okay.

00:34:15: Risk number six.

00:34:17: Svetlana: Awesome.

00:34:18: Um, so this one's an interesting one.

00:34:21: It's, um, we're calling it AI output risk.

00:34:25: And so this kind of goes back to this whole kind of reputation brand and

00:34:30: kind of, uh, the type of, I would say, effect you leave on the market,

00:34:34: especially if this is a tool again, and just if, if this is a tool that is

00:34:39: internally facing, where the risks are lower because it's your internal staff, and

00:34:43: they'll kind of flag these, these, uh,

00:34:47: maybe biased outputs, or whatever's wrong with the system, to you.

00:34:50: And that's internally you're able to kind of control the risks.

00:34:52: There's still risks, again, even with internally facing systems.

00:34:55: But if you're doing more of a B2B or B2C launch where you're kind

00:34:59: of publicly making it available, and I think a good, um, maybe example

00:35:04: there is, um, you know, Chevy.

00:35:06: Who was, uh, selling... who was enabling an AI agent to sell vehicles

00:35:11: basically, uh, off of their website.

00:35:13: And, you know, a software engineer basically took advantage, and he basically

00:35:19: tricked the AI into selling, um, you know, a pickup truck for, for a dollar.

00:35:25: Yeah.

00:35:25: And that ended up...

00:35:28: yeah, it ended up being, um, all over the news.

00:35:31: People are just like, well, how reckless can Chevy be?

00:35:35: You know?

00:35:35: And, um, you know, launching this, this, uh, kind of chatbot without proper

00:35:40: guardrails and all kinds of things.

00:35:42: Edgar: I think it was a, I think it was a, um, a single seller.

00:35:44: Or like, kind of like, um, car salesmen basically, who did this with his store.

00:35:50: But yeah, uh, you have to be, you have to be careful.

00:35:56: Um, yeah, yeah.

00:35:57: Hallucinations.

00:35:58: Um, uh, been there. Uh, everyone who's touched AI knows that it, um,

00:36:04: sometimes comes up with stuff it does not have to come up with in the first

00:36:08: place, because it does not know it.

00:36:10: Um.

00:36:14: In the end, you can do a lot of techniques.

00:36:16: But you have to test and try and test and try and, and and,

00:36:20: and really evaluate the output.

00:36:22: On, on, yeah.

00:36:24: And, and that, that's why it's also important to, to test as early as

00:36:27: you can with your users, um, to, to actually, um, try to break the system.

00:36:32: Like try to generate the wrong output.

00:36:35: Try to break

00:36:36: systems. Like, you don't have to go into all the depth of jailbreaking;

00:36:40: like, in most cases you're B2B, you have kind of a somewhat controlled environment.

00:36:46: Um, if not, you have to be even more careful.

00:36:49: Mm-hmm.

00:36:49: Um, and then you, um, you have to also think about like how to monitor stuff, how

00:36:54: to get the human-in-the-loop mechanisms in, which means that you, um, don't rely

00:37:00: a hundred percent on the AI output, but, um, you, you define basically

00:37:04: rules or circumstances where you say, okay, now I want the human in the

00:37:08: loop to, um, make an assessment of the situation to, uh, figure out, um,

00:37:16: some, uh, I, I can't get the word.

00:37:19: Um, yeah, some, some, some workflows for, for allowing stuff to happen.

00:37:25: Um, so yeah, that, that's basically where you, where you

00:37:27: have to, um, we have to try to

00:37:30: get the system robust.

00:37:33: Svetlana: Mm-hmm.

00:37:35: Edgar: And for that to happen to, for it to become robust,

00:37:38: you have to try to break it.

00:37:40: Yes.

00:37:41: Yeah.
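
[Editor's sketch] A minimal illustration of the human-in-the-loop rule Edgar describes above: don't rely on the AI output one hundred percent, but define circumstances that route a response to a person. The threshold, the keyword list, and the queue below are placeholders invented for the sketch, not part of any framework mentioned in the episode.

```python
# Sketch of a human-in-the-loop gate: escalate when the model is unsure or the
# topic is sensitive. Thresholds, keywords, and the queue are illustrative only.

ESCALATION_KEYWORDS = {"refund", "legal", "cancel contract", "complaint"}
CONFIDENCE_THRESHOLD = 0.75

def needs_human(confidence: float, user_message: str) -> bool:
    """Route to a human when confidence is low or the topic is sensitive."""
    if confidence < CONFIDENCE_THRESHOLD:
        return True
    if any(kw in user_message.lower() for kw in ESCALATION_KEYWORDS):
        return True
    return False

def handle(ai_answer: str, confidence: float, user_message: str, human_queue: list) -> str:
    if needs_human(confidence, user_message):
        human_queue.append({"question": user_message, "draft": ai_answer})
        return "Thanks - a colleague will get back to you shortly."
    return ai_answer
```

The same pattern works for approvals in agentic workflows: the model drafts, a rule decides whether a person signs off before anything is executed.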

00:37:41: Svetlana: And I think that there's again, different workflows as you,

00:37:44: as you mentioned, depending on the use cases, it's hard to prescribe and

00:37:46: tell you all of the potential kind of workflows that could work for you.

00:37:50: But there are ways to introduce kind of checkpoints,

00:37:54: um, through your workflow, to be able to, um, prevent some of these,

00:37:58: like, output risks, uh, for AI.

00:38:00: But it comes down to like, well, why, why does it matter, especially for an

00:38:04: internal facing tool, brand reputation, risks, trust from your employees?

00:38:09: Like, there's not...

00:38:10: Edgar: Basically, all of the above.

00:38:12: Everything

00:38:13: Svetlana: that could go wrong, uh, could, could hurt your company, whether it's,

00:38:17: again, an internal tool or external.

00:38:19: So those are the things that you should pay proper attention to.

00:38:23: So, um, should we move on to our last, but not least?

00:38:28: Edgar: Yep.

00:38:29: Svetlana: This is, um, I would say a big one.

00:38:32: And so user adoption risk.

00:38:35: So the first one I think you'll be like, well, how's it

00:38:38: different from desirability?

00:38:39: Desirability is, uh, do they actually want it?

00:38:42: Right?

00:38:42: So have they expressed it, or is there a particular need

00:38:44: that I can actually anchor

00:38:47: my use case to?

00:38:48: I give them a solution and they're then, like, jumping up and down and

00:38:51: saying, like, I want this tool.

00:38:53: If you build it, I'm gonna use it. With user adoption,

00:38:56: and this is becoming more and more, uh, of a problem with artificial intelligence

00:39:00: because people fear AI in the first place.

00:39:03: So this has nothing to do with desirability, but this is,

00:39:06: um, kind of, rolling out...

00:39:09: yeah, rolling out, like, these, these tools in the organization, and people

00:39:12: are just like, I don't wanna use it.

00:39:14: Like this is gonna replace me.

00:39:15: Oh, you wanna automate that process?

00:39:17: Uh, you're gonna be taking over my job.

00:39:19: So, um, one of the things, again, you could, you could be, uh, working in

00:39:23: the larger organization, you have the strategic fit, you've checked off all

00:39:26: the boxes in the other risks, but you forgot about this change management,

00:39:30: um,

00:39:31: aspect of the plan, and you roll this out and you're like, crickets.

00:39:36: Uh, why isn't anyone using it?

00:39:38: And why does the adoption kind of fail?

00:39:41: And a lot of times,

00:39:42: Edgar: Let me say, as someone who, who's done, uh, 15 years of, um,

00:39:46: roll, rolling out, uh, software automations in companies, mm-hmm,

00:39:50: People start to lobby against you.

00:39:55: They start, they start to, uh, to try to, um,

00:39:59: not, not only not use it, but

00:40:01: make others not use it too, as well.

00:40:04: So, um, yeah, this is, um, they create scripts.

00:40:08: Yeah, yeah, yeah, yeah.

00:40:09: People can be quite like, um, averse to change in the first place.

00:40:13: Mm-hmm.

00:40:14: And if that change even, uh, like, is combined with fear, they get

00:40:19: highly creative

00:40:23: to stop it from happening.

00:40:24: Yes.

00:40:24: Um, yeah.

00:40:25: So, so, uh, and, and the best thing, uh, you can do, and it's, I just mentioned it,

00:40:29: like get your users early in the process.

00:40:33: Uh, for example, if, if I have, um, a company and we need

00:40:37: to find out where to start with AI.

00:40:39: Mm-hmm.

00:40:40: The first thing I ask is like, what's the most annoying thing in your

00:40:43: day you would love to get rid of?

00:40:45: As quickly as possible, and then we can see if we can fix it with ai,

00:40:49: because that's something that people don't wanna do in the first place.

00:40:51: And if you automate them mm-hmm.

00:40:52: That that's a good entry point.

00:40:54: That's also the thing where I always say like, don't start

00:40:56: with like the big solution.

00:40:58: Make it, like... start small, because most of the time what you have in your

00:41:02: head is a level-three solution, like the high-end, like, agentic, everything-

00:41:06: automated one. Start at level one.

00:41:08: Like just give the people something to work on and to work with.

00:41:12: Um, and, and so they feel, uh, feel this to be more

00:41:16: of a tool than a replacement.

00:41:19: Um, uh, educate people properly.

00:41:22: Um, and, and, and the big one, and I, I can't overstate this.

00:41:26: Trust you have to get the people to a point where they build

00:41:30: enough trust, um, in the system.

00:41:33: And that also is the expectation management part We had in

00:41:37: the, um, in the second risk.

00:41:39: If you manage the expectation, say, Hey, this system won't work all the

00:41:43: time, but it will work enough times so that it really, really will help you.

00:41:48: Um, and, and if people know that, they're more, more likely to, um, yeah,

00:41:53: live with the errors the system has.

00:41:56: If you say, Hey, this is a perfect system, just use it.

00:41:59: People encounter error first time, second time.

00:42:02: Mm-hmm.

00:42:02: And they start to not trust the system anymore.

00:42:05: Um, and even better, you think about the errors beforehand or maybe you know it

00:42:10: from testing, and you say, Hey, this error can happen, but then you can do this.

00:42:14: And then it's good.

00:42:15: For example, um, you know that, uh, you have cases where you

00:42:19: just need to retry and the second time it most likely will work.

00:42:23: Um, so, uh, and if you know this beforehand and you give it, like give,

00:42:26: give the people this tool with the, like what's, what to do when it goes

00:42:31: wrong, um, cheat sheet, basically.

00:42:35: Mm-hmm.

00:42:36: Um, then, uh, the adoption will be a lot better.

00:42:40: Svetlana: Yeah.

00:42:41: And I think that, um, one of the other things that I'll kinda mention the other

00:42:44: side of it is like, yeah, starting small.

00:42:47: Getting them a little bit of an appetizer for what AI can really do.

00:42:52: But I would say the other extreme is people fearing their job loss, and that AI is

00:42:55: out to get them and all of their jobs.

00:42:57: A lot of it is actually ungrounded.

00:42:59: Um, I would say they're fearing AGI and ASI, that, you know, the

00:43:04: artificial general intelligence and things like that, that don't even exist.

00:43:08: So sometimes just having that conversation and educating your

00:43:11: teams on how AI works, what it's truly capable of, that could

00:43:15: be completely game changing.

00:43:17: It'll open up people's minds.

00:43:18: It's like, oh, um, I shouldn't be worried about AGI or ASI now,

00:43:22: probably not for another 10, 15 years.

00:43:24: You'll be fine for the next couple.

00:43:26: Like, continue using our, you know, go use our tools.

00:43:28: You'll be, you'll be great.

00:43:30: Edgar: Honestly, I would even connect it like this is a tool.

00:43:34: We try to help you, uh, to have less annoying work.

00:43:38: Um, let's, let's do that.

00:43:40: Um, and also give the people, uh, like proper feedback channels so that they

00:43:44: are always like, if something happens, something doesn't work as you expected.

00:43:48: The people have a, a clear feedback channel and, and, and, um.

00:43:52: Best case, even some live support to help you out, uh, to help them out.

00:43:56: Sorry.

00:43:57: Um, so yeah, but yeah.

00:43:59: Yeah.

00:44:00: But I think, again, think, think of it like the big trust bubble.

00:44:02: How do you get trust into the system and, and then the usability will be better.

00:44:07: Svetlana: Yes.

00:44:08: Address the concerns through education, I think would be good.

00:44:12: So we've covered, again, the seven risks.

00:44:14: Um, we talked about the feasibility.

00:44:17: Desirability.

00:44:18: We talked about viability risks.

00:44:20: Uh, we talked about strategic fit, expertise, uh, risks, AI output risks.

00:44:26: And the last one we just covered was user adoption

00:44:29: risks.

00:44:30: So that basically covers kind of the core seven.

00:44:34: And you might ask, like, oh, what about, what about security and compliance

00:44:37: and like all kinds of other things.

00:44:39: Of course, every industry is different, and so there's gonna be some, a

00:44:43: lot of overlap with others.

00:44:44: And there's gonna be some specific risks to your specific industry.

00:44:48: So, um, the AI risk matrix is quite flexible in accommodating other groups

00:44:53: as well, but, and there's also a lot more

00:44:55: Edgar: details.

00:44:55: Uh, in the PDF, um, which you can download, uh, and there's

00:45:00: also some security parts in there.

00:45:02: So, yeah, we can, we, we cannot, like, dive into stuff

00:45:07: too deeply in the podcasts.

00:45:08: Um, therefore, yeah, read the document, uh, or join us on

00:45:11: the, on the workshop next week.

00:45:13: Svetlana: Yeah.

00:45:14: So, and then, um, if you do get the guide, there is a nice impact grid where you

00:45:20: can actually score all of these risks.

00:45:23: And there's some, like, nice instructions, like, very simple; we try to keep it, like,

00:45:26: very lightweight on the content, so that you can simply just do it, and you can

00:45:31: actually visualize it like a heat map.

00:45:33: So:

00:45:34: what things are very important, and then the likelihood of them happening.

00:45:38: So what should we be addressing?

00:45:40: So if it's talent, if it's AI output risk, we just don't know

00:45:43: how to mitigate the output risk.

00:45:44: Maybe that's what we should pay attention to first.

00:45:47: Yeah.

00:45:47: What we're doing is actually desirable.

00:45:49: Probably not a risk to us right now, but I think it's a very helpful matrix to

00:45:54: actually visualize where the risk is.

00:45:56: Um.

00:45:57: For you, and you can again, add others to, to this kind of framework as well, so

00:46:02: you can visualize lots of different risks.

00:46:03: So I think this will be, um, quite helpful for those who are implementing ai.

00:46:09: Edgar: Yeah.

00:46:10: And, and, and like I said, um, in the expertise part,

00:46:12: it's, it's about assessing.

00:46:14: So you, you take all these seven

00:46:17: areas, you get your team together, and, um, based on whatever it is you want to

00:46:22: do, you um, try to assess all these parts.

00:46:25: And you, you might have parts where you say, okay, we are, we are, we

00:46:28: are totally fine in infrastructure.

00:46:29: So, um, and it's totally feasible.

00:46:31: We can do this.

00:46:33: Um, uh, on the feasibility side, there's a low risk and you

00:46:37: don't have to focus a lot on it.

00:46:38: But you have, for example, um, stuff that, um, regulation for example, where you say,

00:46:44: okay, we have a regulatory risk that, uh, we, we have to account for, and there's

00:46:50: a high likelihood that this risk is becoming a problem down the line.

00:46:56: Mm-hmm.

00:46:57: Then you just put more focus in, and what you get in the

00:46:59: end is the stuff that you have

00:47:02: little to no, um, worries about disturbing your project.

00:47:08: And you have stuff that's like really high on the list and

00:47:11: that's your high priority stuff.

00:47:12: So you go ahead, you, you evaluate this:

00:47:16: the likelihood, and what impact would it have?

00:47:19: You multiply it, put it in the matrix, and then your whole team is

00:47:22: aware of like, hey, this is, these are parts which we either have to,

00:47:27: um, act now and make decisions.

00:47:30: For example, hire consultants, hire new people, whatever.

00:47:34: Um.

00:47:36: And you have also an awareness in your whole team that this stuff

00:47:40: might come up along the way.

00:47:41: So on everything I do, I keep track in my head or like, just like by

00:47:46: looking at the matrix, if one of these things is triggering the work

00:47:51: I do right now, at this moment.

00:47:52: So that's like the, the two ways to look at it.

00:47:56: Um, and yeah, so it's not, it's not a blocking thing, it's assessing

00:48:00: and acting accordingly.
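
[Editor's sketch] The scoring Edgar walks through here (rate each area for likelihood and impact, multiply, and use the grid to prioritize) can be captured in a few lines. The scores, scale, and priority bands below are made-up placeholders for illustration; the actual Clear7 grid and instructions are in the downloadable guide.

```python
# Sketch of a likelihood-times-impact scoring pass over the seven risk areas.
# Ratings here are invented placeholders; use your team's own assessments.

risks = {
    "Feasibility":   {"likelihood": 2, "impact": 4},
    "Desirability":  {"likelihood": 1, "impact": 3},
    "Viability":     {"likelihood": 3, "impact": 4},
    "Strategic fit": {"likelihood": 2, "impact": 5},
    "Expertise":     {"likelihood": 4, "impact": 4},
    "AI output":     {"likelihood": 4, "impact": 5},
    "User adoption": {"likelihood": 3, "impact": 5},
}

scored = sorted(
    ((name, r["likelihood"] * r["impact"]) for name, r in risks.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in scored:
    band = "HIGH" if score >= 15 else "MEDIUM" if score >= 8 else "LOW"
    print(f"{name:<14} score={score:>2}  priority={band}")
```

The sorted list is the same information as the heat map Svetlana mentions: the top rows are the risks to mitigate or decide on first, the bottom rows are the ones to simply keep an eye on.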

00:48:04: Svetlana: Oh yeah.

00:48:05: I think that's, oh, well, nicely summarized.

00:48:07: And again, we were hoping to bring you some more actionable tools and

00:48:10: frameworks like this, um, in the upcoming episode so that you can take

00:48:14: these as frameworks back to your teams.

00:48:17: But if you are ready to potentially begin exploring the AI

00:48:22: space, and you're hungry to get,

00:48:24: you know, faster results with artificial intelligence.

00:48:27: Um, we do have an AI ROI accelerator program that we'd

00:48:31: love to talk to you about.

00:48:32: Um, that again kind of focuses on the key areas where people are

00:48:38: hungry to, to really get right.

00:48:41: Um.

00:48:42: With AI projects, it's achieving ROI, return on their investment,

00:48:47: finally, and making sure that they are investing in the right projects while

00:48:51: mitigating these risks, and, again, others that are relevant to your

00:48:54: industry, but also controlling costs.

00:48:57: So how do you take these projects from taking years to actually

00:49:02: much more consolidated timelines?

00:49:04: And we'll show you kind of our approach to how we do that.

00:49:07: So if you're ready to

00:49:08: talk to us,

00:49:09: um, schedule a 15-minute free discovery call with us.

00:49:14: Um.

00:49:15: Myself or Edgar will walk you through the program and see if we're the right fit.

00:49:20: So hope you, uh, hope you give us a call.

00:49:23: Edgar: And if you are in Germany, we can also do this in German, so

00:49:27: it doesn't have to be English, just

00:49:30: Svetlana: So I don't speak German.

00:49:30: So that'll be, that'll be on Edgar.

00:49:32: So I only speak English.

00:49:34: Um.

00:49:34: Not German.

00:49:35: So,

00:49:36: Edgar: uh, you could, you could do, you could do Russian.

00:49:39: Svetlana: Oh, yeah, yeah.

00:49:40: We can, we could also, we have a third language that we could

00:49:42: deliver these workshops in.

00:49:44: Edgar: Um, you... I mean, no, like, I could listen and understand

00:49:47: half of what you say, but like,

00:49:50: Svetlana: Listen and nod.

00:49:51: So, Edgar, listen and nod.

00:49:52: Um, duh.

00:49:56: No, that's awesome.

00:49:56: Edgar: Okay.

00:49:57: So yeah.

00:49:57: Um, yeah.

00:49:58: Um, call us up if you, um, if you, um,

00:50:01: want, wanna have a deeper assessment.

00:50:03: Um, and, and yeah, we, we, yeah, we are happy to help.

00:50:08: Um, but besides that, yeah, um, don't forget the PDF; get this into your teams.

00:50:13: Uh, it will help you, um, to get, get your AI projects, um, yeah.

00:50:18: Better planned and also better executed.

00:50:20: Um, and yeah, that was our episode on risk assessments.

00:50:24: Svetlana, thank you very much.

00:50:25: Uh, you did most of the work on this.

00:50:26: Um, so, uh.

00:50:28: Uh, yeah, it's your area.

00:50:30: I'm more the technical guy.

00:50:33: Um, well,

00:50:34: Svetlana: I think we, um... definitely lots of examples, I think,

00:50:36: are applicable across both the technical and the business sides.

00:50:40: Yeah.

00:50:40: But again, we'll have more technical, we'll try to bring in more of the

00:50:44: strategic views into the planning aspects of your projects so that

00:50:49: we can give you a much more,

00:50:51: again, comprehensive angle to, to AI implementation.

00:50:54: So

00:50:55: Edgar: this season will definitely, definitely be a mix of, of

00:50:58: different, different things.

00:50:59: Maybe not as many episodes, but uh, we try to, to add a, a lot more value.

00:51:05: Svetlana: Awesome.

00:51:06: Edgar: Okay.

00:51:07: So, yeah.

00:51:08: Thank you very much, Svetlana.

00:51:09: Thank you very much,

00:51:10: everyone out there, for listening.

00:51:11: Um, like we told you, you find all the links we mentioned down in the description

00:51:15: when you're listening to the podcast.

00:51:16: It's most likely also in the descriptions there.

00:51:19: Um, that being said, um, it was from my side.

00:51:23: Thank you very much.

00:51:24: And um, yeah, Svetlana, you have the last words.

00:51:27: Svetlana: Yeah, no, I guess I am. And thank you so much for

00:51:30: your time, and looking forward to

00:51:32: hearing your feedback and, um, on new topics and things like that,

00:51:36: um, that you're interested in.

00:51:38: 'cause we are here to maximize the value for you.

00:51:41: So thank you so much, and we'll talk, hopefully, next time.

00:51:46: Edgar: Yeah.

00:51:47: See you next one.

00:51:47: Bye.

00:51:48: Svetlana: Thank you.
