How to Build AI Prototypes Quickly and Scale AI Solutions - EP 015
Show notes
The process of building AI prototypes is different from building full-fledged, scalable AI products. Prototypes are focused on solving a specific problem quickly, while production-level systems require considerations such as usability, security, and scalability.
In this episode, we do a deep dive and provide best practices for how to build AI prototype solutions and what you should consider.
Show transcript
00:00:02: Hello and welcome to AI Boardroom.
00:00:05: Today, we'll be talking about scaling AI solutions.
00:00:09: So, Edgar, do you want to walk us through a quick rundown of the topics we'll be covering?
00:00:15: Yes, yes, yes, of course.
00:00:16: Hello, everyone.
00:00:17: So, yeah, today we want to talk a bit about one of our most beloved topics, which is building actual products and, yeah, getting them to scale.
00:00:26: So...
00:00:28: We won't cover all that we know in one episode, of course, but today we want to give a general idea of what it actually means to go from an AI prototype to an AI product.
00:00:39: What is it, actually, that you need to scale?
00:00:42: And in the end, we'll also look at some considerations when you think about building stuff versus just buying stuff, which is an important decision to make if you want to scale AI
00:00:54: in your company.
00:00:55: So
00:00:56: That's what we're talking about today.
00:00:59: So can you maybe start with, let's dive into, what is the difference between a prototype and a full-fledged product? What makes it different, and why can't you just
00:01:10: consider a prototype a production-level system?
00:01:14: So that's something I encountered a lot before I even built my first full-fledged AI product.
00:01:22: And afterwards I noticed it myself,
00:01:25: the hard way, basically.
00:01:28: Because it was never easier to get a convincing prototype than it is today with AI tools.
00:01:37: They can even build stuff for you, which is even more interesting.
00:01:42: I think for my software, I built the first prototype in a night or something.
00:01:51: And it...
00:01:54: hadn't changed much. The AI part of it hadn't changed much for months.
00:02:01: Of course there were refinements, some optimization, privacy stuff. But everything around it, to get it to an actual product that people can use without guidance and
00:02:11: without someone sitting right beside them.
00:02:15: That's something different.
00:02:16: And that's what I mean.
00:02:17: You can easily build a prototype that works in a really narrow window, in hours.
00:02:25: But from there to something that actually works with different data, which in our case meant different documents and different types of documents, is a long way.
00:02:36: Before I started this, I wouldn't have thought that so many different types of tutorial documents even exist.
00:02:44: They all do the same thing, all give you step-by-step guidance on what to do in a specific case, but they all look different.
00:02:51: The first four companies we worked with,
00:02:54: they all had completely different layouts and stuff for their documents.
00:02:58: And that's what I mean.
00:03:00: It works in a narrow window, the prototype, like a charm.
00:03:05: When you get even a bit out of your comfort zone, stuff breaks and it breaks hard.
00:03:12: Yeah.
00:03:13: And I think maybe I'll add some color to it with the prototype.
00:03:16: You're not building; you're just validating some things around
00:03:20: some specific objectives, right?
00:03:22: Like you're trying to solve a specific problem.
00:03:24: You're trying to go from A, a problem, to a solution, and you're just trying to find the quickest path to get there.
00:03:32: And so you're not considering all the features, the user interface, or the production-level features that you need to enable it.
00:03:41: As you mentioned, and I like that:
00:03:43: it's not gonna be usable in that short a timeframe, because
00:03:48: it's not going to consider any of the UX best practices, like the design, the flow, and the integration into workflows, right?
00:03:56: You're literally just trying to get from a problem that someone faces to some solution that you figured out, in the shortest amount of time possible, to demonstrate that that
00:04:06: problem can actually be solved with AI.
00:04:09: And then, once you consider it solved:
00:04:11: well, how do I get it to the next level of production readiness?
00:04:15: And that's when all of the other
00:04:18: stakeholders, or basically the functions, come in to really focus on the delivery, the security, the experience, all of the best practices for what a productized solution might
00:04:32: entail.
00:04:33: Yeah, especially security stuff.
00:04:36: That's crucial.
00:04:38: Also, oftentimes for prototypes, you take a subset of data or even one document, one example of data.
00:04:46: And even the step of getting more data in already throws up the first questions.
00:04:52: It's not even about the AI and whether it's giving the right solution, which is a topic in and of itself, but actually just how we can get the data into a secure enclave, into the
00:05:04: actual solution.
00:05:05: And even when prototyping, you already have to think about what type of data is actually representative enough
00:05:15: to do a proof of concept.
00:05:18: So, but yeah, in the end you have to throw more in and discover more stuff that doesn't work, adjust.
00:05:26: So yeah, I think, for me, I've built products ever since I started programming, basically 14 years, even more, like 16 years ago.
00:05:39: And it was never as hard
00:05:44: to get from prototype to product, I think, as it is with AI.
00:05:47: Because usually, back in the day, we had input-output: A in, B out.
00:05:53: AI is not like that.
00:05:55: That's a good thing, to be clear.
00:05:58: But it also comes with its own challenges.
00:06:01: Yeah.
00:06:02: And sometimes, I think, another color to add to it.
00:06:04: So I think generative AI has changed the game.
00:06:07: It's actually quicker to get close to production-level output.
00:06:14: As you mentioned, you're now going to be spending time on all of these, you know, embellishments around the solution, but the model itself, or the output, is not going
00:06:23: to significantly change, because you have more or less what you need.
00:06:27: It may not come with all of the features based on your custom data, such as indexing or linking back to the original source, features like grounding and
00:06:38: things like that.
00:06:38: So these are additional things, but it's not the information or the output,
00:06:42: not the model, that is going to change.
00:06:44: It's just all the things that you do around it in order to truly productionize it.
00:06:49: But I would say, if you really wanted to get a full-fledged solution and you're willing to work within those bounds, you can actually get to a solution that works with your
00:07:01: custom data in a very short amount of time.
00:07:05: But it would not be what we would consider a production level solution.
00:07:10: Yeah, I also love to call it enterprise-grade.
00:07:15: I think we can jump directly into what we already touched on: what do you actually need to scale?
00:07:21: For me, I found it healthy to ask myself the question: would this be enterprise-ready?
00:07:30: Because enterprise-ready means it has to work at scale.
00:07:34: It has to work without a lot of intervention.
00:07:39: That's debatable.
00:07:41: But that's at least what I was aiming for.
00:07:46: And the usability doesn't have to suck.
00:07:51: And that's the thing.
00:07:52: Like I said, we are not programming like we used to anymore, whether low-code, no-code, or real code.
00:08:02: You don't do A in, B out anymore.
00:08:04: You put in A and you get anything from C to F, maybe
00:08:09: D, I don't know.
00:08:13: And your only job, basically, is to narrow the system down, to set it up in a way that instead of A to F, you get A to C.
00:08:27: And then the rest of the system is about getting it to work with this variability.
00:08:36: So yeah, how are you dealing with hallucinations?
00:08:40: How can you actually mitigate wrong answers, even filter them out beforehand? And if you can do this, what does it mean?
00:08:53: Does it add more complexity to my solution?
00:08:57: It adds time to the response.
00:09:00: Is it still usable?
00:09:03: That's all the stuff you need to take into consideration if you really want to build something at scale.
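The trade-off described here, filtering out likely-wrong answers at the cost of extra latency, can be sketched in a few lines. This is not from the episode; the word-overlap heuristic and the 0.6 threshold are illustrative assumptions (real systems typically use an NLI model or an LLM judge for this), but the shape of the decision is the same.

```python
def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of answer words that also appear in the source documents.

    A crude proxy for groundedness: a low score suggests the answer
    contains material not supported by the retrieved sources.
    """
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(sources).lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & source_words) / len(answer_words)


def filter_answer(answer: str, sources: list[str], threshold: float = 0.6) -> str:
    # Below the threshold, refuse rather than risk shipping a hallucination.
    # Every check like this adds time to the response, which is exactly
    # the usability question raised above.
    if grounding_score(answer, sources) < threshold:
        return "I'm not confident enough to answer that from the documents."
    return answer
```

Each extra guardrail is one more step between the model and the user, so the complexity-versus-latency question has to be asked for every one of them.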
00:09:09: And of course, it's only the application part.
00:09:12: And then comes a really big part, which I underestimated a ton: the infrastructure.
00:09:22: First thing, where to deploy the model. No one wants to give their company's enterprise data away, right?
00:09:30: Like I said, is it enterprise-ready?
00:09:32: Enterprises don't give their data away without asking, and they have compliance departments and they have lawyers, a lot of stakeholders that are really interested in
00:09:45: keeping stuff secret.
00:09:48: And yeah, that's where you basically have to think about one side of the solution: getting a proper output which is actually useful.
00:09:58: And I always aim for it being useful 80% of the time, minimum.
00:10:03: I think that's kind of the threshold where it really becomes useful.
00:10:08: Everything below that falls into the category of, like, three out of four.
00:10:13: Will people then still use it?
00:10:16: That's how you have to think about it, right?
00:10:18: It doesn't have to be perfect, but it has to be good enough for people to actually place some reliance on it.
00:10:26: Yeah.
00:10:27: And I would just add: how do you even come up with
00:10:31: that number?
00:10:31: As an example of what we've used in some cases: how do you ground it, or compare it to something that you would consider useful, right?
00:10:41: One way is to use an industry benchmark.
00:10:44: If someone has tried to tackle a similar problem, what results are they getting?
00:10:48: If they're at 80% and your solution is at 60%,
00:10:53: you probably need to do another iteration of enhancements and work to make sure that you can reach a certain standard.
00:10:59: That's if there's at least an industry benchmark.
00:11:01: You can also look at, if you're revamping existing tools, using the existing tool's output accuracy as a baseline.
00:11:13: Even if it's rule-based, for example, and you're evaluating on comprehensiveness or accuracy.
00:11:20: Even though you know the data is kind of outdated
00:11:25: and it's not going to produce great results, that's fine.
00:11:28: You just do a comparable run with the two systems and you say, the old system performs at this level, let's say 40%; we can't go below that before we can consider it
00:11:41: production level.
00:11:41: So anything that's marginally above that percentage is value delivered.
00:11:46: So if you are at 60%, you've delivered 20% more value with your AI solution
00:11:52: than what you have with your legacy software.
00:11:54: So there are different ways to go about defining good enough before actually pushing into production,
00:12:05: and then also quantifying the value delivered.
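The 40%-versus-60% comparison above can be written down as a tiny evaluation helper. A sketch only: the function name and dictionary shape are assumptions, and it presumes both systems were scored on the same test set.

```python
def evaluate_release(new_accuracy: float, baseline_accuracy: float) -> dict:
    """Compare a new AI solution against a legacy baseline.

    baseline_accuracy might come from the old rule-based tool (say 0.40)
    or from an industry benchmark; anything marginally above it counts
    as value delivered, anything below means another iteration.
    """
    gain = new_accuracy - baseline_accuracy
    return {
        "gain_pct_points": round(gain * 100, 1),  # value delivered, in points
        "production_ready": gain >= 0.0,          # never ship below the baseline
    }


# The episode's example: legacy system at 40%, AI solution at 60%.
result = evaluate_release(0.60, 0.40)
print(result)  # 20 percentage points above the legacy baseline
```

The useful part of framing it this way is that "good enough" stops being a gut feeling and becomes a number you can track release over release.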
00:12:08: I will also add maybe to what you've mentioned.
00:12:11: I think it's really key, especially with AI, to have some monitoring or explainability, some dashboards behind the scenes.
00:12:18: And there are ways to do it: a lot of the cloud
00:12:20: providers actually have tools for you to deploy models and then do some model monitoring and explainability.
00:12:27: You can build it behind the scenes for the folks who are monitoring these systems, overseeing things like retraining or just improving the engineering.
00:12:39: And there should be some components that are also built into the user experience.
00:12:44: For GDPR and other considerations, you probably want to consider it anyway,
00:12:50: because I think users are gonna want to know how the algorithm came to a decision.
00:12:55: So that's why you see more and more systems start to link to sources because again, it's meeting a certain requirement of explainability to actually surface the reasoning kind of
00:13:07: for how the model came up with the recommendations.
00:13:11: So I would add maybe a few of those, but yeah, I totally agree.
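One way to picture the source-linking idea discussed here: carry the evidence along with every answer, so the UI can link back to the original documents. This is a sketch with made-up data; the `GroundedAnswer` shape is an assumption for illustration, not any specific vendor's API.

```python
from dataclasses import dataclass, field


@dataclass
class GroundedAnswer:
    """An answer that carries its evidence, so the front end can show
    users how the system arrived at its recommendation."""
    text: str
    sources: list = field(default_factory=list)  # e.g. document titles or URLs

    def render(self) -> str:
        # Append a numbered source list, the pattern behind the
        # "link to sources" behavior mentioned in the episode.
        refs = "".join(f"\n  [{i + 1}] {s}" for i, s in enumerate(self.sources))
        return self.text + ("\nSources:" + refs if refs else "")


answer = GroundedAnswer(
    text="Restart the device, then re-run the installer.",
    sources=["setup-guide.pdf, p. 12", "faq.md#install"],
)
print(answer.render())
```

Keeping the sources attached at the data-model level, rather than bolting links on in the UI, also makes the behind-the-scenes monitoring easier: every logged answer already records what it was grounded on.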
00:13:15: Yeah, interesting.
00:13:16: Yeah, that's definitely another angle.
00:13:18: So if your current solution is only useful 40% of the time, then of course 60% would be a big improvement and actually useful.
00:13:27: That's also something: don't let yourself be discouraged too quickly if stuff doesn't work out as well as you'd hoped.
00:13:37: Aim for that good enough as an MVP and build from there,
00:13:45: because there is actually a ton you can do.
00:13:48: One of the biggest mistakes I made, which kept me from even getting to a point where I had a finished product to get out to people, was that I completely overestimated what it should do,
00:14:03: because we had the first talks with the first test users, and they were telling us a lot of stuff, I would love to have this and this, and I was writing it all down, and was
00:14:13: coming up with
00:14:15: brilliant, quote-unquote, systems.
00:14:17: And I completely underestimated the effort behind it.
00:14:27: I'm still thinking I know how to do it, but getting it to that working state, with all the complexity it involves, if you want all the bells and whistles, you won't get
00:14:40: to any solution at all.
00:14:42: So...
00:14:44: Focus on the use case you set out to do, make that one work first, and then add on gradually and carefully, because, as always, and this is the same for current AI systems
00:15:00: as it was for older software systems, complexity adds up, and it adds up quickly.
00:15:08: And then you get into errors which would have been
00:15:13: avoidable if you had just taken smaller steps.
00:15:16: Yeah, stuff gets pretty complex pretty quickly.
00:15:21: So always take a look at this: do you really need that feature?
00:15:27: Or can we go about it without adding even more stuff, and just take the minimal feature set you can think of that would still actually be useful?
00:15:39: And to connect this to: who's responsible for doing this?
00:15:43: I would say someone on the product side.
00:15:45: So if you have a product manager, they determine the strategy, evaluate what good enough is, push into production, collect feedback.
00:15:58: Ultimately the product person, whoever you have on your team, a product manager or product owner, would be able to look at that feedback and prioritize.
00:16:08: So rather than saying we'll deliver all of these,
00:16:13: as part of the initial build, you should have some success metrics for what you consider a full-fledged solution.
00:16:19: So you speak to it in terms of capabilities.
00:16:22: So as a user, I should be able to X, Y, and Z.
00:16:25: So you meet those criteria.
00:16:27: That's what you would consider first part of the project.
00:16:30: That's what you kind of mutually agree.
00:16:32: If it meets a certain criteria, that's what you launch with.
00:16:35: Everything else becomes a fast follow.
00:16:37: So what you're talking about is the feedback
00:16:39: following that phase of the product: you still have to prioritize and say, okay, next iteration, we do X, Y, and Z.
00:16:47: And that way everything moves along.
00:16:49: It doesn't mean that you'll never deliver on those features.
00:16:52: It means that you're gonna have to prioritize the most critical ones to the user experience, to the problem you're trying to solve.
00:16:59: And then you're gonna move on and ultimately create a roadmap. And you actually want that, because
00:17:07: you want that continuous feedback loop.
00:17:09: Over time, you are going to receive things that you may not have anticipated,
00:17:15: call them edge cases, things that you hadn't built out.
00:17:18: And then someone three months later says, and it's happened to me before,
00:17:22: it's the worst system,
00:17:23: how could you have put it into production without realizing this?
00:17:26: When it only happens once in a million use cases, right?
00:17:30: Like there are going to be things that you cannot anticipate.
00:17:35: And it only happens once in a blue moon as far as output goes, and you want to resolve that.
00:17:40: But you also look at the impact.
00:17:43: So there's multiple things that you also have to evaluate to say, yeah, I think this is worth actually us prioritizing next versus others.
00:17:50: But again, that feedback is going to continue to build out your roadmap and things like that, and you're going to continue to enhance.
00:17:57: And once you've taken care of as many of these features as possible, guess what's going to happen?
00:18:00: You're going to have to update the model.
00:18:01: You're going to have to
00:18:03: do some other enhancements; services get outdated.
00:18:06: So there's always things to do.
00:18:08: There's no AI system that is completely stale, because you have feedback loops and everything like that.
00:18:14: There's always stuff to do.
00:18:16: And I just want to emphasize that.
00:18:18: You've kind of talked about like, there's so many features, so much complexity.
00:18:21: And I think that's okay.
00:18:23: As long as you have someone who manages that product to help you prioritize, so that you can continuously deliver value back to your users, whether
00:18:33: internal or external.
00:18:35: And that's why I think a lot of companies in AI started to publish their roadmaps: it's like, we've heard you, you can go into the
00:18:43: roadmap and see when that feature is coming next, so that you know they're aware and they've prioritized it according to, again, value.
00:18:52: Not only AI companies; software companies in general started doing this.
00:19:00: And also, like, I don't say don't build features into it; just don't start with all the features.
00:19:05: That's what I was basically saying.
00:19:08: Because that's crucial.
00:19:09: Because also stuff pivots a lot.
00:19:12: So I came up with like this huge feature set.
00:19:18: I can tell like one example, I was loading historical support chats to analyze them and to improve the response.
00:19:28: This is like...
00:19:29: It's a good idea in principle, but first of all, these support chats have to be classified.
00:19:36: They come in as one continuous chat.
00:19:38: When you load them, you might be able to separate them by date, but in the end I ended up programming a GenAI solution which analyzes the
00:19:51: text and just tries to figure out
00:19:56: if, in some way, shape, or form, a new conversation started here.
00:20:01: Of course, that's a mess, and I didn't need it at all.
00:20:05: Like the base functionality is good enough without this.
00:20:11: It just completely, it cost me weeks and weeks.
00:20:19: And also, on the other side, I had other stakeholders
00:20:25: waiting for the output.
00:20:26: And I was like, yeah, I'm close to finish.
00:20:28: I'm close to finish.
00:20:29: I never was, because I couldn't get the feature right, and I didn't need it in the first place.
00:20:35: So that's what I mean.
00:20:36: Like keep it simple, bring stuff out after you have some traction, after you have real user feedback of people that are really using it, then go in, build on more stuff.
00:20:47: And for example, now we did a roadmap ourselves and we looked at this feature.
00:20:55: I don't even know if I'll do it next year,
00:20:57: because that's how far we've pivoted already; still in the rollout phase, we've pivoted that much.
00:21:09: So I would say like, I would now prioritize, for example, a voice chat over something like this, because the base functionality would be even more useful if I add voice
00:21:21: interactions.
00:21:24: and do other stuff, and this whole history feature was a nice idea.
00:21:30: But it was a programmer's-dream kind of idea and not actually that useful, to be completely honest.
00:21:43: And that's what I mean.
00:21:44: Like that was a real world practical example where I completely missed the mark with my planning of the product.
00:21:53: I now have taken several steps back, even stopped developing some stuff that's not directly related to delivery of the software at all and said, OK, this year we only do
00:22:07: improvements and make stable what we have.
00:22:11: And from a stable platform, then you can still build everything on top of it.
00:22:15: But first of all, get your stuff stable, which is the first
00:22:21: prerequisite for a scalable solution: it being stable.
00:22:26: Yeah.
00:22:27: so I like that a lot.
00:22:29: And I think there's so many things that I can talk about from a product standpoint that I'm happy to...
00:22:36: If anyone's interested in hearing more on how you actually build those roadmaps. And I'm not saying that what you did was wrong; I think there's a lot of experimentation that happens, and I
00:22:45: have so many different lessons, even from the past, that sometimes what you
00:22:49: come up with initially in a project is almost like a guessing game, because it's based only on your understanding of a use case, until it's in production
00:22:58: and you start getting real user feedback.
00:23:00: So even sometimes the road mapping could be flawed because sometimes you were building them out 12 to 18 months in advance.
00:23:06: And guess what?
00:23:07: Like if you stick to it, you can be completely off mark.
00:23:10: So that's kind of why you have to reevaluate types of things.
00:23:14: If you're going to build something and not going to
00:23:19: make a mistake or two with your features, you're doing it wrong. So there are always gonna be lessons, things where you're gonna be like, well, we thought it was this, but
00:23:30: actually the way people are valuing this is completely different.
00:23:35: So be ready for pivots. But prioritize; I think the lesson here as well is: prioritize what you know in the near term, and
00:23:48: let your
00:23:49: guide be the minimal critical features, then push it into production, get feedback, iterate, and continue.
00:23:58: Because yeah, if you try to go for a complete set of features and everything you have in the guessing game without getting user feedback, you may get
00:24:08: the experience wrong.
00:24:09: But you also invested so much time.
00:24:11: Yeah, that's a lot of fun.
00:24:13: That's basically what I mean, the time loss.
00:24:18: It's so, so painful, and time is valuable, especially in that manner, in getting stuff out versus not getting stuff out.
00:24:32: And also: you start drafting the first presentation, you overpromise, you cannot deliver.
00:24:37: So just say, hey, this is a product.
00:24:40: That's what it does.
00:24:41: If this is your use case, it will help.
00:24:44: Otherwise you may have to wait before we have a bigger feature set.
00:24:50: And yeah, just be confident that the solution you're shipping is already good enough and you don't need all the bells and whistles.
00:25:01: Full stop.
00:25:03: Afterwards, yeah, take feedback, build on it, build it out.
00:25:08: But then you also have something to work with.
00:25:11: In the best case, if you have a startup and you have a new solution,
00:25:14: You start selling it already, you get some money and you can hire people and do more complex stuff because you have a bigger team.
00:25:20: Like that's how we should approach it.
00:25:23: Yeah.
00:25:23: Don't make my mistake.
00:25:24: Run each project as a mini startup.
00:25:26: And I think I like that a lot because you're optimizing for speed and value delivered.
00:25:31: Because if you're in a startup, and I actually like that mindset a lot, you're kind of saying: I have a very minimal amount of money and investment in my project.
00:25:40: I need to really stretch that dollar, but I need to get the product out there in the market
00:25:44: in the quickest time possible.
00:25:46: So with that framework, you're like, okay, what does this product need to do?
00:25:49: Minimum.
00:25:50: For it to be considered production level, so that when I pitch it to a customer, they'll be like: I need that.
00:25:58: It needs to be functional.
00:25:59: It needs to deliver the value and it needs to be desirable by your target audience.
00:26:04: So don't cut back too much on the features; make sure it's comprehensive, but don't go overboard, because anything beyond that is a guessing game.
00:26:13: It may not make sense for you to invest that additional time and make additional investments. Let the users speak and tell you whether they want more
00:26:24: of specific features, which then again goes into a prioritization exercise. But it could actually speed up the time for going from prototype to scale.
00:26:36: Yes. We talked a lot about building, of course, but there is also this: why don't we just buy it?
00:26:49: I even hear things like, what's the difference to Microsoft's Copilot, for example?
00:26:55: And I'm like, can Copilot do this?
00:26:57: And they're like, no, that's the difference.
00:27:00: And we cost a third.
00:27:04: So yeah, but that's a consideration.
00:27:09: Not only your customers should ask that; you should ask yourself too. Because the thing is, even if you want to deliver a solution, building all of this tech yourself never was a good idea.
00:27:21: There is a lot of open-source, and a lot of closed-source, stuff you can consider taking into your solution. You wouldn't build a new Zapier if you could just use an adapted Zapier,
00:27:32: for example, for your automated
00:27:34: workflows.
00:27:35: And that's basically what we say about every AI solution in the prototype phase.
00:27:42: Yeah, you try to build out a prototype with the minimum effort.
00:27:47: And if it's just typing stuff into ChatGPT, that's perfect.
00:27:52: Take the minimal approach before you tackle the actual solution.
00:27:58: So I did a lot of that.
00:28:02: When the whole GenAI stuff started, I had a project where I needed to do a lot of translation, really a lot of translation.
00:28:09: So what I did was take GPT-3.5 at the time, because it was able to translate in context.
00:28:15: I was saying: this is the context, this is the source, please translate it to German or English or whatever.
00:28:22: And it did a wonderful job of that.
00:28:26: And the way I tested this out before
00:28:30: doing any coding: I was just taking some captions, throwing them into ChatGPT, and seeing what the outcome was.
00:28:37: And I was seeing, okay, this works most of the time.
00:28:40: Okay, let's build something.
00:28:42: And that's how I approached this back in the day.
00:28:49: And even today, I would say: would I build it myself again?
00:28:53: I don't think so.
00:28:55: There are a lot of open-source translation solutions; I might've taken
00:29:01: one of those and just worked with it.
00:29:04: Yeah.
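The in-context translation setup described here can be sketched roughly like this. The prompt builder is illustrative, not the original project's code, and the commented-out call uses the official OpenAI Python client purely as an example; the model name and the exact wording are assumptions.

```python
def build_translation_messages(context: str, source_text: str,
                               target_lang: str) -> list:
    """Build a chat prompt for in-context translation: give the model
    the surrounding context plus the segment to translate, as described
    in the episode."""
    return [
        {
            "role": "system",
            "content": f"You are a translator. Translate into {target_lang}, "
                       "preserving tone and terminology from the context.",
        },
        {
            "role": "user",
            "content": f"Context:\n{context}\n\nTranslate this segment:\n{source_text}",
        },
    ]


# Hypothetical call via the official OpenAI client (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=build_translation_messages(ctx, caption, "German"),
# )
# print(reply.choices[0].message.content)
```

This also mirrors the "test in ChatGPT first" step: the same two-part prompt can be pasted into the chat UI by hand before any code exists.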
00:29:06: I would say there's also a spectrum of what you consider build versus buy.
00:29:11: So, I think I even had a post on this on LinkedIn at some points.
00:29:17: If you're curious, you can go back and take a look.
00:29:20: But basically, when you buy a solution, it can come in multiple different ways.
00:29:25: So when you buy, you can buy an off-the-shelf solution,
00:29:28: basically a packaged subscription: you just log in and you start using it, upload your documents.
00:29:32: ChatGPT is a good example of that.
00:29:34: You can also license OpenAI's models.
00:29:37: So you're ultimately buying an existing model, but you're building software layers and customization.
00:29:43: So it's not necessarily buy completely because there's some customization work you do.
00:29:48: So it's almost like a mix of the two.
00:29:52: So you are still paying for the licensing because you don't have to build that proprietary model.
00:29:58: And then, of course, if you build, what that means is you don't think that any of the existing models, codes, open source tools offer the same benefit that you're looking for.
00:30:08: So this is kind of what goes back to your comment about, would I ever build this myself?
00:30:14: Sometimes the state of where you are right now, there may not be a tool that does what you need it to do.
00:30:21: And sometimes it makes sense for you to actually build the model.
00:30:24: I'm talking about machine learning.
00:30:26: computer vision, whatever it is that you need to customize based on your own data.
00:30:30: Sometimes it makes sense if it delivers kind of business value, but a lot of the solutions are going to fall onto the spectrum depending on where you needed to work in the business.
00:30:40: I would even say the buy-build type of mix could also come from the fact that right now, software as code, or as APIs, has become so democratized through
00:30:55: a lot of these platforms. Even, like we talked about earlier today, Python libraries are so widely available that a lot of these features you don't even have to develop.
00:31:04: The reality of what the build of some of these custom AI solutions has been is a combination of existing libraries, models, and tools, all orchestrated together under one
00:31:19: application.
00:31:20: So some of it is exactly that.
00:31:22: And I've had experience actually doing a lot with existing code, like modular services and APIs, and then building a very minimal software layer on top of it.
00:31:33: And it's possible.
00:31:34: So you would consider that actually a buy type of a solution because you're still licensing.
00:31:39: But it is also build, because you are customizing some layer on top of it.
00:31:44: And the intelligence and the IP is really in how you're orchestrating
00:31:48: the solution.
00:31:49: So it goes back to like what's intellectual property.
00:31:51: I'm actually trained in IP law.
00:31:56: So it's really the orchestration, how you actually put these solutions together, the logic behind it and the code that you layer on top of it to support an integrated customer
00:32:06: experience.
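The orchestration point made here can be pictured as a thin pipeline layer: each step wraps an existing service you bought or pulled from open source, and the intellectual property is in the wiring. A deliberately minimal sketch; the `Pipeline` class and the stubbed steps are hypothetical, not anyone's actual product.

```python
from typing import Callable


class Pipeline:
    """Minimal orchestration layer: the value is in how existing
    components are wired together, not in any single component."""

    def __init__(self):
        self.steps: list[tuple[str, Callable]] = []

    def add(self, name: str, fn: Callable) -> "Pipeline":
        self.steps.append((name, fn))
        return self

    def run(self, payload):
        # Each step would normally wrap a bought or open-source piece
        # (an OCR API, an LLM call, a vector database lookup).
        for name, fn in self.steps:
            payload = fn(payload)
        return payload


# Stubbed with plain string functions standing in for real services.
pipeline = (Pipeline()
            .add("extract", str.strip)
            .add("normalize", str.lower))
print(pipeline.run("  Hello World  "))  # -> "hello world"
```

Swapping one vendor's component for another then only touches one `add` line, which is exactly what makes the orchestration, rather than any single model, the durable part of the solution.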
00:32:11: We talked about the Rabbit R1 in one of the newest episodes.
00:32:15: They basically just repackaged stuff, right?
00:32:18: They took GPT-3.5, they took ElevenLabs for the audio, and they repackaged it and made up some stories about it.
00:32:29: But in the end, they didn't do a lot of stuff besides their own prompting.
00:32:34: I don't say that's the best way to do it or how you should go about stuff, but think about what you can leverage, because
00:32:45: And that's the most important stuff.
00:32:47: You have to focus on the business value delivered.
00:32:51: And what is the quickest way to get there?
00:32:54: And like Svetlana says, orchestration is half of the game most of the time.
00:33:00: And everything else, what you might need to adapt or integrate for your own stuff, that's effort worth spending,
00:33:12: but not building out the whole thing before you even get to one first scalable run.
00:33:19: Yeah.
00:33:22: Yeah, a lot
00:33:23: to talk about, more than fits here.
00:33:25: I think we can spend even more time diving into it.
00:33:27: Yeah, we will definitely have some more product episodes following.
00:33:33: Yeah.
00:33:33: And let us know, as always, if there's any particular topic that you're curious about: sample applications, or use cases, and how to,
00:33:42: I don't know, solve a specific need with it.
00:33:44: Happy to, again, get your feedback and dive into that topic.
00:33:47: We want to make sure that we deliver value back to you.
00:33:51: So don't forget to subscribe because we do this for you guys, and your support means a lot to us.
00:33:57: Yeah, definitely.
00:33:59: Yeah, thank you very much for listening.
00:34:00: It was nice to talk.
00:34:02: I just love talking about product development and also about how to get to a concept, how to evaluate ideas, stuff like that.
00:34:12: There's a lot to talk about.
00:34:15: Let us know in the comments what you want to hear next.
00:34:17: Maybe you heard a topic today which we briefly touched and you want to hear more about it.
00:34:23: Let us know here.
00:34:24: Let us know on LinkedIn.
00:34:26: And yeah.
00:34:28: And we'll suggest also somewhere on the screen, we'll pop another recommended video from our previous talks.
00:34:35: And yeah, if you're interested in hearing more about AI for business applications, just watch our next video.
00:34:42: Thank you.
00:34:43: Thank you.
00:34:43: Bye.