Enhancing AI Agents for Business Use, OpenAI Updates - EP 013

Show notes

In this episode, we dive into the latest releases from OpenAI, including GPT-4o, and how these advancements are revolutionizing AI-human interaction. We explore the implications for business owners, discussing practical applications, multimodal capabilities, and the potential for enhanced customer service and productivity.

Show transcript

00:00:00: Hello and welcome to this week's episode of the AI Boardroom.

00:00:06: And with me again, my wonderful co-host Svetlana.

00:00:10: Tell us about today's topic.

00:00:13: Yeah, today we want to talk about ChatGPT and some of the recent releases

00:00:17: that OpenAI has announced in the last couple of weeks, and how that is redefining

00:00:24: the human-to-AI interaction.

00:00:26: And what does that mean to you business owners?

00:00:30: And I'd say, you know, this will even cover

00:00:33: maybe everyone out there, but mostly you business owners.

00:00:36: Like...

00:00:37: and I think some important topics will go from the practical all the way to the

00:00:41: conceptual.

00:00:42: So yeah, if you're interested in hearing more, stick around.

00:00:47: Yeah.

00:00:47: Yeah, so I would like to introduce today's topic a bit by trying to

00:00:56: talk about what ways of communication there are.

00:00:59: Also a bit about what my experiences with different AI interactions are, because some of

00:01:04: them are cool on paper, cool in demos, but not that cool in practice, which, yeah, a

00:01:10: lot of it comes down to what people are used to.

00:01:13: And for me, I have an AI solution, which is basically a chatbot.

00:01:19: It does a bit with images, but it's still a chatbot.

00:01:24: I have been totally capable, for some time now, of doing voice and real-time voice.

00:01:33: And I didn't do it because people are not used to it.

00:01:38: And even me,

00:01:40: Like I'm getting used to it, but even me, like I'm a digital native.

00:01:43: I grew up with computers.

00:01:45: I built my first computer when I was 10.

00:01:47: Like I really dig the whole AI stuff.

00:01:51: And still, the chat interaction is what feels the most natural to me.

00:01:56: And I'm still hesitant talking to my AI in a conversational manner.

00:02:04: Is it because you've been burned by the technology that maybe wasn't capable

00:02:09: enough for you to carry on the conversation?

00:02:12: It didn't help, to be honest.

00:02:14: I think that's what's probably messing a lot of folks up in adopting some of this

00:02:21: voice technology: because of Siri, or Alexa, I mean Google Home, and everything

00:02:26: like that.

00:02:26: They're unable to carry on the conversation.

00:02:28: Alexa was actually a good experience from day one.

00:02:32: It's not perfect, but it's really fast.

00:02:36: It's just activated.

00:02:41: Yeah.

00:02:42: Yeah, it's just seeing the blue light in the corner of my eye.

00:02:47: So yeah, that's the better experience.

00:02:52: Also Google Assistant is also a really usable experience for quite some time now.

00:02:56: So it's mostly Apple that hasn't updated Siri since 2015 or something.

00:03:04: But rumors have it this year is the year.

00:03:08: So yeah, looking forward to that.

00:03:11: I'd love to, and honestly, I'm really rooting for it, because GPT-4o

00:03:17: integrated into iOS, deeply into iOS on my phone...

00:03:22: I would love that.

00:03:24: And everyone will have Scarlett Johansson talking back to them.

00:03:28: Or not, yeah, we'll see.

00:03:31: We'll see how the judge decides.

00:03:35: But yeah, I don't know what it is.

00:03:38: I found that older people, like in their 50s, even 60s, oftentimes don't

00:03:47: use the keyboard as much because they are not as fast at typing.

00:03:50: And they use voice to type a lot.

00:03:54: Mm-hmm.

00:03:55: And wait a second.

00:04:00: Keep going.

00:04:00: Was that about a sneeze coming on?

00:04:04: But I think that's a good point: yeah, it depends on people's maturity, as you

00:04:10: mentioned, like whether they have grown up with this technology, and what their comfort

00:04:13: level is. But I would argue that we shouldn't be building technology just for the

00:04:17: people of today, because the technology is going to be used most by millennials

00:04:22: and probably the upcoming generations who were born into the technology.

00:04:26: So they're much more likely to adopt it.

00:04:28: Like my kids are growing up with Alexa.

00:04:31: We have both Alexa and Google Home, and, mind you, they don't speak

00:04:35: English.

00:04:37: They still try; they learn English with Alexa and Google Home because they

00:04:42: are motivated to converse with her.

00:04:46: Like if they want something to get done, they go and they learn whatever task

00:04:50: needs to be completed.

00:04:50: So there is, I think, an emotional, like, not emotional, but there's a need, and

00:04:55: people will go out of their way to,

00:04:57: like in my kids' case, actually learn the language to be able to take advantage of

00:05:00: the system.

00:05:01: So again, I think always designing for the people, you know, of today is

00:05:06: probably not the best mentality, but I do think that over time, voice will become a

00:05:11: much more native experience.

00:05:14: But the people of today are most of the people.

00:05:15: So you cannot ignore them.

00:05:20: And you have to keep in mind, it's the majority, not the minority.

00:05:27: So yeah, I'm with you, you have to keep in mind that there will be a skill set, a new

00:05:34: skill set, that is needed for the future of technology.

00:05:38: It has always been like that.

00:05:40: But still, we have to at least show people a path to easily get on the train and to

00:05:47: not have to jump onto a moving one.

00:05:52: And so, yeah, one thing that I think also is an issue is the whole notion of it

00:06:02: being clunky and kind of like, yeah, not really responsive, not really natural.

00:06:08: Mm-hmm.

00:06:08: that's, I think, where the latest updates of Google and OpenAI and Microsoft and

00:06:14: everyone came in, like multimodality and also small models that can be executed on

00:06:21: your device.

00:06:22: All of that gives you a new frame, a new canvas basically to work on.

00:06:29: And this canvas is latency free.

00:06:32: It gets your emotions.

00:06:34: It is really...

00:06:38: really starting to become helpful by quite a margin.

00:06:43: And that's what's opening up for us.

00:06:46: And to be honest, I'm a bit disappointed that I'm still not able to use the new

00:06:51: voice experience of ChatGPT, because I'm really looking forward to it.

00:06:55: Because yeah, it got my attention.

00:06:59: It's actually starting to exhibit emotions too, right?

00:07:02: Like, and personality and things like that.

00:07:04: So it's two-sided.

00:07:05: So it not only interprets them.

00:07:07: Yeah.

00:07:07: right?

00:07:08: Last week was Microsoft's Build conference, which is Microsoft's developer

00:07:15: conference, and they showed a demo of GPT-4o integration into Windows, or into

00:07:22: Copilot for that matter, and they had a screen share running and...

00:07:29: the AI was watching the screen basically, and they were playing Minecraft and it

00:07:34: gave live updates on the Minecraft game.

00:07:38: And then in the game, he turned around and said, what's that?

00:07:42: And then the AI, with a lot of urgency, was like, hey, it's a

00:07:48: zombie, run away, find somewhere to hide, a shelter or something, and gave

00:07:52: really actual, usable advice in an urgent tone, which was...

00:07:58: pretty cool.

00:07:59: And the latency, which was ridiculously low.

00:08:02: Like really, it wasn't sped up.

00:08:03: It was a live recording.

00:08:08: Interesting, really interesting.

00:08:11: So it also opens up so many new use cases.

00:08:16: And to be honest, I'm pretty sure AI will be better at customer service

00:08:26: pretty quickly than most of the customer service people are, just in terms of

00:08:30: always being polite, never being in a bad mood, always having all the answers that

00:08:36: are available.

00:08:39: So, yeah.

00:08:40: One ChatGPT-4o model rather than 20 agents being live, because it's available 24/7.

00:08:47: Yeah.

00:08:48: it's a huge opportunity.

00:08:53: And yeah, I find myself like more and more rooting for companies to add AI to their

00:09:04: service, like customer communications.

00:09:08: Because honestly, like I know the information I want is really mundane, but

00:09:15: I need it.

00:09:17: why can't an AI explain it to me?

00:09:20: Just go ahead, launch the stuff, honestly.

00:09:24: And I think even with my experience calling agents, that's the one thing that I hate

00:09:29: doing, and if I can figure it out on my own, I would always go through the website

00:09:35: and try to find the answer, Ctrl-F, if there's no agent, like, you know, chatbot

00:09:40: available.

00:09:41: I dread calling the 1-800 number because I'm like, it's going to ask me 500

00:09:47: times to press these buttons, and it's going to send me to a...

00:09:52: If you want the reception in English, press 2.

00:09:54: If you want the reception in Dutch, press 3.

00:10:00: Yeah.

00:10:01: I've just had such bad experiences that I think if, truly, your

00:10:05: experience was to call a 1-800 number and you immediately talk to someone who

00:10:09: doesn't care what questions you ask, who is going to handle it all without the numbers

00:10:13: and selections.

00:10:14: I mean, that's huge, I think, for folks.

00:10:18: But one of the other use cases that I think I'm really excited about, which is

00:10:23: like outside of customer service, but like,

00:10:26: Any products and services, whether it's like an Alexa type of speaker or like a

00:10:31: personal robot, whatever have you, being embedded with these AIs?

00:10:36: Because we're talking about, you know, this human-to-AI interaction for the elder

00:10:40: population, because a lot of them are aging in their homes and they are by

00:10:45: themselves oftentimes and they don't see a lot of people in their day-to-day.

00:10:50: And they've started to adopt, you know, these like robotic pets.

00:10:54: that of course don't talk.

00:10:56: Sometimes they, again, I've worked on some products that actually, this is true, and

00:11:00: we've done some research on this, that they talk to Alexa as if it was a person

00:11:04: because they had very minimal interaction.

00:11:07: How cool would it be if they had access to this ChatGPT-4o omni version, like Sky?

00:11:15: They could carry on like a full conversation.

00:11:17: Like...

00:11:19: Yeah, like, oftentimes, like, I'm, lately, like last year, I'm not really good at

00:11:25: following through on my personal goals, like eating behavior, stuff like that.

00:11:30: And like, just like having the AI know that and just talk to it about that, just

00:11:37: to get some reflection on my own thoughts, I think that could be hugely valuable.

00:11:44: And now that it has video, it can be like, Edgar, put that cake down.

00:11:47: Yeah, don't go to that restaurant.

00:11:49: You know, you don't want Burger King, or put in a random fast food name here.

00:11:58: No, accountability, I think, is also going to be huge because it can have that

00:12:04: longer-term memory to understand your goals and then track your

00:12:08: progress towards them.

00:12:09: So I think that the use cases and possibilities, like you can integrate that

00:12:13: into an app and have the LLM kind of run the tracking with the voice and kind of

00:12:20: like this personality that again, you rely on.

00:12:23: You don't need a coach.

00:12:25: You don't need like an accountability partner.

00:12:28: when you have access to a powerful tool like that.

00:12:32: and then it holds you ransom with your browser history.

00:12:41: But first, I would love to take a small step back.

00:12:45: GPT-4o is the new model from OpenAI.

00:12:49: For those of you who didn't know, your ChatGPT got a lot better lately.

00:12:53: That's because of the new model.

00:12:56: The GPT-4o model is not...

00:12:59: released to its full capabilities as of now, but will be in the upcoming weeks,

00:13:04: at least as far as they say.

00:13:09: And the O stands for Omni, with Omni meaning multi-channel or multimodal.

00:13:16: And what is multimodality in the first place?

00:13:20: So right now, chat with AI was mostly like, I type in text and it gives me text

00:13:25: back.

00:13:26: It's a large language model in the end, so yeah.

00:13:29: It's trained on a lot of text data.

00:13:33: Multimodal models are able to read not only text, but also stuff like images, or,

00:13:40: in the case of GPT-4o, also audio, natively.

00:13:44: Because up to now you could also, like in the old version, you could talk to

00:13:50: ChatGPT, and you could get a spoken answer back.

00:13:57: This was...

00:13:58: basically taking your voice, running it through another model, making text from

00:14:03: the voice, putting that into the inference, like text to text, getting text back.

00:14:12: That text is then synthesized back into a voice to give you the voice

00:14:17: answer, the audio answer.

00:14:19: Now you give audio in, it gives audio out.

00:14:23: And you give video in, it gives audio out.

00:14:26: And that's also the beauty of it.

00:14:29: If it's multimodal, you can switch between modalities basically.

00:14:34: And yeah, that's a huge leap.
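As a rough illustration of the cascaded pipeline Edgar describes above, here is a minimal sketch using the OpenAI Python SDK; the file names, voice, and model choices are illustrative assumptions, not a prescription. A natively multimodal model collapses these three hops (speech-to-text, text inference, text-to-speech) into a single audio-in, audio-out call.

```python
# Minimal sketch of the older, cascaded voice pipeline: speech-to-text,
# text-to-text inference, then text-to-speech. Assumes the official OpenAI
# Python SDK and an OPENAI_API_KEY in the environment; file names and model
# choices are placeholders.
from openai import OpenAI

client = OpenAI()

# 1) Transcribe the spoken question to text.
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2) Run ordinary text-to-text inference on the transcript.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": transcript.text}],
)
answer_text = completion.choices[0].message.content

# 3) Synthesize the text answer back into audio.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=answer_text,
)
speech.stream_to_file("answer.mp3")
```

Every hop adds latency, and the tone of voice never reaches the language model, which is exactly what a single multimodal call removes.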

00:14:37: What it gives you is speaker recognition, recognizing that there

00:14:44: are different people speaking.

00:14:47: It's the ability to interrupt, which was shown a lot in the GPT-4o demos.

00:14:55: Mm-hmm.

00:14:56: And it gives the AI the option to get the information in between the content, which

00:15:02: is your tone.

00:15:05: Do you speak in a loud voice?

00:15:08: Are you anxious?

00:15:10: Whatever.

00:15:11: It gets the feeling, the non-spoken part of the audio.

00:15:17: And that's, yeah, I think that's hugely valuable for getting even better applications

00:15:23: that

00:15:25: also have low latency, because you don't have to do all the transformation steps

00:15:28: and the answer comes quickly, and that's what we also saw in the demo of

00:15:35: GPT-4o.

00:15:36: I think one other benefit that I'm surprised you

00:15:41: aren't speaking of, because of the two of us you're the more technical

00:15:46: one, but if you think about it historically, if this multimodal version did not exist,

00:15:51: you would have to integrate a lot in order to support that type of an experience.

00:15:54: You would have multiple AI capabilities that would be a spaghetti of an

00:16:00: architecture behind the scenes in order to, again, support that same

00:16:04: experience.

00:16:04: Now you have fewer...

00:16:06: integration points.

00:16:08: One system, you basically select, and now you're going to have choices.

00:16:12: I think now we have Astra from Google, we have OpenAI, but I'm sure there's going to

00:16:17: be competitors emerging in the market.

00:16:18: But the users of this are the ones who benefit.

00:16:22: So you'll have choices.

00:16:24: So you don't have to think about, well, what model and how am I going to architect

00:16:29: this and stuff like that?

00:16:29: You kind of have to think about which model am I going to insert?

00:16:33: What are some use cases?

00:16:34: What is the experience that I want to support?

00:16:37: And some of these tools are already available to you and they're pretty easy

00:16:42: to plug and play from proof of concept.

00:16:44: I think Edgar can speak much more at length as to how operationalizing this

00:16:49: tool is probably more like 90% of the effort of actually bringing it into

00:16:53: production.

00:16:54: But most of it is very streamlined because you only need to rely on one model

00:17:00: in order to support that customer, kind of like, end-to-end experience.

00:17:03: And I think that's really powerful.

00:17:04: And the more capable the model is and the more it understands just from the get-go,

00:17:11: the easier it is for you to just prepare the right context.

00:17:17: And also you have more context from the same data if you want to.

00:17:22: Because you had the audio signal before, but you could only take the text

00:17:26: from it.

00:17:27: You couldn't take the

00:17:28: emotions from it; although there is also, I think, Hume AI, they are

00:17:33: pretty big at recognizing emotions from any modality, basically.

00:17:39: And you could also take that into consideration, but you would only

00:17:43: build text from that, and then get the answer out as text, which then has to

00:17:51: be translated back.

00:17:52: So yeah.

00:17:54: Yeah, and of course, the fewer moving parts you have, the better; it always was

00:17:59: like that, always will be.

00:18:02: So yeah, having one model which can do it all gives you the ability to do more with

00:18:12: what you have already.

00:18:14: And generally, it enables use cases you haven't even dreamed of.

00:18:19: And there has been a lot of discussion about something I think you're really strong in, which is

00:18:23: agentic AI.

00:18:25: And so, following Google Next, there was a huge talk, and I think the term agentic AI

00:18:33: has been trending ever since.

00:18:36: They even released use cases where these companies, their customers, are using

00:18:42: Google's technology and some of the models they make available through their cloud

00:18:47: platform.

00:18:48: But I think...

00:18:49: you know, none of the use cases actually came close to the potential that

00:18:55: 4o is going to enable you with in agentic AI.

00:18:59: Yeah, what do you think, what do you think 4o will enable us to do more

00:19:06: of from a productivity perspective, and how do you envision maybe even

00:19:13: businesses using it, maybe internally even, like, is that an option?

00:19:18: What are the...

00:19:19: What are your thoughts?

00:19:20: So generally speaking, that's a really good question.

00:19:27: Because one thing that GPT -4 is worse than the model before is reasoning.

00:19:35: Reasoning capabilities are hugely important for agentic behavior because the

00:19:41: model has to decide based on reason what to do with the given information.

00:19:49: and how to plan out and stuff like that.

00:19:51: So the worse your model is at reasoning, the worse it will perform on agentic tasks,

00:19:57: and the performance, if you go even a little bit beyond the easiest tasks, will give

00:20:07: you poor results as of now.

00:20:10: That said, having a better zero-shot performance, which...

00:20:15: definitely is the case in this GPT-4o model, because of course you can now hand

00:20:19: over the same data, like I said, the same audio file you had before, and it just

00:20:24: gets you a better zero-shot result.

00:20:27: That's hugely, hugely useful because then you have to do fewer calls to the AI, which,

00:20:38: if you build any application, of course will help you.

00:20:41: The AI itself gets faster.

00:20:44: The result, you get a faster result, because, yeah, you only have to do it

00:20:51: once.

00:20:52: But then it goes back to the reasoning part.

00:20:59: It might fail on executing the tasks and sticking with the plan, because it

00:21:05: hallucinates somewhere along the way, and then errors compound and it breaks or

00:21:11: influences

00:21:12: the outcome in a bad way.

00:21:15: So yeah, from an application-building perspective, you definitely have more

00:21:20: options, and honestly, close to every day I look up whether there's any news

00:21:27: on when the API will actually allow me to send audio and get audio back, because,

00:21:32: yeah, the applications are really huge.

00:21:34: If you can combine it with image generation,

00:21:38: which also will be a part of that multimodality, because now you

00:21:42: don't need DALL-E 3 for image generation, because the model itself can do it.

00:21:48: So right now you would say, ChatGPT, give me a prompt for that, and then you would take

00:21:54: that prompt and put it into Midjourney and get the image.

00:21:57: Now you just say what you want and get the image back from the same model.
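As a concrete illustration of the current two-step flow just described, here is a minimal sketch using the OpenAI Python SDK, with DALL-E 3 standing in for Midjourney (which has no official public API); the prompts and model names are illustrative assumptions.

```python
# Sketch of the two-step flow: a chat model drafts an image prompt, then a
# separate image model renders it. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Step 1: have the language model write a detailed image-generation prompt.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Write a detailed image-generation prompt for a cozy home "
                   "office bathed in warm morning light.",
    }],
)
image_prompt = draft.choices[0].message.content

# Step 2: hand that prompt to a separate image model.
image = client.images.generate(
    model="dall-e-3",
    prompt=image_prompt,
    size="1024x1024",
)
print(image.data[0].url)

# With a natively multimodal model that can also generate images, these two
# calls would collapse into a single request to one model.
```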

00:22:04: And yeah, all of that is really...

00:22:07: hugely, hugely beneficial if you want to build applications, because I've said it a lot

00:22:14: in past episodes, I say it a lot on LinkedIn: it's all about context.

00:22:20: And the better you can use your context, the better you understand your context,

00:22:23: the better the result is.

00:22:25: In terms of...

00:22:27: automating the tasks, I'm assuming.

00:22:29: So is that kind of where you're leading?

00:22:31: Yep.

00:22:32: And more tasks, because I spoke to a company which is doing real-time voice on

00:22:41: the telephone already, like already as of now, before GPT-4o and everything.

00:22:47: And they have to add stuff like "umms" and keyboard-typing sounds just to bridge the time

00:22:55: the inference takes.

00:22:56: They got it pretty low, I'll give them that.

00:22:59: but it's not instant.

00:23:01: So it's not like a real conversation.

00:23:05: And these applications might have been possible already, but they get really,

00:23:11: really humanized, basically, with this new release.

00:23:17: And I think that's really interesting, because we talk about

00:23:22: the lines blurring ultimately between humans and AI, because we're adding these

00:23:28: things that even, I mean, in the production of podcasts, videos that you watch on

00:23:33: YouTube, like that's ultimately what, you know, there are systems that are developed

00:23:37: to remove the ums and the uhs, because that's just our normal tendency to

00:23:43: speak that way.

00:23:44: Or like my tendency is always to start with "so".

00:23:48: So you have these additional keywords, not keywords, but words that you

00:23:55: add in your normal conversations, that we're now adding to AI for it to sound

00:23:59: more human-like, which I think is interesting, or, as you mentioned,

00:24:04: typing sounds behind the scenes.

00:24:05: Because again, these are the things that we hear.

00:24:07: So at some point, and we even started to experience this with, there's been a lot

00:24:11: of discussions with character AI now.

00:24:14: people starting to form bonds with these avatars that people are creating because

00:24:18: they've created and they chat with them.

00:24:21: Just think about the potential of having a voice, kind of AI agent, basically

00:24:28: speaking to you with all of these additional kind of more human -like

00:24:32: interactions.

00:24:33: I think, I mean, we're probably gonna hear more on this once the AI system is

00:24:38: released, but I do anticipate that people are gonna start forming bonds.

00:24:41: with this AI, as I mentioned, like people who are lonely, people who, you know,

00:24:46: maybe taking the system even to the extreme of developing a relationship with

00:24:51: Sky.

00:24:52: I'm already thanking it, basically.

00:24:58: Yeah, but this is such natural language that I can't help but feel inclined to do what I was

00:25:08: raised to do, and that's being polite.

00:25:13: Yeah, it's weird, but in the end, as it gets emotions now, maybe I should.

00:25:20: thank Sky.

00:25:22: better things, Sky.

00:25:26: But yeah, like you said, the line gets pretty blurry.

00:25:30: I'm already at a point where I would love to just speak to an AI, because oftentimes

00:25:36: humans don't do a good job with random situations if they're not highly skilled

00:25:42: to deal with them.

00:25:44: But if you think about it also, there's research; I actually wrote an article, I

00:25:49: have a blog post on humanizing AI.

00:25:52: I think it's interesting that people develop a form of bond and they actually

00:25:58: trust it, because one of the things about going to a therapist, there are not

00:26:03: as many people going to a therapist because they're like, what if I say

00:26:07: something and people are going to think badly of me?

00:26:09: Or, I don't want to disclose this because it's going to end up, I don't know,

00:26:14: and they're going to form the wrong picture of what my motivations were.

00:26:20: But now you have AI.

00:26:22: So what if you taught it to be non-judgmental, so I could truly be open to

00:26:28: this agent, with some caveats: that my information will not be shared and it will

00:26:33: be anonymous.

00:26:34: But you could truly use this for therapy, as a psychologist of sorts, because it could

00:26:39: recommend some of the same principles that...

00:26:42: these therapists do; again, people are going to hate me for saying this, but

00:26:45: some of the things, maybe for mild cases, like the day-to-day therapy, just

00:26:49: to keep your mind flowing or inspired, there are certain things that I

00:26:53: think AI could be delegated to do.

00:26:55: And then there are certain things where, you know, you are going to need more hands-on

00:26:59: therapy, deep-dive sessions, that AI cannot handle.

00:27:03: But I think it's an opportunity, I think, for people to

00:27:08: have this bond, to be more open,

00:27:11: where they otherwise would either not seek help or would not even have anyone else to

00:27:16: talk to and voice their concerns or get help.

00:27:19: So.

00:27:20: To be completely honest, with the current AI voice integration I was already at a

00:27:27: point where sometimes I drive home, like I have a thought I wasn't able to finish on

00:27:32: my notebook.

00:27:33: And on the way home, I just used the speech, like the microphone system in my

00:27:39: car, and I talked to it and I tried to converse properly.

00:27:47: And now I can also interrupt it if I have the feeling it misheard me, without

00:27:52: pressing any buttons whatsoever, and still being able to edit.

00:27:55: One thing also, ChatGPT-related: they added memory across

00:28:03: different conversations, if you want it.

00:28:07: And this is also hugely helpful because it starts building up like a knowledge base

00:28:12: about you and what you talked about and stuff like that.

00:28:16: And that's where it gets me excited.

00:28:19: Also, Microsoft did this Recall function in Windows, which we will talk about in

00:28:24: another episode, I guess.

00:28:28: But AI needs context to really feel like a personal assistant.

00:28:33: It needs to know you, it needs to know what you do, and the more it knows, the

00:28:38: more you're exposed, of course, but the more it can help you.

00:28:42: Because otherwise, if it doesn't know what I'm talking about,

00:28:45: most of the time, how should it help me, right?

00:28:48: Yeah.

00:28:49: I think it's only fair, right?

00:28:50: So I was thinking about this the other day: if you go in out of the blue, without

00:28:56: signing in and without the AI even knowing your context, you're just going

00:29:00: to prompt the system, like, hey, what color should I choose between this dress

00:29:06: and this dress?

00:29:06: And you upload a picture of the dresses.

00:29:09: How should it know how to answer?

00:29:12: It's just going to be like an arbitrary choice, which is similar to you

00:29:15: walking up to a stranger.

00:29:17: But without that person understanding who you are, what you stand for, what your

00:29:22: preferences are, you're just going to be like, hey, which dress should I choose?

00:29:27: Basically, that person is going to rely on their own experience and

00:29:30: they're going to provide their preferences.

00:29:31: In this case, large language models just give you the average tendency.

00:29:35: So they just rely on their own knowledge base to give you the

00:29:38: recommendations.

00:29:39: So if you want a much more personalized output, you kind of have to,

00:29:44: similarly to the stranger,

00:29:46: give it at least a little bit of your life story and your preferences in

00:29:49: order for it to give contextual feedback, something that's helpful to you.

00:29:53: So I do think that providing that context, whether it's with system prompting or

00:29:58: whatever you have, or through this kind of conversational history, you're truly

00:30:02: building this context about what your preferences are, what you stand behind,

00:30:06: and what kind of questions you tend to gravitate towards, or how you're

00:30:11: following up on, like,

00:30:13: fine-tuning the responses or maybe iterating on the responses.

00:30:17: So it's learning some of these things and that's kind of the power of AI to be able

00:30:22: to do that like infinitely and to be able to kind of handle it for different people.

00:30:28: But that's kind of the power of personalization.

00:30:30: But you have to provide that context if you want a much more accurate

00:30:34: response that will make you happy.

00:30:37: So in a way, I think that's like a fair ask.
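As a small illustration of the point about system prompting, here is a minimal sketch using the OpenAI Python SDK that asks the same question once without any context and once with a system prompt carrying the user's preferences; the profile text, question, and model name are illustrative assumptions.

```python
# Sketch: the same question answered without context (the "average" answer)
# and with a system prompt that carries known user preferences.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

user_profile = (
    "The user prefers muted colors, dresses business-casual, "
    "and avoids bold patterns."
)
question = "Which color should I choose, the red dress or the navy one?"

# Without any context, the model can only fall back on the average tendency.
generic = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# With a profile in the system prompt, the answer can reflect the user's preferences.
personalized = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Known preferences of the user: {user_profile}"},
        {"role": "user", "content": question},
    ],
)

print("Generic:", generic.choices[0].message.content)
print("Personalized:", personalized.choices[0].message.content)
```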

00:30:40: And also to all the business leaders out there or project leaders for AI projects,

00:30:45: same for you.

00:30:46: The more information you gather, the more information you also curate or let it be

00:30:52: curated by the AI.

00:30:54: If you want to know how this works, feel free to ask.

00:30:59: Because if AI is building up memory, it can structure it in a way where it can be

00:31:06: recalled easily.

00:31:08: and also already be prepared to be recalled better than if it were just

00:31:15: plain text.
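To make the idea of curated, structured memory a bit more concrete, here is a toy sketch; the field names and the naive keyword-based recall are illustrative assumptions, not a production design (a real system would typically use embeddings or a search index).

```python
# Toy sketch of curated memory: store short, structured facts instead of raw
# chat transcripts, so they are easier to recall later.
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    topic: str            # e.g. "personal goals"
    fact: str             # one curated statement
    source: str = "chat"  # where the fact came from

@dataclass
class MemoryStore:
    items: list[MemoryItem] = field(default_factory=list)

    def remember(self, topic: str, fact: str) -> None:
        self.items.append(MemoryItem(topic=topic, fact=fact))

    def recall(self, query: str) -> list[str]:
        # Naive keyword matching; good enough to show the structure.
        words = [w.strip("?,.!").lower() for w in query.split()]
        return [
            m.fact for m in self.items
            if any(w and w in (m.topic + " " + m.fact).lower() for w in words)
        ]

store = MemoryStore()
store.remember("personal goals", "Wants to eat out less often this year.")
store.remember("preferences", "Prefers concise, bullet-point answers.")
print(store.recall("What are my goals?"))  # -> only the goals fact matches
```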

00:31:17: And also stuff like context gives you the option to rephrase a question.

00:31:24: So you have a context which might be only your chat history from the actual chat, or

00:31:29: like the current chat.

00:31:30: And then you go ahead and then you get a new question.

00:31:34: Like you were talking about the dresses.

00:31:35: Like...

00:31:38: And then your next question would be like, and what about a scarf, for example?

00:31:44: And then the AI goes ahead, looks at your history, and then it can rephrase the

00:31:52: question.

00:31:53: It's like, what scarf would be good for a dress which is in that or that color, for

00:31:58: example, right?

00:32:00: And then it rephrases the question for itself before it does the actual

00:32:06: inference.

00:32:07: And that's how you can technically make context, and more of the context, usable: by

00:32:18: looking at what's coming in, looking at what is around it, and just rephrasing what

00:32:23: is in there to get a more precise answer.

00:32:25: And that's how you get rid of hallucinations.

00:32:28: That's how you get rid of false information.

00:32:30: That's how you get proper grounding for your information to be really accurate.

00:32:37: It's not perfect by any means and you need a lot of tuning to get this dialed in and

00:32:41: right, but it's hugely helpful.
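As a sketch of the question-rewriting step Edgar walks through, here is one way it might look with the OpenAI Python SDK; the prompt wording, example history, and model name are illustrative assumptions, not the exact mechanism ChatGPT uses internally.

```python
# Sketch: rewrite a follow-up question into a standalone question using the
# chat history, then run the actual inference on the rewritten question.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

chat_history = [
    {"role": "user", "content": "Which color should I choose, the red dress or the navy one?"},
    {"role": "assistant", "content": "The navy one, it fits your preference for muted colors."},
]
follow_up = "And what about a scarf?"

# Step 1: rewrite the follow-up so it carries its own context.
rewrite = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "Rewrite the user's final question as a standalone question that "
            "includes all context needed to answer it. Return only the question."
        )},
        *chat_history,
        {"role": "user", "content": follow_up},
    ],
)
standalone_question = rewrite.choices[0].message.content
# e.g. "What scarf would go well with a navy dress?"

# Step 2: run the actual inference (optionally grounded with retrieved documents).
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": standalone_question}],
)
print(answer.choices[0].message.content)
```

The same rewriting step is also what helps retrieval and grounding, because the query now carries the dress and its color rather than just "a scarf".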

00:32:45: Yeah, totally agree.

00:32:48: Yeah, if you enjoyed this episode and would love for us to dive into any particular

00:32:53: agents...

00:32:54: So I know that we covered, you know, ChatGPT, focused on 4o and kind of the

00:32:58: multimodality that it now offers.

00:33:00: But if you want us to talk to you about maybe a specific use case around

00:33:05: sales, marketing, and kind of lay out the groundwork, maybe with some tangible

00:33:11: use cases...

00:33:12: Happy to do it.

00:33:13: As we always say, we want to grow with you and we want to deliver content that you

00:33:18: find useful.

00:33:19: So let us know in the comments if there's a particular topic you'd like for us to do

00:33:24: a deep dive on.

00:33:25: But don't forget to subscribe.

00:33:27: If you want us to do this live on stream, in a session where you can ask live questions,

00:33:36: yeah, just let us know down in the comments.

00:33:39: And yeah, that's it for today, I would say.

00:33:43: Yeah, well, don't forget to subscribe again as I mentioned, like this video and

00:33:48: I guess we'll see you on the next one.

00:33:51: Thank you.

00:33:51: bye.
