EP 009 - Microsoft's AI Copilot is Changing the Game for Productivity and Efficiency - Jamie Hu
Show notes
In this episode, Jamie Hu from Microsoft discusses the impact of AI on productivity and operational efficiency. He highlights the importance of automating mundane tasks and the value of AI in reducing time spent on processing emails. Check out this and more on this week's episode.
Show transcript
00:00:00: Hello and welcome to this new episode of the AI Borg Room.
00:00:06: Today we have a special guest and really interesting topics for you, but more about
00:00:11: that from Lana.
00:00:13: Hi guys, another week, another episode.
00:00:17: This one's really exciting.
00:00:18: We have Jamie from Microsoft.
00:00:22: I'm sure he's not covering only Microsoft-specific topics, but we're really excited
00:00:27: to talk about productivity, AI, ChatGPT.
00:00:30: Some of the things that we hear about often here on the internet about creating
00:00:35: operational efficiencies with ChatGPT and LLMs.
00:00:38: Are they useful?
00:00:40: Is this something that you should be implementing in your business?
00:00:42: Why or why not?
00:00:43: And what you should be considering as you kind of implement these solutions.
00:00:48: So I would love to pass the mic to Jamie to introduce yourself and just kind of
00:00:53: give us a high level of the things that you are really passionate about.
00:00:58: Thanks Lana.
00:00:59: Yep.
00:00:59: Good to see everyone.
00:01:00: My name is Jamie Hu.
00:01:02: I'm a solution consultant at Microsoft, specifically within the business
00:01:08: applications division.
00:01:09: And so that really means I've been very fortunate to be in the thick of the AI
00:01:15: wave that's happened over the last couple of years.
00:01:18: And I think that's tremendously exciting because the way we do business, the way
00:01:23: that we work has been pretty similar for the last 20 years or so, right?
00:01:28: Nothing's really changed.
00:01:29: Maybe we've had better internet or slight improvement here and there, but this
00:01:34: technology, I think will create a bit of a shift in terms of how we think about doing
00:01:39: things, how we do do things.
00:01:41: And I'm really excited to talk exactly to your point about productivity and what is
00:01:46: the impact, right?
00:01:46: I mean, is this really going to change all these different pieces?
00:01:51: And let's do a deep dive.
00:01:52: So yeah, absolutely thrilled.
00:01:54: Thank you for having me.
00:01:56: Yeah, so thank you.
00:01:58: It took a few weeks for us to find the time, so really happy to finally have you
00:02:03: here.
00:02:04: So maybe just to dive in.
00:02:06: So one of the things that we talked about previously on this podcast is the fact
00:02:09: that one of the things that businesses really want to ensure is that they kind of
00:02:15: build longevity in their business.
00:02:18: So there's two ways to do that.
00:02:19: There's one creating new value in the business, but then there's also the...
00:02:25: operational efficiency.
00:02:26: So what we kind of talk about a lot when we hear about conversations, ChatGPT
00:02:32: kind of talk, you know, chatbots or tools, they're really kind of catering to that
00:02:36: productivity angle, which ultimately kind of drives the operational efficiency.
00:02:42: So being able to deliver value to your customers at the least amount of cost.
00:02:46: And I think a lot of the initial use cases we hear about, especially in 2023 and now
00:02:51: in this year, is around productivity.
00:02:53: So how do you...
00:02:55: basically use the folks that you have in your business, but apply ChatGPT to
00:03:00: extract more value out of the resources or the folks that
00:03:07: you have in your business.
00:03:08: So I'm just really curious, what are some things that you kind of hear from clients
00:03:13: that, especially in the productivity space, what gaps currently exist in their
00:03:18: business that they're leaning towards AI?
00:03:21: Just...
00:03:21: wherever that conversation typically goes with the folks that you
00:03:27: converse with would be really insightful.
00:03:31: Yeah.
00:03:31: So I would say this, right?
00:03:32: There really hasn't been a great deal of innovation in productivity apps for a
00:03:37: while.
00:03:38: And suddenly we have this great base technology.
00:03:42: It can do a lot of interesting things, but I think there's one area in specific that
00:03:48: is really valuable for people.
00:03:50: And that is automating mundane tasks, right?
00:03:55: That is ultimately one of the biggest issues because people go to productivity
00:03:59: apps.
00:04:00: and they're spending time doing things that they'd rather not do.
00:04:03: And in fact, there was a survey done by Microsoft a while ago and 64% of Copilot
00:04:10: users, which is our AI platform, said one of the biggest value ads is it helps them
00:04:18: spend less time processing email.
00:04:21: Who wants to process email, right?
00:04:23: I don't know about you, but I hate processing email.
00:04:26: I hate being in Outlook and...
00:04:28: trying to sort that out.
00:04:29: So if I can find an ability, a tool, whatever you want to call it, to help me
00:04:35: deal with that a little bit more efficiently, so I don't have to deal with
00:04:38: it, that's great.
00:04:40: That's fantastic.
00:04:41: And so I think that is the cause of a great deal of interest in AI and
00:04:48: productivity, which is we've been doing this forever, is how we've been doing
00:04:52: business, but is there a better way now that we've got this great technology to
00:04:56: get...
00:04:57: rid of the mundane work that's within productivity.
00:05:02: Yes, definitely.
00:05:04: Yesterday I noticed I completely dropped my whole inbox and I already have like 120
00:05:12: unread emails.
00:05:13: I'm thinking every day about finding some active AI solution.
00:05:20: If we take this email stuff, of course we're certainly at the beginning of all
00:05:26: the productivity improvements, but...
00:05:27: What I certainly also miss about, for example, Copilot and a lot of other
00:05:32: solutions, they're really passive.
00:05:34: So they're like not as much like active solutions, like where's the algorithm that
00:05:39: actively goes through my inbox and like evaluates stuff for me and writes stuff
00:05:45: based on previous conversations and stuff like that.
00:05:48: So yeah, like I know you are able to do that manually.
00:05:53: But like, when do you think comes that push?
00:05:56: Is it like maybe this year, maybe next year for really like autonomous algorithms
00:06:03: or even partly autonomous, like with the human in the loop?
00:06:07: That's a great question.
00:06:08: So I think that's part of a wider discussion around the topic of something
00:06:13: called AI agents, right?
00:06:15: You might have been familiar with that topic.
00:06:17: And just to sort of explain this a little bit, the current setup is to your point,
00:06:22: we have AI that's helpful, but really relies on humans for their discretionary
00:06:29: commands.
00:06:29: So the humans will have the ultimate say.
00:06:32: And I think that's why Microsoft has called this whole platform Copilot,
00:06:35: right?
00:06:35: It's not Captain or it's not the
00:06:37: Mm-hmm.
00:06:37: main user.
00:06:39: It's supposed to be there to help.
00:06:41: So to answer your question, then we have to examine why do we not have AI in a spot
00:06:46: where it can do these autonomous tasks?
00:06:49: And I think it comes down to the fact that the current generation of large language
00:06:53: models, if you look at it, has limitations to what it can do.
00:06:58: And the major limitations when it comes to this autonomous work is its ability to
00:07:04: reason and plan effectively.
00:07:06: I think there was a paper that was done on this
00:07:07: a few months ago that was basically looking into this.
00:07:12: And until that is improved, it's, I mean, let me put it this way, it's possible.
00:07:17: Still currently right now it's possible.
00:07:18: If you wanted to, and I'm sure you've talked about Devin or you heard about
00:07:21: Devin, which is this to some degree autonomous programmer that's AI-based,
00:07:27: right?
00:07:28: So all this is still possible.
00:07:29: It's not quite good enough for mass adoption.
00:07:33: So we've got this bit of a gap.
00:07:35: And to answer your question, when is that gonna happen?
00:07:38: Well, until we get better large language models, I think it's always gonna be a bit
00:07:41: of a daunting task.
00:07:43: When can we expect large language models to get better?
00:07:45: Well, I don't know, but at this point it's improving pretty quickly.
00:07:49: So I would imagine relatively soon, right?
00:07:51: Yeah, so, can I play devil's
00:07:52: advocate for maybe a quick second?
00:07:54: Do you want the LLM or basically AI in general to be in the driver's seat anyway?
00:08:01: So maybe to a point, but do you truly want to automate every possible process and
00:08:08: then kind of eliminate a lot of these tasks that typically are humans involved
00:08:12: with?
00:08:12: So as an example, we're kind of talking about email.
00:08:16: So would you ever want...
00:08:17: some large language models to go through your email and start responding to
00:08:21: messages without your input?
00:08:23: Or would you, or do you want to kind of get it to a place where most of those
00:08:28: responses are kind of generated and then you just press send?
00:08:32: I would love to at least reduce it to one step for me just checking it.
00:08:40: I would love it to be autonomous to the point where it actually commits to doing
00:08:47: something.
00:08:48: The first AI tool I was working on, I thought about a marketing system.
00:08:54: So what if I can have a new product and I need the usual marketing stuff?
00:08:58: So I would love to have a handle that's just like, hey, this is my ERP system.
00:09:02: I have created a new article.
00:09:04: There's all the product information you need.
00:09:08: We need a marketing campaign for that.
00:09:10: And it goes ahead, creates me a website, creates me some visuals and stuff like
00:09:14: that, just all autonomously.
00:09:17: And...
00:09:17: Like you talked about Devin.
00:09:18: What I love about Devin:
00:09:19: it has its own self-managed task list.
00:09:25: So if you imagine such a task list, you could also have the tasks in separate
00:09:30: categories, one of which could be human feedback. So, for example: I take the
00:09:38: product, I need to create a marketing page, and now I don't have any information about
00:09:43: the CI, so I create a task which the human has to answer to
00:09:47: and hand over the right colors so I can go on and do the work.
00:09:52: And in the end, I wouldn't deploy anything without a human looking at it.
00:09:57: But all that stuff like we, right now, the AI solutions are quite far away.
00:10:01: And I was with Jamie on the current large language models.
00:10:04: They have a lot of small inconsistencies.
00:10:08: And if you have such a large task like creating a marketing campaign, these
00:10:12: inconsistencies add up.
00:10:13: So at some point, they just break apart.
00:10:16: And that's where I personally stopped back in the day and said, okay, let's
00:10:21: start with something easier, because it's also not such an
00:10:25: easy sale. You're completely right, people are also afraid of the stuff
00:10:30: going wrong. So you have to start a bit leaner, basically. But that's where,
00:10:37: in my opinion, AI and productivity gets to a point where it really starts to
00:10:45: benefit.
00:10:46: In the end, it's about mundane tasks that have to be eliminated.
00:10:51: Email, which of course is one of which.
00:10:54: But also stuff that's just repetitive.
00:10:57: Everyone, look at their job.
00:10:59: Everyone's doing, in the best case, 20% value add, and the rest is mundane tasks
00:11:06: which are repetitive and just have to be done to get the stuff through the door.
00:11:10: That's something that I would love to see eliminated because it's wasting so much of
00:11:15: one of the
00:11:15: greatest resources on earth which is the human brain.
00:11:20: That's what gets me excited personally.
00:11:25: Okay, I want to go back to your point, Lana.
00:11:27: I think you made a really interesting point, which is how much of our work do we
00:11:32: really want to be automated?
00:11:34: Right.
00:11:34: And it's a great question.
00:11:36: So the way I think about AI is this.
00:11:39: I treat it almost like somebody that's working for me, an intern or some other
00:11:46: person, right.
00:11:46: And in its current deployment, the current capability,
00:11:50: it's pretty useful, but still not great.
00:11:52: And so it's always coming up to you, it's like, do you want to do this?
00:11:55: Do you want to do that?
00:11:56: And mostly I'm having to go to it, right?
00:11:59: So that's a little bit annoying.
00:12:01: As we get this technology better, the hope is it can understand me significantly
00:12:07: better.
00:12:07: So when I'm asking it to do things, it can go off and do it.
00:12:10: Now, am I going to trust it with high value tasks?
00:12:13: Probably not, to your point.
00:12:16: So I don't think everything should be
00:12:19: or would be automated.
00:12:20: It's going to depend on a variety of different factors.
00:12:22: But it's a great question.
00:12:23: I do not think that even if we have this awesome technology that can do a lot of
00:12:28: things, it'll be used for a lot of things because as humans, ultimately, we want
00:12:33: control and we want an ability to ensure that the output makes sense.
00:12:39: Right?
00:12:39: And so great question.
00:12:41: Really interesting.
00:12:43: And maybe it's a combination, I think.
00:12:44: So as you mentioned, so maybe it's the complexity of the task and the risk
00:12:49: associated.
00:12:50: So I would imagine that in places like health care where I work, you want to have
00:12:55: more control.
00:12:56: But the kind of reasoning behind this control is that there's risk associated with
00:13:01: something going wrong.
00:13:02: And then there's potentially patient lives that could be impacted.
00:13:06: So I think it's maybe a combination of complexity of task, but then also risk
00:13:10: associated with something.
00:13:12: going wrong. But one thing about Microsoft Copilot: I think there's
00:13:18: two things that are really crucial, and I think game-changing, with Copilot.
00:13:24: From a productivity standpoint, there have been solutions
00:13:29: that existed out there, and there's been innovative approaches that improve these
00:13:36: processes, but a lot of them have been point solutions.
00:13:40: And,
00:13:41: kind of being able to provide context across different, basically systems across
00:13:49: different applications and embedding them into a workflow.
00:13:54: I think those two things are really crucial for productivity because instead
00:13:57: of you going to, I don't know, SharePoint to fetch this type of
00:14:03: documentation, and then going to another application to fetch another type
00:14:08: of document.
00:14:09: I think what Copilot does,
00:14:11: is those two crucial things that also kind of enhance productivity: being
00:14:15: able to centralize a way of communicating across all different applications, and then
00:14:22: truly kind of providing that context for what you're doing. It's probably not
00:14:26: exactly where we want it to be, but I think that's kind of the vision.
00:14:29: I think that's truly game -changing.
00:14:30: That's what people, I think when you develop these AI systems, you're looking
00:14:34: for context and how you embed them into the workflow.
00:14:37: So,
00:14:38: I think that's really awesome that Microsoft is tackling that problem early,
00:14:43: even though it's not probably figured out all the way, but I still think that that's
00:14:50: very much needed, I think, in this space.
00:14:51: But is that what you're kind of hearing as well from customers?
00:14:56: What are they specifically looking for from a productivity standpoint?
00:15:02: Because you mentioned that you work on CRM and ERP systems.
00:15:07: So...
00:15:07: Are those things important to businesses?
00:15:13: That's the million dollar question, right?
00:15:15: If it wasn't important, then it wouldn't be a conversation.
00:15:18: Let's put it that way.
00:15:20: Very simply, if this wasn't important to them, they wouldn't speak to us.
00:15:24: But let me touch upon a couple of things.
00:15:27: And it's great that you mentioned it, right?
00:15:30: You said, what are the core aspects of AI in terms of productivity?
00:15:37: And I'll talk about the CRM side as well, that makes it a little bit different.
00:15:42: than ChatGPT.
00:15:43: I'd like to address that a little bit.
00:15:44: And you mentioned that, you know, because there's been a lot of research done in
00:15:49: terms of how people work with AI, like what can we do to position AI so that it
00:15:55: benefits them the most?
00:15:58: And you touched on, I want to expand on it.
00:16:00: So the first is this idea of integration.
00:16:04: So people traditionally have set workflows when they're working and that can involve,
00:16:11: the current applications they're currently working in.
00:16:14: So there could be, they go to email and Outlook, they do a couple of things, they
00:16:17: go into Teams, or they go into Word or do certain things, go into the ERP system,
00:16:21: blah, blah, blah.
00:16:22: The worst thing you can do is introduce new tools that they have to use and switch to a
00:16:29: different application, right?
00:16:30: Nightmare.
00:16:31: Now they're going to have to say, oh, I don't want to do this, but I have to, so I
00:16:35: guess I'm going to alt-tab into this thing or switch to another tab.
00:16:39: It's really, really inefficient.
00:16:40: and not great.
00:16:42: So what you want to do is make sure that if you have AI, you want to integrate it
00:16:46: within their current workflows.
00:16:48: That's really important.
00:16:50: So why not use ChatGPT?
00:16:52: Well, one of the reasons is ChatGPT by and large, the way people use it is they go
00:16:58: onto the website, right?
00:16:59: It's not an API call.
00:17:01: Nobody's going to build out their own custom app just for that.
00:17:04: It's going to go to the website.
00:17:06: Well, that's going to disrupt people in the flow of work.
00:17:08: So one of the advantages of copilot,
00:17:10: the way Microsoft has approached it is they've said, look, let's think about
00:17:14: integrating this technology within the existing workflows.
00:17:18: Let's put it into Outlook.
00:17:19: Let's put it in Word.
00:17:21: Let's put it in Excel.
00:17:22: All the tools that people traditionally use.
00:17:24: And let's make it so it's not going to obscure or distract people, but it's
00:17:29: there.
00:17:29: So if I want it, I can press a button and I can engage it.
00:17:33: And that's sort of the thought process.
00:17:35: Now there's another, I think, couple of important points within this ChatGPT
00:17:39: question.
00:17:41: And the other part of it is privacy and security, right?
00:17:45: Nobody wants their data to be used to train the overall large language model,
00:17:50: right?
00:17:50: Because obviously from a competitive perspective, that's very dangerous.
00:17:55: And the same is true for security.
00:17:56: Everyone wants to make sure that whatever security you have going on, given the fact
00:18:01: that it's gonna go off to some server somewhere, you want encryption, you want
00:18:05: AES-256, you want all the things that you traditionally would expect in an
00:18:10: enterprise setting.
00:18:11: So I think that's a really important question to ask when you're dealing with
00:18:15: this kind of tool, which is tremendously impactful, all these different
00:18:18: considerations, which a lot of people, it may not be very obvious to, you know?
00:18:24: Yeah, so, and I didn't know if Edgar, you had a perspective, but I have lots to say
00:18:28: about integrated in the workflow, and I totally agree with you.
00:18:32: I think the way people use it, there's two, and I would say, I would extend that
00:18:38: beyond just disrupting the workflow, security, but people use it, and then
00:18:45: there's no guidelines to how to use some of this technology too.
00:18:48: So people kind of adopt the AI technology, and oftentimes, and I could probably
00:18:54: even pull up some research, but the amount of people who actually use,
00:18:58: you know, ChatGPT within their enterprises without guidance from their
00:19:04: leadership kind of creates all kinds of problems.
00:19:08: Cause I think you could upload documents that are proprietary to your organization.
00:19:12: And sometimes I would say, I would venture out to say like some workers don't even
00:19:15: understand what's IP and what's not IP.
00:19:19: So sometimes I think they're just trying to, out of the goodness of their heart,
00:19:22: trying to...
00:19:23: upload documents to build those efficiencies to do their work better, but
00:19:27: without even knowing they're uploading those documents.
00:19:30: And I think that statistic, one statistic I do know, is that 60% of people that were
00:19:37: surveyed in one of the articles I could even link it to in the description,
00:19:45: suspect that they've uploaded some sensitive information to ChatGPT or one of
00:19:51: the chatbot systems.
00:19:52: And because again, a lot of this adoption is happening bottom up and it's not
00:19:57: integrated into the workflow.
00:19:58: So I think there's also control that you can have once you integrate it, once
00:20:03: you're very mindful as to where kind of that AI system sits, but then you also can
00:20:09: provide training.
00:20:10: You could give access and things like that, but there's I think an inherent
00:20:14: problem of also the fact that people are...
00:20:19: already using it and it's disruptive.
00:20:21: They're finding efficiencies in that, but you should probably take a next step and
00:20:27: say, hey, we should proactively train our folks or provide them access to Copilot,
00:20:34: which again addresses some of those security risks.
00:20:39: It's something that's already embedded in the workflow, so you're addressing
00:20:42: multiple issues by having that top -down control and having...
00:20:48: kind of democratizing that access that's already embedded in the workflow that
00:20:53: people can use as well.
00:20:55: So yeah, I think ChatGPT is great, but there are, again, inherent
00:21:01: problems with how people use it.
00:21:06: So quick question.
00:21:07: So I know that there's a partnership between ChatGPT and Microsoft, and there is an
00:21:12: enterprise version of ChatGPT that enterprises...
00:21:18: subscribe to?
00:21:18: How does that work with kind of the Microsoft ecosystem?
00:21:22: So let's say enterprises do subscribe to the enterprise version of ChatGPT.
00:21:27: Do they still go to the website to access it or is there a way to integrate it
00:21:32: within Microsoft tools?
00:21:34: Yeah, great question.
00:21:35: So perhaps I should touch on that relationship with OpenAI and Microsoft,
00:21:40: which is obviously a big driver behind AI strategy of Microsoft.
00:21:46: And I can only speak from my perspective, right?
00:21:48: I'm not speaking from the perspective of the company, but I would say this.
00:21:53: What we do here at Microsoft is we take the base technology, which is GPT, GPT-4,
00:22:00: and then we embed it within our experiences.
00:22:03: So...
00:22:03: Copilot is to some degree an overarching AI solution that we use across everything.
00:22:11: It could be from security, it could be Windows, it could be the Bing browser, it
00:22:15: could be productivity like we mentioned.
00:22:19: All that backend is powered by GPT and OpenAI's technology.
00:22:25: So what we have is our own private instance of that.
00:22:30: So we've got a really contained version.
00:22:33: of GPT.
00:22:34: And so that way, when we provide that service to our customers, then we can
00:22:39: guarantee them that their data isn't used to train that particular model.
00:22:43: So that's the reason.
00:22:45: What OpenAI then does is they may have their own separate services.
00:22:48: So for example, you talked about this enterprise platform of ChatGPT.
00:22:53: And so when a customer wants to purchase that and has that kind of deployment,
00:22:59: That is a relationship they have with OpenAI and OpenAI only, not with
00:23:03: Microsoft.
00:23:04: So hopefully that sort of clarifies the difference in go -to -market and how the
00:23:10: different solutions are available on the market right now.
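For readers curious what the private-instance model described above looks like in practice: with the Azure-hosted service, each customer calls their own named model deployment under their own resource endpoint, rather than a shared public site. A rough sketch follows; the endpoint and deployment names are made up, and nothing is sent over the network here.

```python
def build_chat_request(endpoint: str, deployment: str,
                       api_version: str, user_message: str):
    """Assemble the URL and JSON payload for a deployment-scoped chat
    completion call. Illustrative only: the names used below are
    hypothetical and no network request is made."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    payload = {"messages": [{"role": "user", "content": user_message}]}
    return url, payload

url, payload = build_chat_request(
    endpoint="https://contoso.openai.azure.com",  # customer's own resource
    deployment="my-private-gpt4",                 # customer-named deployment
    api_version="2024-02-01",
    user_message="Summarize today's unread email.",
)
```

Because the deployment lives inside the customer's own cloud resource, prompts and responses stay within that boundary, which is the data-isolation guarantee being described.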
00:23:14: to maybe give it some perspective from a user and partner perspective.
00:23:19: So you have, for my company, I now have employees and I would love, just also for
00:23:25: me to really push it to the limits.
00:23:27: I want to be an AI -first company.
00:23:30: So all my employees, all the developers get GitHub Copilot, all the other
00:23:35: employees get Copilot right on the desktop, basically.
00:23:39: And it's the Copilot from Microsoft 365.
00:23:45: I first wanted to get the ChatGPT Enterprise version, which is just not
00:23:53: available without talking to their sales.
00:23:55: So like I'm not the customer.
00:23:58: And the team's experience was not the one I was searching for kind of.
00:24:02: And then I went ahead and just checked copilot.microsoft.com.
00:24:07: And I was automatically logged in with my Microsoft account and then it directly
00:24:10: shows like secure.
00:24:12: So it directly shows, okay, your data is secure.
00:24:14: You can use it now.
00:24:16: What I would suggest Microsoft does is like develop further on the chat
00:24:21: experience because you never know which model is used.
00:24:24: Is it like the 3.5, is it 4, stuff like that.
00:24:27: We had to figure this out ourselves, which works better.
00:24:31: This is something that could use a bit more guidance, I think.
00:24:36: But for everyone who has Microsoft, first check with your partner how you could
00:24:43: maybe already use a secure GPT experience within Copilot before you go ahead and buy
00:24:50: extra GPT licenses.
00:24:52: Because most likely it isn't necessary.
00:24:57: That's interesting.
00:24:58: Can I maybe expand a little bit on one of the topics that I think you were
00:25:04: passionate about and we kind of talked about at a high level that you wanted to
00:25:07: cover: how do you really extract the most out of Copilot?
00:25:11: And I think one of the things that you've suggested is that there are maybe some
00:25:15: prompting-type suggestions, or maybe, is prompting even important
00:25:23: for kind of Copilot and maybe the future of where chatbots are heading?
00:25:27: So I'd just love to kind of pick your brain on the tips and tricks for how to
00:25:33: get the most use out of a copilot.
00:25:35: Yeah, that's a great question.
00:25:36: So I think there's a couple of things you can do, right?
00:25:39: First, this is what I would describe.
00:25:42: I don't know if you two would agree with me.
00:25:43: I would describe AI in general as a very intuitive technology.
00:25:48: I remember...
00:25:50: potential to be intuitive.
00:25:52: Right.
00:25:52: But here's the deal, right?
00:25:54: It's natural language because I remember trying to learn Java when I was 16.
00:25:58: And it was very frustrating because nobody was teaching me and I had this big, big
00:26:04: book that you could probably knock somebody out with.
00:26:06: It was, it was immense.
00:26:08: And it wasn't a great kind of a way to learn.
00:26:12: Exactly.
00:26:14: But it probably wasn't a great way to learn as a 16-year-old.
00:26:18: But what I'm trying to get to is it had a really steep learning curve.
00:26:22: because nothing really made sense.
00:26:24: Everything had own system, its own rules.
00:26:27: I couldn't really get the grips with it really easily.
00:26:30: Whilst with AI, you know, it's natural language and it's very intuitive for me to
00:26:36: go on and start experimenting and figuring things out.
00:26:40: So with that being said, there's a few things.
00:26:42: First of all, just try it out.
00:26:44: Historically, everyone's used ChatGPT, right?
00:26:49: To get access to the productivity suite at Microsoft, you would need to be an
00:26:53: enterprise customer, buy a bunch of licenses, and that's been challenging for
00:26:57: a lot of people.
00:26:58: So what Microsoft has done in the last month or so is they released something
00:27:03: called Copilot Pro.
00:27:05: Now this is for consumers, which is really great, and you don't need to buy more than
00:27:11: one license.
00:27:12: It's a personal thing.
00:27:13: It's $20 per month, and it gives access to the
00:27:18: same advanced level of Copilot in all the productivity suite apps.
00:27:23: So I would say, give it a go, right?
00:27:25: I mean, it's not like you have to buy an entire year of this thing, pay a month,
00:27:29: $20, couple of beers and give it a go, try it out, push it to the limits.
00:27:34: What can I do with it?
00:27:34: What can I not do with it?
00:27:36: There's also another thing.
00:27:38: A lot of people, you mentioned prompting.
00:27:40: A lot of people don't know how to prompt AI.
00:27:42: And I think that's perfectly reasonable.
00:27:44: Even initially, you're not really sure.
00:27:46: You're very, very...
00:27:47: cautious about what it can do.
00:27:50: And so what Microsoft has also done is build something called Copilot Lab.
00:27:53: Not a lot of people know about it.
00:27:55: So what this is, it's a prompt library.
00:27:58: And it has this great list of these prompts that you can reference and you can
00:28:03: use.
00:28:04: And personally, I think it's incredible because I use it occasionally, right?
00:28:10: I come in there and I think, and okay, well, can I actually do this kind of stuff
00:28:13: within Word?
00:28:14: So I'll go into Copilot Lab and have a look.
00:28:17: And it's interesting because I do think prompting is important.
00:28:21: Lana, it's a great question.
00:28:23: The quality of the prompts will to some degree determine the quality of the
00:28:27: outputs in my opinion.
00:28:29: And that's probably why we have, right...
00:28:33: Right, Edgar?
00:28:34: And that's why we have prompt engineers, right?
00:28:35: That there's a role specifically to improve prompting.
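In the spirit of the prompt-library idea behind Copilot Lab, a personal version can be as simple as reusable templates with slots, so a good prompt is written once and reused. The template texts below are invented for illustration, not taken from Copilot Lab.

```python
# A tiny personal "prompt library": named templates with {slots}.
# The templates themselves are hypothetical examples.
PROMPT_LIBRARY = {
    "summarize_email": (
        "Summarize the following email thread in 3 bullet points, "
        "then list any action items assigned to me.\n\nThread:\n{thread}"
    ),
    "rewrite_formal": (
        "Rewrite this draft in a concise, formal tone, "
        "keeping all factual content:\n\n{draft}"
    ),
}

def render(name: str, **slots: str) -> str:
    """Fill a named template's slots and return the finished prompt."""
    return PROMPT_LIBRARY[name].format(**slots)

prompt = render("summarize_email",
                thread="Hi team, the release slipped to Friday...")
```

This captures the point about prompt quality determining output quality: the effort of wording a prompt well is paid once, per template, instead of every time.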
00:28:39: But the question I have for you two is I would like to get your answer on this one
00:28:41: because I find this fascinating.
00:28:44: Moving forwards, will...
00:28:47: prompting become less important in that currently there's a lot of freedom in
00:28:52: terms of how we prompt it, right?
00:28:54: We just get a text box and they tell us you can enter whatever you want.
00:28:58: Is that gonna be simplified in the same way that we have wizards for doing things?
00:29:02: Are we gonna get wizards to assist with prompting?
00:29:07: Do you want to start or should I?
00:29:11: So for me, like, I just thought about it.
00:29:15: Right now it's a bit like if some consultant is talking to a
00:29:19: software engineer, who oftentimes hears stuff very literally.
00:29:25: And a good consultant is oftentimes someone who's
00:29:31: really learned how to choose his words while talking with the developer.
00:29:36: So like that's just from personal experience.
00:29:39: I'm kind of in the middle between consultant developers so kind of in both
00:29:44: camps, but Yeah, and that's I think how it's right now and I think that we like
00:29:51: referring to GPT I think language models right now are in the stage where they're
00:29:56: like more like you have to learn how to talk to a technical or in this case LM
00:30:02: persona and
00:30:04: with time and with better LLMs they will more and more understand even the basic
00:30:11: wording correctly.
00:30:14: I think people often go into this with the wrong expectations, because even if you talk to a human, a real intelligence in that case, you have to choose and find the right wording and phrase and frame things properly.
00:30:35: Otherwise, the other human does not understand you.
00:30:39: Whenever you deal with something uncertain, you have to find a way to make it more certain.
00:30:45: And that's the same with LLMs.
00:30:47: And I think prompting will get easier.
00:30:50: I also think prompt engineering is a really temporary job, because LLMs will write and prepare the prompts themselves.
00:31:01: So, yeah, that's my opinion on that.
00:31:05: So mine will maybe be a longer response.
00:31:08: So sit back, buckle up.
00:31:11: There is an article I was just recently reading,
00:31:14: "AI Prompt Engineering Is Dead."
00:31:18: And one of the reasons why they're kind of even saying that is now, there's at least
00:31:23: two research articles if anyone's interested.
00:31:25: I actually have them pulled up.
00:31:26: One's called "Large Language Models as Optimizers."
00:31:29: And the other one's called "DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines."
00:31:36: So ultimately, what they're suggesting is: why not use large language models themselves to improve the prompts and better the output?
00:31:46: So do you even have to be this excellent prompt engineer?
00:31:55: Essentially, they're trying to find the right combination of words in order to get the best output out of the system.
00:32:04: So what if you could even automate that?
00:32:06: So what if you could even automate the prompt engineer's job to be able to
00:32:11: optimize the prompts?
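The idea of automating the prompt engineer's job, as discussed just above, reduces to a propose-score-keep loop. The sketch below is purely illustrative: `propose_rewrite` and `score` are placeholder stand-ins for a real model call and a real evaluation set, not functions from any actual library.

```python
# Minimal sketch of using a model to optimize its own prompts.
# Both helpers below are illustrative placeholders: a production system
# would call an LLM for the rewrite and evaluate candidates against
# held-out examples.

def propose_rewrite(prompt: str, round_no: int) -> str:
    """Placeholder for asking an LLM to suggest a better prompt."""
    return prompt + f" Be concise and answer step by step. (round {round_no})"

def score(prompt: str) -> float:
    """Placeholder evaluator: rewards prompts containing key instructions."""
    keywords = ("concise", "step by step")
    return sum(kw in prompt for kw in keywords) / len(keywords)

def optimize_prompt(seed: str, rounds: int = 3) -> str:
    """Keep whichever candidate scores best across a few rewrite rounds."""
    best, best_score = seed, score(seed)
    for i in range(1, rounds + 1):
        candidate = propose_rewrite(best, i)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

improved = optimize_prompt("Summarize this email.")
```

The papers mentioned above replace the toy scorer with a real metric over a training set, but the control flow is essentially this loop.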
00:32:13: One other thing we're tackling in healthcare, and I think why being embedded in the workflow is so important there, is that if you're asking providers or any other healthcare workers to prompt-engineer every time they try to use this chatbot...
00:32:35: Well, they're on the go.
00:32:37: They just need to get a quick response.
00:32:39: They don't have time to go and try multiple times.
00:32:44: They just want a single go at a question, to basically get the answer and then continue delivering care.
00:32:52: And so one of the things we're considering, something I think is really the future, and why being embedded in the workflow is so important, is context.
00:33:01: So say, Jamie, you and your wife have lived together for a really long time.
00:33:07: If you ask your wife some absurd question that would make no sense to me, she'll still be able to understand.
00:33:14: Because she has context and she knows you well, well enough to know what the heck
00:33:19: you're asking and where your mind is at.
00:33:22: And because of that experience, or maybe the context of the situation you're in, she's able to fill in those gaps and complete that question to give you the right answer.
00:33:34: So if you ask me the same question, I would be like, I have no idea, Jamie, what
00:33:38: the heck you're talking about.
00:33:39: Try something different, like ask me a different way.
00:33:42: That's ultimately what we're doing.
00:33:44: So if we're able to build these systems truly embedded in the workflow, we can bring in the context from, in healthcare for example, maybe that's relatable,
00:33:53: the type of patient that you're talking about, or maybe the provider who's asking this particular question.
00:34:01: I think that context is going to be really important in prompt engineering as well, because you're building up that context and filling in the gaps.
00:34:08: In addition to using these large language models to actually fill in the gaps and then better prompt the systems, maybe the providers or users don't actually have to become prompt engineers.
00:34:20: And maybe there won't be a role in the future that's so dependent on finding these words.
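The point here, that the application fills in the context so a quick, casual question still lands, can be sketched in a few lines. The function and field names below are illustrative assumptions, not part of any real product.

```python
# Sketch: the workflow (not the user) supplies the context before the
# question ever reaches the model. Field names are made up for illustration.

def build_prompt(question: str, context: dict) -> str:
    """Prepend workflow context so a short, casual question is answerable."""
    context_block = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are assisting inside a clinical workflow.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "Any interactions to watch for?",
    {"patient": "68-year-old, on warfarin", "provider": "cardiology"},
)
```

The assembled prompt would then go to whatever model the workflow uses; the provider only ever typed the one-line question.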
00:34:26: But I will tell you one situation, and it's not going to be an edge case: there are industries, or maybe roles, where prompt engineers will stay important, and those involve creative tasks.
00:34:42: Because all creativity is about building up and brainstorming.
00:34:48: And so you really want to have that dialogue of saying, hey, what about this?
00:34:55: And then: bring in that framework and apply it to this.
00:34:55: And so you're kind of trying to use it for creativity and then you're really trying
00:34:59: to expand.
00:35:00: So that conversation is really important.
00:35:01: But there are other places where if you're really tackling productivity and
00:35:06: operational efficiencies, you want to streamline that prompting aspect of that
00:35:11: workflow as much as possible.
00:35:13: But there are use cases where that conversation is really important.
00:35:18: Yeah, that's a great point.
00:35:20: Definitely.
00:35:21: I didn't even think about the creative side.
00:35:22: You're totally right.
00:35:23: That is a very different beast than the typical case.
00:35:28: Yeah, there you have another purpose.
00:35:31: The purpose is not to get the one correct output, but to get an output that's close to your imagination.
00:35:38: So, yeah.
00:35:39: Yeah, did I just basically cover all of the aspects of it?
00:35:44: But I'm just curious.
00:35:46: So is prompting important in the near term?
00:35:51: I don't know.
00:35:51: I think it very much is the case, because large language models are not at that state yet.
00:36:00: They're not truly embedded in the workflow.
00:36:01: But I think, again, I'm really excited about Copilot because I think they've made
00:36:05: that first step.
00:36:06: And one of the things about these AI systems is that the sooner you start embedding them and getting usage out of them by users, the better they become.
00:36:17: So I totally agree with you that I think it's just a matter of time before we start
00:36:22: to see kind of some of those improvements.
00:36:25: But yeah, I think it's going to be embedded in the workflow.
00:36:31: And I think as more companies really understand the value of integrating those AI systems into their processes, it's truly game changing.
00:36:44: And I think you've mentioned, I'm sorry, go ahead.
00:36:47: Speaking of value, I just looked at our time, and I would love to get to one of the most important points.
00:36:57: Microsoft is now sitting on a huge amount of data with AI and Copilot and all the alpha programs.
00:37:06: I think Copilot has already been rolled out for about eight months at some customers, the big ones.
00:37:14: Is there any evidence of business value in all that, or is it a lot of show and not so much shine?
00:37:23: That's a great question.
00:37:25: Let's define value, right?
00:37:28: Because value could mean a lot of different things.
00:37:31: And if you ask me, how is AI valuable?
00:37:35: It's going to come down to maybe three core things.
00:37:37: It's going to be speed, quality, and effort, right?
00:37:42: So speed is, can I do something faster?
00:37:45: And obviously that's going to lead to better productivity.
00:37:48: Quality is, can I do my work better?
00:37:51: So that's going to lead to better execution.
00:37:53: And then one that people don't really think about, but that is still very impactful, is effort.
00:37:59: So is it going to make me happier because I don't have to work as hard?
00:38:03: Am I going to have better wellbeing?
00:38:05: And what does that result in?
00:38:06: Well, less turnover, right?
00:38:08: So I don't have to recruit people as often.
00:38:09: I don't have to train people as often.
00:38:12: And so there's really some hidden value there.
00:38:15: And so what Microsoft has been doing is really massively investing into studies to
00:38:19: understand what really is the true impact.
00:38:22: So if we look at a couple of different use cases, I think there is some pretty cool evidence that not a lot of people are aware of.
00:38:30: And the first is summarizing meetings.
00:38:33: So imagine you have a call.
00:38:34: Everyone has calls all the time.
00:38:36: One of the things that people like to do just to keep track of everything is just
00:38:40: good best practice is to take notes.
00:38:42: So as I'm in the meeting, I'm laboriously taking notes of what everyone's trying to say, the overall things that I want to capture.
00:38:50: And then afterwards I have to clean that up and send that to everyone.
00:38:54: And so what Microsoft did is they said, all right, how long does that typically
00:38:58: take?
00:38:58: And they measured it and it was 40 plus minutes.
00:39:01: So it's a pretty significant amount of time.
00:39:04: And then they said, okay, let's use Copilot, because what they have within Microsoft Teams is the ability to create these notes automatically, right?
00:39:13: One of the things that AI is tremendously good at is summarizing.
00:39:16: So it can summarize an hour meeting and give you all the key points just like
00:39:23: that.
00:39:23: But it's not perfect.
00:39:24: Sometimes you have to go in there and add a few things and whatnot.
00:39:28: They said, all right, well, here's Copilot.
00:39:30: How long is it going to take you now?
00:39:32: So remember, it was 40 plus minutes before, right?
00:39:34: And so after with Copilot, it was 11 minutes.
00:39:38: So we're talking an almost 4x time improvement.
00:39:42: And again, going back to the start of the conversation, what is it helping me do?
00:39:45: It's helping me deal with the mundane tasks.
00:39:48: Nobody likes summarizing meetings.
00:39:50: I can tell you I don't, right?
00:39:52: Sucks, horrible.
00:39:53: So I can get this out of the way.
00:39:55: I can go on to do something else that's way more interesting.
00:39:58: That's fantastic.
00:39:59: And again, that makes people that much happier.
00:40:01: So that's one use case.
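The meeting-notes flow in this use case, transcript in, draft notes out, is at its core a single summarization call. The sketch below keeps the model behind a plain callable so it stays self-contained; `fake_model` and its canned answer are stand-ins for a real LLM API call, not how any product actually works.

```python
# Sketch of automated meeting notes: build a summarization prompt from the
# transcript and hand it to any text-completion callable. A real deployment
# would pass an actual LLM client call instead of the stub below.

def meeting_notes(transcript: str, complete) -> str:
    """Turn a raw meeting transcript into draft notes via one model call."""
    prompt = (
        "Summarize this meeting into key points and action items:\n\n"
        + transcript
    )
    return complete(prompt)

# Stand-in "model" with a canned answer, for illustration only.
def fake_model(prompt: str) -> str:
    return "Key points: budget approved. Action items: Bob sends the recap."

notes = meeting_notes("Alice: budget is approved. Bob: I'll send a recap.", fake_model)
```

As the conversation notes, the draft still needs a human pass to add what the model missed, which is where the remaining 11 minutes go.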
00:40:03: I've got a second one, and I use this quite often, so I've pretty much memorized all of the key information here.
00:40:10: So this is a marketing use case.
00:40:12: I think you'll find this interesting, Edgar.
00:40:14: So this is a paper done by the MIT researchers Noy and Zhang.
00:40:19: They're the two main researchers.
00:40:20: What they did was they looked at 444 folks in marketing, sales, and a few other very
00:40:26: similar disciplines that basically had to do a lot of writing and creative work,
00:40:30: right?
00:40:31: And it was an experiment in terms of they were trying to find out if humans or
00:40:37: humans with GPT were better at a series of
00:40:40: 30-minute, occupation-specific writing tasks.
00:40:44: So emails, marketing copy, things like that.
00:40:47: And what they did was they wanted to incentivize all the human participants to
00:40:51: do the best they could.
00:40:52: So there was some kind of reward, financial reward to do the best.
00:40:58: And they had other people in those professions to grade them, right?
00:41:03: And like I said, what they did was they split the group into halves.
00:41:06: The first half got access to GPT -4 and the second half didn't.
00:41:10: So it was a direct comparison of does GPT -4 help people?
00:41:14: And they looked at a bunch of things.
00:41:17: So I'm going to cut to the chase.
00:41:19: They found the AI group was 40% faster, which is obviously very significant.
00:41:25: And I think if you use ChatGPT, it makes sense, right?
00:41:29: Even if it's not going to give you an instantaneous, brilliant result, it's
00:41:34: going to do a lot of the grunt work for you.
00:41:36: And then building off that is significantly better than having to write
00:41:39: from scratch.
00:41:40: But the remarkable thing is, not only was it 40% faster, the human graders scored that work 20% better.
00:41:49: So that was very significant.
00:41:50: And they also found there was a 0.5 standard deviation higher job satisfaction than in the other group.
00:41:57: So these are all really kind of impactful metrics that we've started to understand
00:42:01: from the perspective of business value.
00:42:04: And there was something else, because you mentioned context.
00:42:07: So one of the things they did,
00:42:09: And this is a very, it was a really great study.
00:42:12: One of the things they did was they followed up these participants about two
00:42:16: weeks after and they said, are you still using ChatGPT?
00:42:21: And a lot of them said no.
00:42:24: So the researchers said, why?
00:42:27: Why did you stop?
00:42:28: And the number one reason, the predominant reason people stopped using it is because
00:42:34: they said, look, this AI tool is great.
00:42:38: but it doesn't have context of my company, my products and my customers.
00:42:44: And so the output isn't to the level that I want it.
00:42:47: And to your point exactly, right?
00:42:49: What was it missing?
00:42:50: It was missing the context.
00:42:51: It didn't understand the really important web of context that enables really great
00:42:57: AI to give the answers that I was looking for.
00:42:59: But I thought that was really, really interesting.
00:43:02: There is obviously business value in these kinds of studies, but there are also other lessons that can be learned, for example, that piece about context.
00:43:12: I'm very passionate about this, about context, but also about use cases.
00:43:19: I think those are really awesome, and again, food for thought for people to expand on more.
00:43:27: So context could be a feature that is built on top of these large existing models.
00:43:33: A lot of them are closed, so you have to build on top of them.
00:43:37: So context can be brought in as a separate component of whatever you're building.
00:43:42: But the first example you talked about, using Copilot for summarization, let me expand on that a little more.
00:43:49: You don't have to use these tools only for what they are designed to do.
00:43:53: There's a tool, I believe also by Microsoft, called Nuance, and they're using it for summarization of patient-to-provider conversations.
00:44:03: Same thing: you don't want the providers to be heads-down typing all of these notes during the visit.
00:44:11: So why not delegate that documentation and summarization task to AI?
00:44:17: But also, going back to summarizing, even sticking to its sole purpose, think of CRM.
00:44:23: One of the things salespeople probably also hate to do is document all the intricacies of a conversation.
00:44:34: So what if you spent that time building the relationship one-on-one and delegated that task, and the summarization was then automatically populated into the CRM?
00:44:46: That alleviates the need for you to remember all the nuances you have to document, and even the need to go and log into the CRM system to input those notes.
00:44:56: From some of the use cases and studies that I've read, I think there's a huge gap in people actually documenting conversations.
00:45:03: And then when they leave the organization, there's a lack of documentation, a lack of the ideas that were shared between those partners, and so there's really no continuity.
00:45:14: So you can truly take these AI tools and stretch them beyond their sole purpose: build components on top of them, connect them, orchestrate them within a workflow, as I mentioned, and integrate them with other systems so that they truly provide that continuity of workflow, but also efficiencies for the users.
00:45:38: But I really, really appreciate those two examples.
00:45:41: Those were great.
00:45:42: Edgar, I think you were going to add something.
00:45:45: Yeah, for me, and I think I said it in previous episodes, context is always something, and will always be something, you have to add, no matter how good these models and everything surrounding them get.
00:46:05: And there's also what, at least as far as I know, Microsoft tells people: AI may not be at the point where we want it to be yet, but we have to start preparing now so that we can deliver the context properly.
00:46:21: And AI is allowing us to do a lot more with a lot more context than we previously could, because we have huge amounts of unstructured data which can now be read by a machine, which wasn't possible before.
00:46:34: So I always say: if your task in any way, shape, or form uses language to make something unstructured structured and usable, then it's most likely something that AI can do for you.
00:46:52: It might not be perfect, but in a lot of cases it's already more than worth using.
00:46:59: And that's why I'm also happy about Copilot.
00:47:07: But Microsoft will, as they always have, rely on partners to integrate this at the customer level to really make it work.
00:47:18: Because no AI solution, not now and not in the future, will just come in and have all the information.
00:47:25: So I think that's something we still have to communicate, because the expectation from all the marketing material is often that I just go in and everything works magically.
00:47:37: And it's not magic, it's math, and you have to feed it.
00:47:40: So yeah.
00:47:42: AI magic.
00:47:43: That's, you know, we do treat them as black boxes often.
00:47:47: And it's kind of incredible what it can do really.
00:47:50: I want to actually go back and touch on something you said, Lana.
00:47:53: You mentioned CRM, and I'm laughing because as a seller, I can promise you that the CRM system is often one of the last pieces of technology that I want to touch, right?
00:48:08: If I can avoid doing it, that'll be great, because look, it's...
00:48:11: ha ha.
00:48:12: additional admin.
00:48:13: It's exactly what we spoke about.
00:48:14: It's admin work, right?
00:48:16: And I think that's just one of the things that will change as things start moving
00:48:23: forward.
00:48:23: Because we were talking about this massive database that is essentially, in my
00:48:27: opinion, a necessary evil, right?
00:48:29: In order to run your business, you need it.
00:48:32: You need it to keep track of everything, to have continuity, to basically do what
00:48:37: you need to do.
00:48:38: But in working with the system, a lot of that effort is mundane and repetitive.
00:48:45: And so I think this is where AI can make a huge impact.
00:48:49: So let me give an example, right?
00:48:51: So if you look at what we've done for Copilot for sales, what we said is this.
00:48:57: Let's figure out how we can extract some of that experience away from the CRM so
00:49:04: that users can interact with it using generative AI outside CRM.
00:49:10: So what they've done is they've said, let's make this experience in line with
00:49:15: the flow of work, again, based on what we talked about earlier, so that when I'm in
00:49:20: Outlook and I'm getting an email from one of my customers, I have all the background
00:49:25: information from my CRM that's brought to me on a side panel, right?
00:49:30: So I don't have to go into my CRM anymore to find that information.
00:49:34: It is brought to me and it's in this digestible form thanks to AI.
00:49:38: and I can still interact with it.
00:49:39: So it's not just a read -only relationship.
00:49:42: What if I could update my contact record?
00:49:45: What if I could import data back into the CRM?
00:49:48: Wouldn't that be great?
00:49:49: And so this core understanding of AI has now enabled us to do a lot of things that historically haven't been possible.
00:49:57: And all of it's tied around, can we automate mundane tasks that nobody wants
00:50:02: to do?
00:50:03: but that still need to be done.
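The pattern described here, surfacing the CRM record where the seller already is and letting them write changes back, can be sketched with an in-memory store. The `CRM` dict, its field names, and both functions below are illustrative assumptions, not the actual product's API.

```python
# Sketch: CRM data in the flow of work. Instead of opening the CRM, the
# email client looks up the sender and shows a digest; edits flow back.
# The in-memory dict and field names are made up for illustration.

CRM = {
    "ana@contoso.com": {"name": "Ana", "account": "Contoso", "stage": "Negotiation"},
}

def side_panel(sender: str) -> str:
    """Return a digestible one-line summary for an incoming email's sender."""
    record = CRM.get(sender)
    if record is None:
        return "No CRM record found."
    return f"{record['name']} ({record['account']}), stage: {record['stage']}"

def update_record(sender: str, **fields) -> None:
    """Write changes back to the CRM without leaving the inbox."""
    CRM.setdefault(sender, {}).update(fields)

summary = side_panel("ana@contoso.com")
update_record("ana@contoso.com", stage="Closed Won")
```

The read path (the side panel) and the write path (the record update) are what turn the CRM from a place you visit into something that follows you into the inbox.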
00:50:07: That's awesome.
00:50:08: I think we've covered a lot of the topics we wanted to on this one.
00:50:19: I would love to invite you back for maybe a part two as well, because I think there are some really cool topics you wanted to talk about, like AI and automation, and maybe diving into the value of ERP and CRM systems.
00:50:33: Coming from sales, I think you have lots of use cases for how some of these Copilot features really bring efficiencies into your world as well.
00:50:45: So I would love for you to come back and speak to us about some of those things.
00:50:51: But I also know that you have a podcast yourself that you've started.
00:50:54: And before we leave and say our goodbyes, I would love to understand: how can our listeners find you, see what you're up to, and connect with you?
00:51:09: Yeah, absolutely.
00:51:10: So thank you for mentioning that, by the way.
00:51:12: I really appreciate it.
00:51:13: By the way, I would love to be back.
00:51:14: So again, I appreciate the invite.
00:51:17: The podcast I started with my friend and colleague Jarrett Miller is called Disruption Digest.
00:51:23: And the concept is this, right?
00:51:25: We're going through this wave of incredible disruption, but what kind of interesting insights can we explain in 20 minutes, in 30 minutes, in a small window of time?
00:51:38: So what we do is every two weeks we'll take a very interesting topic and do a deep dive, but only for 20 minutes.
00:51:44: And we'll really break it down and understand what the deal is.
00:51:48: So that is the intent of that podcast.
00:51:51: I will provide the links to that podcast, but it's called Disruption Digest.
00:51:59: Yeah, absolutely.
00:52:00: Thank you.
00:52:02: So, well, any parting words, Edgar?
00:52:07: Yeah, Oleg, also for my part, thank you very much.
00:52:11: I love the podcast title.
00:52:12: It's definitely a good one.
00:52:17: So yeah, check out Jamie's podcast.
00:52:19: From how I met him and what I've heard from him, I'm pretty sure he's delivering a lot of value to you all.
00:52:30: As you did for us today, thank you very much for that.
00:52:34: And yeah, we need to do another episode, I think.
00:52:37: I also thought maybe we could do a Dynamics live stream or something, just to show things in practice; that could also be nice.
00:52:45: Because I know that for Dynamics 365 Sales there are some pretty good demos of how AI and sales work hand in hand.
00:52:55: But yeah, for everyone who's watching on YouTube, if you want to see such things,
00:53:00: put it down in the comments.
00:53:02: If you hear us on Spotify or Apple Podcasts, or wherever you listen to this podcast,
00:53:09: yeah, let us know, write to us; you'll find us on LinkedIn, as always.
00:53:14: And yeah, with that said, that was from my side.
00:53:21: Some closing words from you.
00:53:24: Yeah.
00:53:24: see you on the next one, which will be next week.
00:53:27: So thank you again, Jamie, for joining us and yeah, we look forward to having you
00:53:32: back.
00:53:32: Thanks guys, appreciate you having me.
00:53:35: Thank you.
00:53:35: I'm gonna stay, bye bye.