Adam:
00:01
Thanks everybody for joining. I'm back today as our host. We're looking at what we've done here at Joseki Tech with Hub Boost, which is our generative AI product to help a sales team with their emails. I'm Adam Steidley, the president here at Joseki Tech. Joining me today, I have Dan and Jasmin. You guys want to introduce yourselves real quick?
Jasmin:
00:18
Sure.
Dan:
00:19
Hi, I'm Dan.
Jasmin:
00:22
I am Jasmin. I am the VP. Not the VP, sorry. I'm the VP, er, director of marketing. [To Dan] You are supposed to say your title, so we cut all this out.
Dan:
00:31
But I am the VP of application development for Joseki Tech.
Adam:
00:35
So our editor now wants to kill us. But we'll still get started. I'm going to share my screen and show everyone the next feature we've built into the product, which is pretty exciting. What we've shown you before with Hub Boost is that when you're creating an email, you have the Hub Boost button, which is the HubSpot extension integration we've written. It fires up a window and passes some data about the customer to OpenAI, where we also have a knowledge base with all sorts of information about Joseki Tech, and it builds out an initial welcome email for the customer. We also have a feature where we can make a feedback loop to start improving the conversation. We talked about that on another call, but now we've added a new feature.
Adam:
01:28
So, Dan, what have we done?
Dan:
01:30
What we've done is, now when you start generating the email, you can type some text into the box, and whatever text you provide gets submitted to the AI generation loop, which uses it as the basis for its email generation.
Adam:
01:47
Okay, so if we say... I assume it doesn't need to be a full thought necessarily. "We specialize in both email marketing and application integrations. We are big on Java and not on .NET." We'll see what Bill thinks about that. "Introduce us and our products for what would work best for Bill Gates." Now I hit Hub Boost. And what data are we passing in now, in addition to the information about the...
Dan:
02:25
Customer? Same information we've been passing in. The only new stuff is whatever text you've typed there to start the email. What we have in the pipeline now is positions and skills. So in the upcoming demo, we'll show how it'll cater its generation to the contact's skills and positions. It's going to match those up with what it knows about our offerings and capabilities at Joseki.
Adam:
02:50
Okay, pretty cool. So the subject line's going up and the body's going up in the new input fields. Did we change the starting instructions? How does that work?
Dan:
02:59
Yeah, we have a prompt, and that prompt is growing as we learn. We keep tailoring it, dialing it in, giving the generation algorithm guidelines. The assistant itself has a base set of instructions, and then any additional information we start including, like these positions, the skills, the basic copy to start the text with, gets included in the thread of conversation.
Adam:
03:25
Okay, but did we change the base instructions yet? So that it knows, if we're passing in an email, are we saying something like, tailor this email better for the customer? Or did we not change those yet?
Jasmin:
03:38
So in our last recording, we did change the instructions in our knowledge base. Instead of getting more specific about the customer, the knowledge base is now more specific about us. It understands Joseki Tech a little bit better and how we want to represent ourselves to our customers. That's the knowledge base update we did on our last call.
Dan:
04:01
Yeah, there are three sets of things we're doing. One: if you give me something really generic, like it's got to be under 200 words, it's got to be two paragraphs, those constraints should apply to all the generations, so they get hooked in as instructions to the assistant. If you then start giving me things like, we have these people working at Joseki, they have these skills, these are our offerings, these are past projects we've worked on, those types of information get fed into the assistant's view via the knowledge base.
Dan:
04:33
And then the third thing is anything personalized, anything contact-specific: first name, last name, skills, positions, any additional feedback, or the initial body text you're typing there. Those get pushed to the assistant via user messages. Anybody working with OpenAI would understand that terminology. So there are various ways to get that information in there, and depending on what we're talking about, I can clarify which one we're using. All that being said, we are becoming more skilled in prompt engineering. Every time we tack on a new piece of personalized data, we're modifying the prompt that we're sending into the...
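To make the three channels Dan describes concrete, here is a minimal sketch in the shape of an OpenAI Assistants-style setup: base instructions on the assistant, company facts in attached knowledge-base files, and contact-specific data as user messages. The function names, file names, and field names here are our own illustration, not Hub Boost's actual code.

```python
def build_assistant_config(global_constraints):
    """Channel 1: generic constraints that should apply to every
    generation become the assistant's base instructions."""
    return {
        "instructions": " ".join(global_constraints),
        # Channel 2: company-wide facts (skills, offerings, past
        # projects) live in files attached as the knowledge base.
        # These file names are placeholders.
        "knowledge_base_files": ["joseki_offerings.md", "past_projects.md"],
    }

def build_thread_messages(contact, starter_text, feedback=None):
    """Channel 3: contact-specific data goes in as user messages
    on the conversation thread."""
    messages = [{
        "role": "user",
        "content": (
            f"Write a sales email to {contact['first_name']} "
            f"{contact['last_name']} ({contact['position']}). "
            f"Start from this text: {starter_text}"
        ),
    }]
    if feedback:
        # Feedback from the sales rep rides along as another user message.
        messages.append({"role": "user", "content": feedback})
    return messages

config = build_assistant_config(
    ["Stay under 200 words.", "Use at most two paragraphs."]
)
messages = build_thread_messages(
    {"first_name": "Bill", "last_name": "Gates", "position": "Founder"},
    "It's been a minute since we last caught up.",
)
```

The point of the split is scope: instructions apply to every email, the knowledge base applies to every email about the company, and user messages apply only to the one thread at hand.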
Jasmin:
05:17
AI. What's happening with AI in general is making sure the AI understands what it is we're asking for, because it can generate essentially anything. If we don't give it a specific set of instructions via prompt engineering, then we're never going to get anywhere.
Dan:
05:34
And we've got to be very careful with how we do our prompt engineering if we want to think about cost. Every time you add something to the prompt, it's something else that needs to be tokenized on the fly, which is a separate service and a separate cost.
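A rough back-of-the-envelope for the cost point Dan is making: every field tacked onto the prompt adds tokens, and tokens are what gets billed. The four-characters-per-token ratio below is a common approximation for English text, not an exact tokenizer, and the price is an invented illustrative rate.

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English prose.
    return max(1, len(text) // 4)

def estimate_prompt_cost(fields, price_per_1k_tokens=0.01):
    # price_per_1k_tokens is a made-up rate for illustration only.
    total_tokens = sum(estimate_tokens(v) for v in fields.values())
    return total_tokens, total_tokens / 1000 * price_per_1k_tokens

fields = {
    "base_instructions": "Stay under 200 words and two paragraphs.",
    "contact": "Bill Gates, Founder, Philadelphia",
    "starter_text": "It's been a minute since we last caught up.",
}
tokens, cost = estimate_prompt_cost(fields)
```

Adding a new personalized field (skills, positions, feedback) grows `fields`, and the token count and per-call cost grow with it.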
Adam:
05:49
Now, the other thing around this feature I always thought would be handy is the ability to start from a template. These are some templates I've used before; this one is just a standard "it would be good to catch up." So if I know where I want my conversation with my contact to start, I can pick a canned template and then tweak from there. Okay, so when we're looking at this email, it seems to be cribbing my first sentence almost exactly: "It's been a minute since we last caught up." Can we prompt it to not be so precise in using the language?
Dan:
06:28
Yeah, you can use the feedback. We ran into this, touched on it a little bit in our last demo. Remember that we've given it constraints to stay under 200 words and within two paragraphs.
Jasmin:
06:40
So if we want to affect that first sentence, I think the best thing would be to go into the feedback and say, I want to play around a little bit with the intro sentence, so provide me a new intro sentence.
Adam:
06:54
I have to spell "sentence" correctly, I think.
Jasmin:
06:57
I'm sure it'll understand.
Dan:
06:58
It's doing pretty good with me, and I'm the world's worst speller, so that...
Adam:
07:01
Is not your strong suit.
Dan:
07:02
Punctuation seems to matter more.
Jasmin:
07:04
That makes sense. "It feels like an age since we..."
Adam:
07:09
Yeah. Different kind of phrase.
Jasmin:
07:10
Last conversation.
Adam:
07:13
Yeah, "we're really keen to catch up." It said it would be good. Cool. And copy, paste. This interface here is not real good, apologies to our friends at HubSpot. But is there any magic to doing this cut and paste, or is it just cut and paste?
Dan:
07:32
No, it's pretty manual. We provided a link there just to make it a little easier to copy the text to the clipboard. In general it works; there are little gotchas here and there. Their editor, I believe, is a rich text editor, so if you have double sets of newlines in what we generate, they might get translated to two HTML breaks when you paste it in. But that's not that difficult. It's artificial intelligence with a human in the loop; we can press delete and clean it up before you send it.
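The double-newline gotcha Dan mentions can also be handled before the text ever reaches the clipboard. This is a sketch of such a cleanup pass, our own illustration rather than the product's code: it collapses runs of blank lines so a paragraph gap doesn't turn into two HTML breaks in HubSpot's rich-text editor.

```python
import re

def clean_for_paste(generated):
    """Normalize generated email text for pasting into a rich-text
    editor: trim trailing whitespace on each line and collapse runs
    of two or more newlines into a single line break."""
    lines = [line.rstrip() for line in generated.splitlines()]
    text = "\n".join(lines)
    return re.sub(r"\n{2,}", "\n", text).strip()
```

The human stays in the loop either way; this just saves a few delete keystrokes per paste.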
Adam:
08:06
Okay, so this is definitely cool, and I think it gives us a lot better places to start from. What are some of the next features we're thinking about adding here?
Dan:
08:15
What we have in the pipeline next is the contact's positions and skills. So as you're typing, the AI generation loop is going to take those into account.
Adam:
08:25
Yeah, I do think having the skills may prove to be interesting. Those are the skills we scrape from the LinkedIn profile, so if the guy says he's a Java expert, we'll lean into Java. If he says he's an email marketing guy, we can lean into that. I think that'll be interesting.
Dan:
08:42
With positions, you also get location. So if it's somebody here in, say, Philadelphia, and we have a contact whose profile mentions Philadelphia, that's going to match up. It's going to help drive a more personalized sales email.
Jasmin:
09:00
Yeah, I'm really excited to see how the messages can get tailored more specifically, like: "I know you've worked at A, B, and C company in the past, the work you did there in that position fits really well with what we're trying to do, and now that you're in this position, we think this could be really helpful to you specifically." That personalization aspect is the first and foremost thing we should focus on, because for any salesperson sitting there typing up 10,000 emails a month, it's going to be impossible to get that specific without spending a ton of time doing it.
Jasmin:
09:49
So being able to simply scrape a LinkedIn page and then tell your AI, look at everything they've done before and tell me the points I should be talking about, what my best talking points are going to be when I actually get a meeting on the calendar with this person...
Dan:
10:07
Is going to be based more in fact. What we're seeing a lot with the AI is that it hallucinates. If you ask it to talk about our AI capabilities, it makes up some stuff that makes us look real good. It hasn't thrown us under the bus yet, but when we get on the phone with that contact and they start probing us, we'd like to have something real and concrete to talk about.
Adam:
10:30
It's rooted in things we've actually done. The other thought I've been having is how we can build some tools to help us tune things quicker and at a bigger scale. We've been having good conversations about the main inputs: we've got to get the starting instructions better and better, the knowledge base better and better, the data we feed into the whole engine about the customer, and then the prompt engineering the salespeople can do. Those are all the inputs. But to test this at scale, I'm thinking about developing a separate tool on our own site that'll pull 5, 10, 25 customers at random, perform the action for all of them at once, and then give us...
Adam:
11:20
Okay, here are the 25 emails it developed, so we can look at all of those to see if they're reasonable, then make one change to the instructions and run those 25 through again, so that we're turbocharging our ability to do the testing on this.
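The batch tool Adam describes could be sketched like this: sample N contacts, generate an email for each under the current instructions, and keep the sample fixed (via a seed) so one instruction tweak can be re-run over the same set and compared side by side. `generate_email` here is a stand-in for the real OpenAI call, and all names are ours.

```python
import random

def generate_email(contact, instructions):
    # Stand-in for the real generation call; the real tool would hit
    # the assistant with these instructions and contact data.
    return f"[{instructions}] Hello {contact['first_name']}, ..."

def run_batch(contacts, instructions, sample_size=25, seed=None):
    """Pull sample_size contacts at random and generate an email for
    each. A fixed seed keeps the sample stable across re-runs so the
    only variable is the instruction change."""
    rng = random.Random(seed)
    sample = rng.sample(contacts, min(sample_size, len(contacts)))
    return [
        {"contact": c, "email": generate_email(c, instructions)}
        for c in sample
    ]

contacts = [{"first_name": f"Contact{i}"} for i in range(100)]
first_pass = run_batch(contacts, "Stay under 200 words.", seed=42)
# One instruction change, same 25 contacts, for a direct comparison.
second_pass = run_batch(contacts, "Stay under 150 words.", seed=42)
```

Holding the sample constant is what makes this a regression test rather than 25 fresh coin flips: any difference between passes traces back to the instruction change.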
Dan:
11:39
If you can curate that set and adjust the language to what you'd like to be representative of Joseki and yourself, then we can use another feature that's available to us: the ability to fine-tune the assistant. We obviously haven't done this yet because, like you said, we're focusing more on the prompt engineering, the inputs, and the knowledge base; we want to get those correct first. Fine-tuning is another ballgame, and it requires a number of samples, so it's a bigger process. But it's on the horizon. We'll get to it.
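The curated set Dan mentions would eventually become fine-tuning examples. OpenAI's chat fine-tuning format is JSONL, one `{"messages": [...]}` object per line; the prompts, replies, and system message below are invented placeholders to show the shape, not real training data.

```python
import json

def to_finetune_line(prompt, approved_email):
    """Turn one curated (prompt, human-approved email) pair into a
    single JSONL line in OpenAI's chat fine-tuning format."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": "You write Joseki Tech sales emails."},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": approved_email},
        ]
    })

# Placeholder examples; the real set would come from the batch tool
# after a human edits the generated emails into approved form.
samples = [
    ("Email Bill about Java integration work.", "Hi Bill, ..."),
    ("Email Sue about email marketing.", "Hi Sue, ..."),
]
jsonl = "\n".join(to_finetune_line(p, e) for p, e in samples)
```

This is also why fine-tuning is "a bigger process": each line needs a human-approved target email, which is exactly what the curation step produces.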
Adam:
12:11
Yep. No, I think we're making real good progress. It's moving the right way. So, Dan, the positions and skills data getting in there, you think you'll have that ready for next week's demo?
Dan:
12:20
I believe so. We're almost finished coding it now, so I've got a couple of days to work with. If the train doesn't fall off the tracks, we'll be there.
Adam:
12:27
That sounds really cool. So next time, we'll see how we can get better prompts with that extra data in there. Thanks, everyone, for your time this week, and looking forward to next week's discussion.
Dan:
12:37
Yep, sounds great. Thank you.
Adam:
12:39
Thanks. Bye.