Building an AI Assistant to Deliver Your Best Presentations

Eve Porcello from Moon Highway took the stage with a creative and playful talk blending AI, JavaScript, and live audience interaction. She introduced “Kira” (short for Keynote Information Regurgitation Assistant), a custom AI-powered presentation tool built with React, Next.js, p5.js, and the ML5.js library, brought to life onstage by Kira Corbet. Kira helped co-present by identifying images and demoing pitch detection powered by a pre-trained model.
Eve walked through how the app detects microphone input and translates pitch to musical notes, showcasing hooks and functions like usePitchDetection and startPitch. To test it out, six volunteers were called onstage to match musical notes live, and the app visualized the detected notes in real time.
Transcript
Hello, everyone. Thank you so much for having me. Thank you, Kent. Thank you, Epic Web Conf. We are so excited to be here today. If we haven't met before, my name is Eve Porcello. We started Moon Highway thirteen long years ago, not that I'm old enough for that.
And thirteen years ago, we started teaching these JavaScript courses. We've created courses for LinkedIn Learning. We teach a lot of live workshops and wrote a couple of books. But our work is such that we always have to be learning. So we always have to understand what's next. We have to figure out what is the new React.
And what I found out is that the new React is just React, but with a slightly different folder structure. So we love TanStack. We love Remix. We love Next.js. We love teaching all of those things. But what I'm really excited about lately is AI.
So like any good developer who gets excited about something, you buy a domain name. And then when you have the domain name, you get the project going. So I wanted to create an AI project that would basically be an assistant to help me do the work that I do.
So teaching, presenting, all of those things. So I decided to build Kira. And I'm excited to announce that today, actually. Yeah. It's a big launch day. So I want to introduce you to Kira. Kira is the keynote information regurgitation assistant.
And I'm super excited to welcome to the stage today, Kira. Give it up for Kira, everybody.
Amazing. Welcome, Kira. Thank you. My name is Kira, and I'm your presentation assistant. Woah. Amazing, Kira. Can you tell us where we are? We are in Salt Lake City at Epic Web Conf. Is everyone doing good today? Nice pandering to the audience, Kira. Excellent.
So what we wanna do is put Kira through a little bit of a test here. Kira, can you tell me what you see on the screen? A crosswalk. Nice job. Can you tell me what you see on the screen now, Kira? A bridge. Excellent work. Kira, what's on the screen right now? A motorcycle.
And can you tell me what's on the screen now? Left center, lower left. Okay. You seem ready. You seem ready. Kira, would you like to give my presentation today? Most certainly. But before I do, could I please have the clicker? Sure. Okay. You're just gonna do it. Yes. All right. Great. All right.
Now that I have the clicker, we're gonna give you a little demo of the app that we're working on. It's built with the ML5.js library.
We're using a pre-trained pitch detection model, and we have p5.js for a little bit of fun coded graphics, all built up on a Next.js and React app. So there are three main things that we're focusing on with AI.
First, we're gonna take prompts as inputs. Then we're gonna send those inputs to the AI to analyze. And then we'll use those results as outputs. So I'm gonna switch over to our code over here just to kind of give you a... Kira?
During my presentations, usually I have, like, audience interaction. Oh, yes. Certainly. Can I get six volunteers?
Alright. Come on up. Alright. One, two, three. We got five out here. Yeah. Five volunteers up on the stage. Everybody give them a hand. Let's go. Please put that on. Please put that on. Please put that on, if you don't mind. Thank you. Wow. Look at Pat. But still put the shirt on. Awesome. Thank you. Thank you, Eve. Thank you. All right. So to kind of get started, I just wanna show a little bit of the environment. It's your simple Next.js and React app.
But just a few things to point out that are a little different: we have our pre-trained model located in this directory, which we will be calling out. And then we have some additional custom utility functions that we iterated on and built on top of the ML5.js library in there. But enough about that.
Let's go ahead and start adding in pitch detection. So we're gonna go ahead and import the pitch detection, and this is gonna be from the ML5 library.
And then you can see down here, we have this thing called usePitchDetection. And usePitchDetection is a hook that runs once the component is mounted. It does a few things here before all the setup is completed, including getting the audio information from the user's device input in the browser.
It's then later gonna call a startPitch function, which we'll write here in a second. And then this will kick off once everything is set up. The other important piece in this file is the getPitchDetection function, which is a recursive function that's continuously checking for pitch.
If there's a frequency detected, then that frequency is mapped to a MIDI number. And then that MIDI number is mapped to a note using the scale array up here. This is then being constantly updated, so we have access to our currently detected note here.
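For reference, that mapping follows the standard MIDI conversion, where A4 at 440 Hz is MIDI note 69. A minimal sketch, with the scale array and function name assumed for illustration rather than taken from the app's exact code:

```js
// Standard frequency-to-MIDI conversion (A4 = 440 Hz = MIDI note 69),
// followed by a lookup into a twelve-tone scale array for the note name.
const scale = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

function frequencyToNote(frequency) {
  const midiNum = Math.round(12 * Math.log2(frequency / 440) + 69);
  return scale[midiNum % 12]; // e.g. 440 Hz -> MIDI 69 -> "A"
}
```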
So we'll go ahead and start by writing this startPitch function. And it's gonna have two things. One is the stream, and the stream is getting the audio from the microphone input. And then the audio context here.
Okay. So we're gonna go ahead and call the start audio context function, and that's just from the basic Web Audio API. But then we need to check if we're getting the pitch. And we need to check, eventually, if these volunteers are actually gonna start singing. Sorry, not sorry.
So if we have the audio context here, what we're then going to do is start doing things within here. So we'll call the pitchDetection function. And then this is when we're going to pass everything, including that model that we saw in here, in this public directory. So we're gonna go ahead and pass that.
And then we have to give it a few other things, including the audio context, the stream, and then we have a callback here as well. And then for error's sake, we'll go ahead and just write something here.
We might as well just keep that theme going, you know? Perfect. Okay. And the last part to this is we're gonna call that startPitch function right here.
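Putting that walkthrough together, here's a minimal sketch of what the utility might look like, assuming ml5's classic pitchDetection(model, audioContext, stream, callback) API and getUserMedia for the microphone input. Only usePitchDetection, startPitch, and getPitchDetection are named in the talk; the rest is illustrative:

```jsx
import { useEffect, useState } from "react";
import ml5 from "ml5";

// Twelve-tone scale for mapping MIDI numbers to note names.
const scale = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

export function usePitchDetection() {
  const [detectedNote, setDetectedNote] = useState(null);

  useEffect(() => {
    let pitch;

    // Recursive loop: every result schedules the next check, so the
    // currently detected note is constantly kept up to date.
    const getPitchDetection = () => {
      pitch.getPitch((err, frequency) => {
        if (frequency) {
          // Same frequency -> MIDI -> note mapping sketched above.
          const midiNum = Math.round(12 * Math.log2(frequency / 440) + 69);
          setDetectedNote(scale[midiNum % 12]);
        }
        getPitchDetection();
      });
    };

    // startPitch takes the mic stream and the audio context, and passes
    // them to the pre-trained model in the public directory ("/model" is
    // an assumed path). The callback fires once the model has loaded.
    const startPitch = (stream, audioContext) => {
      pitch = ml5.pitchDetection("/model", audioContext, stream, () => {
        getPitchDetection();
      });
    };

    // Setup: start the audio context, grab the microphone input from the
    // user's device via the browser, then kick everything off.
    const audioContext = new AudioContext();
    navigator.mediaDevices
      .getUserMedia({ audio: true })
      .then((stream) => startPitch(stream, audioContext))
      .catch((err) => console.error("Sorry, not sorry:", err));
  }, []);

  return detectedNote;
}
```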
Excellent. So I'm just gonna go ahead and run it. We don't expect anything to be happening quite yet, but just to make sure there are no typos in there. And that looks great. Okay. So we did steps one and two, which is getting the inputs, which are our prompts.
So the sound is going to be our input and our prompt. We're passing it to the AI model. And the third part is that we're going to be displaying these results as graphics. So there are a few things in here. We have a custom canvas component that's using the Next.js dynamic function.
And then other than that, we're just gonna go ahead and call the functions that we were just working on in here. So we're gonna import usePitchDetection, and this is from our utilities directory.
And then we're going to get that currently active note. So I'm just going to call it detectedNote, and then call usePitchDetection. And then lastly, we're just going to display all of these results. So we'll show the detected note so we can see if our volunteers are singing on point there.
And then our custom canvas component actually takes a note. So we're just going to feed it the detectedNote. Alright. We'll go ahead and run this.
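A sketch of what that display component might look like, assuming Next.js's next/dynamic with ssr: false to keep the p5.js canvas on the client; the file paths and component names here are assumptions:

```jsx
"use client";

import dynamic from "next/dynamic";
import { usePitchDetection } from "../utils/usePitchDetection"; // assumed path

// p5.js needs the browser's window, so the custom canvas component is
// loaded dynamically with server-side rendering turned off.
const Canvas = dynamic(() => import("../components/Canvas"), { ssr: false });

export default function PitchPage() {
  // The currently active note from the hook we just wrote.
  const detectedNote = usePitchDetection();

  return (
    <main>
      {/* Display the result so we can see if the volunteers are on point */}
      <p>Detected note: {detectedNote ?? "listening..."}</p>
      {/* The custom canvas component takes a note */}
      <Canvas note={detectedNote} />
    </main>
  );
}
```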
Okay. So now the one thing is that I do not sing, though. So Eve, could I have you help with our volunteers? Yeah. I guess so. Sure.
All right. We need you to go here. If we could have everybody shift down a little bit.
And then, is that right? Jason? And I haven't met you yet. Nice to meet you. Have you switched spots? Alright, cool. Oh, and unfortunately our microphone is not connected to this one, but you know what, let's go ahead and sing it off. He's got a riff to do. All right.
We need you to match this pitch right here.
Can we match this pitch here?
Beautiful. Here, let's get the microphone connected. Yeah.
It's not seeming to connect up there. It's not? No. Not sure.
It's perhaps because that device is off.
All right. Well, we gotta sing it a cappella anyway. All right. Let's do it. We need you to match here. Beautiful. And then... Oh, it's on. It just took a second. Oh, nice. F.
So good. Alright. So we're going E.
Now, if this sounds familiar...
Crank it. Thank you all for watching. Thank you, volunteers. Thank you, Epic Web Conf. We thought it would be funnier if we had you stand up there for too long, and it felt right. So thank you all so much.