How AI Helps You Speed Up to Slow Down

Jason Gulya was a pessimist about artificial intelligence – until he took the time to learn it.

Now, he is an AI enthusiast, with a particular interest in how generative AI can help people learn faster and better.

Listen to our conversation to find out:

  • Why AI helps us slow down
  • How Jason applies AI to his day-to-day right now
  • Jason’s advice for trying out and keeping up with generative artificial intelligence

Connect with Jason on LinkedIn to keep learning!

Transcript

[00:01] Tom Moriarty: Welcome back to the Secret Society of Success. In this not-so-secret podcast, we explore the changing landscape of corporate learning and development so that you can bring successful L&D to your organizations. Here in season three, we're taking on a very hot and controversial topic: generative artificial intelligence. In each episode, we'll be talking to different L&D experts about what generative AI is, how it is already being deployed for learning design and administration today, and frankly, whether or not you should be scared. Oh, by the way, we used ChatGPT to write this intro. Hey, Jason, thank you so much for joining us today.

[00:42] Jason Gulya: Thank you so much, Thomas. A pleasure.

[00:44] Tom Moriarty: So before we jump into learning a little bit more about AI, giving you speed and slowing you down at the same time, which I think will be our theme for our discussion today, why don't you give our audience a little background on who you are?

[00:56] Jason Gulya: Yeah. So my name is Jason Gulya. I'm a professor of English and the humanities at Berkeley College. So basically what I do is I teach anything related to English, writing, and the humanities, which is actually, and we can talk about this, one of the weird ways that I got into artificial intelligence, really thinking about it and teasing it out. So I basically do a couple of things. I teach all the time, in person, online, hybrid, all the modalities. And the other thing that I do is I work with professors and students and colleges more generally to really think about artificial intelligence. And I've done this ever since ChatGPT came out, or shortly thereafter, because I think that now a lot of students, professors, and more broadly, institutions are trying to figure out what to do in the age of AI. So I do consulting in that space. I do presentations, because I think a lot of us are trying to figure out what college, and what learning in general, formal and informal, looks like with this technology. So that's a lot of what I do. And I work AI at this point into all of my classes, and I try to do it in a way that encourages critical thinking, analysis, all the things that we really want. So that's kind of me in a nutshell. And I try to be very honest and open on social media and other spaces about just how I'm experimenting with this technology and what it means for learning. And that's me.

[02:27] Tom Moriarty: That's great. Thank you, Jason. We appreciate the background, and I think the unique nature of your background, and your recent focus on making AI part of how you deliver learning, specifically in the context of higher ed, is going to be really compelling for our audience, right? Because a lot of our audience delivers learning all day long. Now, their audience is a little different, right? Not exactly higher ed. Typically, it's a learning and development organization at a company, or it's somebody working in L&D within a corporate space. But there's so much overlap with the higher ed space that I'm really excited to unpack how your background, experience, and what you've learned might be able to help our audience. Before we get into specifics and some examples and some stories, because we try to spend a lot of our time on those, we feel that unpacking stories and examples teases out some of the biggest takeaways, I'd like to start with some definitions. So when we say AI in today's episode, what do we mean? What does that mean to you?

[03:32] Jason Gulya: For me, AI is any time we have algorithms or machinery imitating human performance. And when I talk about this with my classes and more generally, I try to use that word, performance, as much as I can, because I think a lot of times we talk about it in terms of thinking, but in the end, we have this technology that is very much a black box. When we throw something into ChatGPT, we get something out. We don't know what sort of quote-unquote thinking is there and how it's processed. All we're seeing is input and then output. We just sort of guess. For me, that output is performance. And I try to tell my students that AI isn't so much about, at least for me, imitating human thought. It's about imitating human output, or performance, or whatever you want to call it. Right? I think that is essential for my students, because a lot of times they're in that process of thinking about it as thinking, and that just doesn't seem totally accurate to me, because there are so many questions about what's happening behind closed doors. And honestly, it's also that we don't know a lot about human thought. Right? We're black boxes, too. You and I can interact with each other, Tom, and I don't know what's going on in your head. You don't know what's going on in mine. So we all have these black boxes. So I try to stay away from that language as much as possible. But for me, AI is really about an algorithm or machine imitating human performance.

[05:00] Tom Moriarty: I like that. And I think that example of a tool where you deliver an input and get an output is really the simplest way to describe it. Right. And I think that can also, the way that you're emphasizing it to your students, take away some of the stigmas that can be associated with it, right, and actually just make it about what this thing is and, ultimately, how you then apply it, right? How is it applied to our context and what it is we're doing? Jason, what made you want to jump in and start playing around so early? Right. Like you said, you pretty much jumped in right away and made this almost a second career, which is an interesting thing to sign up for when you've got little ones and then another one on the way. Right. So what motivated you to do that?

[05:48] Jason Gulya: I want to talk a little bit about the first time I used ChatGPT.

[05:52] Tom Moriarty: Okay.

[05:52] Jason Gulya: So right after it came out, and it came out in November of last year, so it's basically been with us for a year and a little bit more. The very first time I used it, so I found out about this program, this thing. No one really knew that much about it. And I started to play with it, and I ran a couple of queries, got some stuff back, took some assignments, put them through there, and I turned to my wife, who was sitting on the other side of the table, and I said to her, the most horrible thing just happened. I just found and saw the future of plagiarism. That was my initial thought. I was so negative on this program, and that was my initial knee-jerk reaction. I think a lot of us start there, or started there. And then I set it aside for a few days. I went back to it, and I started playing with it, and I started to figure out how I could use it to learn things. And for me, that was the lead-in to it.

[06:53] Tom Moriarty: Right?

[06:53] Jason Gulya: That was the transition. When I started to think about it less as a professor and more like a learner, that's when I started to come to that side. And I tried to think about this technology in a more welcoming way. And then from there, I just kind of kept going, because once I had that focus on learning, I started to think about how it could help students. Right? How it could help students practice skills in these kinds of low-stress scenarios. Because for me, and this was another transitional moment for me, I'm a very shy person. Whenever I'm talking to another person, especially face to face, if I'm really testing out an idea, my mind doesn't allow me to do certain things. Nerves get in the way. I'm worried about being judged. I'm worried about what happens if I just hit pause and say, let's try it again. That didn't really work for me. But I found something very freeing once I played with that with ChatGPT and then with later AI programs, because now I had this judgment-free zone, right? Good or bad, machines cannot judge us, at least not in the way that we worry about with humans. And so I got a lot out of that, and that, for me, was something I started to work into my classes. And I was actually very fortunate, because when ChatGPT came out in November, I was in the process of going on sabbatical. So I went on sabbatical in January, and that basically gave me about three months to just play. I fully recognize that most people do not have that luxury. I was in this weird space, this weird sector, where I could just play with AI. I didn't have a publication requirement or anything like that. And so that really got me thinking about how this technology could be used. So I very much went from that experience of horror to acceptance, and then to trying to figure out how we can mix everything, because for me, I don't think it's going to be about just working AI into every single thing.
It's going to be about being purposeful with it and trying to figure out how it can be worked into a larger learning infrastructure. And for me, those were the big transitional points, when I moved from being horrified by this program that I just came across on social media to actually now using it all the time.

[09:16] Tom Moriarty: Yeah, I appreciate the story, and I think it's a very relatable one. Right. I do think that barrier that exists for all of us with new technology can often be pretty significant. Right. And the initial interaction can many times be one of fear, especially in this context. Right. But I really appreciate in your story how you mentioned stepping back from it for a little bit and then going back and saying, okay, let's approach this from the lens of a learner versus the lens I've approached it from historically, and where can I find ways to get value out of this tool? I think that's a really simple but meaningful takeaway for the audience in terms of how to think about this somewhat intimidating technology that's out there.

[10:05] Jason Gulya: And for me, the big questions have to revolve around value, as you mentioned, and also this understanding of ease. And I think that is something particular, in some ways, to learning communities and people who are really interested in how people learn. Because you go into other sectors and you think about how easy AI is going to make things, and that's great; in the vast majority of sectors, it will allow us to do things faster, easier, all of that stuff. But then you go into the learning space, and we come out and say, oh, one of the best indicators of whether you learned something is how hard you worked for it. And we know this; it has been tested that if you work really hard to unravel something and practice something and really get it, you remember it. Right? You remember it a month from now, a year from now. It's actually one of the biggest indicators, given just learning science. And we encounter this in classrooms and also in informal learning all the time. So suddenly that focus on ease becomes potentially very problematic. Now, there are ways, and I try to do this in my classroom too, to use AI to kind of amp that up through practice. There are ways to use AI to actually make the learning, I don't want to say harder, but more challenging, which actually makes it stickier in a certain way. But I do think there's a way in which we can't even generalize about AI, because how it's being used and its implications really depend, in many ways, on the sector. And for us, for learning professionals, whether you're in the college classroom or in corporate L&D, I think that focus on ease has a very particular relevance and importance to us specifically in that context.

[11:58] Tom Moriarty: In the higher ed world, where are you seeing AI applied and where are you starting to apply it yourself? Maybe let's walk through a couple of specific examples and scenarios, because I think that those will generate some really meaningful takeaways for the audience in terms of how they can use that tool the same way but applied to their life.

[12:20] Jason Gulya: Yeah, I've seen a bunch of different experiments, and now they are starting to get bigger and bigger, which is exactly what I want to see. I think that for the first year, especially from the perspective of college professors, we saw them start to play with things here and there, and there were these kinds of small examples, and now I think they're starting to get bigger and bigger. So, some of the main examples I've seen. One of the first ways I saw it being used is chatbots.

[12:46] Tom Moriarty: Right?

[12:46] Jason Gulya: Figuring out if you can take an assignment or reading and create a chatbot that helps students out. So I actually played with this several semesters ago. I was teaching Edgar Allan Poe's "The Raven." I was teaching a general literature course, and I started with that one because it's really hard. And so there was actually something you had to do on the level of reading that helps you out with later things, that actually helps you out with movies and podcasts and everything else, once you really, in many ways, work through your reading, right? With something like Edgar Allan Poe. In the past, it's been sort of weird, because I would have students read it, and we'd come to class and discuss it or work through it together, or we'd do something online, and it was fine, but it was very much focused on a couple of students. And this happens all the time: if you have something that's challenging, which I think it should be, especially given that course, and then you open it up to everyone, suddenly you have a couple of people weighing in who already feel comfortable reading it, and then you lose everyone else. So what I did is I created a chatbot, and I did this through Poe. I've also done it through Zapier; I redid it through that. And the poem fits naturally within the context window: you can go online, find the open-access version, and give the bot the entire poem. It can have the whole thing as part of its knowledge base, and I could have students run things by it. And students did. Some of them asked very general questions, like, what is this poem about? Which is fine; that is perfectly okay as a starting point. And then others asked very concrete and difficult questions. Right. They were trying to figure out the philosophy behind this poem and everything else. And so that allowed students to come in and do that regardless of their level; regardless of where they started from, they could weigh in on it.
And that is an activity, I think, that can obviously be done in a formal education space, but can also certainly be moved into corporate L&D: giving someone a chatbot where they can ask either basic questions or move up the rungs of complexity depending on just how comfortable they are with the material. So chatbots are a great way to help with that, and also just to help practice.
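[Editor's note: the reading-companion chatbot described above can be sketched in a few lines. This is a minimal, illustrative assumption of how such a bot might be wired up with a chat-completions-style message list, not a record of Jason's actual Poe or Zapier setup; the prompt wording and function names are invented for the example.]

```python
# A minimal sketch of a reading-companion chatbot: the full open-access
# text goes into the system prompt as the bot's knowledge base, and
# students can ask questions at whatever level they like.

POEM_TEXT = """Once upon a midnight dreary, while I pondered, weak and weary,
Over many a quaint and curious volume of forgotten lore..."""  # full poem in practice

def build_reading_companion(poem_title: str, poem_text: str) -> list:
    """Assemble the starting message list for a chat-completions-style API."""
    system_prompt = (
        f"You are a patient reading companion for '{poem_title}'. "
        "The complete text is below. Answer questions at the level the "
        "student asks them: accept broad questions like 'what is this poem "
        "about?' as well as detailed interpretive ones. Quote lines from "
        "the text to support your answers, and never judge the question.\n\n"
        f"--- TEXT ---\n{poem_text}"
    )
    return [{"role": "system", "content": system_prompt}]

def add_student_question(messages: list, question: str) -> list:
    """Append a student turn; the result is what you would send to the model."""
    return messages + [{"role": "user", "content": question}]

messages = build_reading_companion("The Raven", POEM_TEXT)
messages = add_student_question(messages, "What is this poem about?")
# `messages` is now ready to pass to any chat-completions endpoint.
```

Because the whole poem lives in the system prompt, every student gets the same judgment-free starting point, and the question they ask sets the level of the conversation.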

[15:12] Tom Moriarty: Right.

[15:12] Jason Gulya: You can create scenario-based learning with a chatbot pretty easily, especially with something like ChatGPT, which is so good at role-playing, as long as you give it proper instructions for doing that and what that means. And that allows students to try again and again and again and get feedback every time they try. So that's a big thing that I know professors have played with. Another one that was really popular, and still is, is rubric creation. Before AI, if I wanted to sit down and create a rubric for assessing something, it usually took me about four to five hours. I'd have to spend some time thinking about whatever I'm trying to test for, whatever I want to see demonstrated. Then I had to figure out how I was going to do that. I had to define everything. If I'm going to assess something like critical thinking, or whatever it is, I'd have to be very concrete, in rubric form, about what that means and how I'm going to assess it, and it would usually take about four to five hours. I can now create a rubric, using a prompt, in about four minutes, and it allows me to bank that time. One of the things I try to tell more and more people is that AI doesn't save me time. It doesn't. I'm still spending 40 hours a week working. A lot of my weekly workload is very much the same. But what AI does allow me to do is repurpose that time, so that now, because I'm spending four minutes instead of four hours on a grading rubric, I can be better at my job. So one of the things that I can do for my learners is create personalized rubrics for them, and I can actually do it with them, so that they can work with me. We can decide what is important for that assessment and how to define whatever those skills are. I've done this with ChatGPT. I've done this with Bing. I haven't tried it with many other programs. But in four minutes I have a rubric that a learner actually weighed in on.
And so they own a little bit of it. In the past, before AI, that would have taken me my entire day. Right. All of my time. I would have done nothing else besides create rubrics all day. And so now, because I'm able to bank that amount of time, I'm able to be better and more efficient at my job, and do something that was not scalable in any way in the past, even just a year and a half ago.

[17:47] Tom Moriarty: Yeah, those are two really great examples. Something that resonates with me, and I've heard across a number of our audience, goes back to a couple of seasons ago, when we spent a lot of time talking about persona development in the L&D space. Right. And the value of a persona, because it gives you the ability to truly understand your audience. That's the thing that was resonating with me in your example in the classroom.

[18:10] Jason Gulya: Right.

[18:10] Tom Moriarty: And I've seen it. I deliver a lot of training to our sales team, right? That's my profession, what I do here every day. And even in a more outgoing group like that, or an assumed more outgoing group, in every cross-section of a group there's always going to be the 20% to 30% that is very vocal in any learning space, and then there's the 70% that's not, right? And the ability to use a tool like AI, or this custom-created chatbot, to go meet the learner where they are and try to drive some improvement is great, because it gives you the ability to meet the audience at their level, right, and let them engage on their time, versus forcing an engagement that, frankly, doesn't make that person comfortable. And I'd imagine you've probably seen, in your experience, better uptake, learning-wise, across a wider audience than you would historically, right? Because that other scenario is really only focused on the 20 or 30% that starts to be vocal, right, and then maybe the next 5% that can learn from that interaction. So I think that's really interesting, right, because that applies to every scenario where you're trying to create some learning on a particular topic.

[19:30] Jason Gulya: Yeah, absolutely. AI at its very best gives us the ability to do things we know we should have been doing for a while: to actually stick to learning theories that we knew in some way but that weren't really scalable. We've known for decades that personalized learning works. We've known for decades that there are multiple intelligences. And so having people come in and talk in a group discussion, using that as the only way to assess learning, we know that was not good learning science. Being able to personalize the learning experience for students so that everyone, regardless of where they start, can grow and progress and move toward mastery of whatever skills they're trying to master: now we have, or are getting, the ability to achieve that and to actually scale it. Same thing with diversification. One of the big obstacles has been: how do you do it? Right. We recognize that some students are good at speaking, some students are good at writing, some students are good at X, Y, and Z, and the same is true of the workforce, and that we should have these different methods of assessment. But then you say, okay, that's great if you have ten learners. What happens when you have 100,000, or you have a company, right? A company where you're trying to assess everyone. Well, no, you can't do it, right? It is impossible. You would have to have a giant team just to do that, let alone to track things. Right. And that's just to get the assessment out there. Now, if you also want to track how people are doing and come up with these personalized learning programs, then it blows up even more. And AI, if we use it correctly, will allow us to do that. Right.
Really stick to learning principles, and think about what makes learning sticky, what gets us to self-reflect, what makes us self-aware, what builds that kind of meta-knowledge that we all want, because that's very adaptable in school as it is in the workforce. So, yeah, I don't think AI is doing anything different there. I think it's just going to, hopefully, give us the ability to realize something that we've wanted for a long time.

[21:49] Tom Moriarty: Yeah, it's almost like the problem is that a lot of people understand the ideal; it's just that the ideal is not practical. And I think what you're saying is that, hopefully, AI, because of, frankly, the time component that it can give you, starts to make the ideal look a lot more practical. And I think that could be something compelling for our audience.

[22:11] Jason Gulya: And a lot of it comes down to principles. I'm constantly going into groups where we start talking about AI and there's a lot of nervousness. And sometimes one of the best things you can do is just hit pause on the conversation and table AI. Don't even talk about it for a few minutes, and just ask something very basic, like: what is good teaching, or what is good learning? Just go back to that, because the principles of those have not changed, right? I think we're sort of tricked into thinking that they have, because AI is developing so quickly and changes so quickly. But the human mind hasn't. The human mind takes a really long time to learn how to do things differently. And so these principles of good learning, good learning and development, good teaching, these have not changed. So just hitting pause for a second, hitting time out for a second, going back to those conversations, and then working AI in, thinking about, oh, how can I help you learn something, right, and then doing that, allows you to keep the technology secondary or tertiary. And I think that needs to happen too, because especially with the hype, it is so tempting to make everything AI-centered. And if we do that, we're losing a lot of what we're really interested in, what we're trying to do, because for me, the technology should serve the purpose, not the other way around.

[23:35] Tom Moriarty: Yeah, I totally agree with that theory. The technology is what it is, right? It's a tool. That's the word I always like to use, because I think it's about understanding how you're going to use that tool to achieve the outcome you're trying to achieve, right? Which is really your decision. I'm curious. You mentioned the example earlier of rubric creation, right? Taking a four-to-five-hour exercise and turning it into a four-minute exercise. And then what you were able to do with that time is repurpose the extra three hours and 55 minutes you now have back in your life to making that same thing, rubrics, that much more effective, personalizing them, right, and making them a much better tool, therefore making you better at your job. I know in my experience, as I've personally dabbled with ChatGPT specifically, in a variety of ways in my job, just seeing how I can use this to make myself more efficient, more effective, right, I'd imagine it probably took a little bit. You had that whole three-month period you talked about earlier, right, to really learn how to use AI. I found in my experience there's some nuance to the prompts, and you might prompt the same thing five different times in a row and get five different answers, right? And there's a learning curve to that. How do you recommend people start actually diving in and learning? Right. If they've listened to this conversation to this point and they realize, okay, getting those three hours and 55 minutes back in my life, to do other parts of my job that I either like more or that will make me more effective, sounds really compelling, how do I do that? Where would you suggest someone start learning?

[25:22] Jason Gulya: My personal recommendation is to focus on skills over tools. If you go online and you look at those lists of all the AI tools that have been released over the last month, it's maddening. I highly recommend that no one do that. No one should go and try to learn 100 programs, for different reasons. Reason number one, and I don't mean this in a negative way: most of them will go belly up over the next six months, right? There's going to be that turning point when venture capital starts to fade away. Over the last year, it's been the case that you could go into a meeting with an AI product and come out with a multimillion-dollar deal even if you didn't have a product, right? They didn't need a prototype. They just needed an idea, and they were getting funded. So many of them will fade away. And then there's also, and I know I mentioned this already, this worry that we're making the tools front and center. So I would say start with the skill, whatever you want to learn, or, if you're a professor, whatever you want to teach, and then pick one tool to play with. I think it's actually better to learn a single tool well. I would use a foundational one. ChatGPT is kind of the classic example. You can use Claude; there's every indication that that's going to remain a thing, and it's being baked into more and more products. Google Bard, if you want. But choose the one that you feel really comfortable with and just play with it, right, and do it in small spurts. I think that we tend to think, oh, I'm going to spend four hours this weekend learning ChatGPT. And I actually think, and this is just spaced learning, that we get more out of going in and playing with it for 15 minutes. Run an experiment, right? Just something that you're not sure it can do, and see if it can do it. Go back the next day for 15 minutes, and just give yourself that sort of leeway to play with it.
And I've done this with ChatGPT a lot, and it constantly teaches me not just about that program; as I look at more and more programs, once you do start to transfer and look at other things out there, you can analyze them a little faster, right? You get a sense of what's actually working. Once you play enough with ChatGPT, you look at Bard and see very quickly, oh, this is what it's doing, this is what it's not doing. And I think that just focusing on that one tool and the skills you want to develop allows us to stay in that experimental stage a little bit, and it also eases the pressure, so that we're not spending four hours in a weekend learning something. We're spending 15 minutes playing, tinkering, seeing what we can do with it. And it also allows you to reflect a little bit more and try to be very honest with yourself, because we might all go through hype cycles. I try to do this on social media: I try to write about how I've gone through my own sort of hype cycle, and how only now, after a year, it's starting to wear off, right? I feel like I'm starting to get to a space where I can assess: oh, that AI use case was actually helpful; that one was not. So just be honest with yourself about where you are in the process, and also let yourself fail and be honest that it failed. One of the paradoxes of using AI is that it can allow us to be very efficient, it can save a lot of time, but only after you put time into it. There actually is a lot of work that needs to go into learning how to prompt. Right. It's not like I created a prompt in two minutes, ran it through, and suddenly started saving all this time. It did not happen that way. And prompt engineering, or whatever you want to call it, is complicated for a couple of reasons. One, it takes some learning just to figure out how to prompt something. And also, the systems are changing so quickly, and this can be sort of depressing.
So one of the things that I recently did is I retired one of my prompts. It was one that I felt good about. It worked really well. I got the outputs I wanted, and then I put it aside for about two months; I wasn't really using it for anything. I went back and I said, all right, I already have this prompt, I'm going to run it through. And the outputs were awful. They were so bad. The system had changed so much that the prompt no longer worked. I had to rewrite the whole thing.

[29:51] Tom Moriarty: Really interesting.

[29:52] Jason Gulya: It's this ongoing process of experimenting and playing with it that can be rejuvenating, that can be sort of debilitating, hopefully not depressing, but just giving yourself the ability to experiment, focusing on skills over tools, because those tools are changing constantly, I think that relieves the pressure a little bit.

[30:11] Tom Moriarty: Yeah.

[30:11] Jason Gulya: Letting yourself fail.

[30:13] Tom Moriarty: Amen. Amen. I couldn't agree more. Jason, I'm curious, if you're comfortable, could you share the specifics? Like, what was that prompt?

[30:22] Jason Gulya: So the prompt that I created that I was happy about was actually for giving learner feedback on an assessment. Basically, what the prompt allowed me to do was this: I could read something created by the student, and I could put in basic notes, right? So, not worrying about proofreading, not worrying about anything like that, I could put basic notes on whatever I'm assessing for that challenge, as I call them for my students. And then it would spit out, using my own designed template, something that gives a lot of formative feedback to the student. So it's something that I created, and it really allowed me to streamline a lot of my feedback. Yeah. And so I put it aside for a few months, and when I went back in, I noticed a couple of things. The first was that it wasn't following instructions; it was jumping over sections of the prompt. Which is weird, because you think, oh, the base model is just getting better, right? But that actually might have weird unintended consequences: what worked in a previous version worked because the model wasn't being literal with my language. So as the model got better and was being more literal, and understanding, in giant quotes, more of what I was saying, my prompt itself was actually misleading. I went back to the prompt and could actually see why it did that. Right? But you only catch it because the model is better now, right?

[31:54] Tom Moriarty: Right.

[31:54] Jason Gulya: In the past I thought, all right, the output is good, so the prompt must be good. And I think we made that assumption all the time. So I actually had to go in and use structured prompting, with bracketing and everything else, to say, oh, I just want you to do this, because otherwise it would skip over instructions. Or it wouldn't write in my own voice, so I found myself, after every generation, going back in, using capital letters, and saying, just remember, use my own voice. Right? Otherwise it has all these weird ChatGPT-isms, which are so odd. Like, "I want to commend you for..." No, I would never write that in a sentence. So I just had to go back and rewrite it, and be very literal with what I wanted, which just wasn't necessary a few months earlier with that same model.
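[Editor's note: the structured prompting with bracketing that Jason describes, labeled sections, literal step-by-step instructions, and an explicit voice reminder, might look something like the sketch below. The section labels, template wording, and function names are hypothetical illustrations, not his actual prompt.]

```python
# Hypothetical sketch of a bracketed feedback-prompt template.
# Every label and sentence here is illustrative, not Jason's real template.

FEEDBACK_TEMPLATE = """\
[ROLE]
You are drafting formative feedback for a college writing student.

[INSTRUCTIONS]
Follow every section below, in order. Do not skip or merge sections.
WRITE IN MY VOICE, based only on the notes provided. Do not add stock
praise phrases I would not use (e.g., "I want to commend you for...").

[STUDENT WORK]
{student_work}

[INSTRUCTOR NOTES]
{notes}

[OUTPUT FORMAT]
1. Strengths (2-3 sentences)
2. Areas to develop (2-3 sentences)
3. One concrete next step
"""


def build_feedback_prompt(student_work: str, notes: str) -> str:
    """Fill the bracketed template with the student's work and raw notes."""
    return FEEDBACK_TEMPLATE.format(student_work=student_work, notes=notes)


# Usage: the raw, unproofread instructor notes go straight into the prompt.
prompt = build_feedback_prompt(
    "Essay draft text...",
    "thesis unclear; strong evidence in paragraph 2",
)
```

The bracketed headers act as unambiguous delimiters, which is one common way to keep a model from skipping or merging sections as base models change.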

[32:45] Tom Moriarty: That's really interesting, Jason, this has been great. I think we could probably continue this conversation for another 3 hours and have millions of takeaways for our audience. But in the interest of their time and yours, where can they go to find out more about you and what you're up to? Where's the best way to reconnect with Jason?

[33:07] Jason Gulya: The best way right now is LinkedIn. I try to write on there constantly, so if anyone listening to this sends me a message, pings me, however you want to get in touch with me, I try to be very responsive with that. One of the things that I've done, especially over the last six months, as I'm experimenting, is to just share those experiments. Sometimes they work, sometimes they don't, and I'm just trying to help people out, because I think we're all sort of muddling through this very complicated mess of technology in many ways. So that's probably the easiest way. I have my contact information on there, so people can email me, but the easiest way is just directly through LinkedIn.

[33:42] Tom Moriarty: Awesome. We'll make sure to share your info in the show notes and what's the last closing thought you'd like to leave the audience with today?

[33:54] Jason Gulya: Play. Just keep playing. Don't stop. With this technology, as it's changing, I think we just need to learn not to rest on our laurels, right? Not to just say, oh, I have this figured out. No one does. The popularization of AI has created this weird culture in which everyone was an expert, or wanted to be an expert, but very few people actually were. I would say that just getting yourself into that growth mindset, where you're playing and experimenting and sharing, makes all the difference. I think there's a lot of vulnerability that goes into it. But for me, it's the only way that I learn, the only way that I grow. So I want to encourage everyone to just experiment and play, and to be welcoming when other people experiment and play; that's the other side of it, especially with what they're sharing with you. But that's my big takeaway: that's what we can be doing with this technology right now.

[34:56] Tom Moriarty: Awesome. I love that. Just go play with it. Have fun, right? And be open-minded. And ultimately, across the number of examples that you shared today, I think the opportunities are really limitless. It's a tool that can allow you to be better at your job if you use it effectively. So that inspiration to go play, be open-minded about it, and use it for the growth that it can create for you, personally and professionally, is great advice. So, Jason, thank you so much for your time, man. It's been a pleasure chatting.

[35:28] Jason Gulya: Thank you so much. Pleasure is all mine.

[35:31] Tom Moriarty: Thanks for listening to the Secret Society of Success, a podcast by Mimeo. To find out more about how corporate L&D teams use Mimeo for smarter content distribution, visit www.mimeo.com. Also, don't forget to subscribe to get our episodes as soon as they launch. Enjoy your day.
