This is the transcript generated using AssemblyAI and then formatted using AudioJots software.
The corresponding website (it is also on Substack) is here:
▶️ @00:00:00: Welcome to a brand new episode.
Mike Driscoll welcomes Ines Montani to the Python show
▶️ Ines Montani: Mike Driscoll, the Python show.
▶️ Mike Driscoll: Hello and welcome to the Python show. I'm your host, Mike Driscoll, and today we have my friend Ines Montani from Explosion. She's the co-founder and CEO. Welcome to the show.
▶️ Ines Montani: Yeah, thanks for having me. Very excited.
▶️ Mike Driscoll: Yeah, it's great to have you. I see that you also work on some other projects. I mean, you're super busy being a co-founder and CEO, but you also find time to be a developer on open source projects, too.
▶️ Ines Montani: Yeah, I mean, it's always been kind of important to what we're doing. And I think it's also kind of in the DNA of the company that both me and my co-founder, Matt, are developers; we develop in Python. And especially if you're building developer tools, you need to actually develop and stay close to that and not lose the connection.
▶️ Mike Driscoll:
▶️ @00:01:05: So how much do you think you still develop nowadays?
▶️ Ines Montani: I mean, less than I used to. It's more in kind of these sprints. Sometimes I'm working on a thing and then I spend a day or two programming, and then there's time where it's a lot more, I don't know, about the general vision and concept and product. But yeah, definitely. I also really enjoy doing front-end development. I've always done that, and I used to do pretty much all of it for the company when we were smaller. And then, for example, I had an idea for a new interface for the annotation tool, and I just built that and spent a day on it.
▶️ Mike Driscoll: That's cool.
How do you found a company around an open source software package?
Well, let's take a step back so you can tell us just a little bit about yourself and your journey to programming.
▶️ Ines Montani: Yeah, so I've actually always programmed. I started as a teenager on the Internet. I was more of an indoor kid. I discovered that you could make websites in Microsoft Word and upload them to the Internet.
▶️ @00:02:03: So that was super exciting. And I really got more into that: building better websites, really writing the code from scratch, redesigning my website once a month. That's kind of what you did, being part of these online communities. But I didn't actually do computer science, so I don't have a classic tech background, because at the time when I had to figure out what I wanted to do with my life, I didn't really feel like a programmer. It's weird looking back, because of course I was writing code and doing these things, but I didn't really identify with the typical trope of, you know, the boys from school who were into computers. I think a lot of women in tech kind of share that story. So I did communication science, media science, and linguistics. I was always into language as well.
▶️ @00:03:01: And so in what I'm doing now, I basically found an area that combines everything I'm passionate about: programming, working with language, building things that are used for interesting products. And then, actually by coincidence, I met my co-founder Matt, who was also the original author of our library spaCy, in Berlin. We started talking about it and we realized, wow, there's actually a lot to do, to innovate. That was back in the day, like 2015. There wasn't much software around for people to use commercially or really use in production. A lot of it was very research-based, and that was also reflected in the code and the presentation; those were academic projects. So we were like, oh, we should work on this together and build something that's really designed to be used in real projects, in real products. That's how it all started. I started working on spaCy as a developer as well, and shortly after, we founded a company around it.
▶️ Mike Driscoll:
▶️ @00:04:02: That's really cool. I'm just curious, how do you found a company around an open source software package?
▶️ Ines Montani: Yeah, I mean, that's a really good question. I think that's also something that's discussed a lot, and there are different avenues here. One that some companies take is having more of an open-core model, or selling support. But we always saw that as a bit problematic, because if you do support as your main business model, the incentives are a bit misaligned. We want our docs to be really good; it should be easy to get started, easy to be productive. But if your docs are too good, there's less support to sell. Our goal was always: hey, if our software is good, people shouldn't need us to use it. And there's always stuff on top. We still sometimes do engagements with companies to help them use it even more effectively. But we were like, well, support is not really the way to go. The plan was always to really have a business around it. So we thought, hey, if we build something that's useful and open source, we can show people we can build great software, and there are always products that we can offer on top of that.
▶️ @00:05:02: And especially in machine learning, what we saw is that a lot of it is not just the code. Now you have code plus data, and the data is actually what's very specific and very valuable. So that's why our first product, which we started developing very early on, was an annotation tool that was also scriptable in Python as a developer tool, and that just helped people train their models more effectively, evaluate them, work with their data, and annotate. And that's kind of how our product, Prodigy, came about.
▶️ Mike Driscoll: That sounds to me like you guys were kind of ahead of the curve in a way, because you were already working on Prodigy and spaCy, and now LLMs are really big.
▶️ Ines Montani: Yeah, that's definitely true. We were kind of early, also with the annotation tool, anything around data. We were quite early to market. Back then, a lot of companies still had this idea of, yeah, data, you just outsource that to some cheap clickworkers. It's just boring work.
▶️ @00:06:01: And of course, that wasn't true. The quality matters, it needs to be consistent, and it needs to be really part of the development process. You need to develop your data like you need to develop your code. That was definitely something that wasn't intuitive when we started. Now it's a lot more obvious to companies that you have to do it in house. And this idea that, no, crowd work isn't the future, that's also become a lot more clear now with large language models, which basically replace that need entirely for having third-party annotators work on your specific problem. You don't need millions of examples anymore. You need a really small set, but it needs to be good.
▶️ Mike Driscoll: That makes sense to me.
You've been going around the Python conference circuit this year
I've seen that you've been going around the Python conference circuit this year. How have you found that? Have you liked that?
▶️ Ines Montani: Yeah, I mean, it's, like, really exciting again to be at real conferences.
▶️ @00:07:00: We really missed that over COVID. So this is the first full year where we've been really at it again. We also did our first-ever conference booth, at PyData Amsterdam, which was amazing. Having a little booth with nice stickers, nice swag. We got to demo our new product, Prodigy Teams, and talk to users and customers. Yeah, it's great. I love the Python community. Conferences are great, and we're still not done; we still have more coming up this year.
You spoke at some conferences about how LLMs fit into a practical workflow
▶️ Mike Driscoll: Yeah, I saw you were speaking at some of those conferences. How did that go?
▶️ Ines Montani: Yeah, really, really good. Of course, LLMs are a really hot topic this year that people want to know about. So I had my talk, which was also my EuroPython keynote, on how that fits into a practical workflow, taking away a bit of the hype and really looking at practical use cases: what does that actually mean? It was really nice to talk to people who are working at different companies solving these problems, and to see our vision resonate with people. Because if you just scroll through your LinkedIn feed, you might get the idea that, oh, NLP is practically solved.
▶️ @00:08:11: And that's, of course, very far from the truth. There are a lot of people really working hard on solving specific company problems; that's always been the case, and it will also be the case in the future.
▶️ Mike Driscoll: Yeah. So I know EuroPython will upload their videos. Does PyData do that, too?
▶️ Ines Montani: I think so, yeah. Everything was recorded. For your show notes, I can definitely send you some links. I think EuroPython is online now, and some other conferences are following.
▶️ Mike Driscoll: Okay, I'll do that. So other people can check out your amazing content.
What are your favorite python packages that aren't ones that you've helped create?
So we talked about spacy a little bit and prodigy.
▶️ @00:09:01: What are your favorite Python packages that aren't ones that you've helped create?
▶️ Ines Montani: Yeah. So I also get asked that sometimes, and I often feel a bit bad because I don't want to single out specific packages or forget something. There are just so many great things happening in terms of the little ecosystems that have developed. But yeah, we really like pydantic. That's something we work with a lot in our libraries. And I think it's nice to give a shout-out to a package that often works behind the scenes rather than being the thing people necessarily import. There are a lot of features that we've built for Prodigy, and especially for spaCy, that are powered by pydantic under the hood, like the configuration system, where you can write configparser-style config files that have all your settings for your models, every possible configuration. You have that in one format, you can validate it, you can auto-update it.
▶️ @00:10:02: There's a lot of data parsing and validation there that makes those features super useful. So that's one. I've also been really intrigued by the whole Rich ecosystem. I've always had a soft spot for making things look nice, both on the web and in the terminal. I haven't been able to play with it as much as I wanted, but yeah, there's some exciting stuff happening.
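The parse-and-validate pattern Ines describes can be sketched with pydantic. The field names below are hypothetical illustrations, not spaCy's actual config schema; the point is that settings parsed from a config file are coerced and checked in one step:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical settings schema; spaCy's real config system is more elaborate.
class TrainingConfig(BaseModel):
    dropout: float = 0.1
    max_epochs: int = 10
    optimizer: str = "Adam"

# Values as they might arrive from an INI-style config file: all strings/ints.
raw = {"dropout": "0.2", "max_epochs": 5}
config = TrainingConfig(**raw)  # "0.2" is coerced to the float 0.2
print(config.dropout, config.max_epochs)

try:
    TrainingConfig(max_epochs="ten")  # an invalid value is caught early
except ValidationError as err:
    print("invalid config:", err.errors()[0]["loc"])
```

One nice property of this approach is that a typo in a config file fails loudly at load time instead of surfacing as a strange error mid-training.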
▶️ Mike Driscoll: You mentioned the terminal. Have you checked out the Textual project yet?
▶️ Ines Montani: Yes, that's also really cool. Actually, my colleague Vincent, who you might also know from the Python community, had a mini project, which we could probably also put in the show notes: rebuilding the Prodigy UI in the terminal. Prodigy has a sidebar and the annotation interface where you look at the task and interact with it, and at least for classifying things, it looks exactly like it does in the browser.
▶️ @00:11:03: It was just a fun experiment, but super cool, and it shows what's possible.
▶️ Mike Driscoll: Yeah, that sounds cool. I'd love to have some links on that.
▶️ Ines Montani: Yeah.
How did your company Explosion come about?
▶️ Mike Driscoll: So I'm curious, how did your company Explosion come about? What's the origin story, if you don't mind?
▶️ Ines Montani: Yes. So it was really just meeting Matt, working together, and then really seeing that, hey, there is this potential. spaCy had just been released and had taken off, people were interested in it, and we saw there's more we can do here, so let's really focus on that full time. And in the beginning, we were actually quite small. I mean, we're still very small, about 20 people. But for a long time it was really just us, and then slowly one, two, three more people. And we also started out profitable, because we saw it's really important to start running a business early and see: does this idea we have work?
▶️ @00:12:02: Can we convert users from our open source library who like our stuff into paying customers? Is there something sustainable there? It was always super important to do that early and not delay it. That's also why, when we raised money for our SaaS product, which is currently in beta, we actually did that pretty late in the life of the company.
▶️ Mike Driscoll: That's awesome. I love hearing success stories with Python.
▶️ Ines Montani: Yeah.
spaCy is a natural language processing library that can be used in production
▶️ Mike Driscoll: Why don't you tell us a little bit more about what all you do with spaCy? What are some common use cases for it?
▶️ Ines Montani: Yeah. As I said before, the concept of spaCy has always been to be a library that can really be used in production. So it needs to be fast, it needs to be extensible, and it needs to have a good user experience, a good developer experience around it. And it's really built on the idea that you want to process text.
▶️ @00:13:00: You have text coming in, in pretty much any industry, and you want to extract structured information from that text: what the text is about, things like person names, company names, dates that are talked about, how things are related to each other, what's a verb, who does what. That sort of thing becomes really important if you want to understand text better. And the way spaCy approaches this is the idea that you pretty much always want a series of steps that you perform, a pipeline of different components that you apply to extract different things. They can interact with each other, and you can also mix and match techniques. So we do have neural network models that are good at solving the task and that you can train. But in a lot of cases, you might want to use some rules or implement some custom matching, which spaCy also provides really good support for, feed in some internal database you have, or link mentions of company names to an entry in a knowledge base.
▶️ @00:14:16: And all of these are components you can basically assemble into a pipeline. There are lots of different use cases from all kinds of industries. I mean, if you think about it, everyone has text. So you might have financial use cases; we have use cases in the medical field, extracting drug names and mentions from text. There's also a lot of stuff that is incredibly valuable within companies and often doesn't get much attention because it's not the super hot machine learning stuff. But there's so much in terms of internal company reports and making processes more effective, from making tech support work better to helping analysts who were previously doing the work manually.
▶️ @00:15:09: Those are all use cases that we have. So there's pretty much no industry that's not working with NLP these days. And for Prodigy, one important thing in general about our stack is that we've always focused on data privacy and allowing users to run things on their own hardware, because we think that's important. There's no benefit in making people upload stuff to us if they don't have to. That also means we have a really strong foothold in areas and industries where that's really important, like finance and healthcare. A lot of these use cases really require teams to build something in house.
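The rules-plus-pipeline idea Ines describes can be sketched with spaCy's rule-based Matcher, assuming spaCy is installed. The blank pipeline and the MONEY pattern below are illustrative, not from any real project; a production setup would load a trained pipeline and add trainable components alongside rules like this:

```python
import spacy
from spacy.matcher import Matcher

# A blank English pipeline needs no downloaded model; real projects would
# typically load a trained pipeline (e.g. en_core_web_sm) instead.
nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# Hypothetical rule: a currency symbol followed by a number, e.g. "$500".
matcher.add("MONEY", [[{"IS_CURRENCY": True}, {"LIKE_NUM": True}]])

doc = nlp("The invoice from Acme Corp totals $500.")
spans = [doc[start:end].text for _, start, end in matcher(doc)]
print(spans)
```

Rules like this one run alongside statistical components in the same pipeline, which is the mix-and-match approach described above.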
▶️ Mike Driscoll: Yep, that makes sense. Thank you for explaining that, because when I hear natural language processing, I don't think machine learning. For some reason, those words just don't gel with machine learning for me.
▶️ Ines Montani:
▶️ @00:16:01: Oh, really?
▶️ Mike Driscoll: It's good to hear.
▶️ Ines Montani: Yeah. I'm curious, what do you think of?
▶️ Mike Driscoll: I'm like, is that something where you take somebody's voice and you turn it into text and then you analyze the text? To me, that's what I think of. I don't know why, but when I hear those words, yeah, voice to text or something like that.
▶️ Ines Montani: Yeah. I mean, it also doesn't help that sometimes the boundaries are not very clear. A lot of the new generative stuff that everyone sees, like question answering and summarization, basically stuff like ChatGPT, also falls under the same umbrella, although the way it works is quite different. So actually, what I think is a very important distinction in NLP is between generative capabilities and predictive tasks. For generative, you get text in and then you generate some output. For predictive tasks, you get text in and you generate some structured information that's often machine-readable. In a lot of use cases, and I would say actually the majority of NLP in production today, these are components that are part of a larger system, where the output is then used in a database or in some other process.
▶️ @00:17:17: And that stuff is very important. That's actually also somewhere there's a lot of room for improvement, even with these larger and larger language models, because they add a lot of great capabilities for generating stuff, but there's a whole other level of: how do we take that generated output, and all that knowledge that's in there, and put it into the best possible structured format?
▶️ Mike Driscoll: Yeah, yeah. I keep seeing people talking about the smaller ones being more valuable since they'll be able to run those on cell phones and tablets, whereas the giant ones are really valuable for having a web app on top, because they can run it on AWS or whatever.
▶️ Ines Montani:
▶️ @00:18:04: Yeah. And also, I think there's the difference of: what do you want to run in production? What do you want to use during development? There are definitely advantages to the large models, but they know very little about the very specific thing you want to do, and a lot about the language. And there are a lot of interesting ways of taking the best of what you get from these really large models and putting that into something you're building that's a lot smaller and more specific. Because one great thing about ChatGPT is that, hey, it doesn't know what you want to ask it, and it can still produce a really good response. That's really exciting. But often use cases are quite specific: if you're working in a company, you know this is what we want to analyze, here are some constraints. And being able to constrain your system like that is really useful.
▶️ @00:19:00: Yeah, that's also why I see it moving toward models that don't have to be this large for everything. I think there's a lot of potential there for the future.
▶️ Mike Driscoll: Yeah, I agree.
What are some of the most notable applications that were created using spaCy?
So what are some of the most notable applications that you've seen that were created using spaCy or Prodigy?
▶️ Ines Montani: Yeah, I mean, it depends on how you define notable. One project I really like is one we can have a link to as well; we did a case study on it. It's the Guardian. They used spaCy and Prodigy internally for a lot of interesting projects. And what I liked about that one is that it really shows the importance of developing good guidelines for what you want to annotate and how you want to structure the project. They did that very well, because they all had a journalism background and were used to thinking about language. A lot of that actually matters, and you have to think about edge cases: how do you define what a quote is?
▶️ @00:20:00: Or some even more basic things. Everyone will know what a person name is, but if you want this in a structured format, should a title like "Dr." be included in the name? You immediately end up in all of these rabbit holes. And I think that was some of the most extensive work: not only training a model or building a project, but really thinking about how do we handle the data and ensure consistency. And they did some cool customizations. So that's a project I really like. Then there are some other ones I want to shout out because they're notable in the sense that they provide an insane amount of value but are internal and kind of unsexy. For example, there was a project recently with a financial services company where they're analyzing internal notes about trading. A lot of these things are incredibly valuable in the sense that, for a company and their projects, getting the model to run faster and more efficiently can really save millions of dollars.
▶️ @00:21:15: And that's also stuff I find cool. A small team at a large company was able, using spaCy, to get started, ship 30 models to production, and use Prodigy to constantly improve them. I met someone from the team, and they told me, hey, this has saved us tons of money and made our life so much easier. That's also a project I really like.
▶️ Mike Driscoll: Yep, that makes sense to me. Those would make me proud of my project or proud of my package.
Getting into programming and getting into development has been easier than ever before
How do you think all these new LLMs will affect your company, Explosion, if they affect it at all?
▶️ Ines Montani:
▶️ @00:22:02: Yeah, it's definitely very exciting. I think there are kind of two levels to it. One is that getting into programming and development has been easier than ever before, with models that can actually help you program. Assisted coding has allowed a lot of people who previously weren't really able to pick up the tools and do things themselves to get productive very quickly. So we see, hey, there are a lot more people who have a problem that previously maybe they wouldn't even have focused on or solved in their team, because they didn't really know how to, and they can now use something like Copilot or any of the other tools to help them program. And because spaCy has been around for so long, a lot of these tools are actually very good at writing spaCy code. It also finally paid off that we put a lot of work into ensuring backwards compatibility and keeping our APIs stable.
▶️ @00:23:06: That was a significant effort, and of course not because we had this in mind, but it now shows: ChatGPT is great at writing spaCy code, and you can ask it, hey, write me some rules for extracting US addresses, and it can do that. So it's kind of a meta level: you don't need ChatGPT to extract US addresses, but you can make it give you the code to do that. That's the one side: it allows more people to get into this stuff. And on the other side, of course, we also see a lot of new capabilities that we can use and integrate. One is the generative capabilities. That's something that finally works, and it's much easier to add, like if you want to summarize a text. Generation was never really part of spaCy; that's kind of out of scope. But there might still be a use case where, hey, you want to take a long text, summarize it, and then extract structured information from it in some way, or predict something.
▶️ @00:24:07: So that's very cool. And then, of course, using these models to help bootstrap annotation and the training of smaller, distilled models. Because before, a big problem was that even if you were ready to be hands-on and work on this, you needed at least, yeah, 40 hours of annotation in order to have enough data to train at least a decent model, or a rough model that you could improve. The problem with machine learning is, you start and there's kind of nothing, and then there's this long on-ramp where nothing happens and your model isn't learning. You don't know what's going on, and you can't tell: do I not have enough data, or do I have a bug in my code? This sort of cold start is a huge problem.
▶️ Mike Driscoll: Yeah, yeah.
▶️ Ines Montani:
▶️ @00:25:00: And that is something these large models are really good at. You really don't need to crowdsource any annotations. You can use a large language model at development time to help you create labeled examples for your specific problem. So you can take this very general knowledge, then correct it, because there might still be edge cases or stuff specific to your data that you want to look at. And very quickly you can create a smaller, very specific dataset, and using that you can train a model that's much smaller, maybe under a gigabyte, super fast, and runs entirely on your own hardware as part of the system. Having this sort of end-to-end workflow is something we find really exciting and that we'll continue working on in the future. I really imagine: hey, you can just look at your data, create some examples, and get a baseline.
▶️ @00:26:02: Here's how GPT-4 does out of the box. And then as you create more data, you'll see, hey, I've now beaten that accuracy, because that's definitely possible. Yes, it's very impressive what you get out of the box, but we're really interested in: if that's what we can do, how can we get further? How can we get better, faster, more efficient, more private?
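The bootstrap-then-distill workflow Ines describes could be sketched like this. Everything here is hypothetical scaffolding: `llm_label` stands in for a real call to a large language model, and the keyword heuristic inside it exists only so the sketch runs on its own:

```python
# Sketch: a large model pseudo-labels raw text at development time, a human
# corrects the edge cases, and the curated set trains a small specific model.
def llm_label(text: str) -> str:
    # Placeholder for prompting a large language model with the label scheme.
    return "positive" if "great" in text.lower() else "negative"

raw_texts = [
    "The new release is great and much faster",
    "Setup was confusing and the docs were outdated",
]

# Step 1: bootstrap labels with the large model.
bootstrapped = [{"text": t, "label": llm_label(t)} for t in raw_texts]

# Step 2: a human reviews each example (e.g. in an annotation tool) and fixes
# mistakes, yielding a small but high-quality training set.
curated = [ex for ex in bootstrapped if ex["label"] in {"positive", "negative"}]

# Step 3: `curated` would now feed the training of a much smaller model that
# runs entirely on your own hardware.
print(len(curated), "curated examples")
```

The key point is that the large model is only used at development time; what ships is the small model trained on the curated examples.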
▶️ Mike Driscoll: Yeah, yeah, that makes sense. I mean, a lot of that's above my head, frankly, but I think machine learning is super cool and really interesting. And you brought up that sometimes you start the machine learning process and you don't know what's happening. Has that changed over the years, so that it actually gives you some feedback and says, hey, I'm stuck, I'm broken? Or is it just a black box?
▶️ Ines Montani: It depends. Normally what you do is you train, you get a number at the end, and if that number isn't going up, yes, just like with debugging, you can learn what to do.
▶️ @00:27:09: I mean, you get some cryptic output. Just like in Python, you get some error, and when you first see it you're super confused, and once you've seen it a couple of times you're like, ah, I know what's wrong there. Actually, just today I helped debug a thing with this error of, oh, indices need to be integers. That can be incredibly confusing if you don't know that what happened is that my code is trying to access a string as if it were a dict, that sort of thing. I remember the first time I encountered it, it took me a long time. Now I'm like, ah, that must be it. And it's the same thing here: you kind of know what to look at, but there's still this long road where nothing really happens. If you have good intuition, yes, you can do that, but if you're just getting started, for example, if you haven't done this much and you have a great idea and want to train something.
▶️ @00:28:08: There were definitely a lot of roadblocks that would have made it difficult. And I think if you're trying to do this now, you can at least get a system that you can benchmark against. You can start out, define your problem, get sort of a baseline, and then you can go from there and make it better.
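The error Ines mentions is easy to reproduce. A minimal reconstruction of the string-vs-dict confusion and its fix, with illustrative data:

```python
import json

# The mistake: treating JSON text as if it were already a parsed dict.
record = '{"name": "Ada"}'  # still a string, never parsed

try:
    record["name"]  # a string only accepts integer indices
except TypeError as err:
    print(err)  # e.g. "string indices must be integers"

# The fix: parse the JSON first, then index the resulting dict.
parsed = json.loads(record)
print(parsed["name"])
```

The exact error message wording varies between Python versions, but the cause is the same: the value being indexed is a `str`, not a `dict`.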
▶️ Mike Driscoll: Yeah, yeah, that makes sense.
What currently excites you about the machine learning space right now
So I think my last question for you is: what currently excites you about the machine learning space right now?
▶️ Ines Montani: Yeah, actually I feel like I just.
▶️ Mike Driscoll: Anything that you haven't covered yet that excites you?
▶️ Ines Montani: Yeah, it just kind of came up. I'm excited about many things, and I talk about stuff I'm excited about. And I definitely think, yeah, what I just talked about: how can we think past just a chatbot or something?
▶️ @00:29:03: A chatbot or a dialogue system is what most people have in mind when they think of large language models, and what we want to do is go beyond that. What else can we use this for? Can we even use it on a more meta level, like using a large language model to help people be better at applying best practices when they're building machine learning systems? There's really a lot there that we haven't had the time to explore yet, so that's definitely something we want to do. And it's really nice to see that in general there's always been a big ecosystem and a lot happening in open source, which is nice because we can integrate with that, we can collaborate and work together. There's also a lot of movement on open source models, making things more available.
▶️ @00:30:06: And I do think what we see is that open source is still going to win, or at least be important. It's not all happening behind closed APIs. There's an advantage to working with models via an API, you can have the scaling effects, there's a lot there. But there's also a whole other area of things that people want to do. And it's not like the things people want to do and solve are magically changing. Even before computers, companies and people solved certain problems and did certain things; computers have made this easier, and AI is making it easier. There's a lot happening, and it's exciting, but at the same time it's important to take a step back and not get too distracted by a feed where every day something new comes out. It can be very overwhelming.
▶️ @00:31:11: That's also something I often hear from other developers: how do you stay up to date? How do you know what to pay attention to? And that is genuinely hard.
▶️ Mike Driscoll: It is. It's really hard because right now AI is hot, but five years ago the web world was the hot place to be. Everyone wanted to program websites.
▶️ Ines Montani: Yeah. For a while, crypto was hot, and then, yeah, that kind of died down.
▶️ Mike Driscoll: Yeah.
▶️ Ines Montani: There was this kind of audience. I remember when I started going to meetups or small AI conferences, there was a lot of this hype crowd of people who didn't really know anything about programming, but were all like, oh, AI.
▶️ @00:32:00: And some of these people also weren't very pleasant to interact with. Then there was this whole crypto thing, and these people kind of moved on to blockchain, and I guess now they're back in AI with large language models. But I don't know, we just program and do our work and build tools. That's kind of what we focus on.
▶️ Mike Driscoll: Yeah.
Thank you so much for being on my show today
Well, I think we've reached the end of my questions. I just want to thank you so much for being on my show today, Ines. I really appreciate it.
▶️ Ines Montani: Thanks for having me. It was very fun.
▶️ Mike Driscoll: Yeah, it was good to have you. I hope we can find some time to hang out again so you can tell me more about LLMs and increase my knowledge of machine learning.
▶️ Ines Montani: Yeah, absolutely. I'd love to. And, yeah, I'll send you the links for the show notes so that the listeners can check on all of the things I mentioned.
▶️ Mike Driscoll: Yeah, great. That's awesome. Well, thank you to you all.
▶️ @00:33:01: To all my listeners who are listening in: I'll have lots of fun notes for you to go check out, and links to read up on what she's been up to. Thanks again.
▶️ Ines Montani: Make sure to leave a review. It makes our day and fuels future episodes. Mike Driscoll, the Python show.