

How do engineering leaders balance the pressure to move fast with the responsibility to ensure quality? In this podcast episode, host Maulik Sailor sits down with Matthew Watts to explore the evolving role of AI in shaping engineering productivity, mentorship, and leadership in the age of automation.
They discuss the critical role of trust in AI, the risks of over-reliance, and how leaders can embrace AI without losing the human elements of creativity, judgment, and mentorship.
Tune in to hear Maulik and Matthew’s conversation on balancing innovation with responsibility and how AI is reshaping the future of engineering.
Maulik Sailor (00:10)
To those who are joining us today: I am Maulik Sailor,
founder and CEO of Notchup. Today we have an interesting guest whom I know from some early validations of Notchup. I happened to connect with him, we spoke about his startup and what I'm trying to build, and I had some amazing feedback from the guest I'm going to introduce in a second. More recently, he has gone on to do his second startup. Second or third? I think it's probably third,
if I'm not wrong, and he has many years of tremendous experience building and leading engineering functions across multiple startups and scale-ups. Without wasting too much time, let me introduce Matthew Watts, founder of, I forgot, sorry, Matrixia, if I pronounced that right. Yeah, yeah, yeah.
Matthew Watts (00:56)
Matrixia, yeah, that's right. That's right. Matrixia. Thanks, Maulik.
It's a pleasure and yeah, we can get straight into it as soon as you want.
Maulik Sailor (01:04)
Yes, cool. I think that's pretty good. Matthew, let's just start with your background and your introduction, right? For our audience, it would be great to know your career journey so far, up until the founding of Matrixia.
Matthew Watts (01:17)
Of course, of course, Matrixia, it's good. So prior to Matrixia, I was the Chief Technology Officer as well as co-founder of Lendflow, which is ultimately an SMB marketplace, one of the leading marketplaces in the United States. Prior to that, I was the early CTO of EasyVan. And before that, I worked in other procurement-level roles, building up to the problems that we solve here at Matrixia.
Hopefully that kind of gives us a good highlight.
Maulik Sailor (01:44)
Yeah, that's pretty good. Interestingly, I have also worked within the lending space a long time back. That was probably one of my early... I wasn't the founder, but I was the first person on the ground, so I was employee number one, or probably zero. I was there before the payroll was actually set up. We built this whole platform out for, like, buy-to-let
Matthew Watts (01:59)
Mm-hmm.
Maulik Sailor (02:07)
and commercial mortgages here in the UK, and the business is doing pretty well. I hope they go IPO very soon so I can liquidate some of my holdings. I really hope so. Anyways, let's dig into this topic. Today, like everybody else, everybody is getting more and more excited about AI agents and AI tools, and I think it's for the right reason. There's a lot of noise, there is a lot of
Matthew Watts (02:26)
Mm-hmm.
Maulik Sailor (02:31)
false positives, right? But generally, with AI... like, if you think about the B2B SaaS wave, it was all about changing your CapEx to OpEx and unlocking your P&L, right? Reducing P&L overheads, allowing you to be more dynamic, more variable, right? That's what happened with B2B SaaS startups. But with the AI wave that is happening right now,
it's all about productivity gains, right? There are use cases where AI can do something different than what you would normally be doing, but the majority of the tooling or the platforms are designed to either accelerate the manual work you are already doing or improve the quality of the work you are already doing, right? It's not about coming up with completely out-of-the-blue workflows or processes, right?
Matthew Watts (03:05)
Mm-hmm.
Maulik Sailor (03:23)
In some cases, you still do that, right? But the majority of AI tools are about improving or accelerating the work you are already doing, right? As you understand, it's all about productivity gains. But oftentimes, those productivity gains are really difficult to measure. You don't really know whether these AI tools are actually adding any value to your operations or not, or whether they're just increasing your costs, or just making some of your workflows, maybe in some cases, a little bit more complicated, with more overheads and all. So instead of improving productivity, they might be degrading productivity. So that's the topic we're going to talk about. But with your recent experiences, let's start with an understanding of what kind of tools you have recently used, or are familiar with, that
Matthew Watts (03:52)
Mm-hmm
Maulik Sailor (04:09)
you think could be relevant for businesses in general, but more on the technology side. My background is technology, your background is technology. So within the, you know, the CTO org, what are some of the developer productivity tools, or something like that, that are more relevant or more useful in your opinion?
Matthew Watts (04:13)
Mm-hmm, yeah, for sure.
Yeah.
For sure, for
sure. I would say... I categorize them into two things, right? There's something that we call a creator and a quality gate. And what these two different things help us do is hold the line before anything merges into an upper environment, or outside of our development environment, so that those things can then be surfaced later on, either through a dashboard or other tooling, to hold us accountable.
Maulik Sailor (04:48)
Mm.
Matthew Watts (04:54)
And so what are these creator and quality gates? Well, one of them is like Cursor, right? Which is an AI-assisted editor that helps us create templates of the code that we're working on. It helps us do multi-file changes, safer refactors. But we also want to couple that with an automated quality gate, which can really help explain things very explicitly
as to whether or not something should be moving up toward production. So when it comes down to tooling, you can choose what you want to use, whether it's GitHub Copilot, Cursor, Claude. There are so many services releasing right now. But I think ultimately, as long as you have that creator element and something holding it accountable at the end of the day before it moves, that's where the majority of your advances are going to come from. Because just like in every tech organization, it has to follow these processes. And so the smaller you can make those changes and the faster you can land them, the more trust and productivity you'll have in production.
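To make the creator/quality-gate split concrete: a quality gate in this sense can be as simple as a scripted pre-merge check. A minimal sketch in Python, where the size threshold and test command are illustrative assumptions, not anything named in the episode:

```python
# quality_gate.py -- a minimal pre-merge "quality gate" sketch.
# The threshold and commands are hypothetical; a real gate would run in CI.
import subprocess
import sys

MAX_CHANGED_LINES = 400  # assumption: small batches are easier to review safely

def changed_lines(base: str = "origin/main") -> int:
    """Count lines added plus removed versus the base branch."""
    out = subprocess.run(
        ["git", "diff", "--shortstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    # --shortstat output: " 3 files changed, 120 insertions(+), 40 deletions(-)"
    nums = [int(tok) for tok in out.split() if tok.isdigit()]
    return sum(nums[1:]) if len(nums) > 1 else 0

def tests_pass() -> bool:
    """Run the test suite; the gate fails closed if tests fail."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

if __name__ == "__main__":
    if changed_lines() > MAX_CHANGED_LINES:
        sys.exit("Gate failed: change is too large to review safely.")
    if not tests_pass():
        sys.exit("Gate failed: test suite did not pass.")
    print("Gate passed: change can be promoted to the upper environment.")
```

In practice a check like this sits in CI so that nothing an AI (or a human) creates moves to an upper environment unless the gate passes.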
Maulik Sailor (05:57)
Okay, cool. Now you talked about this creator stuff, right? And generally, a lot of this generative category of AI tooling is about creating, you know, generating something new. And certainly, in my opinion, people are really getting excited about the generative capabilities: hey, look, I give you this prompt, and it generates the whole video for me, the whole audio for me, or even writes code for me, right?
Matthew Watts (06:09)
That's right.
Mm-hmm. Mm-hmm.
Maulik Sailor (06:22)
But apart from that, you also have plenty of other AI capabilities, for example, pattern detection or predictive analytics, where you can identify a growing threat or... sorry, my throat is just a little bit rough today. But you can detect vulnerabilities or attacks on your system and so on.
So just at a very high level, across the entire AI landscape, which segment are you most excited about?
Matthew Watts (06:48)
For us, it's AI in general through large language models. The generative AI component can be leveraged in multiple different areas using not only automations, but very clear definitions, contracts, if you like, of how a specific piece of functionality needs to perform. And those are the things in an engineering organization that really move the needle: really slicing things down into pieces we can define clear contracts for, so that we do all of those workloads with artificial intelligence helping in those areas, and leave the strict rollback criteria, the higher-end criteria, to the actual senior engineers so they can contribute there. So it is all generative AI, yes, but we control how that generative AI works inside of our ecosystems, right?
For example, if you're doing a simple refactor of a function and you know what the output of that function should be, that's something simple AI can take on. There's no reason for you to allocate a day or two of a person's time to do that when you know the contract, you know what the output of that function is. So ultimately you can get AI to refactor it for you and run it. You can use systems like... at Lendflow, we used an open-source library called BAML. Very, very good at this contracting
Maulik Sailor (07:40)
Yeah.
Matthew Watts (08:07)
component, which is basically: is this working in the desired way before it goes to production? And it uses the language models for that. It's a very lightweight version of that contract, yeah.
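The contract idea Matthew describes can be sketched without any particular library: if you know what a function's output should be, an AI-proposed refactor can be checked mechanically before it merges. The functions and test cases below are hypothetical illustrations, not BAML's actual API:

```python
# contract_check.py -- verify an AI-suggested refactor against a known contract.
# Both implementations and the cases are invented for illustration.

def slugify_original(title: str) -> str:
    """The existing, trusted implementation."""
    return "-".join(title.lower().split())

def slugify_refactored(title: str) -> str:
    """The AI-proposed replacement (imagine this came from Cursor/Copilot)."""
    return "-".join(word.lower() for word in title.split())

# The "contract": for every input we care about, outputs must agree.
CONTRACT_CASES = ["Hello World", "  spaced   out  ", "Already-Slugged"]

def contract_holds() -> bool:
    return all(
        slugify_original(case) == slugify_refactored(case)
        for case in CONTRACT_CASES
    )

if __name__ == "__main__":
    # Only promote the refactor if the contract is satisfied.
    print("contract holds:", contract_holds())
```

BAML applies a similar idea at the language-model boundary, checking that model output matches a declared schema before it flows onward.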
Maulik Sailor (08:16)
So switching our focus back to technology, and setting up your engineering and product and data functions and everything else: within the whole CTO stack, what do you think? Take the sections of the whole technology management job, like writing code, maintaining code, adding and removing people, you know, tooling and processes and all that. Which segment is largely conquered by different kinds of AI tools, right? And which segment do you think AI still isn't able to handle? I'll give you an example. For example, credentialing. You know, when you're hiring somebody, credentialing a candidate, like, you know, checking for technical
Matthew Watts (08:53)
Hmm.
Maulik Sailor (09:01)
skills and all, I think AI can do a pretty decent job. But what it's not good at is understanding whether the person is genuine in their responses or not. It can flag a little bit, but not to a very good level. And right now there's a whole AI-versus-AI battle going on in that space. So on one side AI is doing a good job, but on the other side it's not.
Matthew Watts (09:11)
Mm-hmm.
Yeah.
Maulik Sailor (09:23)
That's just one example, but overall, within the whole engineering leadership or management stack, what do you think? Where is there a good impact from AI?
Matthew Watts (09:32)
That's actually
a really easy answer. So for me, with the experience I have, it's trust. It's the one thing with AI... I wouldn't necessarily say it's the fault of the AI, but more the implementation. Managers in engineering organizations need decisions that are predictable, explainable, and replayable at any time, so that the same inputs
produce the same outcome. Without trust, AI stays a sideshow, where we'll use it for a few things. But with the trust component, it becomes the path to production, where teams rely on it because they can see the reasoning and rerun it. And so ultimately, having these contracts in place, like I was telling you previously with BAML and how those things work, means that the permissions are correct, personal data isn't leaked, operations are safe to retry, those kinds of things. And the more small tasks you can scale, the more you can get to the bigger tasks. You're basically getting the enterprise into a position where it can scale artificial intelligence in a way that's reliable.
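The "predictable, explainable, replayable" property can be made concrete by logging each AI decision alongside a hash of its inputs, so the same inputs can be re-run and compared later. A minimal sketch, with the record shape as an assumption rather than anything described in the episode:

```python
# decision_log.py -- sketch of a replayable AI decision record.
import hashlib
import json
from typing import Callable

def record_decision(inputs: dict, decide: Callable[[dict], str]) -> dict:
    """Run a decision function and keep enough to replay and audit it."""
    fingerprint = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return {
        "input_hash": fingerprint,  # same inputs always yield the same hash
        "inputs": inputs,
        "outcome": decide(inputs),
    }

def replay_matches(entry: dict, decide: Callable[[dict], str]) -> bool:
    """Re-run the decision on the stored inputs and flag any drift."""
    return decide(entry["inputs"]) == entry["outcome"]

if __name__ == "__main__":
    # Deterministic rule standing in for a pinned, temperature-0 model call.
    approve = lambda req: "approve" if req["score"] >= 0.8 else "review"
    entry = record_decision({"score": 0.9, "user": "demo"}, approve)
    print("replayable:", replay_matches(entry, approve))
```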
Maulik Sailor (10:38)
Okay, so, you know, talking about AI again: oftentimes people are talking more about speed, you know, the speed at which you can get things done. Generally, the quality of the work you produce could be better as well, though that's subjective; sometimes you can have AI hallucinating and producing incorrect results, right? And that ties back to trust: okay, you trust your AI to a certain level, a certain threshold, but then
in the beginning it might be overperforming, and over a period of time it may degrade, or there might be a false negative, where it kind of went down and you didn't detect that. So the triangle is basically speed, quality, and trust. But trust is really what drives the other two. What do you think
Matthew Watts (11:05)
Mm-hmm. Mm-hmm.
Maulik Sailor (11:22)
is the one... I mean, I know the answer already, but what do you think is the one thing that leaders should really focus on more?
Matthew Watts (11:28)
I think ultimately it's changing the conversation to refactor work into smaller batches instead of one much larger batch. We've all been inside these PR queues where somebody's changed 13,000 lines of code, right? That's a nightmare for me to review. And it's the same thing for an AI, right? It's just an unrealistic expectation. Cut it down into much smaller work, work that has a
Maulik Sailor (11:36)
Mm.
Matthew Watts (11:57)
contract bound to it, and we can enable trust and scale in the org, and we can get those things done faster, instead of having to wade through all these big massive requests coming through. It's the same thing. So even something as small as reviewing user logs, for example, which you have to do for compliance every quarter, those kinds of things are great for AI. But ultimately you want to be careful where that AI is hosted.
Maulik Sailor (12:17)
Mm.
Matthew Watts (12:23)
Again, if it's a compliance thing, you want to make sure it's locally hosted versus using OpenAI. But ultimately, that's how I would put it: start small, use those automations and AI to solve the trust problem there, then scale up. Yep.
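As a sketch of that "start small" advice applied to the quarterly log review example: split the logs into small, independently auditable batches, with the classifier stubbed out so it could be swapped for a locally hosted model where compliance requires it. The markers and function names below are hypothetical:

```python
# log_triage.py -- small-batch log review, per the "start small" advice.
# classify() is a stub standing in for a locally hosted model call.

SUSPICIOUS = ("failed login", "permission denied", "export")

def classify(line: str) -> str:
    """Stub: replace with a locally hosted model for compliance-sensitive logs."""
    return "flag" if any(marker in line.lower() for marker in SUSPICIOUS) else "ok"

def triage(lines: list[str], batch_size: int = 50) -> list[str]:
    """Review logs in small batches so each pass is easy to audit and retry."""
    flagged = []
    for start in range(0, len(lines), batch_size):
        batch = lines[start:start + batch_size]
        flagged.extend(line for line in batch if classify(line) == "flag")
    return flagged

if __name__ == "__main__":
    sample = ["user alice logged in", "FAILED LOGIN for bob", "report export by eve"]
    print(triage(sample))  # -> the two suspicious lines
```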
Maulik Sailor (12:36)
Yeah, right now,
what kind of tooling are you using, or planning to use, for your company? Is it more about AI-supported code generation? Would it be AI DevOps? I read about an AI DevOps engineer, so instead of finding a real DevOps engineer, you just hire an AI one. There's a platform that allows you to prioritize your user tickets or your product roadmap based on different signals and all. There are so many different kinds of tools; for every single type of task, I think there's an AI tool for it. Can you name some of the tools that you are planning to use, or already using, in your organization?
Matthew Watts (13:13)
I would say... even if we just use hypotheticals: I wouldn't, as an executive, choose to completely automate away a role or a business function. Instead, I would want a human who has the ultimate say, who is experienced, to work with AI. So have AI complement their job. Here's a scenario, which is realistic and could happen: if AI makes an assumption and makes a mistake in production, your business is dead.
Maulik Sailor (13:21)
Mm.
Matthew Watts (13:41)
It's good to have a human in front of it who can put safeguards and guardrails in place to prevent that from happening. But ultimately, when we're running a business, using automation platforms like Zapier and n8n is the first step to getting that completed, right? After that, you can layer AI on top of those frameworks to perform those tasks, irrespective of what they are. Are you scanning through your emails and triaging
Maulik Sailor (13:41)
Mm-hmm.
Matthew Watts (14:08)
which emails you should be responding to? We all know how long email takes us, so that's a good first pass. Yes, there are services that do that, so ultimately you could be leveraging those. From an engineering perspective, use Cursor, those creators, to really scaffold out what you're working on, so instead of starting at 0%, you're starting at 60%. I wouldn't trust Cursor to build out complex functions for me, especially mathematical functions. That's something a human has to do, but AI can help them get to that point,
and then you secure contracts around that process. So tooling really differs from organization to organization. But I would say: mostly think of that creator and think of that quality gate, because those two things, irrespective of what you use, will protect the business. Yeah.
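The "human with the ultimate say" pattern Matthew keeps returning to is essentially a human-in-the-loop guardrail in front of any automated action. A minimal sketch, where the risk flag and approval callback are illustrative assumptions:

```python
# guardrail.py -- human-in-the-loop approval before an AI-proposed action runs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    risky: bool  # e.g. touches production, money, or personal data

def execute(action: ProposedAction) -> str:
    return f"executed: {action.description}"

def run_with_guardrail(
    action: ProposedAction,
    human_approves: Callable[[ProposedAction], bool],
) -> str:
    """Low-risk actions run automatically; risky ones wait for a human."""
    if action.risky and not human_approves(action):
        return f"blocked pending review: {action.description}"
    return execute(action)

if __name__ == "__main__":
    deploy = ProposedAction("apply AI-generated migration to production", risky=True)
    # Stand-in for a real approval step; here the reviewer declines.
    print(run_with_guardrail(deploy, human_approves=lambda a: False))
```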
Maulik Sailor (14:52)
Okay,
cool.
So when it comes to engineering leadership, you have the technology side, where you need to create and build a piece of technology, right? And support it. And then you have the other side: you know, hiring people, nurturing your talent, upskilling them and all. On which side do you think AI can have the better impact?
Matthew Watts (15:13)
Both, I think, to be honest. I think there is plenty of potential for AI to become a mentor for engineers, to really help them understand that maybe they're concentrating on a specific area of a class they've been working on and have neglected another area. It's really good from a mentorship perspective there, where it could be providing pointers on PRs as they come through, to help them get better as engineers at problem solving.
Maulik Sailor (15:16)
Mm.
Matthew Watts (15:39)
And it's the same in the other direction. I think ultimately, as long as there are guardrails in place for the AI, then it's great. Then it's a good thing.
Maulik Sailor (15:48)
Okay, cool. On the people side, right? You touched on mentoring and all, right? Now there's a wider debate happening in the industry, where particularly the beginners and the freshers, you know, are finding it really difficult to get into the job market, because they are competing against AI agents and all, right? And a lot of businesses are making decisions: should we deploy an AI tool, or should we hire a bunch of graduates, train them up, and hope they will perform? And we were just talking about AI being able to nurture and mentor these junior talents coming through.
Matthew Watts (16:17)
Mm-hmm.
Maulik Sailor (16:27)
Do you think there is a potential risk there, where the new talent coming through, the new generation of engineers, might become so dependent on these AI agents or AI coding tools, whatever they are, that they end up becoming slaves to them rather than masters of the agents?
Do you see that?
Matthew Watts (16:47)
There's
always going to be that risk, because you are taking critical thinking away from the process, which is exactly what you're doing. Engineering five years ago was a very different field to what it is today, where the only resources you had were Stack Overflow and whatever you learned in your degree, and you were there to piece things together. Things have moved a lot faster now, and we're concentrating less on the bootstrapping, the scaffolding
Maulik Sailor (16:50)
Yeah.
Matthew Watts (17:15)
of coding right now, because AI is really good at that, and there's no reason for a human to be good at that, because it's just the same thing over and over again. But the thing we really excel at is the critical thinking side: from a business perspective, how we can protect the user, how we can impact users, things that maybe AI is not as good at. Though you can also make AI good at that with the right contracts. So it depends how far you go, but it's all ultimately about
Maulik Sailor (17:17)
Mmm. Mmm.
Mm. Mm.
Matthew Watts (17:43)
how fast you want to get to production, and also how safely. So I think yes and no. It has the potential of being detrimental, but I think at the same time there is massive opportunity there for us.
Maulik Sailor (17:46)
Okay. Yeah.
What would be your tips for somebody new, just graduating now and entering the job market? What would you recommend that person do to upskill, or to do things that will help him or her be a post-AI engineer?
Matthew Watts (18:12)
That's a really good question. I would say it's more of a mental model for me. The people that I've seen be really successful in this field... Shreyas Doshi is a really good product manager who used to sit in as an advisor for us at Lendflow. And he said very clearly that it's people of high agency who are going to be successful. Somebody who, instead of getting stuck on problems, finds a solution
to solve their problems, right? Somebody who has that high agency and is able to use a breadth of tools, not just stuck in one set. Those people, the problem solvers, we're in a completely different era now because of AI, we just have more resources available to us, they're the people that are gonna be really successful. And the people that are stuck in a single programming language or a single interface are gonna be the ones that have issues, is what I think. Facebook used to have this poster across the whole campus, which I loved,
which said: nothing at Facebook is someone else's problem. So they reinforce that fixer mindset. Yeah. Yeah.
Maulik Sailor (19:10)
Yeah, that's good.
Yeah, that's
actually a very, very good statement, right? That it's our problem, right? You know, that just reminded me of something. I can't remember exactly, but there was an event I attended many years ago, and there was this guy, some product manager from Facebook, talking about something where he had printed the whole profile of a person, like, you know, some real user. They printed the whole profile.
Matthew Watts (19:16)
Mm-hmm.
Maulik Sailor (19:37)
And the whole profile was, like, you know, printed on connected pages, and it was like 100, 200 meters long. And then he dragged that whole long strip of paper into the meeting and was talking about the problem there: how would you really preserve this history for this person, and so on? You know, I can't exactly remember what it was, but it was something crazy that I heard once. And, you know, I really wondered how Facebook, Apple, and Google are the ones who really groom these product managers, right? Those three really embody, you know, the product thinking, the user-first thinking. Anyways, I have an interesting question just on that. Hypothetically, right? Just imagine this scenario. So you have the possibility of using AI agents, you know, like...
Matthew Watts (20:07)
That's right.
Maulik Sailor (20:24)
Cursor or something else, where you give the instruction and they'll write the code for you end to end. Versus: you have a real person who will do the job, but maybe a little bit slower, maybe needing a little bit more guidance on getting the job done. But that person could be complemented with an AI personal agent or personal assistant, which replicates the user's way of working
and can produce similar output, or at least get the low-level mundane stuff done on behalf of the person. So you, as an employer, have a choice. Do you go with a generic agent, which, regardless of the kind of employees you have, will always perform at the same level? Or would you go with an AI clone of your employee, which will perform differently depending on that person's background and experience?
Matthew Watts (21:12)
It depends. If I were to rephrase your question to say there was a contract behind everything it was doing and it was guaranteed, I would go for the AI, for two reasons. The AI is not sleeping, so it's going to be doing this the whole time, and everything the AI is responsible for is bound to a contract. So as long as there's an explicit response to that, I would trust the AI probably a little bit more. But there's the human component of things, which is understanding users more empathetically. So it depends on the role, right? It really does depend on the role. My answer is yes, I would go for AI, but in specific situations I wouldn't want AI doing what we're talking about, right?
Maulik Sailor (21:54)
Yeah,
but on that AI, would you trust a generic AI, or would you trust an AI that is trained more to a particular user or a particular engineer?
Matthew Watts (22:05)
I would definitely say that the trained, context-aware nature of having something trained on a specific task is infinitely better. Yeah, for obvious reasons. I mean, unless you go into artificial general intelligence, AGI, and then that's a very different scenario. But I think, as things stand today, having a
Maulik Sailor (22:09)
Mm.
OK, cool. All right.
Mm.
Matthew Watts (22:30)
DeepSeek model trained on my corporate information inside of our repos is better than a generic AI that hasn't been trained on the repo.
Maulik Sailor (22:37)
Yeah,
yeah, totally. Right. Anyways, you know, we've almost had half an hour talking about this. And before we start wrapping up, I would love to know from you, given your YC background and all: are there any YC or non-YC companies, maybe at seed or Series A stage,
that you're really, really excited about.
Matthew Watts (23:03)
I've been busy building a lot. But I'll tell you, there's one company that I'm super excited about, and that's BAML, the people I was telling you about, which is the interpretation layer, the workflow layer, between the language model and the programmatic layer. So if you have a chance, go check them out. They're doing phenomenal work. And yeah, that's probably the number one area I'm super excited about right now. Who else?
Who else at YC or in the industry? I think that's definitely the way I would go, because really, the communication between AI and the programming interface is something that we need to solve for pretty quickly, and they're doing phenomenal work there. Yeah.
Maulik Sailor (23:46)
Okay, cool, wonderful. Anyways, you know, Matt, that was really insightful from you, talking about, let's say, AI within technology and engineering practices, right? So thanks a lot for your time today. I would love to wrap up this conversation, but I'm sure we'll have a lot more conversations, ideally in person. I hope to see you very soon when I come to SF in the near future. You're in Seattle, right? If I'm not wrong.
Matthew Watts (24:11)
That's right. Yeah.
I'm up in Seattle, that's right. But I'm down in San Francisco quite frequently.
Maulik Sailor (24:17)
Yeah.
Okay, cool. That would be wonderful. You know, I would love to see you in person and have a coffee or a beer. I don't know what you prefer, but I would love to do that. Both coffee and beer together. Yeah. Hopefully soon. Yeah. Thanks a lot, Matt. And to our audience: I think it's about time that we wrap up. If you want to listen to more of these kinds of talks, then do follow our
Matthew Watts (24:28)
both at the same time. Yeah, there you go. It's been great.
Maulik Sailor (24:44)
podcast on YouTube or Spotify, whichever you prefer, or you can sign up for our newsletters. If you're a tech talent looking to upskill yourself and achieve your career goals, just sign up at Notchup.com as a talent. We have quite a few modules for you around that: getting the right job, getting hired into the right teams, understanding how your background and your skills are relevant to the company or project you're working on. And if you're a CTO looking to automate your team ops
for higher engineering productivity, just reach out to us for a demo. And that's it for today, guys. Thanks a lot, Matt. Once again, a really wonderful podcast today.