"The Engineering Culture is Unmatched" at Treasure Data
Carlo, a Staff Software Engineer, shares how Treasure Data’s latest AI initiatives are opening up unprecedented opportunities for both the company and his career.
Carlo Luis Espinosa is a Staff Software Engineer at Treasure Data, where he’s worked for over four years.
Why have I stayed? Well, the engineering culture is unmatched, in my opinion.
“I feel even if we’re growing,” Carlo said, “people still want to genuinely build stuff. Like cool stuff. That didn’t change, even if we are switching to AI.”
Initially, he was hired for the platform API team, but two years ago the company put together a small AI team to explore how LLMs could work with their platform.
We had this small team that worked on it . . . maybe less than five [developers]. We created a really nice product and it was well-received by customers, and it became one of the core products. That’s why we’re currently expanding the teams.
“I think that’s why I’m doing this interview!” he laughed.
How AI Is Making Things Better
“As you may know from the previous interviews with David and Tyler,” Carlo said, “Treasure Data is fundamentally a Customer Data Platform [CDP]. . . . We bring together different company data into a single unified platform, and on top of that platform we offer products that use their data to [help] them better understand it for their use cases.”
Now Treasure Data is finding new ways to incorporate AI into its products and platform, which not only reduces the technical burden on engineers and account managers but also helps customers interpret their data more easily.
“In my opinion,” he went on, “our Treasure Data [platform] is very powerful, but the learning curve is really high. And sometimes you need to be really good in SQL, for example, to really get the most [out] of your data, you know?
“With AI, that simplifies it a bit. The frontier models are really good in SQL, for example. . . . Generating a simple chart based on hundreds of tables, or multiple customer databases, it should be easier for AI to do that.
“That’s a very simplistic use case, but you can make it more complex, and I would say it really helps quality of life—for Treasure Data, the teams interacting with customers, and the customers themselves.”
Integrating the capabilities of “frontier” AI models—that is, AI models that push the envelope with the broadest scale and most advanced capabilities—into the Treasure Data platform has created enormous benefits for the company. But the company itself is not dependent on AI.
“That’s one of the strengths of our AI solutions,” Carlo explained. “There’s an AI wave right now, like everyone wants to ride the wave, but our company has a story. We didn’t do AI just for the sake of AI.”
He contrasted Treasure Data with some of the AI startups in the healthcare industry, which focused on integrating medical history and data with AI. These were abruptly undercut by the launch of OpenAI for Healthcare.
I think the biggest difference is that other companies, let’s say the really big tech companies, have lots of engineers, and they go all in on AI, but they don’t have the data behind it. For us, we’re the flip side. We have the data, so we can carefully invest in which AI path we’re going [down].
“Since we have our own product and our AI solution augments that, I think we’re in a good spot.”
How His Team Works
His team is currently working on Treasure Data’s AI suite. “As a simple example,” Carlo said, “we provide a platform to enable users to customize their own AI agents using frontier AI models.”
There are actually several AI teams at Treasure Data. “But for my team, for AI engineering, we focus more on the platform, so backend systems, you know?”
OpenAI and Claude have their own way of shaping the data and calling their APIs. Basically our system wraps around all of those, so you can plug and play any of those models.
“Plus,” Carlo added, “it bridges those LLMs to our Treasure Data microservices, inside the Treasure Data ecosystem.”
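The abstraction Carlo describes can be sketched as a thin routing layer. This is a minimal, hypothetical illustration of a plug-and-play model registry, not Treasure Data’s actual system; the `ChatModel` and `ModelRouter` names are invented, and a stub backend stands in for real provider adapters that would call the OpenAI or Anthropic APIs with their own request shapes.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ChatMessage:
    role: str      # "system", "user", or "assistant"
    content: str


class ChatModel(Protocol):
    """Common interface; each provider adapter shapes requests its own way."""
    def complete(self, messages: list[ChatMessage]) -> str: ...


class EchoModel:
    """Stub backend for illustration; a real adapter would call a provider API."""
    def complete(self, messages: list[ChatMessage]) -> str:
        return f"echo: {messages[-1].content}"


class ModelRouter:
    """Registry that lets callers swap any registered model in and out."""
    def __init__(self) -> None:
        self._models: dict[str, ChatModel] = {}

    def register(self, name: str, model: ChatModel) -> None:
        self._models[name] = model

    def complete(self, name: str, messages: list[ChatMessage]) -> str:
        return self._models[name].complete(messages)


router = ModelRouter()
router.register("echo", EchoModel())
print(router.complete("echo", [ChatMessage("user", "hello")]))  # echo: hello
```

Because every adapter satisfies the same `ChatModel` interface, downstream services never see provider-specific request formats, which is what makes models interchangeable.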
His team members have the opportunity to work both on features, and on research and development. “Especially since the generative AI field is ever changing, we try to get a good balance,” said Carlo. “We productize and do the operations on features that were already approved. But you can opt to work on R&D as well.”
Some people would say it’s innovator versus operator. Some people really love the innovation part, but it’s like they don’t focus as much on making it scale, or making it ready as a product. Their main focus is, ‘Will this work?’
“For me, I want to scale things,” Carlo said. “But at the same time, sometimes I work on R&D.”
Team members can easily swap back and forth between features and R&D whenever they like, though Carlo noted that members rarely switch back and forth every sprint. Instead, they usually commit to one or the other for a longer period of time.
The AI Challenges They Tackle
“It’s fast paced, right?” said Carlo, when asked about some of the AI-specific challenges his team faces. “It’s good and it’s bad. For example, in the normal software development life cycle, you have your product, you have your requirements, and the software development team will code that and release that.”
But in the AI space, it’s so fast. I think every three months something might change. Something you built five months ago might be obsolete. The best practice might be different now.
“So I would say that part is additional overhead, because on top of adding features for our AI platform, there’s that R&D layer as well. So you need to have that hypothesis. You have to make POCs to check if this really is the correct path. . . . Because of course you need to think about the roadmap going down, right? Like, is it future-proofed?
“I feel that the additional layer is really exciting and challenging,” he concluded, “because, you know, maybe five months from now, LLMs are not the way to go. Maybe there’s new tech.”
Carlo gave an example. “In the early days the way to go was RAG, retrieval augmented generation. For example, we have our data, we snapshot the data, and we vectorize and index the data. In the early 2020s, that was the way to go.
“But as the LLMs evolved, they can now generate really complex SQL. So you don’t need to snapshot and vectorize the data anymore, because LLMs can query the data themselves directly and it’s good code.”
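The shift Carlo describes, from vectorized snapshots to letting the model query data directly, can be illustrated with a small sketch. This is not Treasure Data’s pipeline; the `generated_sql` string is a stand-in for what an LLM might return, and an in-memory SQLite table stands in for customer data.

```python
import sqlite3

# Stand-in for LLM output; a real system would prompt a model with the schema.
generated_sql = (
    "SELECT country, COUNT(*) AS n FROM customers "
    "GROUP BY country ORDER BY n DESC"
)

# Toy customer table in place of a real unified data platform.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, country TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [(1, "JP"), (2, "US"), (3, "JP")],
)

# The generated SQL runs against live data: no snapshot, no vector index.
rows = conn.execute(generated_sql).fetchall()
print(rows)  # [('JP', 2), ('US', 1)]
```

The point of the pattern is that results are always computed from current data, whereas a RAG index answers from whatever was true at snapshot time.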
This leads into another challenge of working with AI: deciding when it’s called for. For example, Treasure Data customers sometimes want to know why, when they repeat a question to their AI agent, the agent displays results and insights differently. “So for us and the product team,” Carlo said, “you need to say to them, that’s how LLMs work. It’s all probabilistic. It’s not like programming where you get the same [results].”
If you want a one-time report, Carlo pointed out, using AI makes sense. But if you want a consistent dashboard of information, using LLMs means it won’t always look the same. “If you want a dashboard, build a dashboard the traditional way.”
“It’s usually another team who talks more [with customers] about that,” Carlo clarified. His backend team mostly deals with internal requests. “But we sometimes have to remind [our coworkers], because sometimes we get suggestions on solutions.” And not all of those suggestions are fully cognizant of either AI’s abilities or its limitations.
I feel that it’s on the engineers to say that. They need to articulate, ‘I don’t think that’s a good AI solution,’ or ‘That’s a solution that AI is for.’
Finally, they take careful precautions against the tendency of LLMs to hallucinate. “You need to have guardrails in your prompts. Our prompt engineers really set up the guardrail of, you know, ‘Do not hallucinate. Work with the data that you have.’ It’s still a challenge because, for example, if they get incomplete data they make stuff up.
“So it’s an ongoing challenge. There are multiple ways to fix that. . . . But for now, it’s all guardrails.”
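Prompt-level guardrails are often paired with cheap post-hoc checks. The sketch below is a generic illustration of that idea, not Treasure Data’s approach: a hypothetical system prompt instructs the model to refuse rather than invent, and a simple check verifies that any number the model cites actually appears in the supplied data.

```python
import re

# Hypothetical guardrail text placed in the system prompt.
GUARDRAIL = (
    "Answer only from the data provided. "
    "If the data is incomplete, reply exactly: INSUFFICIENT DATA."
)


def build_messages(question: str, context: str) -> list[dict]:
    """Assemble a chat request: guardrail as system prompt, data inlined."""
    return [
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": f"Data:\n{context}\n\nQuestion: {question}"},
    ]


def numbers_grounded(answer: str, context: str) -> bool:
    """Cheap post-check: every number the model cites must appear in the data."""
    cited = set(re.findall(r"\d+(?:\.\d+)?", answer))
    available = set(re.findall(r"\d+(?:\.\d+)?", context))
    return cited <= available


print(numbers_grounded("Revenue was 120", "region,revenue\nJP,120"))  # True
print(numbers_grounded("Revenue was 999", "region,revenue\nJP,120"))  # False
```

A check like this catches the exact failure mode Carlo mentions, models inventing figures when the data is incomplete, without requiring any change to the model itself.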
The Balance Between Management and IC
“I think one of the core strengths of Treasure Data,” said Carlo, “is the engineering is really good. I see it as the best in my opinion.”
And one of the reasons that’s the case, he feels, is that individual contributors [ICs] are highly valued. “I would say it’s been really IC-centric the past couple of years.”
“You still have management,” he clarified. “In my personal philosophy, it’s impossible [to have] no management. Even as an IC there’s a bit there, because you need to coordinate schedules with QA, for example, or other teams.”
Carlo would know—he worked as an engineering manager and tech lead at Rakuten. Now, at Treasure Data, Carlo is balancing his new management role with the AI work he’s passionate about. “I said to my managers, ‘I don’t want to be burnt out with peer management.’ I was actually saying that. ‘Are you sure? Maybe I feel I’m more valuable as an IC than a manager.’”
AI is everywhere now. I really feel there are just a couple of times in tech history that are like this. Not getting my feet wet as a coder or as an architect in this age is a waste.
“But they were like, ‘You can do seventy-thirty. If you still want to do it, it basically [means] you have more freedom, that you can choose [what to do] with your free time. If you want to do more architecture or R&D, then yeah, why not? As long as your teams are humming, you can do that.’”
His flexible arrangement is possible because Treasure Data’s org chart is so horizontal. “I feel the paths for EMs [Engineering Managers] and ICs at Treasure Data are really parallel,” he said.
It’s easy to switch between the staff and engineering manager [roles] because they’re in the same band. I would say it’s a parallel jump, not a promotional jump.
Where TD Is Going Next
As Carlo pointed out, he’s working in an almost unprecedented era of technological innovation. Not only has AI shaped his career, but it’s opened up new opportunities for the company as a whole.
“Treasure Data before [had] a startup mentality,” said Carlo. “Although it is, technically, a mid-sized company. But I think right now everyone, not only engineering, thinks about growth and scaling more. . . . The direction changed. The fact that we’re investing so much into AI means that we need to provide more products beyond our current offerings.”
Because I feel the Customer Data Platform space [has] a ceiling, maybe. . . . We need to think beyond CDP. That’s where AI and other solutions align, which is really good, because it basically opens up more possibilities on where the company can go, rather than if you just focus purely on CDP.
Why Work at Treasure Data
We asked Carlo why developers should apply to Treasure Data, and he enthused again about the top-notch engineering culture and how he’s inspired by those around him. “For example, do you know Sada? He made Fluentd. He’s a co-founder. . . . There are other Ruby contributors in Treasure Data as well, and there are people who [have authored] books. . . . You get to pick the brains of those people.”
I think that, in itself, is [why] I would recommend another developer to be here. Being around people like them accelerates learning, and raises the bar across the board.