In front of a packed house, we sat down with Ramp’s Head of Analytics Engineering and Data Science, Ian Macomber, to get his philosophy on how to build a high-performing data team. Ian shared how he creates influence and trust throughout Ramp, what he looks for in top-tier data candidates, why he skips A/B tests, whether AI is going to steal data jobs, and his take on data team ROI (our favorite topic).
Note: This interview has been edited for brevity.
Ian: Ramp had a valuation of $8 billion with a four-person data team before I got here. Our initial product roadmap had an extremely strong opinion on what Ramp needed to build — largely feature parity with competitors under one hood, with a high bar for quality, and a differentiated focus on saving money and time for customers. That was really successful.
Now, we're starting to get to the point where we’re running out of feature parity to build, and instead, we’re leveraging our wealth of data and distribution to build products no one else can. One example of this is our travel product. We embedded a data scientist from day one to build “Market Rates” before the launch of Ramp Travel. We wanted to show from day one that we are the experts in corporate travel costs. Increasingly, we are going to build our product roadmap by embedding data faster and earlier.
This will be the spiciest take I have: use a data science model or an A/B test as an absolute last resort. You can get 80% of the way there almost all the time with a six-line SQL case statement.
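To make the claim concrete, here is a hedged sketch of what a rule-based CASE statement standing in for a classifier might look like. Everything in it (the table, columns, thresholds, and risk tiers) is invented for illustration, not Ramp's actual logic; it runs against an in-memory SQLite database so it is self-contained:

```python
import sqlite3

# Hypothetical example: a rule-based "risk tier" built from a short
# CASE statement, standing in for what might otherwise be a model.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions (id INTEGER, amount REAL, merchant_category TEXT)"
)
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, 50.0, "software"), (2, 12000.0, "travel"), (3, 300.0, "crypto")],
)

# A six-line CASE statement that gets "80% of the way there":
rows = conn.execute("""
    SELECT id,
           CASE
               WHEN merchant_category = 'crypto' THEN 'high'
               WHEN amount > 10000 THEN 'high'
               WHEN amount > 1000 THEN 'medium'
               ELSE 'low'
           END AS risk_tier
    FROM transactions
""").fetchall()
print(rows)  # [(1, 'low'), (2, 'high'), (3, 'high')]
```

The point is not that the rules are optimal, but that they are inspectable, fast to ship, and good enough to act on while the team decides whether a model is worth the overhead.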
I think the phrase “measure everything” is actually really lazy. And the reason I say that is because measuring everything slows down velocity and covers up a lack of product thinking. I've been at orgs incentivized by making a proxy metric go up, where they say, “We want to keep a holdout part of the product exactly like it was a quarter ago, so we can have a sense of how much better it’s gotten.” For Ramp, we would never consider doing that.
We invoke experimental design or A/B testing only in situations where you truly cannot proceed without the results of the test. An example would be a large brand campaign — you need to be structured ahead of time to know what decision you're trying to make. Then you look at the estimated coefficient from the test to make a decision about how much to allocate to the brand budget next quarter.
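As a rough sketch of what "look at the estimated coefficient to make a budget decision" could mean in practice, here is a minimal geo-holdout calculation. All numbers are made up, and the two-standard-error decision rule is a simplification of real experimental analysis, not a description of Ramp's process:

```python
import statistics
from math import sqrt

# Hypothetical geo holdout for a brand campaign: per-region conversion
# rates in treated vs. control regions (all numbers invented).
treated = [0.052, 0.048, 0.061, 0.055, 0.050]
control = [0.041, 0.044, 0.039, 0.046, 0.042]

# The "estimated coefficient": average incremental lift from the campaign.
lift = statistics.mean(treated) - statistics.mean(control)

# Standard error of the difference in means (two independent samples).
se = sqrt(statistics.variance(treated) / len(treated)
          + statistics.variance(control) / len(control))

# Crude decision rule: only scale next quarter's brand budget if the
# lift is clearly distinguishable from zero (more than 2 standard errors).
decision = "increase budget" if lift > 2 * se else "hold budget"
print(round(lift, 4), decision)
```

The structure matters more than the statistics: the decision ("how much brand budget next quarter?") is specified before the test runs, which is exactly the situation where an experiment earns its cost.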
That's a really different scenario from using data to inform a product launch like Ramp’s international reimbursements. We knew Ramp needed international reimbursements, we knew what it needed to look like, so there was absolutely no reason to A/B test the launch. There is no counterfactual world where we wondered, “I wonder what Ramp is like without international reimbursements?” So, it didn’t make sense to run a clean experiment and deprive half of our customers of international reimbursements to see a lift. We rolled it out and didn’t look back.
It is not, which is a luxury. It's a hard question to answer, but I'll talk about why that is and what our approach is.
Typically, what I’ve experienced is: if you have five data people, you stretch them across all of the surface areas. The way that we build at Ramp is different. We say, “What is the number one area where we need to be world-class six months from now? Let's invest heavily in that, and ignore everything else for the time being.”
The first three data scientists we hired at Ramp were all on the Risk team. We needed to be world-class at risk, frankly, before we even needed to have a world-class product. We're still not as great at spending money on Facebook as I'd like, but a world-class product and risk team is where we invested. We picked those areas and embedded data people deeply. We don't necessarily have to answer questions about ROI because we aren't a service org.
When a new hire joins, we really just say: go figure out a way to have impact. It’s not that we don't estimate the ROI of the data team specifically, but we look at the product pod as a whole, and influence and impact is an expectation of everyone. We don’t just tell data people, "Do data things with your head down."
To give an example, we brought someone in whose first project was on CX. The first thing they had to do was get data out of two tools that didn't have Fivetran connectors. First, they had to self-teach some data engineering to build custom API scripts in Python + Airflow. The next step was bread-and-butter analytics engineering, dim+fact tables, presenting analysis in Looker. Finally, to make a recommendation on how to staff our CX team next year, they had to think about:
What does Ramp’s growth look like?
What do Ramp’s incident rates look like? What do our CX deflection rates look like?
Given that, what does our CX hiring scenario planning look like?
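The scenario-planning step above can be sketched as a back-of-the-envelope headcount model. Every input here (growth rate, incident rate, deflection rate, tickets per agent) is an invented placeholder, not Ramp data; the point is just how the three questions combine into a staffing number:

```python
import math

# Hypothetical CX staffing model (all parameters made up for illustration):
# ticket volume scales with customer growth, shrinks with deflection,
# and each agent can handle a fixed weekly ticket load.
def cx_headcount(current_customers, quarterly_growth, incident_rate,
                 deflection_rate, tickets_per_agent_week, quarters=4):
    """Project the agents needed each quarter for the next year."""
    plan = []
    customers = current_customers
    for q in range(1, quarters + 1):
        customers *= (1 + quarterly_growth)
        weekly_tickets = customers * incident_rate * (1 - deflection_rate)
        agents = math.ceil(weekly_tickets / tickets_per_agent_week)
        plan.append((q, agents))
    return plan

plan = cx_headcount(
    current_customers=20_000,
    quarterly_growth=0.15,       # assumed growth per quarter
    incident_rate=0.05,          # assumed tickets per customer per week
    deflection_rate=0.40,        # assumed share resolved by self-serve
    tickets_per_agent_week=120,  # assumed agent capacity
)
print(plan)
```

The interesting part is that the engineering work (API scripts, dim+fact tables) only exists to make the inputs to a model like this trustworthy.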
All of these questions use a very different part of the brain than what's needed to read DAGs and write Airflow and Python.
But all of this together is impact, and our expectation is that people can pick up what’s needed at the time to move their project forward. There's not much at Ramp where you can parachute in for two weeks, understand everything about a specific area, build a classifier model and leave. It takes a long time to get up to speed, and the only way people are effective is if they can create that cross-functional influence.
In my career, the piece of feedback I remember most is: “How do you elevate your influence and communication to the caliber of your analysis?”
I think data people are going to be more career-capped and constrained based on their ability to communicate, more than technical ability. That's increasingly true in the AI world where the technical ability comes a little faster. At Ramp, if we’ve built something, we spend the time to think:
How can I distribute this?
How does this connect to the company's initiatives?
What do senior leaders care about?
How can I attach my work to something that will move a needle they care about?
How can I make sure that an action is taken based on the results of my work?
That's a lot of what we've been trying to push with the team.
AI will not take our jobs. It might take a few jobs. There will always be more questions, because as you make it easier to answer questions, the cost of asking questions goes down. The remaining ones that make it to the data team will be the more complex ones. We’ve seen this through technology over the last 20 years.
The number one thing I look for is curiosity and passion. My job is to push out the total universe of everything our data team can do and accomplish — and if I’m the high watermark on all things, we’re in a lot of trouble. I need to create an environment where individuals can quickly learn topics and diffuse that knowledge to the team.
This means I need to have people that are very passionate on our team, across a variety of interests. There are some people who are super opinionated about devex, and there are others who are really opinionated about how to present data.
If, after an interview call, I am confident that after this person has been at Ramp for a month, everyone will go to them to learn more about “their thing,” and that they will be able to pick up any tool in this space, be excited about it, and teach everyone about it — that's the most important signal.
Narrow the focus even tighter, and put people on the highest-leverage projects. If you are helpful, people will ask you more questions, which is a good thing. But it also means that you will need to spend more of your day saying no to people, and being eloquent about communicating why you’re working on “the most important thing” instead. Being very precise in public about what you're working on (and what you're not) is a great sign of seniority.