How System1 sets its data science team up for success

by John Siegel
June 21, 2018
Photo via System1

When the data science team at System1 first began to conduct tests designed to unlock data-based insights into a customer's intent, they realized they had their work cut out for them.

Because each individual experiment was so complex, team members had to hand-code and supervise far too much of the process for that approach to remain sustainable as the company scaled.

So they fixed it.

Under the guidance of co-founder and CTO John Fries and Chief Data Officer Nathan Janos, the team devised a system that would automate much of the dirty work, enabling them to compare scores of variants and conduct more experiments than ever before.

We spoke with Janos to learn more about how he sets his team up for success, the types of experiments they conduct and the mindset he values most in applicants.

Nathan Janos
Chief Data Officer • System1


How is System1’s data science team set up to succeed?

We spend a lot of time and effort on data engineering. While modeling and statistics are critical to our business, they require that the data be reliably collected, cleaned, stored and made queryable.

We try to have a single, measurable objective function across the entire organization. Focusing on a metric that makes sense across the entire company helps everyone direct their own work with much less overhead communication.

Data science at System1 is as much about experiments as it is about statistical modeling. A model — a hypothesis or theory — is basically a guess about reality. Guessing correctly all the time is impossible, so we try to focus on whether a particular model performs better than a competing model in our real production environment.


How did automating much of the experimentation process impact day-to-day operations?

When we started System1, every experiment was essentially coded by hand. Our first significant experiment took months to set up. If we wanted to revisit an experiment at a later date, we would have to try to remember everything we had done to set up the original experiment.

Eventually, we developed tools that let us describe our experiments abstractly. The ability to programmatically replicate any experiment, combined with reliable measurement of the company’s end-to-end objective function, means that we can now compare dozens or even hundreds of variants of different models with minimal human supervision. Instead of having a human laboriously evaluate all the models and choose a single winner, we are now able to deploy multi-armed bandit techniques to continuously compare competing models.
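Janos doesn’t spell out the team’s bandit implementation, but the core idea is straightforward. The sketch below is a minimal, hypothetical Thompson-sampling bandit (the variant names and conversion rates are invented for illustration): traffic drifts toward the better-performing model over time without anyone manually declaring a winner.

```python
import random

class ThompsonSamplingBandit:
    """Allocate traffic across competing model variants, favoring the
    variant most likely to be best given the results observed so far."""

    def __init__(self, variants):
        # Beta(1, 1) prior per variant: one pseudo-win, one pseudo-loss.
        self.stats = {v: [1, 1] for v in variants}

    def choose(self):
        # Sample a plausible conversion rate for each variant; serve the max.
        draws = {v: random.betavariate(wins, losses)
                 for v, (wins, losses) in self.stats.items()}
        return max(draws, key=draws.get)

    def record(self, variant, success):
        # Update the chosen variant's win or loss count.
        self.stats[variant][0 if success else 1] += 1

# Toy simulation: "model_b" truly converts best, so the bandit shifts
# traffic toward it over time without a human picking a winner.
true_rates = {"model_a": 0.02, "model_b": 0.05, "model_c": 0.03}
bandit = ThompsonSamplingBandit(true_rates)
for _ in range(20_000):
    v = bandit.choose()
    bandit.record(v, random.random() < true_rates[v])
print({v: sum(counts) for v, counts in bandit.stats.items()})  # traffic per variant
```

Running the simulation shows most of the simulated traffic ending up on the strongest variant, which is the property that lets a system like the one Janos describes run with minimal supervision.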

You can categorize the experiments we do at System1 into two main groups. One is a manually defined experiments framework built into the company’s underlying serving code, which allows account managers and business analysts to launch any kind of A/B test at any point in time. The newer experiments framework is programmatically controlled by our data science optimization system, which automatically adjusts the amount of traffic we treat with different algorithms.
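As a rough illustration of how such a traffic split could work, consider the hypothetical sketch below. The function name, variant names and weights are invented for the example; the key property is that an optimization system can rewrite the weights continuously while each user keeps seeing a consistent treatment.

```python
import hashlib

def assign_variant(user_id: str, weights: dict) -> str:
    """Deterministically bucket a user into a variant according to the
    current traffic weights (which must sum to 1.0)."""
    # Hash the user id to a stable point in [0, 1).
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    point = int(digest[:8], 16) / 2**32
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return variant
    return variant  # guard against floating-point rounding on the last bucket

# A manual A/B test fixes the weights up front; an optimization system
# could instead update them continuously based on observed performance.
weights = {"control": 0.90, "algorithm_v2": 0.10}
print(assign_variant("user-123", weights))
```

Because the bucketing is a pure function of the user id and the weights, the same user sees the same treatment on every request until the system deliberately changes the split.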


What traits do you look for in potential team members?

We foster a lot of trust and expect responsibility from each other. It goes without saying that we look for smart people, but beyond that, I look for great attitudes and flexible personalities. All else being roughly equal, I think a positive attitude outweighs other factors.
