3 Decisions We’ve Made While Growing Drop’s Tech

Darren Fung
Drop Engineering
9 min read · Jun 15, 2019

Building a product from the ground up is no easy task, as many startup founders will tell you. There is also a noticeable lack of insight into what breaks as a startup's tech stack becomes more complicated, and into the decisions that help a growing product's tech stack scale.

In this post, I'll give you an overview of the Drop technology stack and our processes, as well as the decisions and principles we've followed to scale both as we grow.

What is Drop?

Drop is an intelligent rewards platform aimed at helping you level up your life. We do this by rewarding you with Drop points for shopping at brands that you already know and love, as well as helping you discover new brands that resonate with your lifestyle. All of this happens through our Drop mobile app. After joining Drop, you securely connect your credit or debit cards through the platform via various partners (for example, Plaid).

Now that you've connected your bank to our platform, you can simply spend with selected partners (both online and in person) using your connected cards to earn points seamlessly. We also have a large number of partners where you can earn by shopping through the Drop app — Apple, Sephora, Postmates, and Lululemon, to name a few.

You can redeem the points you earn by shopping with our partners for instant gift cards at brands such as Amazon, Starbucks, and Whole Foods. These gift cards are sent directly to you via email so you can use them immediately.

Redeeming a gift card on Drop

Our 3 Decisions

The Drop engineering team has grown significantly over the last three years: from a two-person engineering team (including myself) to over 20 engineers across multiple full-stack teams.

Early on, we made some decisions around our technology stack to position our engineering team to stay productive and focused on the right problems through this growth. This isn't an exhaustive list, but it covers the three biggest decisions that have had the most impact on how we're able to build and innovate today.

We’ve kept our technology stack simple

The Drop technology stack is purposefully kept very simple. A simple technology stack makes it much easier for our engineers to get up to speed, maintain context, and build quickly, as they're using familiar tools they have experience with. Our technology stack mainly comprises Ruby + Ruby on Rails, TypeScript + React + React Native, Python + Airflow, PostgreSQL, Redis, Elasticsearch, Amazon Redshift, Docker, and Kubernetes.

By keeping all aspects of our technology stack simple, our engineering team has been able to remain nimble when building all kinds of awesome features for our members.

Our backend platform is built on top of Ruby on Rails. I know what you might be thinking: “Rails doesn’t scale” or “This isn’t 2010, why are you using Rails instead of X?” We made this decision for a few reasons:

  1. Rails is a very mature and opinionated framework. Developers we hire for Rails are more likely to know the best practices of a framework that has existed for over 10 years, and can make an immediate impact.
  2. Rails is very well suited for rapid prototyping. I’m not saying you should install every Rails gem (library) you encounter, but there’s likely a gem out there that does exactly what you need.
  3. Rails allows us to focus on the right problems. We don’t like premature optimization, so we optimize for scale and performance only when it becomes an issue.
  4. Rails scales a lot further than you think. In fact, companies like Airbnb and Shopify still have Rails as part of their core.

Although Rails is an opinionated framework, we have opinions of our own about how to structure our Rails app for ease of development. For example, we extensively use the concepts of view models, form objects, and service objects. We follow software engineering practices to ensure that we’re positioning our code base for future growth, which means loose coupling, well-defined interfaces between our service objects, and components designed for testability. We default to keeping everything as part of our monolith, but design our business logic to be easily extractable into microservices. This is a conscious decision, as we believe that a monolith goes a long way. As long as we’re following sound software engineering principles, we can extract key workflows into microservices if necessary.
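To make the service-object idea concrete, here is a minimal sketch of the pattern. The class and method names (`AwardPoints`, `Result`) are hypothetical illustrations, not Drop's actual code:

```ruby
# Hypothetical sketch of a service object with a well-defined interface.
# AwardPoints and Result are illustrative names, not Drop's actual code.
class AwardPoints
  # A small, explicit return type: callers depend only on this
  # interface, which keeps the logic loosely coupled, easy to test,
  # and easy to extract into a microservice later.
  Result = Struct.new(:ok, :points, :error)

  def self.call(member:, amount:)
    new(member: member, amount: amount).call
  end

  def initialize(member:, amount:)
    @member = member
    @amount = amount
  end

  def call
    return Result.new(false, nil, "amount must be positive") unless @amount.positive?

    @member[:points] += @amount
    Result.new(true, @member[:points], nil)
  end
end

member = { points: 100 }
result = AwardPoints.call(member: member, amount: 50)
puts result.points # => 150
```

Controllers stay thin with this style: they invoke a service and branch on the result, rather than embedding business rules in the controller or model.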

For our mobile app, we made a big bet early on with React Native. Its promises were hard to ignore: a native application for iOS (and later Android) built from a single codebase using JavaScript and React. It gave us the flexibility to drop down to native code in the rare cases where performance would be an issue. It also enables our engineers to build on both web and mobile, since we use React on the web as well. The app in its entirety is built with React Native and Redux, and tested with Jest.

We also like to keep our infrastructure pretty simple. All of our projects are built using some combination of PostgreSQL, Redis, and Elasticsearch, all hosted on AWS. We lean quite heavily on AWS to minimize the overhead of managing the stability of our systems. However, we also choose tools that ensure we’re not boxing ourselves into AWS as our infrastructure vendor.

We’ve made it easy for anyone to deploy

We’ve always believed that engineering teams of all sizes should own product delivery end to end — from brainstorming the problem and requirements with product managers, to deploying their own code to production, all the way to analyzing results and iterating. A huge part of this involves understanding the deployment process: exactly what a deployment is doing, what is being changed, and what has already been deployed. No one should be intimidated or scared by deployments. How does this look in practice? Let’s start by going over the development and deployment process at Drop.

Every engineer on the team is encouraged and empowered to use our internal tools to deploy as they see fit.

Our production environment is powered by Docker containers. We have a pipeline built on Jenkins that builds, lints, tests, and pushes Docker images to our internal Docker registry. We have invested heavily in Kubernetes for container orchestration, and all of our configuration files live in the same repository as the app being deployed. Co-locating the configs lets anyone see the exact configuration updated on every deployment, and engineers can see all the parts of the service being deployed. Our Kubernetes configuration files are templated so variable substitution is easy, and they are deployed using kubernetes-deploy. This tool lets us customize configurations per environment and service, deploy objects in order, and monitor the status of the deployment.
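A templated config of the kind described might look like the following. This is a hypothetical sketch, not Drop's actual configuration: kubernetes-deploy renders ERB in `.yml.erb` templates, so per-environment values (like replica counts) and the image tag can be substituted at deploy time.

```yaml
# web-deployment.yml.erb -- hypothetical example, not Drop's actual config.
# `replicas` would be supplied per environment; `current_sha` is a binding
# kubernetes-deploy provides for the revision being deployed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: <%= replicas %>
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.internal/web:<%= current_sha %>
          ports:
            - containerPort: 3000
```

Because this file lives next to the application code, a reviewer sees config changes and code changes in the same pull request.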

On top of all this, we use Shipit to better visualize the deployment process. It allows us to see what commit was last deployed, the commits that have yet to be deployed, and the state of the continuous integration pipeline for each of the commits. For each application and environment, Shipit enables us to create checklists to ensure there is little room for error when deploying.

We also have a simple process for updating the Drop mobile app. Because we use React Native, we can push new JavaScript bundles with CodePush to update the code on devices without going through the app stores. Our process is to ship weekly (or ad-hoc) bundle updates through CodePush and Shipit, and monthly native releases through the app stores.

At Drop, we deliberately don’t have any type of release management team. An important part of managing your own production deploys as an engineer is monitoring and instrumenting operational and application performance metrics. All of our alerts and monitors on these metrics are built on top of Datadog.

Datadog integrates with all of our vendors and tools to consolidate our metrics into one easy-to-use platform. For example, it aggregates all of our AWS CloudWatch metrics and serves as our APM tool. This way, we can tie infrastructure metrics to application-level metrics with a single tool, see the full picture across all of our infrastructure, and easily build alerts that detect deviations from normal behaviour.

A sample Kubernetes deployment on Shipit

We’ve invested early in a data foundation

At some point, our team began to get inundated with requests from different departments for insights and analytics. The business development team needed to see the performance of a partner’s offer; the product team needed to see the engagement of a subset of our members. Eventually, data scientists would need access to raw data to do data explorations, and machine learning engineers would need to build predictive models. At this point, a large amount of the engineering team’s time was spent writing queries to support the rest of the team. As you can imagine, this was not optimal.

We needed to build the infrastructure and foundation that would let anyone not only access the data, but get insights they can understand and trust. That meant the infrastructure also had to include standardized metric definitions and robust documentation, so that everyone speaks the same language when talking about metrics.

Our team ultimately adopted Airflow to manage our growing data pipeline infrastructure. Airflow lets us easily track the ETL (extract, transform, load) jobs that pull data from, and push data to, internal and external sources. Because it structures jobs as directed acyclic graphs (DAGs), managing dependencies between jobs is easy.

One of our DAGs in the Airflow web interface

To ensure standardization of metrics, we use Looker as our business intelligence tool. It allows us to define and query our metrics from the same interface, ensuring that our metrics are always calculated the same way, at the time of query.
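In Looker, standardized definitions like these live in LookML. The sketch below is hypothetical (table and field names are illustrative, not Drop's actual model), but it shows the idea: a metric is defined once and reused by every query.

```lookml
# Hypothetical LookML sketch -- table and field names are illustrative.
view: orders {
  sql_table_name: analytics.orders ;;

  dimension: member_id {
    type: number
    sql: ${TABLE}.member_id ;;
  }

  # One shared definition: every analysis that references total_points
  # computes it the same way, at query time.
  measure: total_points {
    type: sum
    sql: ${TABLE}.points_awarded ;;
    description: "Total Drop points awarded across the selected orders."
  }
}
```

Anyone exploring the data picks `total_points` from the interface rather than rewriting the SQL, which is what keeps the metric consistent across teams.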

Most of our data flows into our Amazon S3 data lake, and eventually into our data warehouse powered by Amazon Redshift. Analytic queries generally run on Redshift against transformed and modeled data. Our data lake exists to be the place where all of our data lives — both raw and transformed. One benefit of keeping our data lake in S3 is the ease of integration with Amazon’s data tools (for example, Amazon EMR, Athena, and Redshift). It also enables our data scientists and machine learning engineers to easily build and train machine learning models on raw data, and quickly deploy them to production.

What’s next?

Over the last three and a half years, these decisions have positioned the Drop technology stack to let our engineers focus on building Drop into a product that accomplishes our core mission. We’re also investing a lot of time in our developer infrastructure to keep developer productivity high.

Some examples of current or future projects are:

  1. Re-architecting our transaction ingestion engine to be more performant, minimizing the time between a member spending at one of our partners and receiving their Drop points.
  2. Using our spend data to build machine learning models that better understand our members, so we can surface relevant offers at relevant times.
  3. Defining patterns and libraries for how services will communicate with the Rails application (and vice versa), so we can extract key flows out of the Rails app.
  4. Building an experimentation platform that lets us quickly define experiments, audiences, and tracked metrics, and easily see their performance.

We’ll be writing more about some of the challenges involved with the cool things we’re building, so check back for more posts by the Drop engineering team.

Does the way we build resonate with you? Well, you’ll be happy to know that we’re hiring! Take a look at our open roles here.
