Have you built sizeable applications for thousands of users? Do you value simplicity over over-engineered solutions? Do you want to work on an elegant solution to infrastructure monitoring that just works for our enterprise and startup clients? If so, read on.
We’re on a mission to bring sanity to cloud operations and we need you to build the data pipelines to ingest, store, analyze and query hundreds of billions of events a day.
Join us to build powerful and resilient data systems.
What you will do
- Build distributed, high-throughput, real-time applications
- Do it in Go and Python, with bits of C or other languages on the back-end, and React (with Flux and Redux) and D3.js on the front-end
- Use Kafka, Redis, Cassandra, Elasticsearch and other open-source components
- Join a tightly knit team solving hard problems the right way
- Own meaningful parts of our service, have an impact, grow with the company
Who you are
- You have a BS/MS/PhD in a scientific field
- You’re comfortable with the whole stack: from tuning a SQL query to writing a front-end widget
- You have experience running high-traffic systems 24/7
- You have significant experience with Go or Python on the back-end and Angular, Backbone, or React on the front-end
- You can work at a low level when needed
- You tend to obsess over code simplicity and performance
- You want to work in a fast, high growth startup environment
- You wrote your own data pipelines once or twice before (and know what you did wrong)
- You have battle scars from Cassandra, Hadoop, Kafka, or NumPy
- You are very curious about Apache Spark
- You have a strong background in statistics