Datatron’s full-stack engineers are responsible for building the systems and tools that make our teams productive, as well as the technology stack that powers the applications our customers use every day. We believe standing up a healthy service should be fast, standardized, and intuitive. We ship code to our customers continuously, and we’re empowered to use the tools and technologies that provide the Datatron community with the best possible experience.
As an engineer on our team, there’s no limit to the impact you can have on the business. All of our engineering teams are responsible for deploying and supporting their own services, and because of this they look to us for advice, guidance, and stability. We invest heavily in infrastructure because we know that engineers are happiest when they’re shipping code.
We believe in picking the right tools for the job, whether that means evaluating third-party vendors or building something in-house. We aren’t dogmatic about technologies, and we adapt our systems to the needs of the organization. Currently you’ll find us writing Python and Scala, and integrating our services with a suite of Amazon Web Services products, Jenkins, Splunk, and Graphite, to name a few.
Developing and maintaining the platform that runs all of Datatron’s services
Writing and maintaining cloud automation software and internal tools that support developers in deploying, running, and monitoring individual Datatron services
Championing best practices for building scalable and reliable services
Conducting root cause analysis on production issues with other engineers
Responding to production incidents and determining how we can prevent them in the future
Contributing your ideas on how we can continuously improve our systems and processes
Leading development of architecture and standards for a business metrics warehouse
Developing and maintaining ETL infrastructure and processes
Implementing systems for tracking data quality and consistency
Working closely with data scientists, engineers, and analysts to design and maintain scalable data models and pipelines
5+ years of software engineering experience
Startup experience is a big bonus
Extensive software engineering experience with an object-oriented or scripting language (Python, Java/Scala, Ruby, Perl)
Experience architecting data systems from scratch
Extensive professional experience with a distributed, column-store architecture (Redshift, Vertica, Greenplum, Teradata)
Proven track record of leading projects through design, development, release, and maintenance phases
Ability to work with varied forms of data infrastructure, including: RDBMS (PostgreSQL, MySQL); NoSQL (MongoDB, DynamoDB, Redis); MapReduce (Hadoop, Hive, HBase, Pig); Logging/messaging systems (Kafka, Scribe, Flume, Kinesis, SQS)
You love to code, and you’ve worked with multiple programming languages.
You love to build tools that enable a whole organization to rapidly produce software products and services.
You have an insatiable craving for making applications more consistent and reliable over time.
You believe you can automate everything, and you can identify opportunities to remove manual processes.
You understand scalable web architectures and have implemented a few.
You enjoy working in a collaborative environment, and you’re committed to driving projects to completion independently and creatively.
You're a great communicator, and can advocate for your proposals while also empathizing with your teammates' goals and priorities.
You graciously help others who look to you for feedback and guidance.
You think ahead and build for the future.
Our ideal candidate possesses some of the following:
Experience with UNIX systems administration, including solid scripting skills in shell, Python, Scala, or Java
Knowledge of configuration management systems such as Puppet, Chef, Salt, or Ansible
Experience building and running RESTful web services on the AWS platform
Contributions to open source projects
A passion for sustainability and/or big data