At Oribi, we’re looking to change the way the world does analytics. Oribi offers a code-free way for businesses of all sizes to finally understand their marketing analytics. Instead of being tied to developers and having to struggle through Google Analytics, Oribi users get easy-to-read insights so they always know which steps to take next. Based in Tel Aviv, we’re a team of 50 creative, fun-to-work-with people who love what they do.
We are looking for a Backend Big Data Developer to join our team and help scale our infrastructure to support our rapid growth.
What makes this opportunity so BIG?
You will be joining our talented dev team, designing and working on large-scale infrastructure that handles millions of events daily and thousands of active customer sessions per second. As our system is at a challenging, fast-evolving stage of growth, you will have the opportunity to take an active part in its redesign, using cutting-edge data solutions.
We are a small, fast-growing startup where every team member has a huge impact on the company. You will be part of our unique DNA, along with hand-picked, bright, and dedicated minds who challenge convention and push the future of web analytics.
In this role, you will:
Lead the design of Oribi’s core infrastructure, using advanced solutions and tools to keep our data flows performant as we scale.
Work in a fast-paced, agile environment, staying open-minded toward improvements and creative approaches to solving complex challenges.
We’d love to hear from you if you have:
Experience designing and implementing solutions for large-scale data systems
High standards for code quality, testing, and performance
A can-do attitude, the ability to learn fast, and great interpersonal skills
A product-oriented mindset with the ability to transform a difficult problem into one or more simple ones that you can easily solve
Requirements:
5+ years of proven backend development experience in at least one language (Java / Scala / Kotlin / Golang / Python, etc.)
Hands-on experience with stream/data processing technologies such as Kafka, SQS, RabbitMQ, Google Dataflow, etc.
Deep knowledge of, and experience working with, both relational and NoSQL databases at significant scale
Experience with AWS or an alternative cloud environment
Experience with the Hadoop ecosystem (Spark, Hive, MapReduce, etc.) - a big plus
Bonus points for:
Experience with data science and modeling (statistical and ML)