Our customer, a New York-based global media valuation and data collection company with operations in San Francisco, London, Tokyo, and Sydney, works on the bleeding edge of Big Data technology, providing analytics on audience reach and advertising campaign quality. We build real-time processing systems around the world, working with datasets hundreds of terabytes in size, to help our customer collect, store, and use the data it gathers.
Responsibilities
- Participate in the design and development of Big Data analytical applications, from product vision to implementation
- Design, support, and continuously enhance the project code base, continuous integration pipeline, and related infrastructure
- Investigate and resolve performance and stability issues in production systems
- Work within a team of software and DevOps engineers
- Collaborate with a globally distributed team and with corporate and customer IT services
Requirements
- Strong knowledge of Java (collections, multithreading, the JVM memory model, etc.)
- Experience with version control systems: Git, Subversion
- Understanding of general OOP and functional programming concepts
- Desire and ability to learn new tools and technologies quickly
- Good communication skills and technical English
Will be a plus
- Experience scripting in Bash and in any of Ruby, Python, or Perl
- Experience with the Hadoop stack (Hadoop MapReduce, HBase, Pig, Hive, Flume)
- Experience with stream processing (Storm, Kafka, Cassandra, or other such technologies)
- Knowledge of network protocols (TCP/IP, SSH, HTTP, etc.)
- General knowledge of the Linux kernel and hardware architecture
- Knowledge of public clouds (Amazon AWS, Google Compute Engine, or others)
- Experience with monitoring systems (Ganglia, Graphite, Zabbix)
- Experience configuring CI servers (Hudson/Jenkins, CruiseControl)
- Programming experience with Scala