Big Data Engineer
Location and employment type
About the client:
This metasearch company has entered an interesting re-branding stage and is expanding its` horizons to create worlds distinguished hotel meta search adventure. With 5 year experience in one of the most competitive and mature industries and great talented team, we are looking for a creative professional, unafraid of challenges to share our ambitious aim of becoming an independent source, that gives our users the power of making conscious choices and finding the most suitable hotel at the most suitable price, operating on more than 100 markets, compatible with all devices and languages. We’ve a great space for development for truly gifted and devoted Big Data Engineer in Amsterdam.
If analyzing huge amounts of data and devising new strategies based on it is not just unproblematic for you but your greatest passion, and if you bring a creative, smart approach to problems, the Big Data Engineer position (with relocation) on our data team will be perfect for you. Working with us, you will have a strong influence on our decision-making and on the business overall, so the career opportunities for this role are promising. The Big Data Engineer job, with relocation to Amsterdam, entails:
- Guide the design and application of our core data systems;
- Assemble and process raw data at scale: design robust, low-maintenance ETL pipelines;
- Plan and build a cost-efficient data warehouse using the technology of your choice;
- Build automated decision systems for Big Data;
- Create analyst-friendly and programmer-friendly interfaces for the big data tools you design;
- Support the implementation and evaluation of machine learning models on huge datasets;
- Keep your knowledge of cutting-edge technologies and best practices up to date and integrate them into your work.
Requirements:
- Proven expertise in Big Data stores (Hadoop and/or MPPs such as Amazon Redshift or Cloudera Impala);
- Mastery of Python;
- Experience building and operating automated ETL pipelines for huge amounts of structured and unstructured data from various sources;
- Understanding of machine learning techniques would be advantageous;
- Extensive experience with cluster computing systems (Spark would be much appreciated);
- Strong skills with relational databases (MySQL, PostgreSQL) and NoSQL stores (MongoDB, Redis);
- Solid experience with Python libraries for data science (NumPy, pandas, PyTables, scikit-learn) would be desirable;
- Proven expertise creating web interfaces (e.g. Flask) and data visualizations;
- An inquisitive mind, fearless of making mistakes and challenging the status quo;
- Good attention to detail and practicality.