Hadoop / Java Developer - Scala, Spark, HBase

  • Salary: £400 - £500 per day
  • Job type: Contract
  • Location: Nottingham, London
  • Sector: Other
  • Date posted: 25/11/2016
  • Job reference: J350797A

We're really sorry, but it looks like this job has already been filled.


Hadoop Developer - Java

We are currently looking for experienced and highly motivated developers to join our Data Engineering team, with a primary focus on Acquisition Marketing.

In this role, you will contribute to the development of data pipelines on an enterprise-scale Hadoop cluster using technologies such as Hive, Falcon and Oozie, as well as coding in Java and working with relational database systems such as Teradata. You will be part of an agile team, constantly iterating and improving the way we work and deliver new features to the business.

Responsibilities:

  • Develop data loading/ETL processes for the Hadoop environment.
  • Write clear, efficient, tested code.
  • Develop code as part of a wider team, contributing to code reviews and solution design.
  • Provide support and guidance to more junior developers.
  • Contribute to both program and system architecture as appropriate.
  • Contribute to the evolution of development standards and design patterns.
  • Work with business stakeholders to deliver on requirements in an agile manner.
  • Deploy and maintain applications in production environments.
  • Communicate and document solutions and design decisions.

Experience and Skills:
It is essential that the successful candidate has strong commercial experience in a software engineering role, including substantial use of the following technologies/tools:

  • Java
  • SQL
  • Hadoop

We also require commercial experience with Linux, and use of a version control system (preferably Git).

Experience in any of the following would also be beneficial:

  • Knowledge of Acquisition Marketing (Search-Engine Marketing, Meta Search)
  • ETL/data-load process development
  • Hive (and other SQL-on-Hadoop tools)
  • Experience dealing with large and/or complex data sets
  • Unit-testing frameworks (JUnit, Mockito, etc.) and Test-Driven Development
  • Maven
  • Cloud solutions, particularly Amazon Web Services
  • Massively parallel (MPP) database systems such as Teradata
  • Oozie
  • Falcon
  • Talend
  • Other Hadoop data processing tools (Cascading, Spark, Pig, MapReduce, etc.)
  • Other big data/NoSQL technologies
  • Microsoft SQL Server

Candidates should submit their CV in the first instance.