Big Data Support Engineer

  • Salary: £300 - £365 per day
  • Job type: Contract
  • Location: Blackpool, Lancashire
  • Sector: IT
  • Date posted: 09/11/2018
  • Job reference: BBBH92446

Experienced Big Data Support Engineer required, to be based in Blackpool. This role supports the live HDWS and requires read access to live customer data, so it must be worked onshore. We are looking for a Big Data Support Engineer with experience building and managing Hadoop environments using the Hortonworks Distribution (including HDFS, Hive and HBase), supporting the analytics platform (including Hortonworks, Tableau, Talend and the Data Science Workbench, covering R Studio and KNIME), as well as potentially managing and implementing software upgrades, patching and fixes.

Job title: Big Data Support Engineer
Duration: 6 months
Rate: £365.00 per day
Location: Moorland Building, Mythop Road, Blackpool - remote working allowed

Requirements: Big Data Support Engineer

  • Hands-on experience with Hive, Spark, HBase and HDFS
  • Excellent exposure to the enterprise Hortonworks Data Platform (HDP)
  • Experience supporting large-scale Big Data implementations
  • Exposure to Oracle DB, Tableau, Linux, MS SQL, R and the Data Science Workbench
  • Exposure to various data ingestion modes: batch, near real time and real time
  • Experience working with multiple geographically distributed teams
  • Good communication skills

Job Description and Responsibilities:

  • Support the production data platform, consisting of an Oracle DWH and a Hadoop data lake
  • Job scheduling using Oozie or other popular tools
  • Job failure handling and recovery
  • Able to implement and support security
  • Able to debug data in Oracle and HDFS systems
  • Able to carry out HDP cluster capacity planning and suggest appropriate future hardware sizing
  • Creating Hive tables, HDFS directories and HBase tables using best practices
  • Performance tuning of the HDP cluster
  • Deploying newly built models and code in all environments
  • Able to implement patches if required
  • Adherence to client-specified SLAs

Skills & Experience:

  • 10+ years of experience deploying and supporting large-scale hybrid DWH data processing pipelines, especially in open-source, data-intensive, distributed environments.
  • Hands-on experience with HDP, HDFS, Hive, Spark, Oozie, HBase, Postgres, etc.
  • Implemented and supported projects dealing with considerable data sizes (TB/PB) and high complexity.
  • Knowledge of data science, algorithms, data structures, and performance-optimisation techniques.
  • Strong communication and client-facing skills, with the ability to work in a consulting environment if required.
  • Broad understanding of all of the following:
    • Hadoop security (Kerberos, Ranger, Knox)
    • Data Mining, Statistical Modelling and Machine Learning
    • Data Architecture, Master Data Management and Governance
