Hadoop Administration - Big Data Software Engineer - VP - job id 33775



$200,000 per year base salary, F/T employee

Jersey City, NJ

How to Apply

George Konetsky
(646) 876-9562
(212) 616-4800 ext. 180

A full-time position at a leader in financial services, offering solutions to clients in more than 100 countries through one of the most comprehensive global product platforms available.

Pay Options: F/T Employee.

Contact George Konetsky: call (646) 876-9562 / (212) 616-4800 ext. 180, email george@sans.com with the Job Code GK33775, or click the Apply Now button.

Location: Jersey City.

Skills required for the position: HADOOP, BIG DATA, HBASE, YARN, KAFKA, SPARK2, LINUX.

Optional (not required): JAVA

Detailed Info:

The candidate will be responsible for application development supporting a data platform, which is an Investment Bank-wide strategic initiative to improve the operational and customer service capabilities of the firm.

  • Will have responsibility for design, implementation and testing of a highly scalable data layer using HDFS, Hadoop, HBase, Phoenix, HortonWorks, Spark, and related tools.

  • Must have a thorough understanding of both the tools and concepts that underlie Big Data, such as MapReduce, Massively Parallel Processing (MPP), and HDFS job scheduling.

  • Must have a sound grasp of development best practices and system architecture. Will be expected to produce high-quality deliverables that can pass critical peer review, and to work in a high-pressure, timeline-driven environment.

  • In addition, the candidate must be highly proficient in core Java technologies. The role requires hands-on Java coding as part of the Hadoop/HBase/Phoenix build-out.

  • Solid technical knowledge and the ability to communicate ideas are integral parts of the role.

Development/Computing Environment:

Core skills:

  • Hands-on Hadoop administration (preferably HDP/Hortonworks).

  • Experience in HBase administration. Experience with YARN, including YARN queue setup and node label administration. Experience with Kafka and Spark2.

  • Demonstrated experience implementing big data use cases, and an understanding of standard design patterns commonly used in Hadoop-based deployments.

  • At least 4 years of HDP installation and administration experience in multi-tenant production environments, with experience on HDP 2.6.x/HDF 3.x versions.

  • Experience designing and deploying large-scale production Hadoop architectures.

  • Experience implementing software and/or solutions in enterprise Linux or UNIX environments, including shell/Python scripting.

  • Experience with various enterprise security solutions such as LDAP and Active Directory. Experience with Ambari, Ranger, Kerberos, Knox and Atlas.

  • Experience with Hive.

  • Experience with other RDBMSs, such as Oracle and MS SQL Server.

  • Good troubleshooting skills and an understanding of HDP capacity, bottlenecks, memory utilization, CPU usage, OS, storage, and networks.

The position offers a competitive compensation package.

Job Id: 33775