We are working with a great Energy Tech company – helping to redefine digitalization and working on nationwide initiatives that make our buildings, companies and cities smarter.
The business is in a stage of immense growth, having beaten its competitors to win a large and exciting project. We now need talented individuals to come on board and make this project a reality.
Join a specialist team focused on optimizing energy storage systems through data- and network-driven technology.
Work with some of the leading pioneers in the Renewable Energy world within a team of highly talented individuals.
- Be able to manage, deploy, optimize and monitor corporate big data platforms, ensuring the systems are stable at all times.
- Have an in-depth understanding of enterprise big data platform architecture, and be able to solve problems and accommodate growth in the business and its data storage.
- Develop scripts to automate big data operations and maintenance, helping to monitor alarms and troubleshoot when required.
- Be responsible for cluster services including Hadoop, HBase, Spark, Kafka and other related tools to ensure continuous delivery, emergency response, and planning.
- Have at least 3 years of work experience in Internet and network operation and maintenance projects, with at least 2 of those in Big Data Platforms.
- Be able to troubleshoot, with a keen technical sense for identifying risks to the platforms.
- Be proficient in more than one scripting language (e.g. Python, Shell) and familiar with HTTP, TCP/IP and other protocols.
- Experience using Linux systems – both software and hardware environments – for system management and optimization. Be proficient in deploying and optimizing various services.
- Be familiar with Big Data ecosystems – specifically Hadoop, including but not limited to HBase, Hive, YARN, HDFS, Kafka, Spark, Flume, Elasticsearch, Kibana, MySQL, Redis and Ambari.
- Know the principles and implementation of each Hadoop component, with experience maintaining and managing large-scale data platforms.
- Familiarity with common security protocols, with the ability to configure security permissions and Kerberos for each Hadoop component. Be familiar with SSL, ACL and Kerberos in Big Data scenarios.
- Know common operation and maintenance monitoring tools, e.g. Nagios, Ganglia, Zabbix, Grafana, Open-Falcon, and related plug-ins.
- Be a good team player with great communication skills – being aware and proactive in the work environment.
- Must be currently in Singapore due to travel restrictions
For a confidential discussion on this or any other opportunities available in the market please contact Angie Wakefield at firstname.lastname@example.org - Direct Line: +65 6340 1949
EA License No: 16S8303 - EA Registration No: R1781517