Are you up for a challenge? Can you handle HUGE data sets? Well, then you’ve come to the right place, because we need someone who can tackle traffic of over 10 million users a month on a single webpage!
You would help engineer the data in a way that extracts the right information, transforms it, and then loads it into the relevant pipelines. If you are up for the challenge and keen on working at a product-based engineering firm, we’d love to have a chat with you.
The team you’d be working with is a lean data engineering team within a wider technology firm that is currently a market leader in Asia. The culture of this firm is open and multinational, emulating a start-up, with the flexibility to work remotely and even take unlimited leave!
- Work with real-time data across large data sets, writing good-quality code
- Design, build and deploy big data solutions that can handle the volume and velocity of data flowing into the company
- Build and maintain scalable ETL pipelines that can ingest and manage a large volume and variety of data points
- Work within a team of engineers, picking up a pipeline of projects that need to be deployed
- Communicate clearly and work effectively in a multinational environment
- Hold at least 5 years of experience in Data Engineering, preferably with 3 of those spent handling Big Data volumes
- Must have experience with ETL scripting, preferably in PySpark
- Proficient in Big Data tools such as Hadoop and Spark, with prior experience architecting applications and pipelines
- Experience with real-time data tracking tools; Kibana highly preferred
- Proficient in Python coding
- Good to have Java or Scala experience
- Also good to have exposure to cloud technologies such as AWS or Google Cloud Platform
For a confidential discussion on this or any other opportunities available in the market, please contact Angie Wakefield at firstname.lastname@example.org - Direct Line: +65 6340 1949
EA License No: 16S8303 - EA Registration No: R1781517