Computer Vision Data Scientist

Location: Singapore
Job Type: Permanent
Contact: Maximilien Nabarro
Reference: BBBH14915_1747201586

Responsible AI Scientist (Computer Vision)

Location: Singapore (Hybrid)

Sector: AI Safety & Assurance

A fast-growing venture at the forefront of AI assurance is building a world-class team of scientists and engineers to tackle one of the most pressing challenges of our time: ensuring the safety and integrity of artificial intelligence systems.

We are looking for a Computer Vision Data Scientist with experience evaluating and testing deep learning models in real-world production environments. This role is ideal for someone who thrives at the intersection of applied research, model evaluation, and responsible AI, and who wants to shape the future of AI governance.


What You'll Do

  • Partner with clients to evaluate and stress-test computer vision models deployed in domains like law enforcement and healthcare.

  • Develop rigorous evaluation frameworks that assess robustness, explainability, fairness, privacy, and security of computer vision systems.

  • Work with a variety of model types, including object detection, image classification, segmentation, and pose estimation.

  • Stay current with the latest advancements in CV and AI to integrate emerging techniques into model evaluation.

  • Contribute to technical publications, conference presentations, and industry thought leadership in the field of responsible AI.

  • Experiment with advanced evaluation methods such as adversarial testing and synthetic data generation; a minimal adversarial-testing sketch follows this list.
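
For illustration only, here is a minimal sketch of the kind of adversarial testing referenced in the last item above: a single-step FGSM perturbation used to estimate a classifier's robust accuracy. The function names, the epsilon value, and the data loader are illustrative assumptions, not part of the team's actual tooling.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.03):
        """One-step FGSM: nudge each pixel in the direction that increases the loss."""
        # Assumes inputs are scaled to [0, 1].
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        perturbed = images + epsilon * images.grad.sign()
        model.zero_grad(set_to_none=True)
        return perturbed.clamp(0.0, 1.0).detach()

    def robust_accuracy(model, loader, epsilon=0.03, device="cpu"):
        """Accuracy of a classifier on FGSM-perturbed inputs (a simple robustness probe)."""
        model.eval()
        correct, total = 0, 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            adv = fgsm_attack(model, images, labels, epsilon)
            with torch.no_grad():
                preds = model(adv).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        return correct / total

A production evaluation would typically sweep epsilon and use stronger multi-step attacks, but the structure is the same: perturb, re-run inference, compare against the clean baseline.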


What You Bring

  • 2-5 years of hands-on experience training and deploying computer vision models that directly impact end users or business outcomes.

  • Deep knowledge of performance metrics (e.g., mAP, IoU, F1-score, precision-recall) and their use in evaluating model effectiveness; a minimal metrics sketch follows this list.

  • Proficiency in Python and familiarity with frameworks such as TensorFlow, PyTorch, or Keras.

  • Experience working with large-scale datasets like ImageNet or COCO, including dataset curation and annotation quality management.

  • Experience building automated pipelines for model evaluation and reporting.

  • Familiarity with adversarial robustness, data augmentation, and synthetic data generation techniques.

  • Understanding of MLOps tools and cloud platforms (AWS, GCP) for scalable evaluation workflows.

  • Strong communication skills with the ability to present technical findings to both technical and non-technical audiences.
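
As a concrete illustration of the metrics named above, the sketch below computes IoU for axis-aligned boxes and precision/recall for a set of detections via greedy matching at a fixed IoU threshold. The function names and the 0.5 threshold are assumptions made for the example.

    def iou(box_a, box_b):
        """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def precision_recall(pred_boxes, gt_boxes, iou_threshold=0.5):
        """Greedily match predictions to ground truth, then count TP/FP/FN."""
        matched, tp = set(), 0
        for pred in pred_boxes:
            best_iou, best_idx = 0.0, None
            for i, gt in enumerate(gt_boxes):
                if i in matched:
                    continue
                score = iou(pred, gt)
                if score > best_iou:
                    best_iou, best_idx = score, i
            if best_iou >= iou_threshold:
                tp += 1
                matched.add(best_idx)
        fp = len(pred_boxes) - tp
        fn = len(gt_boxes) - tp
        precision = tp / (tp + fp) if pred_boxes else 0.0
        recall = tp / (tp + fn) if gt_boxes else 0.0
        return precision, recall

mAP extends this idea by sweeping the confidence threshold to trace a precision-recall curve per class and averaging the area under it.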


Nice to Have

  • Published research in computer vision or AI safety.

  • Experience with explainability tools such as Grad-CAM, LIME, or SHAP; a minimal Grad-CAM sketch follows this list.

  • Knowledge of model deployment in edge computing or real-time systems.

  • Prior work in sensitive domains such as healthcare, public safety, or biometrics.

  • Familiarity with privacy-preserving machine learning or secure model evaluation techniques.
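
As an illustration of the explainability tools listed above, here is a minimal sketch of the Grad-CAM idea in PyTorch: the chosen layer's channel activations are weighted by the spatially averaged gradients of the target class score, then passed through a ReLU to form a coarse saliency map. The helper name and hook-based structure are illustrative, not a reference implementation.

    import torch
    import torch.nn.functional as F

    def grad_cam(model, image, target_layer, class_idx=None):
        """Coarse class-activation map for a single (C, H, W) image tensor."""
        activations, gradients = [], []

        def fwd_hook(module, inputs, output):
            activations.append(output)

        def bwd_hook(module, grad_input, grad_output):
            gradients.append(grad_output[0])

        h1 = target_layer.register_forward_hook(fwd_hook)
        h2 = target_layer.register_full_backward_hook(bwd_hook)
        try:
            logits = model(image.unsqueeze(0))            # (1, num_classes)
            if class_idx is None:
                class_idx = logits.argmax(dim=1).item()
            model.zero_grad()
            logits[0, class_idx].backward()
        finally:
            h1.remove()
            h2.remove()

        acts, grads = activations[0], gradients[0]        # both (1, C, h, w)
        weights = grads.mean(dim=(2, 3), keepdim=True)    # per-channel importance
        cam = F.relu((weights * acts).sum(dim=1))         # (1, h, w)
        return (cam / (cam.max() + 1e-8)).squeeze(0).detach()

For a torchvision ResNet, target_layer would typically be the final convolutional block (e.g., model.layer4[-1]); the resulting map is upsampled to the input resolution before overlaying it on the image.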


Why Join

  • Join a mission-driven team pioneering the standards for trustworthy AI.

  • Work on technically challenging and socially impactful problems in AI assurance.

  • Be part of a venture backed by one of Asia's leading institutional investors.

  • Shape industry best practices for safe and responsible AI adoption across sectors.