Data Scientist (Masters) — AI Data Trainer
About the Role
What if your expertise in machine learning, statistical inference, and data engineering could directly shape how the world's most advanced AI models think and reason? We're looking for data scientists with advanced degrees to challenge, audit, and improve cutting‑edge AI systems — pushing them to their limits and documenting exactly where they fail.
This is a fully remote, flexible contract role. No prior AI industry experience required — just deep, battle‑tested knowledge of data science and a sharp eye for technical accuracy.
* Organization: Alignerr
* Type: Hourly Contract
* Location: Remote
* Commitment: 10–40 hours/week
What You'll Do
* Design Advanced Challenges — Craft complex, domain‑specific data science problems spanning hyperparameter optimization, Bayesian inference, cross‑validation strategies, dimensionality reduction, and more
* Author Ground‑Truth Solutions — Build rigorous, step‑by‑step technical solutions — including Python/R scripts, SQL queries, and mathematical derivations — that serve as the gold standard for AI evaluation
* Audit AI‑Generated Code — Evaluate AI outputs across libraries like scikit‑learn, PyTorch, and TensorFlow for correctness, efficiency, and technical soundness
* Sharpen AI Reasoning — Identify logical failures in AI outputs — data leakage, overfitting, improper handling of imbalanced datasets — and provide structured, actionable feedback to improve how models reason through problems
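As a flavor of the audits described above, data leakage is one of the failure modes you would be expected to catch. A minimal sketch (the numbers and split here are invented purely for illustration) shows the difference between leaky and correct preprocessing:

```python
import statistics

# Toy 1-D feature; the last two points stand in for held-out test data.
data = [1.0, 2.0, 3.0, 4.0, 100.0, 101.0]
train, test = data[:4], data[4:]

# LEAKY: the centering statistic is computed on the full dataset,
# so information about the test points leaks into training.
full_mean = statistics.mean(data)
leaky_train = [x - full_mean for x in train]

# CORRECT: the statistic is computed on the training split only,
# then reused unchanged when transforming the test split.
train_mean = statistics.mean(train)
clean_train = [x - train_mean for x in train]
clean_test = [x - train_mean for x in test]

# The leaky mean is pulled far off by the test outliers,
# while the clean mean reflects only what the model may see.
print(full_mean, train_mean)
```

In a real audit you would flag the leaky variant even when it appears in more disguised forms, such as fitting a scaler, imputer, or feature selector before splitting.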
Who You Are
* Currently pursuing or holding a Master's or PhD in Data Science, Statistics, Computer Science, or another quantitative field with a heavy emphasis on data analysis
* Strong foundational knowledge in core areas such as supervised and unsupervised learning, deep learning, big data technologies (Spark/Hadoop), and NLP
* Able to communicate complex algorithmic concepts and statistical results clearly and precisely in writing
* Naturally detail‑oriented — you catch errors in code syntax, mathematical notation, and statistical conclusions that others miss
* No prior AI or data annotation experience required
Nice to Have
* Experience with data annotation, data quality evaluation, or AI evaluation systems
* Familiarity with production‑level data science workflows — MLOps, CI/CD for models, or similar
* Exposure to model interpretability, fairness, or robustness testing
Why Join Us
* Work directly with industry‑leading AI research labs on cutting‑edge language model development
* Fully remote and async — work when and where it suits you
* Freelance autonomy with meaningful, intellectually stimulating work
* High‑agency contractor environment with international reach
* Ongoing project opportunities and contract renewals as new initiatives launch