Connect with experienced AI safety professionals from leading organizations. Join our community of 4,500+ members from over 100 countries.
Our algorithm connects mentors and mentees based on shared AI safety interests.
Mentors provide personalized guidance on research and career paths.
Join a diverse community of AI safety professionals worldwide.
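For illustration, the interest-based matching described above could be sketched as a simple overlap ranking. This is a hypothetical sketch, not the platform's actual algorithm (which is not specified here); the tag names and function names are invented for the example.

```python
def jaccard(a, b):
    """Similarity between two interest-tag lists: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_mentors(mentee_interests, mentors):
    """Rank mentors by shared interests with a mentee.

    mentors: dict mapping mentor name -> list of interest tags (hypothetical data).
    Returns mentor names, best match first.
    """
    scored = [(jaccard(mentee_interests, tags), name) for name, tags in mentors.items()]
    return [name for score, name in sorted(scored, reverse=True)]

# Example: a mentee focused on interpretability matches the alignment mentor
# more strongly than the governance mentor.
ranking = rank_mentors(
    ["interpretability", "language models"],
    {
        "Alignment Mentor": ["interpretability", "alignment"],
        "Governance Mentor": ["governance", "international policy"],
    },
)
```

A production matcher would likely also weigh availability, time zones, and career goals, but the core idea is ranking by shared interests.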
Principal Researcher at Anthropic
Expert in AI alignment with 8+ years of experience in interpretability research.
"Understanding Latent Objectives in LLMs" (NeurIPS 2024), "Interpretability Methods for Transformer Models" (ACL 2022)
Structured biweekly meetings with clear goals. Helps with paper drafting and research direction.
Policy Researcher at GovAI
Specializes in AI governance and international policy coordination for AI safety.
"Regulatory Approaches to Advanced AI" (2024), "International Coordination on AI Safety Standards" (2023)
Reading assignments, policy discussions, and introductions to valuable network contacts.
Research Lead at DeepMind
Focuses on robustness and safety guarantees in advanced AI systems.
"Formal Verification Methods for Deep RL Systems" (ICML 2024), "Safety Guarantees for Multi-Agent Systems" (2023)
Collaborative projects where we tackle concrete safety problems together with hands-on technical feedback.
PhD Student at MIT
Researching neural network interpretability with a focus on language models.
Contribute to alignment research at organizations like Anthropic or MIRI after completing the PhD.
Hopes to gain: Research direction and publication strategy guidance
Availability: 3-5 hours weekly for mentorship activities
Policy Researcher at AI Policy Institute
Exploring AI governance frameworks and regulation approaches in Asia.
Work at international organizations on AI governance or join AI policy teams at research labs.
Hopes to gain: Guidance on international AI governance approaches and networking
Availability: Evenings and weekends (JST), 2-3 hours weekly
ML Engineer at Robust AI Startup
Experienced ML engineer seeking to move into AI safety research.
Transition from general ML engineering to dedicated AI safety research and implementation.
Hopes to gain: Guidance on career transition to AI safety research
Availability: Evenings and weekends, 5-8 hours weekly for projects
Join our global community dedicated to ensuring AI systems are safe and beneficial.