AI Safety Mentorship

Connect with experienced AI safety professionals from leading organizations. Join our community of 4,500+ members from over 100 countries.

How It Works

Expert Matching

Our algorithm pairs mentors and mentees based on shared AI safety interests (a simple illustrative sketch follows this section).

Structured Guidance

Mentors provide personalized guidance on research and career paths.

Global Community

Join a diverse community of AI safety professionals worldwide.
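
To make the matching step concrete, here is a minimal sketch of what interest-based matching could look like. The program does not publish its actual algorithm, so the data structures, names, and scoring rule below are purely hypothetical illustrations:

```python
# Hypothetical sketch of interest-based mentor matching; the program's
# real algorithm is not described here, so treat this as illustration only.
from dataclasses import dataclass


@dataclass
class Mentor:
    name: str
    focus_areas: set      # e.g. {"alignment research", "interpretability"}
    hours_per_month: int  # stated availability


@dataclass
class Mentee:
    name: str
    interests: set        # skills or topics the mentee wants help with


def interest_overlap(mentor, mentee):
    """Jaccard similarity between a mentor's focus areas and a mentee's interests."""
    union = mentor.focus_areas | mentee.interests
    return len(mentor.focus_areas & mentee.interests) / len(union) if union else 0.0


def best_match(mentors, mentee):
    """Return the available mentor whose focus areas best overlap the mentee's interests."""
    available = [m for m in mentors if m.hours_per_month > 0]
    return max(available, key=lambda m: interest_overlap(m, mentee))


mentors = [
    Mentor("Sarah Chen", {"alignment research", "interpretability"}, 6),
    Mentor("James Wilson", {"ai governance", "ai policy"}, 5),
    Mentor("Anika Patel", {"safety engineering", "robustness"}, 4),
]
mentee = Mentee("Miguel Rodriguez", {"interpretability", "research methodology"})
print(best_match(mentors, mentee).name)  # -> Sarah Chen
```

In practice, a matching system would likely also weigh availability, time zones, and career stage, all of which appear in the profiles below.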

Featured Mentors

Dr. Sarah Chen

Principal Researcher at Anthropic

Expert in AI alignment with 8+ years of experience in interpretability research.

Key Research:

"Understanding Latent Objectives in LLMs" (NeurIPS 2024), "Interpretability Methods for Transformer Models" (ACL 2022)

Mentorship Approach:

Structured biweekly meetings with clear goals, plus help with paper drafting and research direction.

Focus Areas:

Alignment Research, Interpretability
Available: 4-6 hrs/month
Previously mentored: 8 researchers

James Wilson

Policy Researcher at GovAI

Specializes in AI governance and international policy coordination for AI safety.

Key Research:

"Regulatory Approaches to Advanced AI" (2024), "International Coordination on AI Safety Standards" (2023)

Mentorship Approach:

Reading assignments and policy discussions, plus introductions to relevant contacts in his network.

Focus Areas:

AI Governance, AI Policy
Available: 3-5 hrs/month
Previously mentored: 12 policy analysts

Dr. Anika Patel

Research Lead at DeepMind

Focuses on robustness and safety guarantees in advanced AI systems.

Key Research:

"Formal Verification Methods for Deep RL Systems" (ICML 2024), "Safety Guarantees for Multi-Agent Systems" (2023)

Mentorship Approach:

Collaborative projects that tackle concrete safety problems, with hands-on technical feedback.

Focus Areas:

Safety Engineering, Robustness
Available: 4 hrs/month
Previously mentored: 6 engineers

Featured Mentees

Miguel Rodriguez

PhD Student at MIT

Researching neural network interpretability with a focus on language models.

Skills Seeking:

Research methodology, Publication strategy

Career Goals:

Contribute to alignment research at organizations like Anthropic or MIRI after completing his PhD.

Courses Completed:

AI Alignment Fast-Track, Intro to Transformative AI

Hopes to gain: Guidance on research direction and publication strategy

Availability: 3-5 hours weekly for mentorship activities

Yuki Tanaka

Policy Researcher at AI Policy Institute

Exploring AI governance frameworks and regulatory approaches in Asia.

Skills Seeking:

Policy analysis, International coordination

Career Goals:

Work at international organizations on AI governance or join AI policy teams at research labs.

Courses Completed:

Governance Fast-Track, Economics of Transformative AI

Hopes to gain: Guidance on international AI governance approaches and networking

Availability: Evenings and weekends (JST), 2-3 hours weekly

Olivia Johnson

ML Engineer at Robust AI Startup

Experienced ML engineer looking to transition into AI safety research.

Skills Seeking:

Formal verification, Safety engineering

Career Goals:

Transition from general ML engineering to dedicated AI safety research and implementation.

Courses Completed:

ML Safety Fundamentals, AGISF Technical Track

Hopes to gain: Guidance on career transition to AI safety research

Availability: Evenings and weekends, 5-8 hours weekly for projects

Ready to accelerate your AI safety journey?

Join our global community dedicated to ensuring AI systems are safe and beneficial.