Uber - 2026 PhD Software Engineer Intern (Risk), United States

Deadline: As soon as possible

Location(s)

  • United States of America (Sunnyvale, CA)

Overview

Interns at Uber don’t just observe — they build. In this PhD-level internship on the Risk Engineering team, you will tackle some of the most complex and high-stakes challenges in fraud and abuse detection, protecting the integrity of Uber’s global marketplace.

Details

Job Description

You will work at the frontier of applied AI, developing and deploying advanced machine learning systems that detect anomalous behavior, prevent abuse, and safeguard millions of real-time transactions across mobility and delivery. From training foundation models to building autonomous multi-agent systems capable of reasoning and collaboration, you’ll move beyond research prototypes to deliver scalable, production-ready solutions that operate at global scale. 

Embedded within a high-performing engineering team, you will collaborate closely with engineers, data scientists, and product partners. You’ll be trusted to own ambitious projects independently under the guidance of experienced mentors — navigating ambiguity, balancing precision and recall trade-offs, and delivering measurable impact in a fast-moving, adversarial environment. 

What you’ll do 

  • Design and develop novel machine learning algorithms to detect anomalous behavior, coordinated abuse, and emerging fraud patterns in large-scale, high-dimensional data.
  • Train and adapt foundation models (including LLMs and vision models) for risk-specific use cases such as identity verification, document understanding, behavioral reasoning, and agent-based decision systems.
  • Leverage techniques such as knowledge graphs, similarity search, reinforcement learning, and multi-agent architectures to build intelligent, autonomous risk detection systems.
  • Navigate ambiguous and adversarial environments, making thoughtful technical trade-offs between model performance, latency, explainability, and operational impact.
  • Collaborate cross-functionally with engineering, product, operations, and data science teams to translate technical innovation into measurable business outcomes.

Description of Ideal Candidate

Basic Qualifications 

  • Currently enrolled in a Ph.D. program in Computer Science, Machine Learning, Statistics, Artificial Intelligence, or a related quantitative field.
  • Must have at least one semester or quarter of education remaining following the completion of the 12-week internship.

Preferred Qualifications 

  • Strong Python coding skills, with the ability to write clean, production-quality code.
  • Deep expertise in one or more areas such as machine learning, anomaly detection, graph learning, reinforcement learning, large language models, computer vision, or multi-agent systems.
  • Experience building or researching fraud detection, trust & safety, adversarial ML, or large-scale risk modeling systems.
  • Demonstrated ability to deploy models in production environments and work with scalable systems and tools (e.g., Spark, Ray, distributed training frameworks, AWS/GCP).
  • Experience designing systems that balance model performance with interpretability, fairness, and operational constraints.
  • A track record of impactful research contributions or substantial technical projects in your field.
  • A resilient mindset with the curiosity and persistence required to stay ahead of evolving adversarial threats.



find-dream
Search from 1770 opportunities in 164 countries

Internships, scholarships, student conferences and competitions.