MLOps Research & Development Innovation

With more than 50 years of combined industry and academic experience in AI and ML Operations, Arthur is the only company to adopt a research-led approach to product development.

Our expert researchers and experimental approach drive exclusive capabilities in computer vision, NLP, bias mitigation, and other critical areas.

Together, we can shape the future of model operations while optimizing ML models for accuracy, explainability, and fairness to ensure compliance in highly regulated industries.

From the lab to the boardroom, we partner with data scientists, ML directors, and AI Center of Excellence leadership worldwide to launch real-world solutions. As enterprises embark on their AI maturity journey, we share researcher insights, advance whiteboard ideas, empower best practices, benchmark industry metrics, and inspire thought leadership.

TRUSTED INSTITUTIONS

University Research Experience

Georgetown
Carnegie Mellon
Harvard
NYU
University of Washington
University of Texas
University of Maryland
Brown

John Dickerson

Chief Scientist & Co-Founder

John solves problems at the intersection of economics and artificial intelligence using techniques from machine learning, stochastic optimization, and computational social choice with a focus on healthcare. Grant work includes NIST, DARPA, ARPA-E, NIH (R01), NSF, and Google.

Expertise

Assistant Professor in the Department of Computer Science at the University of Maryland

Computer Science PhD, Carnegie Mellon

Keegan Hines

VP of Machine Learning

Keegan previously served as Director of Machine Learning Research at Capital One, where he developed applications of ML across key areas of financial services.

Expertise

Adjunct Assistant Professor in the Data Science Program at Georgetown University (Machine Learning & Deep Learning)

Neuroscience PhD, University of Texas

Co-Founder & Chair, Conference on Applied Machine Learning in Information Security (CAMLIS)

Jessica Dai

Machine Learning Engineer

Jessica's research interests include fairness, auditing, and characterizing and understanding the behavior of machine learning models.

In addition to her work at Arthur and Brown, Jessica has previously collaborated with researchers at Carnegie Mellon and Harvard.

Expertise

Computer Science ScB, Brown University

Fair machine learning: academic literature & application settings

Machine learning and policy

Kweku Kwegyir-Aggrey

Machine Learning Science Research Fellow

Kweku is broadly interested in machine learning and statistics, with a specific focus on the design of algorithms that audit machine learning models for fairness and robustness. He is interested in questions that rigorously examine and critique data-driven technological solutionism.

Expertise

PhD Candidate in the Brown University Department of Computer Science

Computer Science and Mathematics BS, University of Maryland

Auditing machine learning algorithms in real world settings

Publication Library

Achieving Downstream Fairness with Geometric Repair

In this work, we propose a preliminary approach to the problem of producing fair probabilities such that fairness can be guaranteed for downstream users of the model, a guarantee we term all-threshold fairness.

Amortized Generation of Sequential Counterfactuals for Black Box Models

We propose a novel stochastic-control-based approach that generates sequential Algorithmic Recourses (ARs); the approach is model-agnostic and treats the underlying model as a black box.

Counterfactual Explanations for Machine Learning: Challenges Revisited

Leveraging recent work outlining desirable properties of CFEs and our experience running the ML wing of a model monitoring startup, we identify outstanding obstacles hindering CFE deployment in industry.

Counterfactual Explanations for Machine Learning: A Review
NeurIPS 2020 Workshop on ML Retrospectives, Best Paper Award

Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries, making them appealing to fielded systems in high-impact areas such as finance and healthcare.

From Publishing to Practice: Bringing AI Model Monitoring to a Healthcare Setting (FAccT 2021)

The FATE and robustness in AI/ML communities continue to develop techniques for measuring and partially mitigating forms of bias. Yet, translation of those techniques to “boots on the ground” healthcare settings comes with challenges.

Presentations

NeurIPS

November 28 - December 3, 2022
New Orleans, LA

Conference Highlights & Talks

FAccT, March 2021

Bringing AI Model Governance to a Healthcare Setting

From academic publishing to real-world practice, learn how Humana and Arthur worked together to transform the third largest health insurance provider in the nation.