
Public research, reproducible methods

We publish our methods because action systems should be inspectable. The work is designed to be challenged, repeated, and tied back to real decisions.

Read our preprint

Public record of rigor

Why make it public

Most AI claims fall apart when you ask for the protocol. We publish ours.

Our work bridges academic behavioral science and enterprise deployment by reproducing landmark studies, validating them against real outcomes, and translating them into systems leaders can actually use.

That means replicable experiments, explicit assumptions, and published failure cases.

If the method cannot be inspected, it should not be trusted.

First-party validation

Our research

Our preprint validates behavioral simulation against 350+ published human studies across 20+ domains.

We also maintain an archive of the methods that matter: choice modeling, causal reasoning, and experimental design.

350+ studies. 93% accuracy. Public methodology.

Research themes

01

Simulated human behavior

How far can language models stand in for real participants, and where do those simulations break?

Our preprint
02

Choice modeling

From McFadden's choice modeling to modern synthetic respondents, we focus on methods that predict trade-offs rather than surface preferences.

Third-party archive
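The trade-off framing above can be sketched with the conditional (multinomial) logit at the core of McFadden-style choice modeling. This is a minimal illustration, not our production method; the utility weights and product options below are invented for the example:

```python
import math

def choice_probabilities(utilities):
    """Conditional logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative systematic utility: V = beta_price * price + beta_quality * quality
# (weights and options are hypothetical, chosen only to show the mechanics)
beta_price, beta_quality = -0.4, 0.8
options = [(3.0, 4.0), (5.0, 5.0), (2.0, 2.0)]  # (price, quality) per alternative
utilities = [beta_price * p + beta_quality * q for p, q in options]
probs = choice_probabilities(utilities)
```

The point of the form is that predicted shares respond to trade-offs (a price increase can be offset by a quality increase), which is what separates choice modeling from surface preference ratings.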
03

Causal reasoning

We study whether models can explain the structure behind decisions instead of parroting patterns from text.

04

Applied decision science

Every theme is tied back to commerce, public policy, or organizational decision quality.

Featured work

01

Determining the Validity of Large Language Models for Automated Perceptual Analysis

A reference point for how seriously we treat evaluation, external validity, and model misuse.

02

McFadden's Choice Modeling

The econometric backbone behind serious decision modeling, not just one more AI benchmark.

03

AI-Augmented Surveys

A key bridge between traditional stated-preference methods and modern model-assisted experimentation.

04

Assessing Causal Reasoning in Language Models

Important because decision systems that cannot reason causally should not be trusted with consequential optimization.

05

Large Language Models as Simulated Economic Agents

Useful for understanding how synthetic populations can support market and policy questions.

06

Protecting the Integrity of Survey Research

A reminder that measurement quality and data fraud remain operational concerns, not academic footnotes.

// Methodology

How the lab works

Research is useful only when the method survives contact with audit, replication, and deployment.

01 ::

Replication first

We begin by reconstructing published work or explicit decision environments before making any claim about model capability.

02 ::

Observed outcomes

Simulations are paired with real human data whenever possible so accuracy claims are grounded in behavior, not vibes.
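As a minimal sketch of what pairing simulation with observed data can look like, here is an illustrative comparison of simulated versus observed choice shares; all numbers are invented for the example, and real validation uses richer metrics:

```python
def share_error(simulated, observed):
    """Mean absolute error between simulated and observed choice shares."""
    assert len(simulated) == len(observed)
    return sum(abs(s - o) for s, o in zip(simulated, observed)) / len(simulated)

simulated = [0.42, 0.35, 0.23]  # hypothetical model output
observed = [0.45, 0.30, 0.25]   # hypothetical human data
mae = share_error(simulated, observed)
```

Reporting an explicit error against observed behavior is what keeps an accuracy claim auditable rather than anecdotal.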

03 ::

Failure surfaces

We publish where methods underperform because safe deployment depends on understanding edge cases, not hiding them.

04 ::

Versioned protocols

Prompts, assumptions, and scoring logic are versioned so collaborators can reproduce, extend, or dispute the result.

05 ::

Privacy-conscious data use

Sensitive datasets are minimized, anonymized, or replaced with synthetic equivalents when possible.

06 ::

Commerce translation

Research outputs are written for operating teams, not only for academic readers.

Frequently asked questions

// Research inquiries

Contact us at partners@subconscious.ai for research collaboration.

01 ::

Do you only publish positive results?

No. Negative results and boundary conditions are part of the record because deployment quality depends on knowing where a method fails.

02 ::

Can enterprise teams audit the work?

Yes. We can share protocols, benchmark framing, and methodological detail with research and procurement stakeholders.

03 ::

How is this different from a content library?

The archive is curated around questions we believe matter for decision systems: validity, replication, causal structure, and operational usefulness.

04 ::

Can we collaborate?

Yes. We work with research labs, enterprise teams, and institutional partners on studies, benchmark design, and applied experiments.

// Archive at a glance

Research signals

Signal | Current view | Why it matters
Referenced works | 350+ | A working archive large enough to trace methods, disagreements, and trend shifts
Core lens | Replicable experiments | We prioritize work that can be tested and challenged
Primary bridge | Behavioral science to commerce | Research is curated for operational relevance, not citation theater
Decision standard | Show the work | Methods matter as much as outcomes when consequences are high
Benchmark scale | 350+ human studies | The largest synthetic AI behavioral benchmark published to date

// Research to practice

Why this matters now

As model capabilities improve, the gap between a persuasive answer and a trustworthy answer gets more dangerous. Publishing our work is how we keep that distinction visible.

We want large organizations to adopt causal experimentation with the same seriousness they apply to finance, security, and legal review, because the decisions shaped by these systems are just as consequential.

Collaborate on research to learn how behavioral simulation can transform your decision-making.