Economists Adopting AI Tools

The economics profession is rapidly integrating AI tools into academic workflows.

Source: John Cochrane's blog post on Refine

Overview

John Cochrane, a prominent economist, publicly documented his experience using AI tools (Refine.ink and Claude) for academic work and found them revolutionary. Arnold Kling's commentary highlights the broader significance for the economics profession.

Key Experiences

Refine.ink for Academic Feedback

Cochrane submitted his inflation booklet to Refine.ink (an AI tool for refining academic articles, built by Yann Calvó López and Ben Golub):

"The results are stunning. The comments it offered were on the par of the best comments I've received on a paper in my entire academic career. And more concise and organized than the best ones."

Refine's capabilities:

  • Identified the core arguments of an 80-page paper
  • Found algebra errors (e.g., a negative-sign error in a differential equation solution)
  • Suggested tightening the presentation and operationalizing claims
  • Provided referee reports of top-5% quality

Impact on profession:

  • Editors could feed every paper to Refine on receipt
  • Authors could submit papers along with a Refine report
  • Referees could run Refine before writing their reports
  • The result should be better-written papers and enormous time savings

Claude for Code Generation

Cochrane used Claude for data visualization:

"I also tried Claude to update some graphs. My prompt was just 'write a matlab program that fetches data series xyz from Fred using the API, and make a graph that..' with pretty detailed description of the graph. It ran right out of the box, even doing a decent job of 'put text labels on the graph in a way that doesn't conflict with the plotted time series.' It went on and did things I didn't ask for, like offer summary statistics! Still, an hour job took 5 minutes."

Challenges:

  • Claude didn't find the correct FRED data series (a small manual task)
  • Produced code with unfamiliar commands (new verification challenge)
  • Went beyond prompt (offered unsolicited summary statistics)
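The data-fetching step Cochrane describes (pulling a series from FRED via its API, then plotting) can be sketched as follows. This is an illustration in Python rather than the MATLAB of the anecdote; the series ID `CPIAUCSL` is a real FRED identifier chosen as an example, and the API key is a placeholder:

```python
from urllib.parse import urlencode

# Base endpoint of the FRED "series/observations" API
FRED_BASE = "https://api.stlouisfed.org/fred/series/observations"

def fred_observations_url(series_id, api_key, file_type="json"):
    """Build the request URL for fetching a FRED data series' observations."""
    params = urlencode({
        "series_id": series_id,   # e.g. "CPIAUCSL" (CPI for All Urban Consumers)
        "api_key": api_key,       # placeholder; a real key comes from fred.stlouisfed.org
        "file_type": file_type,   # JSON response rather than the default XML
    })
    return f"{FRED_BASE}?{params}"

# Example request URL (no network call made here):
url = fred_observations_url("CPIAUCSL", "YOUR_API_KEY")
```

Fetching the URL (e.g., with `urllib.request` or `requests`) returns JSON observations that can then be graphed; placing text labels so they don't collide with the plotted series, as Cochrane's prompt asked, is the kind of fiddly detail the generated code handled.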

The Call to Action

"This is all old news to most of my colleagues, who are integrating AI into workflows with great speed. But if you're not using these tools, the time to start is now."

Arnold Kling's Commentary

Kling tested Claude Opus 4.6 ($20/month) on the same paper, asking it to write a referee's report. Claude produced similarly high-quality feedback despite accessing only the first four chapters.

Key question raised: How much value does domain expertise (Refine.ink's economics specialization) add over general frontier models (Claude)?

  • If domain expertise adds significant value → VCs should back domain-specific AI startups
  • Contrary view: Frontier models will master domains without significant training/modifications

Implications

For Academia

  • Refereeing and paper evaluation will be radically impacted
  • Quality and speed of peer review will jump
  • Economists will save enormous time
  • Better-written papers overall

For AI Startups

  • Debate over value of domain-specific vs. frontier models
  • Will general-purpose LLMs (Claude, GPT) handle domain expertise naturally?
  • Or do specialized tools (Refine.ink for economics) provide lasting value?

Concerns (Cochrane's Update)

  • LLM capture: Future readers may only consume LLM digests, not original work
  • Training data imperative: Getting one's work into LLM training sets becomes critical (like SEO in the 1990s)
  • Methodological bias: LLMs could enforce "settled science" or methodological orthodoxy
  • Bullshit detection: Need quantitative evaluation of academic bullshit (fancy equations wrapping empty claims)