
Economics Letters 250 (2025) 112265


The adoption of Large Language Models in economics research



Maryam Feyzollahi ∗, Nima Rafizadeh
Department of Resource Economics, University of Massachusetts Amherst, USA

ARTICLE INFO

JEL classification: O33, C81, D83, A23

Keywords: Economics research; Large Language Models; Natural language processing; Scientific production; Technological adoption

ABSTRACT

This paper develops a novel methodology for estimating the adoption of Large Language Models (LLMs) in economics research by exploiting their distinctive linguistic footprint. Using a rigorously constructed difference-in-differences framework, the analysis examines 25 leading economics journals over 24 years (2001–2024), analyzing differential frequencies between LLM-characteristic terms and conventional economic language. The empirical findings document significant and accelerating LLM adoption following ChatGPT's release, with a 4.76 percentage point increase in LLM-associated terms during 2023–2024. The effect more than doubles from 2.85 percentage points in 2023 to 6.67 percentage points in 2024, suggesting rapid integration of language models in economics research. These results, robust across multiple fixed effects specifications, provide the first systematic evidence of LLM adoption in economics research and establish a framework for estimating technological transitions in scientific knowledge production.

1. Introduction

The integration of Artificial Intelligence (AI) in economics research has emerged as a transformative phenomenon, advancing methodological frontiers across the discipline. Applications span from deep learning for data extraction (Dell, 2024) to algorithmic pricing (Calvano et al., 2020), algorithmic hypothesis generation (Ludwig and Mullainathan, 2024), distributional welfare analysis (Corral et al., 2025), spatial economics (Khachiyan et al., 2022), heterogeneous agent modeling (Azinovic et al., 2022), and computational macroeconomics (Kahou et al., 2024). Within this broad landscape of AI applications, the adoption of Large Language Models (LLMs) represents a particularly significant development. These models, exemplified by ChatGPT, Claude, and Gemini, offer unprecedented capabilities for conducting literature reviews, analyzing data, writing code, and preparing manuscripts (Korinek, 2023).

As economists navigate this technological transition, Ludwig et al. (2025) establish foundational guidelines for integrating LLMs into empirical research. While their work advances the methodological framework for LLM implementation, the extent of actual LLM adoption in economics research remains unknown. This paper fills this gap by developing a novel proxy-based methodology that exploits the distinctive linguistic patterns of LLMs to estimate their adoption across the discipline. Our methodological approach specifically addresses two fundamental challenges identified by Van Dis et al. (2023): detecting model-generated content in academic papers and accounting for the increasing sophistication of LLM outputs. These challenges, combined with heterogeneous disclosure requirements across academic journals (Dwivedi et al., 2023), highlight the need for systematic estimation approaches that can inform evidence-based policies for maintaining research integrity.

In the absence of established detection mechanisms, we show that LLM assistance leaves subtle but detectable patterns in academic writing, manifesting through systematic variations in word choice and expression style.1 We construct two rigorously curated word sets: a treatment group capturing these LLM-characteristic linguistic patterns, and a control group comprising fundamental economic terminology with historically stable usage patterns. Employing a difference-in-differences (DiD) framework, we analyze the differential frequency patterns between these groups across leading economics journals before and after the widespread availability of ChatGPT, thereby providing the first systematic empirical evidence of LLM adoption in economics research.

The study offers both methodological and empirical contributions to understanding the impact of language models on academic research. Through linguistic pattern analysis, it provides the first systematic evidence of LLM adoption in economics research, advancing

∗ Corresponding author.
E-mail addresses: [email protected] (M. Feyzollahi), [email protected] (N. Rafizadeh).

1 While these linguistic markers capture a significant dimension of LLM influence, we acknowledge that they may provide conservative estimates given both the ongoing pipeline of LLM-assisted papers amid publication lags and the broader spectrum of LLM applications in research production beyond writing, such as data analysis and code generation.

https://doi.org/10.1016/j.econlet.2025.112265
Received 10 November 2024; Received in revised form 22 February 2025; Accepted 3 March 2025
Available online 14 March 2025
0165-1765/© 2025 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.

Table 1
Selected journals and their fields.
Journal Field
Economics Letters General Economics and Methods
European Economic Review General Economics
Journal of Economic Theory Economic Theory
Journal of Economic Dynamics and Control Economic Theory and Computation
Games and Economic Behavior Game Theory and Decision Theory
Journal of Econometrics Econometric Methods
Journal of Economic Behavior and Organization Experimental and Behavioral Economics
Journal of Macroeconomics Macroeconomic Theory and Policy
Journal of Monetary Economics Macroeconomics and Monetary Theory
Review of Economic Dynamics Macroeconomic Dynamics
Journal of Financial Economics Financial Economics
Journal of International Economics International Trade and Finance
International Review of Economics and Finance International Economics
International Journal of Industrial Organization Industrial Organization
Journal of Development Economics Development and Growth
Journal of Public Economics Public Economics and Policy
Labour Economics Labor and Demographic Economics
Journal of Health Economics Health Economics
Economics of Education Review Economics of Education
Energy Economics Energy Markets and Policy
Resource and Energy Economics Natural Resource Economics
Ecological Economics Environmental and Ecological Economics
Journal of Environmental Economics and Management Environmental Economics
Journal of Urban Economics Urban Economics
Regional Science and Urban Economics Regional and Spatial Economics

All journals are Elsevier publications to ensure consistent text analysis methodology.

beyond anecdotal observations to deliver quantitative assessment of technological transition in scholarly communication. Our methodology specifically addresses current challenges in detecting LLM writing assistance while documenting diffusion patterns in academia. The findings have important implications for stakeholders across the academic landscape—researchers, journal editors, academic institutions, ethics committees, and policymakers—providing an empirical basis for developing disclosure policies and standards for LLM-assisted research.

2. Data

This study utilizes a three-dimensional dataset (Journal × Word × Year) comprising 25 leading economics journals and 50 words over a 24-year period (2001–2024). The construction of the dataset incorporates several methodological strategies to ensure accurate estimation of LLM adoption patterns.

First, the methodology employs binary counting at the paper level, where a paper contributes exactly one observation if it contains a given word, regardless of the word frequency within that paper. This approach mitigates potential bias from papers with repeated use of certain terms. Second, to address substantial variation in publication volumes across journals and time periods, we normalize measurements by calculating the proportion of papers containing each word relative to the total number of papers published in each journal-year, rather than using raw counts.2 Third, methodological consistency is maintained by restricting the analysis to three article types: research articles, review articles, and short communications. The dataset explicitly excludes auxiliary content such as editorials, book reviews, correspondence, discussion papers, errata, news items, and conference information.

Journal dimension. Our journal selection process is guided by two primary criteria. First, we restrict our analysis to Elsevier publications to ensure methodological consistency in textual analysis through the ScienceDirect platform. This platform uniformity is essential for our empirical strategy as it provides standardized word-identification protocols across all papers—particularly important for capturing morphological variations of terms (e.g., "underscore", "underscores", "underscored", "underscoring"). Given that heterogeneity in search algorithms across publishers could introduce systematic measurement error in frequency analysis, especially in the treatment of word stems and compound terms, this standardization is fundamental. Second, we seek broad representation across economic disciplines. As shown in Table 1, our selected journals span general economics, economic theory, and applied economics. This disciplinary breadth strengthens the external validity of our findings, ensuring they reflect broader patterns in LLM adoption across economics research rather than subfield-specific characteristics.

Word dimension. We construct two equally-sized word sets for our analysis: treatment words that are characteristically associated with LLM-assisted writing, and control words that represent traditional academic writing patterns, as detailed in Table 2. The treatment words are selected based on two criteria. First, we analyze a large corpus of confirmed LLM-generated academic text to identify words that appear with systematically higher frequency compared to human writing. Second, we cross-reference our selections with existing literature on language model patterns (e.g., Kobak et al., 2024; Uribe and Maldupa, 2024) to validate our choices.3 The control words are selected based on two criteria. First, these words represent established economic and econometric concepts that have maintained consistent usage patterns in academic writing over our sample period.

2 Data collection for this study was conducted during the second week of November 2024; accordingly, the use of proportional measures rather than raw counts appropriately addresses the incomplete publication cycle in 2024.

3 Our word selection methodology extends beyond the core set presented in Table 2. Through systematic corpus analysis of LLM-generated academic texts, we identified additional characteristic terms across three semantic categories: process-oriented words (e.g., articulate, delineate, elucidate), analytical expressions (e.g., framework, paradigm, robust), and contextual terms (e.g., facilitate, integrate, optimize). The selected 25 treatment words distinguish themselves through four key features: (1) highest frequency differential between LLM and human writing in our corpus analysis, (2) consistent appearance across different language models and academic disciplines, (3) minimal semantic overlap with traditional academic terminology, and (4) stable usage patterns in non-academic contexts, allowing cleaner identification of LLM influence. While robustness checks employing expanded categories support our findings, we deliberately chose this more conservative core set to maximize methodological precision. This choice prioritizes identification rigor over exhaustive coverage, though the consistency of results across alternative word combinations suggests our findings capture underlying characteristics of LLM-assisted writing rather than artifacts of specific word choice.


Second, they are semantically unrelated to our treatment words, ensuring that any potential changes in treatment word frequencies do not spill over to or correlate with control word usage through meaning associations. Importantly, these words are technical enough to appear primarily in the main text rather than in auxiliary sections like acknowledgments or references, ensuring clean measurement of their frequency.

Table 2
Selected words for analysis.

Treatment Words (25)
Bolster, Comprehensive, Contextualize, Crucial, Delve, Elevate, Empower, Encompass, Escalate, Exacerbate, Foster, Foundation, Imperative, Interplay, Intricate, Leverage, Multifaceted, Navigate, Nuance, Paramount, Resonate, Stringent, Underscore, Unravel, Unveil

Control Words (25)
Additive, Asymmetric, Asymptotic, Binary, Bounded, Coefficient, Concave, Continuous, Convex, Covariance, Discrete, Elasticity, Endogenous, Equilibrium, Exogenous, Homogeneous, Isomorphic, Linear, Monotonic, Parameter, Quadratic, Recursive, Stationary, Symmetric, Variance

Treatment words are selected based on systematic patterns in AI-assisted writing, while control words represent stable technical economic terminology with consistent usage patterns in academic writing. All words within each group are listed alphabetically.

Time dimension. Our analysis spans 24 years (2001–2024), with 2023 and 2024 constituting the treatment period following ChatGPT's public release in late 2022.4 The choice of annual frequency in our analysis is motivated by several methodological considerations. First, annual aggregation mitigates noise from seasonal fluctuations in academic publishing, particularly the systematic variation in publication volumes across academic cycles. Second, the substantial publication lags in economics journals, which typically range from several months to over a year, render higher-frequency analysis less informative by obscuring the precise timing of LLM adoption. Third, annual data provide sufficient observations for robust statistical inference while preserving meaningful variation in linguistic measures. This temporal aggregation also aligns with established practices in academic research evaluation, where impact and trends are conventionally assessed on an annual basis (Donthu et al., 2021).

The final dataset comprises 30,000 observations (25 journals × 50 words × 24 years). This three-dimensional framework enables the control of journal-specific characteristics, word-specific trends, and time-varying factors, thus isolating the impact of LLM adoption on academic writing patterns while accounting for heterogeneity across journals and systematic changes in terminology over time. Fig. 1 illustrates the comparative trends using two arbitrarily selected word pairs (intricate versus coefficient and underscore versus binary), chosen solely for visual clarity. The figure reveals parallel pre-trends before ChatGPT and marked divergence post-2022, though formal econometric analysis of these patterns follows in the next section.

3. Method and results

We exploit the release of ChatGPT as a natural experiment to identify the causal effect of LLM availability on academic writing patterns. The identification relies on a DiD framework that analyzes differential frequencies between LLM-associated terminology and conventional academic terms across pre- and post-release periods. The validity of this identification strategy rests on a central assumption: the exogeneity of treatment timing. Two institutional features strengthen this assumption. First, ChatGPT's unanticipated release in late 2022 creates a sharp temporal demarcation between pre- and post-LLM availability periods. Second, the substantial publication lags in economics journals, typically spanning several months to over a year, ensure that papers published immediately following ChatGPT's release were conceived and substantially developed without access to this technology, thereby mitigating concerns about anticipatory effects.

The baseline econometric specification takes the following form:

$y_{ijt} = \beta_0 + \beta_1 (\text{Treat}_j \times \text{Post}_t) + \theta_i + \gamma_j + \delta_t + \epsilon_{ijt}$  (1)

where $y_{ijt}$ represents the proportion of papers in journal $i$ containing word $j$ during year $t$. The indicator $\text{Treat}_j$ denotes treatment word status, while $\text{Post}_t$ marks the post-ChatGPT period. The specification includes journal ($\theta_i$), word ($\gamma_j$), and year ($\delta_t$) fixed effects to control for time-invariant journal characteristics, word-specific patterns, and common temporal trends, respectively. The coefficient of interest, $\beta_1$, captures the differential change in word frequency attributable to LLM adoption.

Building on this baseline, we employ five specifications with increasing sophistication in the fixed effects structure:

(1) Base: $y_{ijt} = \beta_0 + \beta_1 (\text{Treat}_j \times \text{Post}_t) + \theta_i + \gamma_j + \delta_t + \epsilon_{ijt}$
(2) Journal-Year: $y_{ijt} = \beta_0 + \beta_1 (\text{Treat}_j \times \text{Post}_t) + \gamma_j + \theta_{it} + \epsilon_{ijt}$
(3) Journal-Word: $y_{ijt} = \beta_0 + \beta_1 (\text{Treat}_j \times \text{Post}_t) + \delta_t + \theta_{ij} + \epsilon_{ijt}$
(4) Combined: $y_{ijt} = \beta_0 + \beta_1 (\text{Treat}_j \times \text{Post}_t) + \theta_{it} + \theta_{ij} + \epsilon_{ijt}$
(5) Full: $y_{ijt} = \beta_0 + \beta_1 (\text{Treat}_j \times \text{Post}_t) + \theta_i + \gamma_j + \delta_t + \theta_{it} + \theta_{ij} + \epsilon_{ijt}$

where $\theta_{it}$ captures time-varying journal-specific effects and $\theta_{ij}$ controls for journal-specific word usage patterns.5

Table 3 presents the empirical findings across three panels, each representing a different treatment period specification. The analysis begins with Panel A, which employs both 2023 and 2024 as the post-treatment period, establishing the aggregate impact of LLM availability. The subsequent decomposition in Panels B and C isolates the treatment effects for 2023 and 2024 respectively, enabling examination of the temporal evolution in LLM adoption patterns. For each panel, the analysis presents estimates across multiple specifications with progressively more demanding fixed effects structures to assess the robustness of the findings. To ensure valid statistical inference, the standard errors are clustered at the journal level throughout all specifications, allowing for arbitrary within-journal error correlation while maintaining independence across journals.

The results reveal compelling evidence of increasing LLM adoption over time. When considering both post-treatment years (Panel A), the analysis documents a significant increase of 4.76 percentage points in the frequency of LLM-associated terms, with the effect maintaining remarkable stability across all specifications. This stability suggests the findings are robust to various fixed effects structures and are not driven by journal-specific word preferences, temporal changes in word usage, or journal-specific time trends. The temporal decomposition in Panels B and C reveals an accelerating pattern of LLM adoption.

4 ChatGPT was released on November 30, 2022. Given typical publication lags in economics journals, we classify December 2022 publications in the control period, as these papers were likely submitted and revised before ChatGPT's release.

5 We do not include year × word fixed effects in our specifications, as their inclusion would preclude identification of our treatment effect $\beta_1$ due to perfect multicollinearity. To see this formally, let $\mathbf{D}_{jt} = \text{Treat}_j \times \text{Post}_t$ denote our treatment indicator. Then $\mathbf{D}_{jt}$ can be expressed as a linear combination of the year × word fixed effects: $\mathbf{D}_{jt} = \sum_{j} \sum_{t} (\gamma\delta)_{jt} \mathbf{1}_{jt}$, where $\mathbf{1}_{jt}$ are indicator variables for each year–word pair. More precisely, since $\text{Post}_t = \mathbf{1}\{t \geq 2023\}$ and $\text{Treat}_j$ is a binary word-type indicator, the treatment variable lies in the column space of the year × word fixed effects matrix, i.e., $\mathbf{D}_{jt} \in \text{span}\{(\gamma\delta)_{jt}\}_{j,t}$. This perfect linear dependence violates the full rank condition necessary for identification in the DiD framework. This is a specific case of the more general identification challenge in DiD designs where including fixed effects at the same level as the treatment assignment renders the treatment effect non-identifiable due to perfect multicollinearity with the fixed effects structure.
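The baseline specification (1) with journal-clustered standard errors can be sketched as follows. This is a minimal illustration on simulated data with a built-in 4.76 percentage point effect, since the paper's dataset is not public; the variable names, toy panel dimensions, and use of statsmodels are our assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated journal x word x year panel in the shape of the paper's dataset
# (toy dimensions; the real data are 25 journals x 50 words x 24 years).
journals = [f"J{i}" for i in range(6)]
words = [f"llm_word{i}" for i in range(5)] + [f"econ_word{i}" for i in range(5)]
years = range(2015, 2025)

rows = []
for j in journals:
    for w in words:
        for t in years:
            treat = int(w.startswith("llm_"))   # treatment-word indicator
            post = int(t >= 2023)               # post-ChatGPT period
            # Outcome: share of papers containing the word, with a true
            # 4.76 pp treatment effect after release plus noise.
            y = 0.05 + 0.0476 * treat * post + rng.normal(0, 0.005)
            rows.append({"journal": j, "word": w, "year": t,
                         "treat": treat, "post": post, "y": y})
df = pd.DataFrame(rows)

# Specification (1): journal, word, and year fixed effects as dummies;
# standard errors clustered at the journal level.
res = smf.ols("y ~ treat:post + C(journal) + C(word) + C(year)", data=df).fit(
    cov_type="cluster",
    cov_kwds={"groups": pd.factorize(df["journal"])[0]},
)
print(res.params["treat:post"])  # recovers roughly 0.0476
```

For the full 30,000-observation panel, a dedicated fixed-effects estimator (absorbing the dummies rather than materializing them) would scale better, but the dummy-variable OLS above is numerically equivalent for the baseline specification.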


Fig. 1. Comparative word frequency trends in economics publications (2001–2024). Each point represents the mean proportion of papers containing the respective word across all journals in that year. To facilitate visual comparison, the treatment group proportions are normalized by a scaling factor that equalizes the pre-treatment averages between groups (scaling factor = (control group pre-2023 mean)/(treatment group pre-2023 mean)). Shaded areas represent 95% confidence intervals (±1.96 standard errors) around the mean proportions, with standard errors clustered at the journal level. The vertical dashed line marks ChatGPT's release in 2022.
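The caption's normalization can be sketched directly; the series below are illustrative numbers we made up, not the paper's data.

```python
# Illustrative annual series: mean share of papers containing treatment
# vs. control words (made-up values, not the paper's data).
years = [2019, 2020, 2021, 2022, 2023, 2024]
treatment = [0.10, 0.11, 0.10, 0.11, 0.16, 0.20]
control = [0.30, 0.31, 0.32, 0.31, 0.31, 0.32]

# Scaling factor from the caption:
# (control group pre-2023 mean) / (treatment group pre-2023 mean).
pre = [i for i, t in enumerate(years) if t < 2023]
control_pre_mean = sum(control[i] for i in pre) / len(pre)
treatment_pre_mean = sum(treatment[i] for i in pre) / len(pre)
scaling_factor = control_pre_mean / treatment_pre_mean

# After rescaling, the two series share the same pre-treatment average,
# so any post-2022 divergence between the plotted lines is visible directly.
treatment_scaled = [v * scaling_factor for v in treatment]
```

The point of the rescaling is purely visual: it changes neither series' shape, only the treatment series' level, so parallel pre-trends and post-release divergence can be read off one axis.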

The initial impact in 2023 (Panel B) shows an increase of 2.85 percentage points, while the effect more than doubles to 6.67 percentage points in 2024 (Panel C). This escalating pattern suggests accelerating LLM adoption as researchers become more familiar with these tools and publication lags begin to clear.

The magnitude and evolution of these effects merit particular attention from both statistical and economic perspectives. The treatment effects are precisely estimated across all specifications, with standard errors clustered at the journal level indicating strong statistical significance at the 1% level. The economic significance is equally noteworthy—the documented effects represent substantial deviations from pre-treatment patterns in academic writing. Moreover, these estimates likely constitute a lower bound on actual LLM adoption for two main reasons. First, given the substantial publication lags in economics journals, many papers conceived and written with LLM assistance remain in the publication pipeline. Second, the linguistic markers capture only one dimension of LLM assistance in research production, potentially missing other forms of language model usage such as data analysis or code generation.6

4. Conclusion

This paper develops a novel methodology for estimating LLM adoption in economics research through analysis of linguistic patterns. The difference-in-differences framework, applied to 25 leading economics journals over 24 years, documents significant and accelerating adoption of language models following ChatGPT's release. The analysis reveals a 4.76 percentage point increase in LLM-characteristic terminology during 2023–2024, with the effect rising from 2.85 percentage points in 2023 to 6.67 percentage points in 2024. These findings remain robust across multiple specifications and likely represent lower bounds on actual LLM adoption given publication lags and unobserved dimensions of language model assistance.

The methodological framework demonstrates the feasibility of systematic estimation of LLM adoption in academic research without relying on direct detection or self-reporting. By focusing on linguistic patterns while controlling for journal characteristics, word-specific trends, and temporal effects, the approach establishes a basis for studying technological transitions in research production. The findings carry immediate implications for research integrity policies, highlighting the need for standardized disclosure requirements and ethical guidelines across academic journals.

6 Beyond these main factors, several additional mechanisms reinforce the lower-bound nature of our estimates: (1) authors may deliberately modify LLM-generated text to mask its origin, (2) newer language models may produce text with less detectable linguistic patterns, and (3) LLMs might influence research design and methodology in ways that do not manifest in writing style. Furthermore, as language model capabilities expand into areas such as theorem proving and empirical specification search, their impact on research may become increasingly subtle and complex to measure.


Table 3
Difference-in-differences estimates of LLM adoption.
(1) (2) (3) (4) (5)
Panel A: Treatment Period 2023–2024
Treatment × Post 0.0476*** 0.0476*** 0.0476*** 0.0476*** 0.0476***
(0.0101) (0.0102) (0.0103) (0.0104) (0.0104)
Constant 0.00993 0.0354*** 0.0345*** 0.0345*** 0.0599***
(0.0128) (0.0124) (0.0125) (0.00820) (0.000433)
Observations 30,000 30,000 30,000 30,000 30,000
R-squared 0.635 0.709 0.757 0.831 0.831
Panel B: Treatment Period 2023 Only
Treatment × Post 0.0285*** 0.0285*** 0.0285*** 0.0285*** 0.0285***
(0.00884) (0.00893) (0.00903) (0.00912) (0.00912)
Constant 0.00933 0.0345*** 0.0366*** 0.0619*** 0.0619***
(0.0127) (0.00820) (0.0123) (0.000198) (0.000198)
Observations 28,750 28,750 28,750 28,750 28,750
R-squared 0.632 0.708 0.753 0.829 0.829
Panel C: Treatment Period 2024 Only
Treatment × Post 0.0667*** 0.0667*** 0.0667*** 0.0667*** 0.0667***
(0.0120) (0.0122) (0.0123) (0.0124) (0.0124)
Constant 0.00998 0.0347*** 0.0354*** 0.0601*** 0.0601***
(0.0128) (0.00834) (0.0122) (0.000270) (0.000270)
Observations 28,750 28,750 28,750 28,750 28,750
R-squared 0.630 0.707 0.751 0.828 0.828
Journal FE Yes No No No Yes
Year FE Yes No Yes No Yes
Word FE Yes Yes No No Yes
Journal × Year FE No Yes No Yes Yes
Journal × Word FE No No Yes Yes Yes

Dependent variable is the proportion of papers containing each word. Standard errors in parentheses are clustered at the
journal level. *** p<0.01, ** p<0.05, * p<0.1.
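A quick arithmetic cross-check on Table 3 (our back-of-the-envelope note, not part of the paper's analysis): because 2023 and 2024 contribute equal numbers of journal-word cells, the pooled Panel A coefficient should be approximately the average of the year-specific Panel B and C coefficients, which the reported estimates satisfy exactly.

```python
# Reported treatment effects from Table 3.
effect_2023 = 0.0285   # Panel B (2023 only)
effect_2024 = 0.0667   # Panel C (2024 only)
pooled = 0.0476        # Panel A (2023-2024)

# With balanced post-treatment cells, the pooled DiD coefficient is the
# average of the year-specific effects: (0.0285 + 0.0667) / 2 = 0.0476.
check = (effect_2023 + effect_2024) / 2
print(abs(check - pooled) < 1e-4)  # True
```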

Future work could extend this framework in two important directions. First, combining our linguistic identification strategy with citation network analysis could reveal how LLM adoption patterns influence research productivity and innovation trajectories in economics. Second, our methodology could be leveraged to examine the differential impact of LLM adoption on various aspects of economic research, from empirical methodology to theoretical development, thereby illuminating how these tools reshape knowledge production in economics.

Data availability

Data will be made available on request.

References

Azinovic, M., Gaegauf, L., Scheidegger, S., 2022. Deep equilibrium nets. Internat. Econom. Rev. 63 (4), 1471–1525.
Calvano, E., Calzolari, G., Denicolo, V., Pastorello, S., 2020. Artificial intelligence, algorithmic pricing, and collusion. Am. Econ. Rev. 110 (10), 3267–3297.
Corral, P., Henderson, H., Segovia, S., 2025. Poverty mapping in the age of machine learning. J. Dev. Econ. 172, 103377.
Dell, M., 2024. Deep Learning for Economists. Technical report, National Bureau of Economic Research.
Donthu, N., Kumar, S., Mukherjee, D., Pandey, N., Lim, W.M., 2021. How to conduct a bibliometric analysis: An overview and guidelines. J. Bus. Res. 133, 285–296.
Dwivedi, Y.K., Kshetri, N., Hughes, L., Slade, E.L., Jeyaraj, A., Kar, A.K., Baabdullah, A.M., Koohang, A., Raghavan, V., Ahuja, M., et al., 2023. Opinion paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manage. 71, 102642.
Kahou, M.E., Fernández-Villaverde, J., Gómez-Cardona, S., Perla, J., Rosa, J., 2024. Spooky Boundaries at a Distance: Inductive Bias, Dynamic Models, and Behavioral Macro. Technical report, National Bureau of Economic Research.
Khachiyan, A., Thomas, A., Zhou, H., Hanson, G., Cloninger, A., Rosing, T., Khandelwal, A.K., 2022. Using neural networks to predict microspatial economic growth. Am. Econ. Review: Insights 4 (4), 491–506.
Kobak, D., Márquez, R.G., Horvát, E.-Á., Lause, J., 2024. Delving into ChatGPT usage in academic writing through excess vocabulary. arXiv preprint arXiv:2406.07016.
Korinek, A., 2023. Generative AI for economic research: Use cases and implications for economists. J. Econ. Lit. 61 (4), 1281–1317.
Ludwig, J., Mullainathan, S., 2024. Machine learning as a tool for hypothesis generation. Q. J. Econ. 139 (2), 751–827.
Ludwig, J., Mullainathan, S., Rambachan, A., 2025. Large Language Models: An Applied Econometric Framework. Technical report, National Bureau of Economic Research.
Uribe, S.E., Maldupa, I., 2024. Estimating the use of ChatGPT in dental research publications. J. Dent. 149, 105275.
Van Dis, E.A., Bollen, J., Zuidema, W., Van Rooij, R., Bockting, C.L., 2023. ChatGPT: Five priorities for research. Nature 614 (7947), 224–226.
