“Why do some nudges work and others not?” — this is the title of the latest scientific article co-authored by researchers from NHF EUBA. Matej Lorko (NHF), Tomáš Miklánek (VŠE Prague), and Maroš Servátka (NHF and Macquarie University Sydney) explore why some nudges are effective while others are not, particularly in the area of tax compliance. The article presents the results of the very first economic experiment conducted in the BEE4R Lab at the faculty. It was published in the Journal of Economic Behavior & Organization. We spoke with Matej Lorko about this achievement.
What is the topic of your article?
In the past decade, there has been an intense discussion about the idea of behavioral nudging, through which governments (and businesses) can achieve changes in human behavior — even without commands, bans, or changes in economic incentives. In the area of tax compliance, several countries, including Slovakia, have introduced nudges. Some tax nudges appeal to morality, others to social norms or the creation of public goods. However, their results have often been mixed. One possible reason for these failures is that many governments have misunderstood nudges — as if they could change human decisions regardless of economic incentives.
In our experiment, we therefore focused on the relationship between nudges and economic incentives. We assumed that this relationship determines which nudges will work and which will not.
Can you explain this with a simple example — how can we imagine a working or failing nudge?
In our experiment, participants repeatedly earned money through a simple task and could always decide whether to declare their income and pay tax on it or not. For some, we set the probability of a tax audit at 10%, and for others at 60%. Given the size of the possible fine, it was not worth it for the first group to declare income, while for the second group it was.
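The intuition behind "worth it" here is a simple risk-neutral expected-value comparison. The sketch below illustrates it; the tax rate, income, and fine multiplier are illustrative assumptions, not the experiment's actual parameters:

```python
def expected_gain_from_evading(income, tax_rate, audit_prob, fine_multiplier):
    """Expected payoff of evading minus declaring, in a simple risk-neutral model.

    Declaring: keep income * (1 - tax_rate) with certainty.
    Evading: keep the full income, but if audited, pay the evaded tax
    plus a fine proportional to it.
    """
    tax = income * tax_rate
    declare_payoff = income - tax
    evade_payoff = income - audit_prob * (tax + fine_multiplier * tax)
    return evade_payoff - declare_payoff

# Hypothetical parameters: income 100, 20% tax, fine of twice the evaded tax.
low_audit = expected_gain_from_evading(100, 0.20, audit_prob=0.10, fine_multiplier=2.0)
high_audit = expected_gain_from_evading(100, 0.20, audit_prob=0.60, fine_multiplier=2.0)

# At a 10% audit probability, evading has a positive expected gain;
# at 60%, the expected gain turns negative, so declaring is optimal.
print(low_audit, high_audit)
```

Under these assumed numbers, the gain from evading is positive at the 10% audit probability and negative at 60%, which is the sense in which the nudge is aligned with incentives only for the second group.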
In some rounds, we nudged participants to declare their income. We assumed that the nudge would have a long-term effect only if it aligned with incentives, that is, when the probability of an audit was high, and that a nudge under a low audit probability, which runs contrary to economic incentives, would fail. This is exactly what our research confirmed.
Your experiment was conducted in “our” BEE4R Lab at NHF. Why is it good to run a “tax” experiment in a lab? Wouldn’t it be better to use field data?
In the field, we cannot control economic incentives, nor can we accurately determine how people perceive them. When a taxpayer receives a letter from the tax authority at home, we don’t know whether they see it as a genuine warning about an audit or just as spam intended to scare a few taxpayers. We don’t even know if the letter reached them or whether they read it.
In the lab, we can control these aspects precisely. We can clearly communicate to participants the probability of an audit, on which the optimal decision to declare or not declare income depends. Another major advantage is the ability to observe longer-term effects of nudges — whether the taxpayer responds not only immediately but also in subsequent rounds, as if a year had passed.
You mentioned that after completing data collection, you spent over a year writing the article. Why did it take so long?
If a researcher wants to publish in a prestigious international journal, the article must be nearly perfect. Every sentence must be precise and fit into the whole without a single weak point. Such articles are reviewed by the best researchers in the world, who notice every gap in the argument, every slight inaccuracy in analysis or interpretation. In short, nothing escapes them, and they do not forgive mistakes.
In practice, this means that co-authors exchange the paper dozens of times until they are satisfied with it. Initially, entire paragraphs are revised during reviews; later, discussions focus “only” on replacing certain words with more accurate synonyms.
Another year passed between finishing the article and its publication. What happened during this period?
After finishing the article, we selected several target journals that might be interested. We first submitted it to the Journal of Political Economy, one of the five most prestigious economics journals in the world. It made it past the editor to reviewers, but ultimately we were not offered publication. Several more rejections followed. In January of this year, we submitted the article to the Journal of Economic Behavior & Organization (JEBO), and two months later we received very positive reviews with only minor comments. We addressed those, and the article was accepted.
The fact that you received only minor comments could suggest the article was theoretically fit for a better journal, even though I don’t doubt that JEBO is an excellent publication. After all, you tried to place it in top journals, but it didn’t work out. In this context, there is sometimes mention of discrimination against our region, where editors are a priori skeptical of publications from Eastern Europe. Can this influence where and whether an article is published?
There is probably no direct discrimination. But statistical discrimination — which, by the way, is quite rational — does play a role. EUBA is not known internationally as a research university, and you would hardly find it on the map of top economic research. If I were asked to review an article from a university in Zimbabwe or Bangladesh, I would probably also approach the text more cautiously. That is normal, and we have to count on it.
This is all the more reason for us to strive to publish in quality journals. If EUBA gains a reputation, the first impression of our articles will also improve in the eyes of editors.