The DHS Program User Forum
Discussions regarding The DHS Program data and results
Odds ratios vs Marginal effects [message #8979] Fri, 22 January 2016 11:48
lexgw

Hello DHS experts,

Why is it that most of the existing literature using DHS data presents results in the form of odds ratios rather than marginal effects? Is it wrong to use marginal effects? Thanks.


Gabriel
Re: Odds ratios vs Marginal effects [message #8982 is a reply to message #8979] Fri, 22 January 2016 13:04
Reduced-For(u)m

Dear Economist,

Please continue to estimate marginal effects. They are so much easier to interpret.

Yours,
Another Economist

*OK, for real: it is just a matter of disciplinary convention, and any number of methods can work for any number of problems. You should feel free to estimate whatever kind of effect makes sense in your research (and in your discipline).
Re: Odds ratios vs Marginal effects [message #8983 is a reply to message #8982] Fri, 22 January 2016 17:34
lexgw

Thanks, dear Economist,

What about the notion I have seen in discussions on this forum that with DHS data you cannot establish causation, only relationships/associations between variables? I thought this was the reason most authors do not use marginal effects with DHS data (because slopes imply causation!). What is your comment on this? Thanks.

Gabriel
Re: Odds ratios vs Marginal effects [message #8984 is a reply to message #8983] Fri, 22 January 2016 19:03
Reduced-For(u)m

Gabriel,

Good question. I think these are two separate issues:

1 - How you report effect sizes: as marginal effects, odds ratios, hazard rates...whatever. This is just a choice of units. It has no real relationship to the correlation/causation question. A regression slope is (often) just another way to summarize the same information as an odds ratio or a hazard (see the short sketch at the end of this post).

Sometimes it might appear that choice of units relates to "causal interpretations", but that is probably because different disciplines use different terminology, have different standards for what constitutes a "causal effect", and tend to use primarily one of the potential effect size measures. So the units used for reporting effect sizes and the causal (or correlational) language used by practitioners who report those effects are often highly correlated, because they are similarly trained.

2 - I think when they say you can't get "causal estimates" from the DHS data, they are simply pointing out that this is observational data, and not the result of some particular experiment with experimentally-assigned treatment groups. From what I understand, this has been standard training in many biomedical fields for a long time. In the social sciences, the development of the concept of the "Natural Experiment", along with its associated methodology, has led to a generation of practitioners trained to believe in both experimental and non-experimental methods for estimating causal relationships (Instrumental Variables, Regression Discontinuity, certain kinds of Difference-in-Differences). Ironically, I think the first Natural Experiment was actually the work of John Snow in Epidemiology*, but these days it is mostly Econ/Poli Sci people who think about natural experiments.

That said, in order to use DHS data to do a "Natural Experiment" you usually have to import some sort of external data (an instrument, a policy roll-out, something). So in a sense, I agree that using pure DHS data usually means estimating "correlations" (or what I'm tentatively calling "Deep Correlations"**: correlations purged of obvious confounding observables) and not causal effects. But it isn't a given that no causal effects can be estimated using DHS data. I would argue that people who say that are really just saying that only experiments can generate causal estimates, and I think that is a rather narrow view of how we conduct statistical inference.

*See "Statistical Models and Shoeleather" by David Freedman for a discussion of Snow's awesome Natural Experiment
**This is not (at least yet) a well-defined or mathematically grounded concept, just an idea I have to distinguish certain kinds of deeply meaningful correlations from other kinds of more superficial correlations.
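
And to make point 1 concrete, here is a minimal sketch in Python (statsmodels) on simulated data; the variable names and numbers are invented for illustration, not taken from any DHS dataset. The same fitted logit slope is reported once as an odds ratio and once as an average marginal effect:

# A minimal sketch (simulated data, hypothetical variable names):
# one logit fit, two ways of reporting the same slope.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000
x = rng.normal(size=n)                        # a hypothetical standardized covariate
p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.5 * x)))   # true P(y=1 | x) under a logit model
y = rng.binomial(1, p)

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
beta = fit.params[1]                          # log-odds slope on x

print(f"odds ratio:              {np.exp(beta):.3f}")   # multiplicative change in odds
ame = fit.get_margeff().margeff[0]                      # average change in P(y=1)
print(f"average marginal effect: {ame:.4f}")

Both printed numbers summarize the same estimated slope; only the units differ, which is exactly why the choice between them says nothing about causation.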

Thoughts? Reminder: I am not affiliated with the DHS and my responses do not necessarily reflect the views of anyone other than me.
Re: Odds ratios vs Marginal effects [message #9241 is a reply to message #8983] Fri, 26 February 2016 01:58
lexgw

Hello there, thanks for your feedback; I've just seen it. It is very useful...
Re: Odds ratios vs Marginal effects [message #9382 is a reply to message #8982] Tue, 22 March 2016 20:29
user-rhs
Reduced-For(u)m wrote on Fri, 22 January 2016 13:04

Dear Economist,

Please continue to estimate marginal effects. They are so much easier to interpret.

Yours,
Another Economist

*OK, for real: it is just a matter of disciplinary convention, and any number of methods can work for any number of problems. You should feel free to estimate whatever kind of effect makes sense in your research (and in your discipline).


As someone who dabbles in both epidemiology and economics, I respectfully disagree with R-F: they are *both* equally easy to interpret. I have, on occasion, chosen to present odds ratios instead of marginal effects when the marginal effect was abysmal (~0.00005) and the odds ratio was much more impressive ("1.5 times more likely"). I suspect others do the same. Sometimes the funders don't want to see the "true results," and you have to be...clever in packaging your results. It's not exactly "dishonest" so long as you're not making up data or numbers. I've seen this done even in highly cited studies on interventions that claim to have large protective effects against disease X, where the biological plausibility of the intervention is suspect, and where the potential moral hazard brought about by the intervention puts its "clinical significance" under even greater scrutiny. To be fair, I had never heard of marginal effects until I started working under an economist who, not incidentally, loathes odds ratios.

Yours,
Dabbler

NB: In all seriousness--yes, I agree with R-F's sentiments above. How you present your results depends on the research, your field/discipline, and your target audience. Public health and policy people might prefer odds ratios because "two times more/less likely" is easier to digest than actual percentage-point increases/decreases. The toy calculation below makes the contrast concrete.
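
Here is that toy calculation (the numbers are invented, echoing the ~0.00005 vs "1.5 times" contrast above): for a rare outcome, the absolute risk difference can be negligible while the odds ratio still sounds impressive.

# Toy numbers, purely illustrative: a rare outcome pairs a tiny marginal
# effect (risk difference) with an impressive-sounding odds ratio.
p_control = 0.00010   # hypothetical baseline risk
p_exposed = 0.00015   # hypothetical risk under exposure

risk_difference = p_exposed - p_control            # 0.00005

def odds(p):
    return p / (1.0 - p)

odds_ratio = odds(p_exposed) / odds(p_control)     # ~1.5

print(f"risk difference: {risk_difference:.5f}")
print(f"odds ratio:      {odds_ratio:.3f}")

For rare outcomes the odds ratio is close to the risk ratio, so it can look dramatic even when the absolute change in probability is tiny.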

[Updated on: Tue, 22 March 2016 20:57]


Re: Odds ratios vs Marginal effects [message #9559 is a reply to message #9382] Sun, 17 April 2016 06:32
lexgw

Thanks, dear Dabbler, that was so useful. It had been a while since I last visited the forum. Cheers.

Gabriel