Contraceptive Prevalence/Unmet Need - Reliability and Validity [message #9623]
Sun, 24 April 2016 23:36
Renee
Member
Messages: 1 Registered: April 2016
Hi There
I'm writing a paper on reliability and validity of self-report measures of contraceptive use...
Question 1:
How are the Section 3. Contraception and Section 6. Fertility Preferences used to calculate contraceptive prevalence and unmet need? Is it as simple as
Contraceptive prevalence - those who answered yes to item 310;
Unmet need - those who answered no to item 310 but answered "no more/none" to 602, or gave months/years on 603?... (As a proportion, etc.)
Question 2:
In terms of reliability and validity, how did the DHS test the questions in Sections 3 and 6 to confirm the reliability and validity of these measures (internal/external, construct, criterion, etc.)? I can't find very much published except for a couple of papers based on old Morocco data, and I wondered whether such testing was done when the original surveys were developed?
Many thanks in advance,
Renee
Re: Contraceptive Prevalence/Unmet Need - Reliability and Validity [message #9740 is a reply to message #9623]
Wed, 11 May 2016 17:52
Trevor-DHS
Senior Member
Messages: 805 Registered: January 2013
I will try to answer your questions; however, I'm not sure which questionnaire you are referring to, and the question numbers differ in the questionnaire version I am looking at. I will be referring to the DHS7 Model Woman's Questionnaire.
Question 1:
Contraceptive prevalence is a simple indicator: the proportion of women answering yes to Question 303, "Are you or your partner currently doing something or using any method to delay or avoid getting pregnant?". It is usually presented for currently married or in-union women, or for all women. There are two key indicators: the CPR (Contraceptive Prevalence Rate), which covers women using any method of contraception, and the mCPR (Modern Contraceptive Prevalence Rate), which covers women using a modern method of contraception. The latter requires Question 304, "Which method are you using?", to classify each reported method as modern or traditional. We have a YouTube video that explains these indicators further.
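To make the two definitions concrete, here is a minimal sketch of the CPR and mCPR calculations. Note that this is illustrative only: the field names, the respondent structure, and the set of modern methods shown here are my own assumptions for the example, not the actual DHS recode variables or the official method classification.

```python
# Illustrative sketch only: field names and the MODERN_METHODS set are
# hypothetical assumptions, not actual DHS recode variables or codings.
from dataclasses import dataclass
from typing import Optional

# Assumed (simplified) set of modern methods for this example.
MODERN_METHODS = {"pill", "iud", "injectable", "implant", "condom", "sterilization"}

@dataclass
class Respondent:
    using_method: bool        # Q303: currently using any method to delay/avoid pregnancy?
    method: Optional[str]     # Q304: which method (None if not using any)

def cpr(women: list[Respondent]) -> float:
    """CPR: proportion of women currently using ANY method (Q303)."""
    return sum(w.using_method for w in women) / len(women)

def mcpr(women: list[Respondent]) -> float:
    """mCPR: proportion of women currently using a MODERN method (Q303 + Q304)."""
    return sum(w.using_method and w.method in MODERN_METHODS for w in women) / len(women)

# Tiny worked example: 4 women, 2 using any method, 1 using a modern method.
women = [
    Respondent(using_method=True, method="pill"),     # modern
    Respondent(using_method=True, method="rhythm"),   # traditional
    Respondent(using_method=False, method=None),
    Respondent(using_method=False, method=None),
]
print(cpr(women))   # 0.5
print(mcpr(women))  # 0.25
```

In practice the denominator would be restricted to the population of interest (e.g. currently married or in-union women) and computed with survey weights, which this sketch omits.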
Unmet need is a more complicated indicator. It is often summarized as a relatively simple measure, but that grossly over-simplifies it. I'm not going to go into the details of this indicator here, but rather point you to a couple of resources:
a) Revising Unmet Need for Family Planning. This documents the currently used definition of unmet need, which was revised to provide greater comparability of unmet need data across DHS surveys and actually simplifies the definition (a little) relative to the prior version.
b) Unmet need topic page. On this page you will find diagrams of the original and revised definitions of unmet need. You will also find links to the 15 survey questions needed to collect the data for the unmet need indicator (note that the questions here are based on the DHS6 model questionnaire).
Question 2: (Response provided by Dr. Fred Arnold:)
When the contraception and fertility preferences questions for DHS were being developed, pilot tests were conducted and the results were assessed. Pilot tests have also been conducted for each new DHS model questionnaire, and pretests are conducted in the local language(s) in every DHS survey, in both urban and rural areas, before the questionnaires are finalized. In each case, there has been extensive debriefing of the field teams after the pilots/pretests to gauge the extent to which individual questions are understood by respondents, how often questions (or parts of questions) had to be re-asked to ensure comprehension, and whether the responses given were consistent with the meaning of the question. During training of the field teams, we also obtain feedback from the trainees on their understanding of the questions (especially the local translations), and appropriate changes are often made to question wording. During the fieldwork for the main survey, staff and consultants from the DHS Program and the implementing agency monitor the interviewing and try to identify any problems in the wording of questions or field procedures. Although a formal validity test for these questions is difficult, since there is no gold standard for the correct answers, the feedback we receive during training and pretesting is very helpful in determining whether any questions are problematic. We also monitor the extent of missing data on key questions. In the early 1990s, several reinterview surveys were conducted to estimate the reliability of responses, including reinterview surveys in Pakistan and the Dominican Republic.