Graduate Studies Reports Access
Graduate Course Proposal Form Submission Detail - EDF7498
Tracking Number - 2415
Edit function not enabled for this course.
Approved, Permanent Archive - 2011-03-11
Submission Type: New
Course Change Information (for course changes only):
Comments: to GC 1/24/11; Approved. To System Concurrence 1/28/11 - to SCNS 2/7/11. Approved by SCNS. Effective 3/1/11. Posted in banner. Was submitted as 7409; changed to 7498
- Date & Time Submitted: 2010-11-29
- Department: Educational Measurement and Research
- College: ED
- Budget Account Number: 171100 Educational Measurement and Research
- Contact Person: John Ferron
- Phone: 8139745361
- Email: firstname.lastname@example.org
- Prefix: EDF
- Number: 7498
- Full Title: Analysis for Single-Case Experiments
- Credit Hours: 3
- Section Type: C - Class Lecture (Primarily)
- Is the course title variable?: N
- Is a permit required for registration?: N
- Are the credit hours variable?: N
- Is this course repeatable?:
- If repeatable, how many times?: 0
- Abbreviated Title (30 characters maximum): Single-Case Experiments
- Course Online?: C - Face-to-face (0% online)
- Percentage Online: 0
- Grading Option: R - Regular
- Prerequisites: EDF 7408
- Course Description: Methods for analyzing data from single-case experiments (e.g., multiple baseline, reversal, and alternating treatment studies) including applications of visual analysis, effect size estimation, randomization tests, and multilevel modeling.
- Please briefly explain why it is necessary and/or desirable to add this course: Offered as an enrichment course (not part of a program/concentration/certificate)
- What is the need or demand for this course? (Indicate if this course is part of a required sequence in the major.) What other programs would this course service? Single-case experimental designs, also called single-subject designs or interrupted time series designs, are becoming increasingly popular as a means to establish an evidence base for interventions. A search of the Social Science Citation Index (SSCI) on 5/28/2010 using the key terms “single-case” or “multiple baseline” or “reversal design” shows a substantial increase in the number of hits over the last 25 years (e.g., the numbers of hits in 1985, 1990, 1995, 2000, 2005, and 2009 were 29, 20, 127, 143, 181, and 256, respectively). Given the increased interest in establishing an evidence base for interventions, along with the difficulties that may be encountered in planning large-scale experimental studies, it is not surprising that there has been a substantial increase in single-case experimental studies. Although the use of these methods has increased, the analyses that are appropriate for single-case experiments fall outside what is covered in our existing data analysis courses. Consequently, the department was asked to develop this course (originally by faculty in school psychology). It has been offered 4 times as a special topics course, and each time about 15 doctoral students have enrolled.
- Has this course been offered as Selected Topics/Experimental Topics course? If yes, how many times? Yes, 3 or more times
- What qualifications for training and/or experience are necessary to teach this course? (List minimum qualifications for the instructor.) Doctorate with specialization in the area of educational statistics and research/scholarship in the area of single-case design and analysis
- Objectives: The primary purpose of this course is to help students gain an understanding of the logic, concepts, methods, applications, and limitations of the common approaches to analyzing single-case data. Specifically, we will address visual analysis, nonparametric effect size estimation, randomization tests, parametric effect size estimation, and multilevel modeling. The course emphasis is on the application of these procedures in the context of single-case research. Computer applications of the procedures will be integrated into the course.
The successful completion of the course requirements is expected to result in increased ability to (a) intelligently read and evaluate the single-case research literature, (b) recognize the strengths and limitations of analysis approaches, (c) design single-case experiments, (d) communicate and collaborate with peers and other professionals on single-case research issues, and (e) conduct and interpret analyses of single-case data that are consistent with the ethical guidelines of professional associations such as the American Statistical Association (ASA), American Educational Research Association (AERA), and the American Psychological Association (APA).
- Learning Outcomes: We will consider visual analyses, nonparametric effect sizes, randomization tests, parametric effect sizes, and multilevel modeling. Students who successfully complete all course requirements should be able to:
a. Read, paraphrase, and evaluate single-case analyses.
b. Plan meaningful analyses for specific single-case studies.
c. Articulate rationales for the planned analyses.
d. Conduct single-case analyses using statistical software when appropriate.
e. Communicate the results of single-case analyses.
f. Articulate the basic assumptions and limitations of various single-case analyses.
- Major Topics: Single-Case Designs
Nonparametric Effect Sizes
Parametric Effect Sizes
Interval Estimates of Effects
Hierarchical Linear Modeling to Synthesize Across Cases
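The nonparametric effect sizes named above include overlap-based indices such as the Nonoverlap of All Pairs (NAP) statistic from Parker and Vannest (2009), which appears in the course readings. As an illustration only (the data values are invented, not course materials), NAP can be computed as:

```python
# Illustrative sketch of the Nonoverlap of All Pairs (NAP) effect size:
# the proportion of (baseline, treatment) observation pairs in which the
# treatment observation exceeds the baseline observation, ties counting
# as half. Data values below are hypothetical.

def nap(baseline, treatment):
    """NAP: proportion of all cross-phase pairs favoring the treatment."""
    pairs = [(a, b) for a in baseline for b in treatment]
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return score / len(pairs)

baseline = [3, 4, 3, 5]          # hypothetical baseline phase
treatment = [6, 7, 5, 8, 7]      # hypothetical treatment phase
result = nap(baseline, treatment)
```

A NAP near 1.0 indicates nearly complete separation of the two phases; a value near 0.5 indicates chance-level overlap.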
- Textbooks: None
- Course Readings, Online Resources, and Other Purchases: Ferron, J., & Rendina-Gobioff, G. (2005). Interrupted time series design. In B. Everitt & D. Howell (Eds.), Encyclopedia of Behavioral Statistics (Vol. 2, pp. 941-945). West Sussex, UK: Wiley & Sons Ltd.
Ferron, J. (2005). Reversal design. In B. Everitt & D. Howell (Eds.), Encyclopedia of Behavioral Statistics (Vol. 4, pp. 1759-1760). West Sussex, UK: Wiley & Sons Ltd.
Ferron, J., & Scott, H. (2005). Multiple baseline designs. In B. Everitt & D. Howell (Eds.), Encyclopedia of Behavioral Statistics (Vol. 3, pp. 1306-1309). West Sussex, UK: Wiley & Sons Ltd.
Austin, J. (2002). Graphing single-subject design data in Microsoft Excel. Workshop presented at the Florida Association of Behavior Analysis Conference, Daytona Beach, FL.
Parsonson, B. S., & Baer, D. M. (1986). The graphic analysis of data. In A. Poling & W. R. Fuqua (Eds.), Research methods in applied behavior analysis: Issues and advances (pp. 157-186). New York: Plenum Press.
Kratochwill, T. R., & Levin, J. R. (in press). Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue. Psychological Methods.
Matyas, T. A., & Greenwood, K. M. (1990). Visual analysis of single case time series: Effects of variability, serial dependence, and magnitude of intervention effects. Journal of Applied Behavior Analysis, 23, 341-351.
Parker, R. I., & Vannest, K. (2009). An improved effect size for single-case research: Nonoverlap of all pairs. Behavior Therapy, 40, 357-367.
Ferron, J., & Jones, P. K. (2006). Tests for the visual analysis of response-guided multiple-baseline data. Journal of Experimental Education, 75, 66-81.
Hopkins, B. L., Cole, B. L., & Mason, T. L. (1998). A critique of the usefulness of inferential statistics in applied behavior analysis. The Behavior Analyst, 21, 125-137.
Edgington, E. S. (1980). Random assignment and statistical tests for one-subject experiments. Behavioral Assessment, 2, 19-28.
Wampold, B., & Worsham, N. (1986). Randomization tests for multiple-baseline designs. Behavioral Assessment, 8, 135-143.
Onghena, P. (1992). Randomization tests for extensions and variations of ABAB single-case experimental designs: A rejoinder. Behavioral Assessment, 14, 153-171.
Onghena, P., & Edgington, E. S. (1994). Randomization tests for restricted alternating treatments designs. Behaviour Research and Therapy, 32, 783-786.
Koehler, M., & Levin, J. (1998). Regulated randomization: A potentially sharper analytical tool for the multiple-baseline design. Psychological Methods, 3, 206-217.
Huitema, B. E., & McKean, J. W. (1998). Irrelevant autocorrelation in least-squares intervention models. Psychological Methods, 3, 104-116.
Kromrey, J. D., & Foster-Johnson, L. (1996). Determining the efficacy of intervention: The use of effect sizes for data analysis in single-subject research. Journal of Experimental Education, 65(1), 73-93.
Huitema, B. E., & McKean, J. W. (2000). Design specification issues in time-series intervention models. Educational and Psychological Measurement, 60(1), 38-58.
Huitema, B. E. (1985). Autocorrelation in applied behavior analysis: A myth. Behavioral Assessment, 7, 107-118.
Suen, H. K., & Ary, D. (1987). Autocorrelation in applied behavior analysis: Myth or reality? Behavioral Assessment, 9, 125-130.
Ferron, J. (2002). Reconsidering the use of the general linear model with single-case data. Behavior Research Methods, Instruments, & Computers, 34, 324-331.
McKnight, S. D., McKean, J. W., & Huitema, B. E. (2000). A double bootstrap method to analyze linear models with autoregressive error terms. Psychological Methods, 5, 87-101.
Van den Noortgate, W., & Onghena, P. (2003). Combining single-case experimental data using hierarchical linear models. School Psychology Quarterly, 18, 325-346.
Van den Noortgate, W., & Onghena, P. (2007). The aggregation of single-case results using hierarchical linear models. The Behavior Analyst Today, 8, 196-208.
Van den Noortgate, W., & Onghena, P. (2003). Hierarchical linear models for the quantitative integration of effect sizes in single-case research. Behavior Research Methods, Instruments, & Computers, 35, 1-10.
Van den Noortgate, W., & Onghena, P. (2008). A multilevel meta-analysis of single-subject experimental design studies. Evidence-Based Communication Assessment and Intervention, 2, 142-151.
- Student Expectations/Requirements and Grading Policy: Grades will be based on 4 projects. A brief description of the projects and the rubrics used for grading are provided below.
Project 1 (20%)
Students will complete an analysis of single-case data provided by the instructor. The student will graphically display the data, compute nonparametric effect sizes, and conduct a randomization test.
____ Accurate graphical display of data
____ Accurate computation of nonparametric effect sizes
____ Correct program syntax for randomization test
____ Reasonable choice of test statistic
____ Accurate computation of obtained test statistic
____ Accurate computation of randomization distribution
____ Accurate reporting of the randomization test
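The randomization test graded above follows the logic in the assigned readings (e.g., Edgington, 1980): the test statistic is recomputed under every intervention start point that the random assignment scheme could have produced. A minimal sketch, with invented data and an assumed mean-difference test statistic (not the course's prescribed procedure):

```python
# Hedged sketch of an Edgington-style randomization test for an AB design
# in which the intervention start point is chosen at random from a set of
# admissible points. Data, admissible points, and the test statistic are
# illustrative assumptions.

def mean_diff(data, start):
    """Test statistic: treatment-phase mean minus baseline-phase mean."""
    baseline, treatment = data[:start], data[start:]
    return sum(treatment) / len(treatment) - sum(baseline) / len(baseline)

def randomization_test(data, actual_start, admissible_starts):
    """One-tailed randomization test over the admissible start points."""
    obtained = mean_diff(data, actual_start)
    # Randomization distribution: the statistic recomputed at every start
    # point that could have been chosen under the assignment scheme.
    dist = [mean_diff(data, s) for s in admissible_starts]
    # p-value: proportion of the distribution at least as large as obtained.
    p = sum(d >= obtained for d in dist) / len(dist)
    return obtained, p

data = [2, 3, 2, 4, 3, 7, 8, 6, 9, 8, 7]   # hypothetical AB series
obtained, p = randomization_test(data, actual_start=5,
                                 admissible_starts=range(4, 9))
```

With five admissible start points, the smallest attainable p-value is 1/5, which is why such designs typically build in many admissible points.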
Project 2 (20%)
Students will complete an analysis of single-case data provided by the instructor. The student will graphically display the data, estimate and plot trend lines, and compute interval estimates of effect size.
____ Accurate graphical display of data
____ Correct program syntax for estimating trend lines
____ Accurate plotting of trend lines
____ Correct program syntax for making interval estimates of effect sizes
____ Accurate confidence interval assuming independent error
____ Reasonable estimate of autocorrelation
____ Reasonable confidence interval assuming dependent errors
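The steps graded above (trend estimation, an autocorrelation estimate, and interval estimates with and without the independence assumption) can be sketched as follows. All data values, the 1.96 critical value, and the AR(1)-style variance inflation are illustrative assumptions, not the course's prescribed procedure:

```python
# Hedged sketch of Project 2-style computations with hypothetical data:
# an OLS trend line, a lag-1 autocorrelation estimate, and a confidence
# interval for the phase mean difference with and without an AR(1)-style
# variance inflation.
import statistics

def trend_line(y):
    """Ordinary least-squares slope and intercept of y against time 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((t - xbar) ** 2 for t in range(n))
    sxy = sum((t - xbar) * (y[t] - ybar) for t in range(n))
    slope = sxy / sxx
    return slope, ybar - slope * xbar

def lag1_autocorrelation(series):
    """Lag-1 autocorrelation of a series (e.g., phase residuals)."""
    m = sum(series) / len(series)
    num = sum((series[t] - m) * (series[t - 1] - m)
              for t in range(1, len(series)))
    den = sum((x - m) ** 2 for x in series)
    return num / den

def mean_diff_ci(baseline, treatment, rho=0.0, z=1.96):
    """Interval estimate for the treatment-minus-baseline mean difference.
    With rho != 0, each phase's variance of the mean is inflated by
    (1 + rho) / (1 - rho), a common AR(1) approximation."""
    inflate = (1 + rho) / (1 - rho)
    var = sum(statistics.variance(p) / len(p) * inflate
              for p in (baseline, treatment))
    diff = statistics.mean(treatment) - statistics.mean(baseline)
    half = z * var ** 0.5
    return diff - half, diff + half

baseline = [2, 3, 2, 4, 3]        # hypothetical baseline phase
treatment = [7, 8, 6, 9, 8, 7]    # hypothetical treatment phase
ci_indep = mean_diff_ci(baseline, treatment)           # independent errors
ci_ar1 = mean_diff_ci(baseline, treatment, rho=0.3)    # dependent errors
```

Positive autocorrelation widens the interval, which is the substantive point the rubric's "dependent errors" item is checking.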
Project 3 (20%)
Students will complete an analysis of single-case data provided by the instructor. The student will graphically display the data, and use hierarchical linear modeling to estimate the average effect size as well as individual effects.
____ Accurate graphical display of data
____ Reasonable choice of hierarchical model
____ Correct program syntax for estimating the hierarchical model
____ Accurate point estimate of the average effect size
____ Accurate interval estimate of the average effect size
____ Accurate point estimates of individual effect sizes
____ Accurate interval estimates of individual effect sizes
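A full hierarchical linear model for this project would typically be fit with specialized software, along the lines of Van den Noortgate and Onghena (2003) in the readings. As a simplified stand-in (not the course's prescribed model), the sketch below combines per-case mean differences using inverse-variance weights, which conveys the idea of estimating an average effect alongside individual effects; the data are invented:

```python
# Hedged, simplified stand-in for the hierarchical synthesis step: compute
# a per-case mean difference and its independence-assuming variance, then
# combine cases with inverse-variance (precision) weights. A true multilevel
# model would estimate these quantities jointly. Data are hypothetical.
import statistics

def case_effect(baseline, treatment):
    """Per-case mean difference and its (independence-assuming) variance."""
    diff = statistics.mean(treatment) - statistics.mean(baseline)
    var = (statistics.variance(baseline) / len(baseline)
           + statistics.variance(treatment) / len(treatment))
    return diff, var

def weighted_average_effect(cases):
    """Inverse-variance weighted average effect across cases, with variance."""
    effects = [case_effect(b, t) for b, t in cases]
    weights = [1 / v for _, v in effects]
    avg = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    return avg, 1 / sum(weights)

cases = [                                 # three hypothetical cases
    ([2, 3, 2, 4], [6, 7, 5, 8]),
    ([1, 2, 2, 1], [4, 5, 4, 6]),
    ([3, 3, 4, 2], [7, 6, 8, 7]),
]
avg, var = weighted_average_effect(cases)
```

The weighted average necessarily falls between the smallest and largest per-case effects, and more precisely measured cases pull it toward their estimates.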
Project 4 (40%)
Students will typically complete analyses of single-case data of their own choosing. The student will identify and/or gather the data, develop the analysis plan, graphically display the data, run analyses to support inferences of treatment effect, and write a report on the study in APA format; the results section should be of a quality suitable for publication.
____ Inclusion of rationale for study
____ Appropriate level of detail regarding participants
____ Appropriate level of detail regarding variables
____ Clearly described analyses, including any modifications
____ Accurate program syntax
____ Accurate graphical display
____ Appropriate effect estimate(s)
____ Accurate effect estimate(s)
____ Appropriate effect inference(s)
____ Accurate effect inference(s)
____ Limitations noted
____ Conclusions consistent with results
____ Publishable writing of results section
Grades will be based on a weighted total:
Total = .2*Project1 + .2*Project2 + .2*Project3 + .4*Project4
Grades will then be assigned based on the total score.
- Assignments, Exams and Tests: The following dates are based on the Summer 2010 offering as EDG 7931.
May 20th Single-Case Designs and Graphical Display - I
May 27th Single-Case Designs and Graphical Display - II
June 3rd Visual Analyses and Nonparametric Effect Sizes
June 10th Randomization Tests for Treatment Effects - I
June 24th Randomization Tests for Treatment Effects - II
Assignment 1 due
July 1st Autocorrelation and Interval Estimates of Effects
July 8th Hierarchical Linear Modeling to Synthesize Across Cases - I
Assignment 2 due
July 15th Hierarchical Linear Modeling to Synthesize Across Cases - II
Assignment 3 due
July 22nd Project presentations
Final project due
- Attendance Policy: Course Attendance at First Class Meeting – Policy for Graduate Students: For structured courses, 6000 and above, the College/Campus Dean will set the first-day class attendance requirement. Check with the College for specific information. This policy is not applicable to courses in the following categories: Educational Outreach, Open University (TV), FEEDS Program, Community Experiential Learning (CEL), Cooperative Education Training, and courses that do not have regularly scheduled meeting days/times (such as directed reading/research or study, individual research, thesis, dissertation, internship, practica, etc.). Students are responsible for dropping undesired courses in these categories by the 5th day of classes to avoid fee liability and academic penalty. (See USF Regulation – Registration - 4.0101.)
Attendance Policy for the Observance of Religious Days by Students: In accordance with Sections 1006.53 and 1001.74(10)(g) Florida Statutes and Board of Governors Regulation 6C-6.0115, the University of South Florida (University/USF) has established the following policy regarding religious observances: (http://usfweb2.usf.edu/usfgc/gc_pp/acadaf/gc10-045.htm)
In the event of an emergency, it may be necessary for USF to suspend normal operations. During this time, USF may opt to continue delivery of instruction through methods that include, but are not limited to, Blackboard, Elluminate, Skype, email messaging, and/or an alternate schedule. It is the responsibility of the student to monitor the Blackboard site for each class for course-specific communication, and the main USF, College, and department websites, emails, and MoBull messages for important general information.
- Policy on Make-up Work: “Plagiarism is defined as "literary theft" and consists of the unattributed quotation of the exact words of a published text or the unattributed borrowing of original ideas by paraphrase from a published text. On written papers for which the student employs information gathered from books, articles, or oral sources, each direct quotation, as well as ideas and facts that are not generally known to the public-at-large, must be attributed to its author by means of the appropriate citation procedure. Citations may be made in footnotes or within the body of the text. Plagiarism also consists of passing off as one's own, segments or the total of another person's work.”
“Punishment for academic dishonesty will depend on the seriousness of the offense and may include receipt of an "F" with a numerical value of zero on the item submitted, and the "F" shall be used to determine the final course grade. It is the option of the instructor to assign the student a grade of "F" or "FF" (the latter indicating dishonesty) in the course.”
- Program This Course Supports: Ph.D. Curriculum and Instruction with emphasis in Educational Measurement and Evaluation
- Course Concurrence Information: Would not be required in any programs, but could be taken to enhance the program of studies of those interested in single-case experimental research in general and of those planning to use single-case experimental methods in their dissertation research. In past offerings as a special topics course (EDG 7931), it has been taken by students in school psychology, special education, and reading and literacy studies, as well as by students outside the College of Education (e.g., psychology).