Student surveys are used extensively in higher education and gathered for a variety of purposes, including accountability, assessment of learning outcomes, and, increasingly, the measurement of student engagement (Pike, 2012; Porter, 2011). The crucial role student feedback plays in higher education raises the question: just how valid are college student surveys? More precisely, are they being validated in a way that is consistent with current validity theory and with the criteria agreed upon and put forth by professional organizations? Recent efforts to validate instruments appear to be minimal and reflect a lack of understanding of current validity theory. Whereas theorists tend to view validity as a single, general form of construct validity, practitioners continue to treat it as several distinct types. The result is a gap between validity theory and how validity is assessed in practice.
For this research, the validity of the newly developed Community College Student Engagement Scale (CCSES) was assessed. The concept of student engagement has garnered much attention in recent years and has been linked to student retention, motivation, and academic achievement (Fredricks, Blumenfeld, & Paris, 2004). Given the effects attributed to student engagement on these important educational outcomes, a reliable and accurate measure of it is greatly needed, especially at the community college level, where few student engagement measures are available.
Using Messick’s (1988, 1989, 1995) unified framework of construct validity, this research aimed to accomplish two things: (a) assess the validity of the student engagement scores in a way that is consistent with established guidelines and theory, and (b) document the barriers, if any, to assessing validity in a concerted way that could potentially speak to the existing gap between validity theory and practice.
A mixed-methods study was used to assess the content, structural, and external facets of construct validity. The sample consisted of community college instructors and students. Data collected from instructor interviews, student focus groups, and classroom observations indicate that there is sufficient evidence for the content facet of construct validity of the CCSES. Results also indicate that, structurally, the CCSES corresponds with the multidimensional nature of the student engagement construct as defined in the engagement literature. Lastly, external results show convergence among student self-reported, instructor-reported, and researcher-reported engagement data; they also show that instructors’ ratings of their students’ class-participation engagement were the only significant predictor of English GPA and overall fall 2014 GPA.
Future validity studies should move away from the fragmented and outdated framework on which they continually rely, one that treats validity as three separate types. Continued attempts to assess validity using a unified framework can potentially improve both validity theory and practice, because they allow for the sharing of experiences, strategies, and lessons learned.