Our Approach to the Study of American Colleges and Universities: Considerations of Demography

In our just-published book, The Real World of College, we report a great deal about life on today's campuses in the United States. But we say little about findings as they pertain to conventional demographic categories. This decision has raised concerns among some readers. Here we provide our rationale.

First, a bit of background. 

In 2012, along with our colleague Richard Light, we launched an empirical study of higher education in the United States. Our primary goal was to collect data that would inform recommendations for the entire sector—one beset with problems, among them doubts about the value of higher education, financial sustainability, and safety issues on campus. With an energetic research team, we investigated the perspectives of the major stakeholders across ten disparate institutions; these campuses were carefully selected so that we could compare findings on dimensions such as size, selectivity, geographical location, and type (public or private, residential or commuter). Specifically, for more than five years, we carried out in-depth, qualitative research. Our major source of data consisted of lengthy semi-structured interviews with over 2000 individuals, representing eight different constituencies: first-year students, graduating students, faculty, administrators, parents, young alums, trustees, and job recruiters. About half of the interviews were conducted in person; the other half were carried out over FaceTime or Google Meet (before the COVID pandemic of 2020, before we knew of Zoom!).

Our Research Approach: Our methodology was patterned on an earlier ten-year research project, the Good Work study, in which we conducted approximately 1500 interviews with professionals from nine different lines of work. In the course of that research, we learned that in-depth, face-to-face, semi-structured conversations create a genuine and candid rapport between interviewer and interviewee. Based on that earlier work, in our recent college study we chose to develop an open-ended interview questionnaire consisting of approximately three dozen questions (we posed only two rank-order questions).

The interview experience was carefully designed to elicit perspectives on the topics and issues that were most important to the participant. As interviewers, we aimed to create a comfortable and relaxed environment. We did not want individuals to feel as though they were being "studied"; rather, we wanted—and needed—individuals to feel that they could trust us with their honest and candid perspectives (they had all agreed to recording and note-taking). In this vein, we started each interview with straightforward warm-up questions; listened carefully to what participants talked about and what they did not talk about; followed up on intriguing and sometimes unexpected comments; and, at the end, gave participants a chance to raise any other issues that we had not addressed. After these data were collected, as analysts, we looked for similarities and differences across what participants included in the discussions, what they brought up unprompted, as well as what they omitted.

To our knowledge, this kind of qualitative study of college—across different types of institutions and a variety of stakeholders—has not been carried out in the last several decades. Most recent studies of higher education have focused on a particular institution or type of institution (e.g., single-sex institutions, highly selective institutions, religious institutions), or are based on survey data, which are mainly quantitative. These surveys are often developed by institutional researchers for their own campus purposes, or by social scientists who want to collect data from a broad range of participants.

In our research, we aspired to do something very different: to collect data based on impressions, stories, and explanations, rather than on responses selected from a pre-designed list of options. The kind of qualitative research that we embraced is indeed labor-intensive (and costly); we hoped to elicit and then to probe rich and authentic points of view. We wanted to explore people's thoughts about college so that—as the title of our book suggests—we could describe the "real world of college."

Some details of our study of higher education:  

Selection of Participants: In selecting and inviting individuals to participate, we employed both opportunistic and strategic approaches. For example, because we wanted a group of students who represented different facets of each campus, we first set up recruiting tables on campus and posted fliers on bulletin boards. At first, in "tabling," we did not have any criteria other than being a first-year or graduating student and being enrolled in Arts and Sciences (with the exception of those in our one "comparison school"). Students were given the choice of a $50 gift card or a contribution to a charity.

Surprisingly, at some schools we had a hard time recruiting students. On these campuses, cookies, candy, and even the offer of monetary compensation did not always easily attract students. While students at some schools lined up to schedule an interview, on other campuses, students would walk by our table, staring at their phones, not willing to make eye contact. In fact, we could have conducted a whole study on student engagement based on the experience of recruiting for this study! 

After we completed approximately half of the student interviews on a particular campus, we reviewed our sample. Thereafter, we sought types of students we did not yet have (e.g., students in the natural sciences, students involved with campus governance, students involved with athletics). In an effort to secure a group of students who represented different areas of the campus, we asked specific departments to post additional fliers when needed.

We used similar opportunistic and strategic approaches with trustees, parents, young alums, and job recruiters; our goal was to secure a selection of participants who worked in a range of industries and occupied a variety of roles. For faculty and administrators, we applied two criteria: individuals who represented a range of departments on campus, and individuals who had been at their respective institution for more than a few years. We wanted to converse with on-campus individuals who had considerable experience at their institution.

Importantly, because our research project was focused on the whole sector of higher education, and not just a single constituency (e.g., students or faculty), we were mostly concerned with recruiting individuals on the basis of their involvement and participation in the various facets of each respective institution—academics, co-curricular and extra-curricular opportunities, student and campus services (e.g., financial aid, mental health, institutional research, development). We prioritized these defining characteristics of any particular individual over nearly all conventional demographic traits.

In this context, as we launched our research, we made a deliberate decision not to systematically collect conventional demographic data (racial, ethnic background, socio-economic status, sexual orientation) about our interview participants.  

A reader of this blog—or of our book—may wonder why we made this decision.

Here, we outline our rationale: 

Study Design: 

By design, we interviewed 1000 students in total: 100 students on each of the ten campuses—50 first-year students and 50 graduating students on each. Within each group, we sought to recruit students from various academic departments and activities, as well as those who lived on and off campus.

Had we sought to make claims by demography in this study, we would have designed a different kind of study. To investigate trends by demography—and eventually report statistically significant findings—we would have needed to recruit several thousand student participants, because there are so many groupings. Practically speaking, we could have spent our research funds on demographic variables alone, sacrificing other important differences that we were able to take into account, including the opportunity to compare the perspectives of individuals across stakeholder groups and campuses.

Put sharply, if we wanted to compare individuals across different stakeholder groups and campuses, and across the range of demographic differences, we would have been gathering data for decades—ultimately losing the basis for comparison. As just one example, investigating the perspectives of students in 2012 as compared to the perspectives of students in 2022 would have produced a “cohort effect” for those students in college before the COVID-19 pandemic as compared to those students in college during the COVID-19 pandemic. 

Participant Recruitment: 

In recruiting participants for the study, we did not believe that, as social scientists, we should be selecting or inviting—or declining to include—participants purely on the basis of their demography. We did not want to turn people away because they represented a particular category, nor did we want to search for participants because they represented a particular category. (We recognize that for large-scale studies of medicine or voter turnout, it may well be useful to have data based on gender and race, though such data can readily be misused.)

We also did not want participants to feel that they were selected or invited to participate in the study for a particular demographic reason; we did not want them, deliberately or inadvertently, to skew their responses with a certain demographic feature in mind. We sought participants' spontaneous, genuine responses. If individuals organically—on their own—raised the importance of being a first-generation student in their college experience, or a student of lower socio-economic background, or a student of color or of a particular sexual orientation—or all of these categories—we as researchers would know that these characteristics were important facets of their perspectives. Similarly, if students did not mention these demographics in discussion, even when we specifically asked open-ended questions about struggles, challenges, or problems on campus—or opportunities on campus—we felt the omission was just as important.

Analysis of Data: 

As objective coders and researchers, we felt that it was important to accept what individuals told us at "face value," and not to question, doubt, or over-interpret comments in light of a particular personal trait. Every individual represents a blend of demographic categories (e.g., a white male student may be a first-generation student, or a black female student may be gay), and we had no interest in teasing these apart (especially because we did not recruit participants based on these traits), or in asking our participants to tease apart the layers of their own identity. "Intersectionality"—the frequent overlap among these categories—limits our understanding of any single defining characteristic.

As careful researchers, we feel that it is our responsibility to protect the data from being misused by others once findings and information are in the public sphere. In this context, consider Howard's own unfortunate, life-transforming experience with his theory of Multiple Intelligences. When he introduced the theory in the early 1980s, he did not create any tests, nor did he speculate about differences in profiles of intelligence based on the demographic criteria just mentioned. But soon enough, others created tests—typically of poor quality—began to administer these instruments to various groups, and then made unwarranted statements about which intelligences a particular group had and which it lacked. Howard decided at that point both to denounce this practice and to avoid such demographic delineation and dictatorship in his future work. Even if we had collected demographic information about our participants in the higher education study, we would not want to report it, out of concern that these data could be used in such uninformed or harmful ways.

Importantly, in this context, even when we noted particular traits (e.g., gender, first-generation status, geographical residence), we made a conscious decision not to report findings by these data.

Concluding Note

In light of these decisions, it’s important to present two major findings detailed in The Real World of College:

1. Across 1000 students on ten disparate campuses, we find more similarities than differences in their words, phrases, goals, struggles, and descriptions of the college experience. The very fact that the range of institutions in our study yielded few differences indicates that we are indeed describing the broad swath of American colleges.

2. Across all participants representing different constituencies, we find alignment in perspectives and goals between students and the off-campus groups (parents, young alums, and trustees), but misalignment between students and the on-campus groups (faculty and administrators).

These data are important and consequential for the sector of higher education. If we had focused on demography, for all the reasons stated above, we might not have been able to step back and see the big picture, or for that matter, the real world of college.

© Wendy Fischman and Howard Gardner 2022
