
What’s a College Degree Worth?


The imperfect science and contested methods of measuring the return on investment of college. By Scott Carlson

Higher education’s data crunchers have increasingly been training their sights on the postgraduation career path. It’s an often-unpredictable trajectory shaped by students’ aspirations, talents, and backgrounds, by economic conditions, and by institutions’ effectiveness as launching pads.

What can earnings data tell us about which institutions, which disciplines, and which individual academic programs give students the best chances of earning money?

No shortage of organizations have been trying to figure that out, many of them drawing on data submitted for the College Scorecard, a tool that was developed by the Obama administration and that, two years ago, started including earnings for individual academic programs. That data collection offers transparency to college students and their families, advocates argue, and many hope it will hold colleges accountable for the performance of their graduates.

When plentiful data on a particular facet of college life are available, think tanks, academic centers, and even private companies scramble into a watchdog role — and, given the public’s intense focus on the payoff of college, we can expect more studies of the return on investment of college to join the current spate:

  • The Brookings Institution, for example, analyzed the earnings of community-college students and found, among other things (as many of these studies do), that colleges with higher proportions of minority students tended to offer fewer programs in high-paying fields.
  • A working paper from the National Bureau of Economic Research calculated how institutional resources and reputation affected graduates’ earnings, concluding that more institutional resources correlate with better results for graduates.
  • The Postsecondary Value Commission concluded that women and members of minority groups are disproportionately represented in low-wage, high-social-value occupations, and are more likely to choose majors associated with those occupations.
  • Georgetown University’s Center on Education and the Workforce examined the effect of education on lifetime earnings, and found that 16 percent of workers with only a high-school diploma and 28 percent of workers with an associate degree earned more money than half of workers with a bachelor’s degree.

The think tank Third Way also recently released its own analysis of College Scorecard data. Third Way’s study calculates the “Price-to-Earnings Premium,” or how long it takes students to recoup college costs, based on the wage premium those students should get with a college degree. Ten percent of bachelor’s-degree programs and 21 percent of associate-degree programs offer no return on investment, according to the study.
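As a back-of-the-envelope illustration of how a payback metric like this works (a hypothetical sketch with invented figures, not Third Way’s published formula), the core calculation is simple division: the net cost of the credential divided by the annual wage premium it confers.

```python
# Hypothetical sketch of a price-to-earnings-premium style calculation.
# The figures and the exact formula are invented for illustration; they are
# not Third Way's published methodology.

def years_to_recoup(net_cost: float, grad_earnings: float, baseline: float) -> float:
    """Years for the annual wage premium of a degree to repay its net cost."""
    premium = grad_earnings - baseline  # annual boost over the no-degree baseline
    if premium <= 0:
        return float("inf")  # no wage premium: the cost is never recouped
    return net_cost / premium

# Invented figures:
net_price_of_degree = 80_000    # total net cost of the credential, after grants
median_grad_earnings = 45_000   # graduates' median early-career earnings
high_school_median = 32_000     # median earnings with a high-school diploma only

years = years_to_recoup(net_price_of_degree, median_grad_earnings, high_school_median)
print(f"Estimated payback period: {years:.1f} years")  # -> 6.2 years
```

On this logic, a program whose graduates never out-earn the high-school baseline never recoups its cost, which is presumably the kind of program the study counts as offering no return on investment.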

Studies like those inspire some standard objections — that a college experience isn’t simply about a return on investment, and that a focus on earnings reinforces the perception that higher education is merely about individual benefits.

But the biggest problem with some of those projections should be clear in the methodology: The College Scorecard has only two years of earnings data for graduates with bachelor’s degrees. Two years is a short launch window.

That’s why many of the studies show what one might expect: programs that lead directly to employment in engineering and health fields wind up on the list of high returns. Disciplines that are less obviously practical populate the list of programs most likely to lead to “no economic ROI” — and attending a top college doesn’t necessarily keep a program off that list. For example, according to the data behind the Third Way report, if you graduated from Carleton College with a degree in biology, English, film arts, fine and studio arts, or social sciences, you may have thrown your money away. (For most of those students at Carleton — particularly the biology majors, who might have gone on to medical school — the picture is probably very different 10 years out.)

Michael Itzkowitz, a senior fellow in higher education at Third Way and an architect of the College Scorecard, acknowledges that the numbers are an “early indication.”

“A lot of us in the field are anxious for more years of data to become available, and the [Education] Department is steadily working towards that,” he says. “In the meantime, you know, we have what we have.”

Still, he believes the data are “actionable.” One of the frequently cited results from his study focuses on the returns on certificate programs, which are often marketed and designed to have a quick payoff — but less than half do. Certificates in criminal justice, nursing, precision metalworking, and transportation tend to be among the best bets, while programs in cosmetology, culinary arts, somatic bodywork, and veterinary technology are among the worst.

Over time, he points out, the data on earnings tied to programs will improve as results from more years are added, providing a better sense of how graduates of individual programs actually fare — and that could be a good thing for both students and colleges. It would encourage institutions to look under the hood of particular departments and programs. Individual humanities and fine-arts programs with great mentorship and workplace connections could show that they aren’t the poor investments that generalizations might suggest.

Programs with continuing poor returns for students could be modified or cut, the analysts at the left-leaning Third Way argued. A similar point was made by the conservative Texas Public Policy Foundation last month, when it released a detailed report on earnings connected to programs and compared with student debt. “Colleges should consider shutting down programs that consistently lead to bad outcomes for their students,” the report concludes. Seventeen percent of college programs offer “mediocre” returns, which should give students and families pause, the report says. Five to 10 percent of programs offer “poor or worse outcomes,” and should face sanctions, including losing eligibility to participate in the federal student-loan system. (Both Third Way and the Texas Public Policy Foundation call for a revival of the gainful-employment rule, which set up debt-to-income standards for graduates. The Trump administration rescinded the rule in 2019.)

“This kind of accountability is really important not only from a consumer perspective, but also from a public-policy perspective,” says Martin Van Der Werf, associate director of editorial and postsecondary policy at the Georgetown Center. The sort of data offered through the College Scorecard could give students a road map to programs that have the best early returns, perhaps most applicable to vocationally oriented degree programs and certificates.

But how useful is much of that college-to-career data to students and families? And to what extent does it skew their decision making?

“More information is generally a good thing — it’s just how it’s used,” says Van Der Werf, who is also a former Chronicle reporter. “Without context, it can be easily misused and misunderstood.” Many students don’t have a solid grasp of the nuances that would make much of the college-to-career data more meaningful — or help them avoid misinterpretations. For example, they might not see that the lists and rankings have only a couple of years’ worth of earnings data or that the data have been drawn from students who received federal financial aid (but not from students with private loans, nor from students who don’t have debt), facts buried in the methodology.

Most of all, many people are confused about the relationship between majors and the job market. Students and parents (and, frankly, many people who work for colleges, along with media pundits) tend to equate major with job, and have a hard time seeing the pathways to careers from, say, the humanities. Lists of the “most to least valuable college majors,” as Bankrate characterized its program rankings, based on data from the U.S. Census Bureau’s American Community Survey, reinforce a simplistic equation: Major in architectural engineering (which topped Bankrate’s list), and you could have a lucrative career as an architectural engineer; major in composition and speech, drama, fine arts, or other majors at the bottom of the list, and who knows what you’ll do — except struggle.

The lists may sway some students, or they may inspire parents to pressure their kids into something more “practical.” But on the whole, the choice of a major is often highly personal. It’s unlikely that a student who is interested in drama or communications is going to major in architectural engineering or construction services simply because it pays better — or if students do make that choice, they are less likely to be happy and successful at it.

What students need may not be more lists of the economic returns on specific majors, but help making something out of their interests and talents. The selection of a major is typically based on the whims and (mis)perceptions of students, and it’s rarely corrected by the advising and mentorship they receive. Consider a student who consults a high-school counselor, a mentor, or a college adviser with a handful of majors under consideration. In far too many cases, “they’re not going to tell you, ‘I think the best decision is X,’” says Van Der Werf. “They’ll say, ‘Wow, those look like great decisions. Good luck.’ We just don’t have a system in this country that helps you along.”

If the data were used well, students and parents, policy advocates, and colleges wouldn’t simply focus on which disciplines or majors pay off best, and on how to encourage or push more students into them. Instead they’d focus on helping students see the relevant knowledge and useful skills they would learn in the disciplines they chose.

But that requires conversation and engagement, and perhaps more support for people working with students in advising offices and career counseling. Providing answers — or pressure — through data, in some ways, is easier.

The education world is saturated in data, collected in the belief that if we gather more information, we can identify and solve the sector’s many problems. Institutions have long been subjected to rounds of data collection, followed by the ratings, grades, and brackets that come from the data — whether that’s various lists of the greenest colleges, or the analyses of colleges under financial strain, or U.S. News & World Report’s rankings, or multiple companies issuing “safest campus” lists, based on crime statistics gathered under the Clery Act. Much like students who say that standardized tests and grades don’t accurately measure their attributes, institutions often complain that the data collection is flawed, or that various rankings can’t capture what they truly offer.

So, to fill out the picture, policy makers and analysts look for more statistics to plug the holes. Surely, they say, more data will get them closer to the truth. But then what?

“Once you start collecting data, you very rarely stop collecting any of those data points. You just add new ones,” says Mark Salisbury, chief executive and founder of TuitionFit, which uses data from college applicants to help them compare costs between colleges. The nuances in tuition data offer another example of potential problems with College Scorecard studies — the data include only students who took on debt.

“The smaller the program, the smaller the number of graduates, the more likely the number you’ve used for the investment is off, and the more likely it’s off by a lot,” he says. And the small programs are often in the humanities, already targets of administrators and trustees. “They read these reports and say, ‘Here’s another reason for us to just disband the philosophy program.’”
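His point about small programs is, at bottom, a sampling problem. A minimal simulation with an invented earnings distribution (not real Scorecard data) shows how much a median computed from a handful of graduates can swing from cohort to cohort:

```python
# Minimal simulation of the small-program problem described above, using an
# invented lognormal earnings distribution rather than real Scorecard data.
import random
import statistics

random.seed(0)

def cohort_median(n_graduates: int) -> float:
    """Median earnings of one simulated cohort drawn from the same distribution."""
    earnings = [random.lognormvariate(10.7, 0.5) for _ in range(n_graduates)]
    return statistics.median(earnings)

for size in (8, 50, 500):  # a tiny program, a midsize one, a large one
    medians = [cohort_median(size) for _ in range(1000)]
    print(f"cohort of {size:>3}: median estimates vary with "
          f"std dev ≈ ${statistics.pstdev(medians):,.0f}")
```

The underlying distribution is identical in every case; only the cohort size changes, yet the smallest cohorts produce median estimates that bounce around by thousands of dollars.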

With data, there is always the question of how it is going to change behavior. Salisbury worked at the Center for Research on Undergraduate Education at the University of Iowa and in institutional research at Augustana College, in Illinois, before founding TuitionFit. Institutional research started out as a position close to the president, he says, but gradually it has moved down the organizational chart. Salisbury believes that reflects the growing importance of strategic planning over number crunching, but it might also represent the ways that data are disregarded within institutions.

“How many times over the past 20 years have there been calls for institutions to use data to inform decision making?” says Salisbury. “It’s just such an absurd thing to say.” The reality is that midlevel institutional researchers won’t wave contrary data in front of a president who has put a flag in the ground for a particular initiative. People have been fired for that, Salisbury says.

And he has seen instances in higher ed where administrators subtly fudge data to make an institution look better — for example, to inflate application numbers to make a college seem more selective, or to claim that 90 percent of a college’s graduates get their degrees within four years. (That is technically true if the college’s four-year graduation rate is 70 percent and its six-year rate is 77 percent.) “Telling institutions to use data is just being completely ignorant of the politics on a campus,” says Salisbury, “because the power structure is the thing that defines which data gets used, and how it gets used.”
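The arithmetic behind that graduation-rate framing is easy to check:

```python
# Verifying the graduation-rate framing with the figures from the article.
four_year_rate = 0.70  # share of an entering cohort graduating within four years
six_year_rate = 0.77   # share of the same cohort graduating within six years

# Of the students who graduate at all, the share who finished in four years:
share_of_graduates = four_year_rate / six_year_rate
print(f"{share_of_graduates:.1%} of graduates finish within four years")  # 90.9%
```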

Ignoring inconvenient metrics is one problem. But the data can also drive priorities or behaviors for the sake of the data, a trap summed up in Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” Educators have seen that effect with No Child Left Behind, a 2001 federal law that sharply expanded high-stakes testing in schools and tied federal money to the results. Parents complained that a focus on scores had distorted priorities in the schools; educational reformers wondered if the data-driven focus on accountability had changed things for the better. Schoolteachers complained that they spent more time on assessments and gathering data than actually teaching.

Nicholas Tampio, a professor of political science at Fordham University, sees a similar dynamic coming for higher education. Data will be used for accountability, and then those metrics will start to sway behavior, such as how students find an interest or passion, choose a field of study, and participate in society. Recently, he wrote a commentary about a database, pushed by the Bill & Melinda Gates Foundation, to track individual financial outcomes for college graduates.

“Are they going to do a No Child Left Behind Act for higher education?” he says. “That’s what I’m on the lookout for.” The College Transparency Act, reintroduced this year with bipartisan support, would be “a key piece of the puzzle,” he says. It would mandate data collection on factors like student enrollment and completion rates, and would allow the Department of Education to work with the Internal Revenue Service and the Social Security Administration to calculate students’ financial outcomes.

“Education and political philosophers have realized from Plato that there’s an intimate connection between education and politics,” says Tampio. So what are policy makers signaling to students with a college version of No Child Left Behind, with its focus not on math scores but credit scores?

Scott Carlson is a senior writer who explores where higher education is headed. Follow him on Twitter @carlsonics, or write him at scott.carlson@chronicle.com.
